The invention relates to computer networks and, more particularly, to routing packets within computer networks.
A computer network is a collection of interconnected computing devices that can exchange data and share resources. In a packet-based network, such as an Ethernet network, the computing devices communicate data by dividing the data into variable-length blocks called packets, which are individually routed across the network from a source device to a destination device. The destination device extracts the data from the packets and assembles the data into its original form.
Certain devices, referred to as routers, maintain routing information representative of a topology of the network. The routers exchange routing information so as to maintain an accurate representation of available routes through the network. A “route” can generally be defined as a path between two locations on the network. Upon receiving an incoming data packet, a router examines information within the packet, often referred to as a “key,” to select an appropriate next hop to which to forward the packet in accordance with the routing information.
A variety of routers exist within the Internet. Network Service Providers (NSPs), for example, maintain “edge routers” to provide Internet access and other services to the customers. Examples of services that the NSP may provide include Voice over IP (VOIP), access for Asynchronous Transfer Mode (ATM) or frame relay communications, Internet protocol (IP) data services, and multimedia services, such as video streaming. The edge routers of the NSPs often communicate network traffic to high-speed “core routers,” which may be generally viewed as forming the backbone of the Internet. These core routers often include substantially more processing resources than the edge routers, and are designed to handle high volumes of network traffic.
NSPs often desire to isolate the forwarding functions and other network services for customers from one another for purposes of reliability and security. As a result, in some environments an NSP may implement many dedicated routers and other networking devices for each different enterprise customer. However, the complexities associated with maintenance and management of separate routers and other networking equipment can be significant.
To address these concerns, some conventional routers allow an NSP to configure and operate multiple logical software routers within the same physical routing device. These software routers are logically isolated in the sense that they achieve operational and organizational isolation within the routing device without requiring the use of additional or redundant hardware, e.g., additional hardware-based routing controllers. That is, the software routers share the hardware components of the physical routing device, such as the packet forwarding engine and interface cards. However, this solution has limitations and may be undesirable in certain situations. For example, multiple software routers executing within the same physical routing system have scaling limitations because each software logical router is affected by the scaling requirements of every other software logical router in the system. That is, since the software routers share the same hardware, kernel, and forwarding components, any increase in state (e.g., routing information and forwarding tables) for one of the software routers may degrade the performance of the other software routers. Thus, the software routers cannot be scaled independently from one another as the needs of one customer grow while the needs of the other customers remain unchanged. Software logical routers have other limitations, such as fate sharing of the common kernel and forwarding components, and the requirement that the routers inherently use the same version of any shared hardware or software component.
In general, a multi-router system is described in which hardware and software components of one or more standalone routers can be partitioned into multiple logical routers. The multiple logical routers are isolated from each other in terms of routing and forwarding functions yet allow network interfaces to be shared between the logical routers. Moreover, different logical routers can share network interfaces without impacting the ability of any of the logical routers to be independently scaled to meet the bandwidth demands of the customers serviced by the logical router.
In one example, one or more standalone routers are physically coupled to a control system that provides a plurality of hardware-independent routing engines. The forwarding components (e.g., packet forwarding engines and interface cards) of the standalone routers are logically partitioned into multiple groups, and each group is assigned to a different one of the routing engines of the control system to form a separate “protected system domain” (PSD). Each of the PSDs operates and participates as a different standalone router within the network. Each of the PSDs, for example, participates in separate peering sessions with other routers to exchange routing information and maintain separate forwarding information. Each PSD thus provides the same “look and feel” of a physical router with its own dedicated resources and configuration data, and is administered as a separate router from other PSDs.
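For purposes of illustration only, the following Python sketch models the partitioning just described; the class and function names are hypothetical stand-ins and do not correspond to any actual router software.

```python
# Minimal sketch (hypothetical names) of the partitioning described above: FPC
# slots are grouped into PSDs, each bound to its own routing engine, and any
# unassigned slots remain under the control of the standalone router's RSD.
from dataclasses import dataclass, field

@dataclass
class RoutingEngine:
    name: str
    routing_info: dict = field(default_factory=dict)  # maintained separately per PSD

@dataclass
class ProtectedSystemDomain:
    routing_engine: RoutingEngine
    fpc_slots: list                                    # owned exclusively by this PSD

def partition(all_fpc_slots, assignments):
    """assignments: PSD name -> list of FPC slots assigned to that PSD."""
    psds, claimed = {}, set()
    for psd_name, slots in assignments.items():
        overlap = claimed.intersection(slots)
        if overlap:
            raise ValueError(f"FPC slots {sorted(overlap)} already assigned")
        claimed.update(slots)
        psds[psd_name] = ProtectedSystemDomain(RoutingEngine(psd_name), list(slots))
    rsd_slots = [s for s in all_fpc_slots if s not in claimed]  # left with the RSD
    return psds, rsd_slots

psds, rsd_slots = partition(range(6), {"psd1": [0, 1], "psd2": [2, 3]})
assert rsd_slots == [4, 5]
```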
Each PSD exclusively controls the set of interface cards assigned to its partition, where each of the interface cards typically has one or more physical network interfaces (ports). This allows a standalone router to be partitioned into PSDs for use by multiple administrative entities or organizations with little need for coordination between the entities, while the PSDs remain fully isolated from each other.
The routing engine of the standalone router is referred to as a “root system domain” (RSD) and maintains control over any remaining forwarding components of the standalone router that are not assigned to one of the PSDs. Moreover, any network interface card, along with its network ports, that is not assigned to a PSD is controlled exclusively by the RSD of that standalone router and may be designated as a shared interface that is reachable by different PSDs at the logical interface layer. The shared network interfaces may be implemented through the use of tunnel interface cards installed within the standalone router and assigned to the PSDs that utilize a shared interface of an RSD. For each PSD, one or more logical tunnel interfaces are assigned to the tunnel interface card of that PSD, and the logical tunnel interfaces appear in the routing table of that PSD as fully-routable ingress and egress interfaces. At the RSD, multiple logical interfaces may be defined for the same shared physical interface, and each logical interface may be assigned to a different PSD. The tunnel interface card of each PSD may be used to establish a layer two (L2) pseudo-wire connection within the standalone router that terminates at the shared interface card. All layer-three (L3) route data and next hop forwarding information may be kept local to the PSDs. This allows different PSDs, optionally running different software versions, to share an interface card without having to share routing and forwarding state. Other embodiments need not use a tunnel interface card, as the functions ascribed thereto may be integrated into a packet forwarding engine or other component of the standalone router.
In one example embodiment, a multi-router system comprises at least one standalone router. The standalone router comprises a routing engine, a plurality of packet forwarding engines to forward network packets in accordance with forwarding information, and a set of network interface cards coupled to each of the packet forwarding engines by a switch fabric that forwards packets between the plurality of packet forwarding engines of the standalone router. The standalone router further includes at least two tunnel interface cards to form network tunnels. A control system comprising a plurality of routing engines is coupled to the standalone router. Each of the routing engines of the control system is associated with a different partition of the packet forwarding engines and the network interface cards of the standalone router, including at least one of the tunnel interface cards, to form a plurality of hardware logical routers. At least two of the plurality of hardware logical routers are configured to communicate with a shared one of the interface cards of the standalone router via the tunnel interface cards.
In another example embodiment, a method comprises partitioning forwarding components of a standalone router into a first group, a second group and a remaining group. Each of the first and second groups of the forwarding components of the standalone router includes a packet forwarding engine coupled to a set of physical interfaces and a tunnel interface card. The remaining group of the forwarding components of the standalone router includes at least one shared network interface. The method further comprises associating each of the first and second groups of forwarding components with a respective routing engine of a control system to form a plurality of hardware logical routers, and, with the tunnel interface cards, communicating packets between the forwarding components of the hardware logical routers and the shared interface of the standalone router by one or more tunnels internal to the standalone router.
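A compact Python sketch of this method is shown below; the names are hypothetical and the sketch only mirrors the grouping and tunnel association described in the text.

```python
# Sketch (hypothetical names) of the three steps: partition the forwarding
# components into a first group, a second group, and a remainder; associate
# each group with its own routing engine; and give each group a tunnel toward
# the shared interface held by the remaining (RSD-owned) components.
def form_hardware_logical_routers(components, first_group, second_group, shared_ifc):
    remaining = [c for c in components if c not in first_group + second_group]
    logical_routers = []
    for name, group in (("psd1", first_group), ("psd2", second_group)):
        logical_routers.append({
            "routing_engine": name,                        # dedicated RE in the control system
            "forwarding_components": list(group),          # PFE, IFCs, and a tunnel interface card
            "internal_tunnel": (f"ut-{name}", shared_ifc)  # pseudowire toward the shared interface
        })
    return logical_routers, remaining

routers, rsd_components = form_hardware_logical_routers(
    ["fpc0", "fpc1", "fpc2", "fpc3", "fpc4"], ["fpc0", "fpc1"], ["fpc2", "fpc3"], "so-0/0/0")
```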
Embodiments of the invention may provide one or more advantages. For example, the described techniques allow one or more standalone routers to be partitioned into separate, isolated logical routers that do not share the same fate with respect to any routing protocols, operating system kernel, or forwarding engine. The logical routers may maintain hardware and software isolation from each other, can run different software versions that can be upgraded independently of each other, and can be administered separately.
Moreover, the techniques described herein allow the logical routers to share network interfaces without impacting the ability of any of the logical routers to be independently scaled to meet the bandwidth demands of the customers serviced by the logical router. For example, each of the PSDs can scale to the capacity of a complete standalone router without impacting the performance of the other logical routers. Further, forwarding tables associated with any given PSD are maintained only by resources associated with that PSD. Therefore, growth in the forwarding tables for one PSD (e.g., the addition of thousands of routes to one PSD) does not impact the performance and scalability of the other PSDs even though the PSDs may be logical partitions of the same standalone router. For example, the RSD of the standalone router does not need to maintain any routing information or forwarding tables for any of the PSDs that are associated with components of the RSD or that share network interfaces owned by that RSD even though the RSD is able to send and receive packets through the shared interface on behalf of the PSDs. In this way, the techniques described herein may reduce the amount of forwarding state that need be shared between the PSDs and the RSD.
The details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention will be apparent from the description and drawings, and from the claims.
For purposes of example, the principles of the invention are described with respect to a simplified network environment 2 of
In this way, service provider network 4 may form part of a large-scale public network infrastructure, e.g., the Internet. Consequently, customer networks 14, 16 may be viewed as edge networks of the Internet. Service provider network 4 may provide computing devices within customer networks 14, 16 with access to the Internet and may provide other network services. Examples of services that PSD logical routers 12 may provide include Voice over IP (VOIP), access for Asynchronous Transfer Mode (ATM) or frame relay communications, Internet protocol (IP) data services, and multimedia distribution services, such as video streaming. End users within customer networks 14, 16 access PSD logical routers 12 with computing devices or other network-enabled devices. In some cases, the end users may not be associated with large enterprises but instead may access service provider network 4 via cable modems, digital subscriber line (DSL) modems or other network access devices. In another example, service provider network 4 and multi-router system 6 may provide network services within the core of the Internet and may not be directly coupled to customer networks. In either case, service provider network 4 may include a variety of network devices (not shown) other than multi-router system 6 and edge routers 5, such as additional routers, switches, servers, or other devices.
Although PSD logical routers 12 are implemented on one or more partitioned standalone routers, the PSD logical routers are isolated from each other in terms of routing and forwarding components yet allow network interfaces to be shared between the logical routers. In the example of
As described in further detail below, multi-router system 6 includes one or more standalone routers that are physically coupled to a control system that provides a plurality of hardware-independent routing engines. Each of the standalone routers may include a forwarding engine and interface cards that can be logically partitioned into multiple groups, and each group is assigned to a different one of the routing engines of the control system to form PSD logical routers 12. The routing engine of the standalone router is referred to as a “root system domain” (RSD) and maintains control over any remaining forwarding components of the standalone router that are not assigned to either of PSD logical routers 12. In this way, the one or more standalone routers may be partitioned into separate, isolated PSD logical routers 12 that do not share the same fate with respect to any routing protocols, operating system kernel, or forwarding engine. PSD logical routers 12 maintain hardware and software isolation from each other, can run different software versions, and can be administered independently.
In accordance with techniques described herein, each of PSD logical routers 12 exclusively controls a set of interface cards assigned to its partition, each of the interface cards having one or more network interfaces (ports). In this example, PSD logical router 12A exclusively owns a set of interface cards having network interfaces (ports), including a network interface for communicating with edge router 5A via link 7B. Similarly, PSD logical router 12B exclusively owns a set of interface cards having network interfaces, including network interfaces for communicating with edge routers 5B, 5C via links 7C, 7D, respectively. Any network interface card, along with its network ports, that is not assigned to either of PSD logical routers 12 is controlled exclusively by the RSD of that standalone router and may be designated as a shared interface that is reachable by different PSDs at the logical interface layer. Thus, in the example of
As described herein, the techniques allow PSD logical routers 12 to share network interfaces without impacting the ability of each of the PSD logical routers to be independently scaled to meet the bandwidth demands of the respective customer networks 14, 16 serviced by the logical router. For example, each of the PSD logical routers 12 can in theory scale to the same capacity to which a similarly configured standalone router could scale, without impacting the performance of the other PSD logical router. In particular, a routing information base and forwarding tables associated with each of the PSD logical routers 12 are maintained only by resources exclusively associated with that PSD. Therefore, in the event the demands and size of customer networks 16 significantly increase, any growth in the forwarding tables and bandwidth consumption for PSD logical router 12B does not impact the performance and scalability of PSD logical router 12A even though the PSD logical routers may be logical partitions of the same standalone router and share at least one network interface, i.e., the network interface to reach border router 8 in this example.
As shown in
In this example, control system 42 includes two high-speed communications ports (e.g., optical Ethernet ports) that connect the control system to two substantially similar standalone routers 48A-48B (“routers 48”). In this way, each of REs 46 may communicate with routing engines 51A-51B (“routing engines 51”) and other components of standalone routers 48. In other embodiments, a multi-router system may include fewer (e.g., one) or more standalone routers connected to control system 42.
Standalone routers 48 each include a routing engine 51 that provides full control-plane operations when operating as a standalone router. In this example, each of routers 48 may be configured with a set of flexible packet interface card concentrators (FPCs) 50, each of which may include a packet forwarding engine (PFE) and a set of one or more individual interface cards (IFCs) (not shown) for inbound and outbound network communication via network links 54. Each of routers 48 also contains electronics for implementing an internal switch fabric 52 that provides a switching mechanism between the packet forwarding engines of the FPCs internal to the respective router. For example, router 48A includes internal switch fabric 52A as a switching mechanism between interface cards of FPCs 50A. Similarly, router 48B includes internal switch fabric 52B as a switching mechanism between interface cards of FPCs 50B. Although routers 48 are coupled to control system 42 for control plane communications, transit network packets typically cannot be directly and internally forwarded between routers 48. Each of switch fabrics 52 may be implemented as a multi-stage switch fabric or as a full-mesh, single-stage switch fabric.
By way of example, multi-router system 20 and stand-alone routers 48 may be partitioned into four protected system domains (PSDs) that each operate as an independent hardware logical router. That is, FPCs 50 may be individually assigned to a different one of the four PSDs, and each of the four PSDs exclusively owns the interface cards of the FPCs assigned to the PSD. Each of routing engines 46 of control system 42 is assigned to a different PSD and controls packet forwarding functions for the PSD. For example, routing protocols executing on each routing engine 46 communicate with other routers within the network via routing sessions to exchange topology information and learn routing information for the network. For example, the routing information may include route data that describes various routes through the network, and also next hop data indicating appropriate neighboring devices within the network for each of the routes. Example routing protocols include the Border Gateway Protocol (BGP), the Intermediate System to Intermediate System (ISIS) protocol, the Open Shortest Path First (OSPF) protocol, and the Routing Information Protocol (RIP). Each routing engine 46 maintains separate routing information using the hardware resources of control system 42, e.g., a separate computing blade, so as to achieve software and hardware isolation. Routing engines 46 update their respective routing information to accurately reflect the current network topology.
Routing engines 46 also use the routing information to derive forwarding information bases (FIBs) for the respective PSDs to which the routing engine is assigned. Each of routing engines 46 installs the FIBs in each of FPCs 50 that are logically assigned to its PSD. In this way, each FPC 50 only includes forwarding state for the PSD to which it is assigned. Thus, a FIB for one of FPCs 50A may be the same or different than a FIB for a different one of the FPCs for router 48A if the FPCs are assigned to different PSDs. Routing engines 46 may communicate with FPCs 50 via cables 41 to coordinate direct FIB installation on the standalone routers 48 using inter-process communications (IPCs) or other communication techniques. Because cables 41 provide a dedicated connection, i.e., separate from a data packet forwarding connection provided by switch fabrics 52, FIBs in FPCs 50 can be updated without interrupting packet forwarding performance of multi-router system 20.
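A simplified Python sketch of this per-PSD FIB installation rule follows; the data structures are hypothetical and stand in for the routing information, FIBs, and FPC slots described above.

```python
# Sketch (hypothetical structures) of the rule described above: each routing
# engine derives a FIB from its own routing information and installs it only
# on the FPCs assigned to its PSD, so no FPC holds another PSD's state.
def derive_fib(rib):
    # trivial stand-in: map each prefix straight to its selected next hop
    return {prefix: route["next_hop"] for prefix, route in rib.items()}

def install_fibs(psd_table):
    """psd_table: PSD name -> {"rib": {...}, "fpcs": [slot, ...]}"""
    fib_by_fpc = {}
    for psd in psd_table.values():
        fib = derive_fib(psd["rib"])
        for slot in psd["fpcs"]:
            fib_by_fpc[slot] = fib        # only the owning PSD's forwarding state
    return fib_by_fpc

fibs = install_fibs({
    "psd1": {"rib": {"10.0.0.0/8": {"next_hop": "so-1/0/0"}}, "fpcs": [0, 1]},
    "psd2": {"rib": {"192.0.2.0/24": {"next_hop": "ge-3/0/0"}}, "fpcs": [2, 3]},
})
assert "10.0.0.0/8" not in fibs[2]        # PSD2's FPCs carry no PSD1 routes
```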
Each of routing engines 51 maintains control over any FPCs 50 and their interface cards that are not otherwise assigned to a PSD. For example, routing engine 51A maintains exclusive control over any of FPCs 50A of router 48A that are not assigned to any PSD. In one embodiment, routing engine 51A may still operate as an independent, standalone router within the network and may maintain routing information for any unassigned and unshared forwarding component based on its peering sessions with other routers. Moreover, routing engine 51A generates a FIB based on its locally maintained routing information and programs the FIB (forwarding information) into any of FPCs 50A that it owns because those FPCs are not assigned to a PSD.
As described herein, the techniques allow any network interface of an RSD to be shared between PSDs of multi-router system 20 without requiring that the resources of one PSD be burdened with the forwarding information of another PSD. Thus, network interfaces may be shared between PSDs without impacting the ability of each of the PSDs to independently scale.
Multi-router system 20 and, in particular, routers 48 may include hardware, software, and firmware, and may include processors, control units, discrete hardware circuitry, or other logic for executing instructions fetched from computer-readable media, e.g., computer-readable storage media. Examples of such media include hard disks, flash memory, random access memory (RAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), and the like.
Similarly, PSD 64 is formed by assigning FPCs 50A3 and 50A4 to routing engine 46B of control system 42. PSD 64 has exclusive ownership of FPCs 50A3, 50A4, their PFEs 60, 69, IFCs 63, 65 and tunnel PIC 77. Routing engine 51A relinquishes logical ownership of FPCs 50A3-50A4 and removes IFCs 63, 65 and their interfaces from its forwarding information and list of available interfaces.
In addition, in the example of
In one embodiment, shared interfaces of RSD 66 are realized through the use of tunnel PICs 75, 77 installed within PSDs 62, 64, respectively, in conjunction with one or more shared network interfaces of IFCs 67 within RSD 66. Each of tunnel PICs 75, 77, in conjunction with a shared IFC 67, forms the endpoints of what can be viewed as a circuit-cross-connect between the PSDs and RSD across the shared switch fabric 52A in router 48A. The PFEs of FPCs 50A1-50A5 are uniquely numbered within standalone router 48A, identifiable across the router, and addressable from every PFE within the router. This association between a tunnel PIC 75, 77 of PSDs 62, 64 and a shared network interface of RSD 66 becomes an internal point-to-point pseudo-wire connection between the PSD and the RSD, so that, from the point of view of the PSD, the shared interface appears to be local to the PSD. Other embodiments need not use tunnel PICs, as the functions ascribed thereto may be integrated into the PFEs or other components of standalone router 48A.
For example, when a network interface of IFCs 67 is designated to be a shared interface by two or more PSDs, a pseudo interface (i.e., logical interface) is created within the interface list of the PSDs. For example, in one embodiment, an administrator provides configuration statements that identify the interface of IFCs 67 to be shared, the PSDs 62, 64 that share the interface, and the respective logical interfaces within the PSDs that represent the shared interface, which become the endpoints of an internal tunnel used across the switch fabric. In some cases, routing engine 51A of RSD 66 is configured to associate a plurality of logical interfaces with the same physical interface of a shared interface card 67, and each of PSDs 62, 64 may be configured with a respective logical interface that is assigned as a peer interface to one of the logical interfaces of the RSD. Using the control plane links 41, a software daemon on routing engine 51A of RSD 66 exchanges this information with a pair of software daemons on routing engines 46A, 46B of PSDs 62, 64. This information may also include forwarding information associated with the respective point-to-point pseudowire connection between RSD 66 and each of PSDs 62, 64, and may take the form of tokens for each side's use when forwarding packets through the appropriate point-to-point pseudowire to and from the shared network interface.
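The following Python sketch summarizes the resulting associations; the function, the unit numbers, and the token values are hypothetical and merely illustrate the pairing of RSD logical interfaces with PSD tunnel interfaces described above.

```python
# Sketch (all names hypothetical) of the association described above: the RSD
# exposes several logical interfaces on one shared physical port, each paired
# with a logical tunnel interface on a different PSD; each pairing behaves as a
# point-to-point pseudowire identified by a token that both sides use when
# forwarding packets to or from the shared network interface.
def build_pseudowires(shared_physical, rsd_unit_to_psd_tunnel):
    """rsd_unit_to_psd_tunnel: RSD logical unit number -> (PSD name, PSD tunnel interface)."""
    pseudowires = []
    for token, (unit, (psd, tunnel_ifc)) in enumerate(sorted(rsd_unit_to_psd_tunnel.items()),
                                                      start=100):
        pseudowires.append({
            "rsd_endpoint": f"{shared_physical}.{unit}",  # logical interface on the shared port
            "psd": psd,
            "psd_endpoint": tunnel_ifc,                   # logical tunnel interface on the PSD
            "token": token,                               # identifier exchanged by the daemons
        })
    return pseudowires

wires = build_pseudowires("so-7/0/0", {0: ("psd1", "ut-1/3/0"), 1: ("psd2", "ut-2/3/0")})
```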
In the example of
PFE 59 receives outbound packet 82′ from switch fabric 52A and adds the L2 encapsulation needed for network interface 79, e.g., frame relay in this example. That is, PFE 59 views packet 82′ as an outbound packet destined for an outbound interface and, therefore, fully forms an outbound L2 packet 84 for output to network interface 79 as if the network interface were a local network interface. In addition, when forming outbound L2 packet 84, PFE 59 adds a logical tunnel cookie to the L2 packet, where the logical tunnel cookie provides identification data associated with an internal tunnel through tunnel PIC 75. Thus, from the perspective of PFE 59, the tunnel appears similar to any other external network tunnel having an ingress at FPC 50A2. PFE 59 places the fully-formed outbound packet 84 in packet buffer 91 for output to tunnel interface 83 as an egress interface, which is viewed as a network destination for that PSD.
Tunnel PIC 75 receives outbound packet 84 and loops the packet back to PFE 59 as inbound packet 86. PFE 59 thus receives inbound packet 86 as if the packet were received from an external network tunnel that terminates at PFE 59. In other words, tunnel PIC 75 is used to receive outbound packet 84 at the tunnel ingress and loop back the tunneled packet, where the tunneled packet has a payload that encapsulates fully-formed packet 82′ destined for shared interface 79.
PFE 59 receives inbound packet 86 from tunnel PIC 75 as an inbound packet, i.e., a packet on the PFE's network-facing inbound side, as if the packet had been received by the PSD from an external tunnel. PFE 59 removes the tunnel cookie from the packet, and route lookup module (RL) 94 performs a route lookup on inbound packet 86 and determines that the packet must be sent over switch fabric 52A to PFE 71, which hosts network interface 79. That is, forwarding information programmed within RL 94 by RE 46A of PSD 62 maps keying information within inbound packet 86 to next hop data identifying network interface 79 as the egress interface to which the packet must be sent. As a result, PFE 59 places inbound packet 86 within inbound packet buffer 93 to be directed across switch fabric 52A to PFE 71. Routing engine 51A of RSD 66 typically has previously communicated layer-two (L2) forwarding information to routing engine 46A of PSD 62, e.g., by way of a cross-connect peering session, where the forwarding information includes a token (a connection end-point identifier for the pseudowire) and a physical link state (e.g., up or down) of shared interface so-7/0/0. Routing engine 46A creates a mapping that associates interface so-7/0/0 with tunnel interface ut-1/3/0 and installs the mapping within RL 94. In this way, routing engines 46A and 51A create a pseudowire connection having logical tunnel interface 83 of tunnel PIC 75 and shared network interface 79 of IFC 67 as endpoints.
PFE 71 of RSD 66 receives outbound packet 97 from switch fabric 52A and places the outbound packet 99 within buffer 101 for output via shared interface 79. No further changes need be made since the outbound packet is already fully formed for the frame relay encapsulation required by so-7/0/0, i.e., network interface 79.
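For purposes of illustration, the following Python sketch traces this egress path end to end; the helper functions, packet fields, and fabric identifiers are hypothetical stand-ins for the hardware behavior described above rather than an actual forwarding-plane implementation.

```python
# Sketch (hypothetical helpers) of the egress path described above: the PSD's
# PFE fully forms the L2 packet for the shared interface, wraps it with a
# logical tunnel cookie, loops it through the tunnel PIC, strips the cookie on
# the way back in, and sends the already-formed frame across the switch fabric
# to the RSD's PFE, which only has to queue it for output.
def psd_egress(payload, shared_ifc, tunnel_cookie, fabric_map):
    l2_frame = {"encap": "frame-relay", "out_ifc": shared_ifc, "payload": payload}
    tunneled = {"cookie": tunnel_cookie, "inner": l2_frame}   # handed to the tunnel PIC
    looped_back = tunneled                                    # tunnel PIC loops it back unchanged
    inbound = looped_back["inner"]                            # cookie removed on re-entry
    # route lookup in the PSD resolves the shared interface to the RSD's PFE
    return {"fabric_dst": fabric_map[shared_ifc], "frame": inbound}

def rsd_transmit(fabric_unit):
    # the frame is already fully formed, so the RSD PFE simply outputs it
    return fabric_unit["frame"]

frame = rsd_transmit(psd_egress(b"data", "so-7/0/0", tunnel_cookie=0x2A,
                                fabric_map={"so-7/0/0": "rsd-pfe-71"}))
assert frame["out_ifc"] == "so-7/0/0"
```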
Upon selecting the PSD to which to forward the inbound packet 110, PFE 71 of RSD 66 places inbound packet 110 in packet buffer 103 for transmission across switch fabric 52A toward tunnel PIC 75 of that PSD. PFE 59 receives the packet from its switch fabric side as an outbound packet 110′. That is, PFE 59 views packet 110′ as an outbound packet destined for logical tunnel interface 83 as an outbound interface. Moreover, PFE 59 views logical tunnel interface 83 as providing an ingress to a network tunnel. Accordingly, PFE 59 adds a logical tunnel cookie to the outbound packet 110′ to form tunnel packet 112, where the logical tunnel cookie provides identification data associated with a tunnel-like pseudowire having logical tunnel interface 83 of tunnel PIC 75 as an ingress interface and network interface 81 of IFC 57 as an egress interface. Thus, from the perspective of PFE 59, the tunnel appears similar to any other external network tunnel having an ingress interface at FPC 50A2. PFE 59 places the outbound tunnel packet 112 in packet buffer 91 for output to tunnel interface 83 as an egress interface, which is viewed as a network destination for that PSD.
Tunnel PIC 75 receives outbound tunnel packet 112 and loops the packet back to PFE 59 as inbound packet 114. PFE 59 thus receives inbound packet 114 as if the packet were received from an external network tunnel. In other words, tunnel PIC 75 is used to receive outbound tunnel packet 112 at the tunnel ingress and loop back the tunneled packet as if the packet had egressed the tunnel, so that PFE 59 receives inbound L2 packet 110 as if the packet had been received locally via shared interface 79.
Since tunnel interface 83 is configured as an egress interface for this tunnel, PFE 59 strips the tunnel cookie from packet 114, and RL 94 performs a route lookup on packet 110 as if interface 79 (so-7/0/0) were a locally connected sonet interface. As a result, RL 94 determines that the packet must be sent over switch fabric 52A to PFE 58 that hosts output interface 57. That is, forwarding information programmed within RL 94 by RE 46A of PSD 62 maps keying information within inbound packet 114 to next hop data identifying network interface 81 as the egress interface to which the packet must be sent. As a result, PFE 59 places inbound packet 114 within buffer 93 to be directed across switch fabric 52A to PFE 58 with an appropriate token to identify the outbound shared interface.
PFE 58 receives packet 114 from switch fabric 52A, adds any L2 frame relay encapsulation that may be necessary to form an output packet 116, and places the output packet in packet buffer 94 for transmission via network interface 81.
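The inbound direction can be sketched in the same illustrative style; the DLCI-based demultiplexing, field names, and FIB contents below are hypothetical and simply mirror the sequence of steps described above.

```python
# Sketch (hypothetical helpers) of the inbound direction described above: the
# RSD's PFE demultiplexes traffic arriving on the shared interface to the
# owning PSD, and the PSD's PFE loops the packet through its tunnel PIC so it
# can be route-looked-up as ordinary inbound traffic and sent out one of the
# PSD's own network interfaces.
def rsd_demux(frame, dlci_to_psd):
    psd = dlci_to_psd[frame["dlci"]]               # e.g. DLCI 102 belongs to PSD2
    return {"fabric_dst": psd, "frame": frame}

def psd_ingress(fabric_unit, tunnel_cookie, fib):
    tunneled = {"cookie": tunnel_cookie, "inner": fabric_unit["frame"]}  # toward the tunnel PIC
    inbound = tunneled["inner"]                     # looped back with the cookie stripped
    out_ifc = fib[inbound["dst"]]                   # normal route lookup within the PSD
    return {"out_ifc": out_ifc, "payload": inbound["payload"]}

result = psd_ingress(
    rsd_demux({"dlci": 102, "dst": "192.0.2.1", "payload": b"data"}, {102: "psd2"}),
    tunnel_cookie=0x2A,
    fib={"192.0.2.1": "ge-2/0/0"},
)
assert result["out_ifc"] == "ge-2/0/0"
```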
Although described with respect to packets originating from external network interfaces, packets exchanged between RE 46A of PSD 62 and shared network interface 79 are processed in a similar manner, thus allowing the routing engine of the PSD to similarly communicate via the shared interface of the RSD.
The following sections illustrate exemplary configuration data provided by an administrator for defining a shared sonet interface (so-0/0/0) having frame relay encapsulation within an RSD. In particular, the following example configuration data defines frame relay identifiers (DLCIs) 100, 101 and 102 for the uplink, assigns DLCIs 100 and 101 to PSD1, and shares DLCI 102 with PSD2.
In addition, the following example configuration data defines interfaces that include a logical tunnel interface for each PSD. See, for example, ut-1/0/0 and ut-2/0/0 defined for PSD1 and PSD2, respectively. The configuration data further defines the interface list to include a pseudo interface (so-0/0/0) for the shared interface of the RSD. Moreover, the configuration data specifically identifies the interface as corresponding to a shared physical interface by way of the command "shared-uplink." The command "peer interface" establishes the pseudo interface as a peer interface to the logical tunnel interface defined within that PSD, thereby forming the internal tunnel between the two interfaces as described herein.
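Because the configuration text itself is not reproduced above, the following Python data-model sketch (which is not actual router configuration syntax) summarizes, under the DLCI and interface names given in the text, the relationships that the described configuration establishes.

```python
# Data-model sketch (not actual configuration syntax) of the relationships the
# example configuration described above establishes: a frame-relay uplink on
# so-0/0/0 whose DLCIs 100 and 101 belong to PSD1 and whose DLCI 102 belongs to
# PSD2, with a logical tunnel interface in each PSD peered to the shared uplink.
shared_uplink = {
    "interface": "so-0/0/0",
    "encapsulation": "frame-relay",
    "dlci_to_psd": {100: "PSD1", 101: "PSD1", 102: "PSD2"},
}
peer_interfaces = {
    "PSD1": {"tunnel_interface": "ut-1/0/0", "peer": "so-0/0/0"},  # pseudo interface in PSD1
    "PSD2": {"tunnel_interface": "ut-2/0/0", "peer": "so-0/0/0"},  # pseudo interface in PSD2
}

# Each (tunnel_interface, peer) pair forms the endpoints of the internal tunnel
# described above, and traffic for a given DLCI is delivered to its owning PSD.
assert shared_uplink["dlci_to_psd"][102] == "PSD2"
```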
Various embodiments of the invention have been described. As described herein, the hardware logical routers allow isolation of routing engines, their underlying operating system kernel, as well as forwarding resources. Each PSD has its own routing engine, maintains its own kernel state, configuration, and its own forwarding state that is isolated from those of other PSDs. Each PSD can be administered separately, runs its own software version, and can be rebooted independently of other PSDs and the RSD. The multiple logical routers are isolated from each other in terms of routing and forwarding functions yet allow network interfaces to be shared between the logical routers. Moreover, different logical routers can share network interfaces without impacting the ability of any of the logical routers to be independently scaled to meet the bandwidth demands of the customers serviced by the logical router.
Various modifications to the described embodiments may be made within the scope of the invention. For example, the techniques may be readily applied to allow a PSD to share multiple interfaces on an RSD over the same tunnel interface at the PSD. Alternatively, a PSD may use multiple tunnel interfaces to peer with (map to) the same shared interface on the RSD. Together, these options allow a system administrator to choose a desired relationship for mapping tunnel interfaces to shared interfaces based on bandwidth requirements, logical interface requirements, or other factors for each PSD of a multi-router system.
These and other embodiments are within the scope of the following claims.
This application is a continuation of U.S. application Ser. No. 12/618,536, filed Nov. 13, 2009, the entire content of which is incorporated herein by reference.