U.S. Patent application of David Ball et al. for DISTRIBUTED SOFTWARE ARCHITECTURE FOR IMPLEMENTING BGP, Ser. No. 10/677,797, filed Oct. 2, 2003, now published as U.S. Patent Publication No. 2005/0074003, published Apr. 7, 2005, and assigned to the assignee of this application, the contents of which are incorporated herein by reference.
This invention relates to the scheduling of the processing of reachability events in a unit, such as a router, employing the Border Gateway Protocol (BGP). More particularly, it relates to a system in which the unit scans for such events after dynamically adjustable intervals so as to minimize instabilities in the system and yet process these events without undue delay.
A typical network to which the invention applies comprises a large number of network nodes, e.g., workstations, organized in autonomous domains. Communications between logically bordering domains are, to some extent, organized by units such as routers that employ the Border Gateway Protocol. With this protocol a router communicates with a peer router in a neighboring domain by means of a connection such as TCP/IP to provide the latter router with the next-hop IP addresses of routers to which data intended for network nodes within the domain, or beyond, should be directed. The Border Gateway Protocol (BGP) is described in RFC 1771. Specifically, a BGP router advertises to its peers updates of the paths over which traffic should be directed to reach particular nodes located within the domain or through the domain to another domain.
The present invention relates to the processing of “reachability events”, i.e., changes in the status of units within the domain that may affect paths advertised to the peers. For example, a “next-hop” node within the domain, or another node further along the path to the recipient of messages, may have failed or otherwise become unavailable; or, having previously been unavailable, it may have become available again. Notifications of many, if not most, of these events are ordinarily received in messages transmitted by other nodes in the domain. To provide certainty, the BGP unit might periodically scan all the next-hop units in the paths advertised by the unit to its peers. However, the scanning interval would then have to be unduly long to cope with churning.
In accordance with the invention, the scanning interval is dynamic. The scanning process is based on the receipt of notices of reachability events. Initially the BGP unit assigns a standard delay interval between the receipt of a notice and the subsequent scan. The interval increases when reachability events are rapidly received, and it decreases when the time between received events increases. The rate at which the interval increases or decreases may be exponential or additive or any other desirable function of the rate at which reachability events are received. For example, when the unit first receives a notification of a reachability event, a “penalty” delay is added to the standard interval so that the next scan will not take place until a delay interval equal to the standard interval plus the penalty increment has expired after the first scan. If notification of another event is received before the next scan, another penalty increment is added to the interval after which the second scan can begin. Thus the interval between scans is dynamically increased in accordance with the rapidity with which event notifications are received. Such rapidity is generally an indication of churning. Therefore, by delaying the next scan, the BGP unit waits until the internal network has settled down.
Conversely, if the rate at which the reachability announcements are received by the BGP unit decreases sufficiently, the waiting time for the next scan will decrease until it decays to zero.
This arrangement applies to all reachability event notifications, whether they are received in unsolicited messages from other nodes in the domain or as the result of scans. Thus, if a scan uncovers one or more reachability events, the next scan will be delayed in accordance with the number of such events. Specifically, the penalty interval is applied to each of the events and the next scan is delayed accordingly.
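The penalty mechanism described above can be illustrated with a minimal sketch. The class and constant names, and the particular interval values, are illustrative assumptions and are not taken from the patent.

```python
# Hypothetical sketch of the penalty-based scan scheduling described above.
# STANDARD_INTERVAL and PENALTY are assumed illustrative values.

STANDARD_INTERVAL = 5.0   # seconds: base delay before the next scan
PENALTY = 2.0             # seconds: increment added per reachability event

class ScanScheduler:
    def __init__(self):
        self.scan_delay = STANDARD_INTERVAL

    def on_reachability_event(self):
        """Each event received before the next scan lengthens the wait,
        whether it arrived in an unsolicited message or was uncovered by
        a scan."""
        self.scan_delay += PENALTY
        return self.scan_delay

sched = ScanScheduler()
sched.on_reachability_event()   # first event: 5.0 + 2.0 = 7.0 s
sched.on_reachability_event()   # second event before the scan: 9.0 s
```

A scan that uncovers several events would simply call `on_reachability_event()` once per event, so the next scan is delayed in proportion to the number of events found.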
The invention description below refers to the accompanying drawings, of which:
The processors 210 are illustratively route processors (RPs), each having a dedicated memory 230. The memory 230 may comprise storage locations addressable by the processor for storing software programs and data structures associated with the distributed routing protocol architecture. Each processor 210 may comprise processing elements or logic for executing the software programs and manipulating the data structures. A router operating system 232, portions of which are typically resident in memory 230 and executed by the processor, functionally organizes the router by, inter alia, invoking network operations in support of software processes executing on the processor. It will be apparent to those skilled in the art that other processor and memory means, including various computer readable media, may be used for storing and executing program instructions pertaining to the inventive architecture described herein.
In the illustrative embodiment, each RP 210 comprises two central processing units (CPUs 220), e.g., Power-PC 7460 chips, configured as a symmetric multiprocessing (SMP) pair. The CPU SMP pair is adapted to run a single copy of the router operating system 232 and access its memory space 230. As noted, each RP has a memory space that is separate from the other RPs in the router 200. The processors communicate using an interprocess communication (IPC) mechanism. In addition, each line card 260 comprises an interface 270 having a plurality of ports coupled to a receive forwarding processor (FP Rx 280) and a transmit forwarding processor (FP Tx 290). The FP Rx 280 renders a forwarding decision for each packet received at the router on interface 270 of an ingress line card in order to determine to which RP 210 to forward the packet. To that end, the FP Rx renders the forwarding decision using an internal forwarding information base, IFIB, of a FIB 275. Likewise, the FP Tx 290 performs lookup operations (using FIB 275) on a packet transmitted from the router via interface 270 of an egress line card. In accordance with the invention, each FP Tx 290 also includes an adaptive timing unit 292 described below.
A key function of the interdomain router 200 is determining the next node to which a packet is sent; in order to accomplish such “routing,” the interdomain routers cooperate to determine best paths through the computer network 100. The routing function is preferably performed by an internetwork layer of a conventional protocol stack within each router.
The lower network interface layer 308 is generally standardized and implemented in hardware and firmware, whereas the higher layers are typically implemented in the form of software. The primary internetwork layer protocol of the Internet architecture is the IP protocol. IP is primarily a connectionless protocol that provides for internetwork routing, fragmentation and reassembly of exchanged packets—generally referred to as “datagrams” in an Internet environment—and which relies on transport protocols for end-to-end reliability. An example of such a transport protocol is the TCP protocol, which is implemented by the transport layer 304 and provides connection-oriented services to the upper layer protocols of the Internet architecture. The term TCP/IP is commonly used to denote the Internet architecture.
In particular, the internetwork layer 306 concerns the protocol and algorithms that interdomain routers utilize so that they can cooperate to calculate paths through the computer network 100. An interdomain routing protocol, such as the Border Gateway Protocol version 4 (BGP), is used to perform interdomain routing (for the internetwork layer) through the computer network. The interdomain routers 200 (hereinafter “peer routers”) exchange routing and reachability information among the autonomous systems over a reliable transport layer connection, such as TCP. An adjacency is a relationship formed between selected peer routers for the purpose of exchanging routing messages and abstracting the network topology. The BGP protocol uses the TCP transport layer 304 to ensure reliable communication of routing messages among the peer routers.
In order to perform routing operations in accordance with the BGP protocol, each interdomain router 200 maintains a routing table 800 that lists all feasible paths to a particular network. The routers further exchange routing information using routing update messages 400 when their routing tables change. The routing update messages are generated by an updating router to advertise best paths to each of its neighboring peer routers throughout the computer network. These routing updates allow the BGP routers of the autonomous systems to construct a consistent and up-to-date view of the network topology.
Specifically, the path attributes field 500 comprises a sequence of fields, each describing a path attribute in the form of a triple (i.e., attribute type, attribute length, attribute value).
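The triple structure of the path attributes field lends itself to straightforward sequential parsing. The sketch below assumes the common BGP wire encoding (a flags byte preceding the type code, with a one- or two-byte length depending on the Extended Length flag); the function name is an illustrative assumption.

```python
# Hedged sketch: walking the (attribute type, attribute length, attribute
# value) triples of a path attributes field. Layout assumptions: one flags
# byte, one type-code byte, then a 1-byte length (or 2-byte length when the
# Extended Length flag bit 0x10 is set), then the value.

def parse_path_attributes(data: bytes):
    """Return a list of (attr_type, value_bytes) pairs."""
    attrs = []
    i = 0
    while i < len(data):
        flags, attr_type = data[i], data[i + 1]
        if flags & 0x10:  # Extended Length bit: two-byte length field
            length = int.from_bytes(data[i + 2:i + 4], "big")
            i += 4
        else:
            length = data[i + 2]
            i += 3
        attrs.append((attr_type, data[i:i + length]))
        i += length
    return attrs

# Example: an ORIGIN attribute (type code 1) with flags 0x40,
# length 1, and value 0.
assert parse_path_attributes(bytes([0x40, 1, 1, 0])) == [(1, b"\x00")]
```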
BGP Architecture
The BGP protocol runs inbound policy on all routes “learned” for each connection 602 and those routes that match are stored in an Adj-RIB-In 610 unique to that connection. Additional inbound policy 650 (filtering) is then applied to those stored routes, with a potentially modified route being installed in the loc-RIB 620. The loc-RIB 620 is generally responsible for selecting the best route per prefix from the union of all policy-modified Adj-RIB-In routes, resulting in routes referred to as “best paths”. The set of best paths is then installed in the global RIB 630, where they may contend with routes from other protocols to become the “optimal” path ultimately selected for forwarding. Thereafter, the set of best paths have outbound policy 660 run on them, the result of which is placed in appropriate Adj-RIB-Outs 640 and announced to the respective peers via the same TCP connections 602 from which routing update messages 400 were learned.
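The inbound half of this pipeline, from a per-connection Adj-RIB-In through inbound policy into the loc-RIB, can be sketched as follows. The function names are illustrative assumptions, policy is modeled as a simple callable that filters or modifies a route, and a single "cost" field stands in for the full BGP best-path comparison.

```python
# Illustrative sketch of one stage of the RIB pipeline described above:
# apply inbound policy to a peer's Adj-RIB-In and install the surviving
# routes in the loc-RIB, keeping the best path per prefix. All names and
# the single-cost comparison are simplifying assumptions.

def process_learned_routes(adj_rib_in, inbound_policy, loc_rib):
    """Install policy-modified routes in loc_rib, best path per prefix."""
    for prefix, route in adj_rib_in.items():
        modified = inbound_policy(route)
        if modified is None:          # route filtered out by policy
            continue
        best = loc_rib.get(prefix)
        if best is None or modified["cost"] < best["cost"]:
            loc_rib[prefix] = modified
    return loc_rib

adj_rib_in = {"10.0.0.0/8": {"cost": 20}, "192.168.0.0/16": {"cost": 5}}
policy = lambda r: None if r["cost"] > 10 else r   # drop expensive routes
loc_rib = process_learned_routes(adj_rib_in, policy, {})
# loc_rib now holds only the 192.168.0.0/16 route
```

The outbound half (outbound policy into per-peer Adj-RIB-Outs) would mirror this shape in reverse, running per peer over the set of best paths.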
Many of the functions or tasks performed within the BGP protocol are performed on distinct subsets of routing data, independently from one another. These tasks include (1) tracking the state of each peer according to the BGP Finite State Machine (FSM), described in draft-ietf-idr-bgp4-20.txt (Section 8), and responding to FSM events, (2) parsing update messages 400 received from each peer and placing them in an Adj-RIB-In 610 for that peer (Section 3), and (3) applying inbound policy 650 for the peer to filter or modify the received updates in the Adj-RIB-In. The BGP implementation also (4) calculates the best path for each prefix in the set of Adj-RIB-Ins and places those best paths in the loc-RIB 620 (Section 9). As the number of peers increases, the number of paths per prefix also increases and, hence, this calculation becomes more complex. Additional tasks performed by the BGP implementation include (5) applying outbound policy 660 for each peer on all the selected paths in the loc-RIB to filter or modify those paths, and placing the filtered and modified paths in an Adj-RIB-Out 640 for that peer, as well as (6) formatting and sending update messages 400 to each peer based on the routes in the Adj-RIB-Out for that peer.
Tasks (1), (2), and (3) are defined per peer and operate on routing data learned only from that peer. Performing any of these tasks for a given peer is done independently of performing the same task for any other peer. Task (4) examines all paths from all peers, in order to insert them into the loc-RIB and determine the best path for each prefix. Tasks (5) and (6), like tasks (1), (2) and (3), are defined per peer. While both tasks (5) and (6) must access the set of best paths determined in task (4), they generate routing data for each peer independently of all of the other peers. Thus, the autonomy of each subset of the data and the tasks performed on them lend themselves to distribution across processes or threads in an n-way SMP router, or across nodes in a cluster, so long as each task has access to the required data. The required data includes (i) inbound routes from the peer for tasks (1), (2) and (3); (ii) all paths in all the Adj-RIB-Ins for task (4); and (iii) a set of best paths for tasks (5) and (6).
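The distribution argument above can be sketched in miniature: the per-peer inbound tasks run independently and in parallel, while the best-path computation consumes all of the resulting Adj-RIB-Ins. All names, the trivial policy, and the use of a thread pool (standing in for SMP threads or cluster nodes) are illustrative assumptions.

```python
# Minimal sketch of the task distribution described above. Per-peer tasks
# (2)+(3) run in parallel; task (4) then runs over all Adj-RIB-Ins. The
# single-metric routes and the "drop negative metrics" policy are assumed
# simplifications for illustration.

from concurrent.futures import ThreadPoolExecutor

def inbound_tasks(peer, updates):
    """Tasks (2)+(3) for one peer: build its policy-filtered Adj-RIB-In."""
    return {prefix: m for prefix, m in updates.items() if m >= 0}

def best_path_task(adj_rib_ins):
    """Task (4): select the per-prefix best path over all peers' RIBs."""
    loc_rib = {}
    for rib in adj_rib_ins:
        for prefix, metric in rib.items():
            if prefix not in loc_rib or metric < loc_rib[prefix]:
                loc_rib[prefix] = metric
    return loc_rib

peers = {"peerA": {"10.0.0.0/8": 3},
         "peerB": {"10.0.0.0/8": 1, "172.16.0.0/12": -1}}
with ThreadPoolExecutor() as pool:  # per-peer tasks are independent
    ribs = list(pool.map(lambda kv: inbound_tasks(*kv), peers.items()))
loc_rib = best_path_task(ribs)
```

Here peerB's filtered route never reaches task (4), and the best-path selection over the remaining Adj-RIB-Ins requires no coordination between the per-peer stages, which is the property that makes the distribution safe.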
The present invention relates to intra-domain notifications received by the local RIB 620 relating to reachability events. As pointed out above, these notifications may be transmitted spontaneously from other nodes within the domain or they may be responses (or non-responses) to scanning of the next-hop nodes by the Internal Border Gateway Protocol (IBGP). Ultimately, these events are processed by the BGP to generate route updates which are then advertised to peers in other domains.
Scanning is scheduled as follows, with reference to
The receipt of a rapid succession of reachability events is an indication of churning and the increasing interval between scans helps to reduce this problem. On the other hand, whenever a scan is performed, the scan delay is reduced, preferably by the amount of the most recent increment that was added to the delay interval. Accordingly, when the rate at which the notifications are received decreases, the scanning interval also decreases. Preferably, the scanning delay also decreases with time. If it ultimately decays to zero, the next scan will be performed immediately upon the next notification of a reachability event.
The invention is easily implemented. Assume that the present scan delay is recorded in a memory location, as is the last delay increment included in the scan delay.
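Under that assumption of two memory locations, one holding the present scan delay and one holding the last increment added, the mechanism might be sketched as follows. The penalty size and decay rate are illustrative assumed values, not figures from the patent.

```python
# Hedged sketch of the two-memory-location implementation suggested above:
# scan_delay holds the present scan delay; last_increment holds the most
# recent increment added to it. PENALTY and DECAY are assumed values.

import time

class AdaptiveScanTimer:
    PENALTY = 2.0   # seconds added per event notification (assumed)
    DECAY = 0.5     # seconds of delay shed per second of quiet (assumed)

    def __init__(self):
        self.scan_delay = 0.0       # present scan delay (location 1)
        self.last_increment = 0.0   # last increment added (location 2)
        self.last_event = time.monotonic()

    def on_event(self):
        """Event notification received: add a penalty increment."""
        self._decay()
        self.scan_delay += self.PENALTY
        self.last_increment = self.PENALTY

    def on_scan_done(self):
        """Scan performed: back the delay off by the most recent
        increment, never below zero."""
        self.scan_delay = max(0.0, self.scan_delay - self.last_increment)
        self.last_increment = 0.0

    def _decay(self):
        """Delay also decays with quiet time; at zero, the next event
        triggers an immediate scan after only the base handling."""
        now = time.monotonic()
        self.scan_delay = max(0.0, self.scan_delay
                              - self.DECAY * (now - self.last_event))
        self.last_event = now

t = AdaptiveScanTimer()
t.on_event(); t.on_event()   # two rapid events: delay grows to ~4.0 s
t.on_scan_done()             # after the scan, delay backs off to ~2.0 s
```

Rapid events thus push the delay up, each completed scan and each quiet interval pull it back down, and once it decays to zero the next notification is scanned without added delay, matching the behavior described above.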
As shown in
As shown in
Number | Name | Date | Kind |
---|---|---|---|
4910733 | Sommani et al. | Mar 1990 | A |
5276680 | Messenger | Jan 1994 | A |
5400329 | Tokura et al. | Mar 1995 | A |
5917820 | Rekhter | Jun 1999 | A |
6006016 | Faigon et al. | Dec 1999 | A |
6101194 | Annapareddy et al. | Aug 2000 | A |
6269099 | Borella et al. | Jul 2001 | B1 |
6339595 | Rekhter et al. | Jan 2002 | B1 |
6463061 | Rekhter et al. | Oct 2002 | B1 |
6553423 | Chen | Apr 2003 | B1 |
6628614 | Okuyama et al. | Sep 2003 | B2 |
6987728 | Deshpande | Jan 2006 | B2 |
6990070 | Aweya et al. | Jan 2006 | B1 |
7006821 | Tee | Feb 2006 | B2 |
7263078 | Krantz et al. | Aug 2007 | B2 |
7280537 | Roy | Oct 2007 | B2 |
20020044549 | Johansson et al. | Apr 2002 | A1 |
20020177910 | Quarterman et al. | Nov 2002 | A1 |
20030023701 | Norman et al. | Jan 2003 | A1 |
20030043796 | Okuyama et al. | Mar 2003 | A1 |
20030156603 | Rakib et al. | Aug 2003 | A1 |
20030202501 | Jang | Oct 2003 | A1 |
20040120278 | Krantz et al. | Jun 2004 | A1 |
20050068968 | Ovadia et al. | Mar 2005 | A1 |
20050074001 | Mattes et al. | Apr 2005 | A1 |
20050074003 | Ball et al. | Apr 2005 | A1 |
20060126495 | Guichard et al. | Jun 2006 | A1 |
20060180438 | Mosli et al. | Aug 2006 | A1 |
20060182038 | Nalawade et al. | Aug 2006 | A1 |
Number | Date | Country |
---|---|---|
06 71 9874 | May 2009 | EP |
Number | Date | Country | |
---|---|---|---|
20060182115 A1 | Aug 2006 | US |