The invention relates to the field of communication networks such as multi-protocol label switching (MPLS) networks and, more particularly but not exclusively, to resource overload detection and management mechanisms.
Multiprotocol Label Switching (MPLS) enables efficient delivery of a wide variety of differentiated, end-to-end services. MPLS traffic engineering (TE) provides a mechanism for selecting efficient paths across an MPLS network based on bandwidth considerations and administrative rules. Each label switching router maintains a TE link state database reflecting the current network topology. Once a path is computed, TE is used to maintain a forwarding state along that path.
In the case of Resource Reservation Protocol (RSVP) Inter-Domain TE-LSPs, a router or other network element or node may experience a resource overutilization condition (i.e., insufficient memory, processor, input/output or other resources) in response to receiving a large number of RSVP packets. Such a condition may result in the RSVP/MPLS process temporarily dropping RSVP packets to conserve resources. If the condition persists, the node may start tearing down existing MPLS-TE LSPs to release resources, which in turn may lead to service interruption in a service provider network.
Various deficiencies in the prior art are addressed by systems, methods, apparatus, mechanisms, telecom network elements and the like for managing MPLS-TE loading, such as by detecting, responding to, and otherwise managing MPLS-TE loading conditions in a manner adapted to minimize service impact. Various embodiments provide a mechanism for alerting other routers, network elements or nodes in an MPLS-TE domain so that they may avoid using an overloaded node in subsequent new MPLS-TE LSP path computations.
Various embodiments are directed toward propagating information indicative of an MPLS-TE overload condition. In particular, upon detecting such a condition, various MPLS/RSVP tasks inform a routing protocol (e.g., OSPF, IS-IS and the like) of this state. The routing protocol in turn communicates the overload condition to the nodes in the MPLS-TE routing domain by inserting a new flag or bit value in an OSPF Router Information Capability TLV (if using OSPF) or an IS-IS Router Capability TLV (if using IS-IS).
A method for managing MPLS-TE loading according to one embodiment comprises: monitoring a utilization level of a label switch router (LSR) associated with one or more label switched paths (LSPs); and, in response to said utilization level being indicative of an MPLS-TE overload condition, transmitting an overload message toward an Interior Gateway Protocol (IGP), said overload message adapted to cause said IGP to advertise said overload condition.
The utilization level may be associated with one or more of a memory utilization level, a central processing unit (CPU) utilization level, an input/output utilization level, a number of received RSVP packets, a rate of RSVP packet reception, a number of dropped RSVP packets and a rate of dropped RSVP packets.
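By way of example only, the following Python sketch illustrates how such a composite utilization check might be structured. The data structure, metric names and threshold values are hypothetical and provided solely for illustration.

```python
# Illustrative sketch only: metric names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class UtilizationSample:
    memory_pct: float       # memory utilization, percent
    cpu_pct: float          # CPU utilization, percent
    io_pct: float           # input/output utilization, percent
    rsvp_rx_rate: float     # RSVP packets received per second
    rsvp_drop_rate: float   # RSVP packets dropped per second

# Hypothetical per-metric overload thresholds.
THRESHOLDS = {
    "memory_pct": 90.0,
    "cpu_pct": 95.0,
    "io_pct": 90.0,
    "rsvp_rx_rate": 5000.0,
    "rsvp_drop_rate": 1.0,
}

def is_te_overloaded(sample: UtilizationSample) -> bool:
    """Return True if any monitored metric meets or exceeds its threshold."""
    return any(getattr(sample, name) >= limit
               for name, limit in THRESHOLDS.items())
```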
The teachings herein can be readily understood by considering the following detailed description in conjunction with the accompanying drawings, in which:
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures.
Various embodiments provide systems, methods and/or apparatus for detecting, responding to, and otherwise managing MPLS-TE overload conditions in a manner adapted to minimize service impact.
Generally speaking, various embodiments are directed toward propagating information indicative of an MPLS-TE overload condition. In particular, upon detecting such a condition, various MPLS/RSVP tasks inform a routing protocol (e.g., OSPF, IS-IS and the like) of this state. The routing protocol in turn communicates the overload condition to the nodes in the MPLS-TE routing domain by inserting a new flag or bit value in an OSPF Router Information Capability TLV (if using OSPF) or an IS-IS Router Capability TLV (if using IS-IS).
As depicted in
The nodes 110 are configured for transporting traffic within the network 102. The nodes 110 may transport traffic within network 102 using any suitable protocols (e.g., Internet Protocol (IP), MPLS, and the like, as well as various combinations thereof).
The nodes 110 are configured to collect link state information associated with the communication link(s) 120 to which each node 110 is connected. The nodes 110 are further configured to flood the collected link state information within network 102.
In one embodiment, the collection and flooding of link state information is performed using a link-state Interior Gateway Protocol (IGP), such as Open Shortest Path First (OSPF), Intermediate System to Intermediate System (IS-IS), or any other suitable protocol. In this manner, each node 110 receives link state information associated with network 102 and, thus, each node 110 is able to maintain a database including information suitable for use in computing paths (e.g., network topology information, link state information, and the like). This type of database is typically referred to as a Traffic Engineering (TE) database. The nodes 110 also may be configured to store link constraints for use in computing paths for network 102.
The link constraints may include any suitable link constraints which may be evaluated within the context of path computation. For example, the link constraints may include one or more of a link utilization for the link, a minimum link capacity required for a link, a maximum link bandwidth allowed for a link, a link cost associated with a link, an administrative constraint associated with the link, and the like, as well as various combinations thereof.
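By way of example only, the following Python sketch illustrates how configured link constraints might be evaluated against a single TE-database link entry during path computation. The dictionary keys and constraint names are hypothetical.

```python
# Illustrative sketch only: the link and constraint keys are hypothetical.
def link_admissible(link: dict, constraints: dict) -> bool:
    """Check one TE-database link entry against configured constraints."""
    if link["unreserved_bw"] < constraints.get("min_capacity", 0):
        return False                          # insufficient link capacity
    if link["bw"] > constraints.get("max_bandwidth", float("inf")):
        return False                          # exceeds allowed bandwidth
    if link["admin_groups"] & constraints.get("exclude_groups", 0):
        return False                          # excluded administrative group
    return True
```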
The link constraints may be configured on the nodes 110 in any suitable manner. For example, the link constraints may be pre-configured on the nodes 110 (e.g., automatically and/or by administrators), specified when requesting path computation or establishment, and the like, as well as various combinations thereof. In such embodiments, the link constraints may be provided to the nodes 110, for storage on the nodes 110, from any suitable source(s) of link constraints (e.g., a management system such as MS 130, or any other suitable source).
Although primarily depicted and described herein with respect to embodiments in which link constraints are configured on the nodes 110, in other embodiments the link constraints may not be stored on the nodes 110. For example, in embodiments in which path computation is performed by a device or devices other than nodes 110 (e.g., by a management system, such as MS 130), link constraints may only be available to the device(s) computing the paths.
In network 102, at least a portion of the nodes 110 may be configured to operate as ingress nodes into network 102 and, similarly, at least a portion of the nodes 110 may be configured to operate as egress nodes from network 102. In
As each of the nodes 110 may be configured to operate as an ingress node and/or as an egress node, each node 110 configured to operate as an ingress node may be referred to as an ingress node 110 and each node 110 configured to operate as an egress node may be referred to as an egress node 110.
In one embodiment, the ingress nodes 110 each are configured for computing paths to egress nodes 110, thereby enabling establishment of connections, from the ingress nodes 110 to the egress nodes 110, configured for transporting traffic via the network 102. The ingress nodes 110, in response to path computation requests, compute the requested paths based on the network information (e.g., network topology, link state, and the like, which may be available in a TE database and/or any other suitable database or databases) and link constraints available to the ingress nodes 110, respectively. The ingress nodes 110, upon computation of paths, may then initiate establishment of connections using the computed paths. The ingress nodes 110 may then transmit information to the egress nodes 110 via the established connections, at which point the egress nodes 110 may then forward the information to other networks and devices.
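By way of example only, the following Python sketch outlines a constrained shortest path computation of the general kind an ingress node might perform: Dijkstra's algorithm run over the TE database, with inadmissible links pruned via a supplied constraint test. The graph representation is illustrative only.

```python
# Illustrative sketch only: `graph` maps node -> [(neighbor, cost, link), ...].
import heapq

def cspf(graph, src, dst, admissible):
    """Constrained shortest path: Dijkstra over links passing `admissible`."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, cost, link in graph.get(u, ()):
            if not admissible(link):
                continue                  # constraint pruning
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None                       # no constraint-satisfying path
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]
```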
In one embodiment, MS 130 is configured for computing paths from ingress nodes 110 to egress nodes 110, thereby enabling establishing of connections, from the ingress nodes 110 to the egress nodes 110, configured for transporting traffic via the network 102. The MS 130, in response to path computation requests, computes the requested paths based on the network information (e.g., network topology, link state, and the like, which may be available in a TE database and/or any other suitable database or databases) and link constraints available to MS 130. The MS 130, upon computing a path, transmits path configuration information for the computed path to the relevant nodes 110, where the path configuration information may be used to establish a connection via the computed path within network 102. The ingress node 110 of the computed path may then transmit information to the egress node 110 via the connection, at which point the egress node 110 may then forward the information to other networks and devices.
In various embodiments, the network 102 comprises an MPLS network in which nodes 110 are label switching routers (LSRs) operating according to Multi-Protocol Label Switching (MPLS) Label Distribution Protocol (LDP).
At step 210, an LSP is established between an ingress node and an egress node. Referring to box 215, the established LSP further supports upstream and downstream messages between the various LSRs. In particular, an RSVP Path message is propagated downstream toward the egress node while RSVP Resv messages are propagated upstream toward the ingress node.
At step 220, at one or more of the egress (or transit) nodes or LSRs forming the LSP, resource utilization is monitored to determine if an MPLS-TE overload condition exists or is imminent. Referring to box 225, this determination may be made with respect to memory, CPU, input/output or other resources, a number of received RSVP packets, a rate of RSVP packet reception, a number of dropped RSVP packets, the rate at which RSVP packets are dropped, one or more resource utilization threshold levels and/or other mechanisms. For example, an MPLS/RSVP task processing mechanism at the node or LSR continually polls its system resource utilization and/or RSVP packet reception statistics.
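By way of example only, the following Python sketch illustrates such a polling loop. The callable hooks for reading statistics, applying thresholds and notifying the IGP are hypothetical placeholders for node-specific mechanisms.

```python
# Illustrative sketch only: all callables are hypothetical hooks.
import time

def poll_for_overload(read_sample, is_overloaded, notify_igp,
                      interval_s: float = 1.0):
    """Poll resource/RSVP statistics; report state changes to the IGP."""
    overloaded = False
    while True:
        sample = read_sample()            # current utilization statistics
        now_overloaded = is_overloaded(sample)
        if now_overloaded != overloaded:  # report only on state change
            overloaded = now_overloaded
            notify_igp(overloaded)        # e.g., set/clear TE-overload flag
        time.sleep(interval_s)
```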
At step 230, in response to a determination that an MPLS-TE overload condition exists or is imminent at a particular node or LSR, the MPLS/RSVP task processing mechanism (or other mechanism) at the node or LSR informs the IGP of the overload condition.
At step 240, the IGP advertises the MPLS-TE overload condition to routers within the MPLS domain. Referring to box 245, such advertising is performed via a new or predefined flag or bit setting within an IGP router capability TLV or sub-TLV, such as those of the Open Shortest Path First (OSPF) routing protocol, the Intermediate System to Intermediate System (IS-IS) routing protocol and the like. Other IGP advertising mechanisms may also be used. Further, other types of IGP may also be used.
Various embodiments described herein utilize IGP advertising mechanisms conforming to IS-IS CAPABILITY TLVs and sub-TLVs such as those described in more detail in the Internet Engineering Task Force (IETF) document “IS-IS Extensions for Advertising Router Information.” As defined therein, the IS-IS Router CAPABILITY TLV is composed of 1 octet for the type, 1 octet specifying the number of bytes in the value field, and a variable length value field starting with 4 octets of Router ID, indicating the source of the TLV, followed by 1 octet of flags. A set of optional sub-TLVs may follow the flag field. Sub-TLVs are formatted as described in IETF Request for Comment (RFC) 3784. Various embodiments use assigned or unassigned bits or flags within a value field (or other fields) to indicate an overload condition.
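By way of example only, the following Python sketch encodes an IS-IS Router CAPABILITY TLV carrying an overload indication in the flags octet. The particular flag bit chosen is hypothetical; an actual assignment would be subject to IANA allocation.

```python
# Illustrative sketch only: the TE-overload flag bit is hypothetical.
import struct

ISIS_ROUTER_CAP_TLV = 242   # Router CAPABILITY TLV type (RFC 4971)
TE_OVERLOAD_FLAG = 0x04     # hypothetical, currently unassigned flag bit

def build_isis_router_cap_tlv(router_id: bytes, overloaded: bool,
                              base_flags: int = 0) -> bytes:
    """Encode a Router CAPABILITY TLV with an optional TE-overload flag."""
    assert len(router_id) == 4            # value starts with 4-octet Router ID
    flags = base_flags | (TE_OVERLOAD_FLAG if overloaded else 0)
    value = router_id + struct.pack("!B", flags)
    return struct.pack("!BB", ISIS_ROUTER_CAP_TLV, len(value)) + value
```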
Various embodiments described herein utilize IGP advertising mechanisms conforming to OSPF CAPABILITY TLVs and sub-TLVs such as those described in more detail in IETF RFC 4970. The format of the Router Informational Capabilities TLV includes a “value” field comprising a variable length sequence of capability bits, rounded to a multiple of 4 octets and padded with undefined bits. Various embodiments use assigned or unassigned bits or flags within the value field (or other fields) to indicate an overload condition.
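By way of example only, the following Python sketch encodes an OSPF Router Informational Capabilities TLV with an overload indication. The capability bit position is hypothetical and would likewise require formal assignment.

```python
# Illustrative sketch only: the TE-overload capability bit is hypothetical.
import struct

RI_CAPABILITIES_TLV = 1     # Router Informational Capabilities TLV (RFC 4970)
TE_OVERLOAD_BIT = 10        # hypothetical, currently unassigned bit position

def build_ospf_ri_capabilities_tlv(capability_bits: int,
                                   overloaded: bool) -> bytes:
    """Encode the TLV; bit 0 is the most significant bit of the first octet."""
    bits = capability_bits
    if overloaded:
        bits |= 1 << (31 - TE_OVERLOAD_BIT)   # set bit, MSB-first numbering
    value = struct.pack("!I", bits)           # one 4-octet word of bits
    return struct.pack("!HH", RI_CAPABILITIES_TLV, len(value)) + value
```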
For example, upon determining that an MPLS-TE overload condition exists or is imminent, an MPLS/RSVP task processing mechanism at the node or LSR may inform the IGP of the overload condition. The IGP in turn advertises this condition by adapting or setting to a first state a flag or bit of an OSPF Router Information Capability TLV, an IS-IS Router Capability TLV, another TLV, an existing LSP attribute and the like.
Various embodiments are adapted to propagate information indicative of an MPLS-TE overload condition upstream to a head-end router (such as an ingress LSR, ABR and the like), thereby causing the head-end router to initiate or trigger a reroute (if desired) of one or more LSPs supported by transit or egress LSRs. A head-end router receiving information indicative of a downstream MPLS-TE overload condition may request rerouting for any existing TE-LSP transiting the overloaded egress (or transit) nodes or LSRs forming the TE-LSP. Suitable mechanisms for requesting rerouting exist, including those described in more detail in various Internet Engineering Task Force (IETF) Requests for Comment (RFCs), such as RFC 5710 (PathErr Message Triggered MPLS and GMPLS LSP Reroutes).
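By way of example only, the following Python sketch illustrates how a head-end router might identify existing TE-LSPs transiting an overloaded node as candidates for rerouting. The LSP database structure is hypothetical; the actual reroute signaling would follow existing mechanisms such as those of RFC 5710.

```python
# Illustrative sketch only: `lsp_db` maps LSP name -> recorded route
# (an ordered list of router IDs, head end first).
def select_lsps_for_reroute(lsp_db: dict, overloaded_router_id: str) -> list:
    """Return the names of LSPs transiting the overloaded router."""
    return [name for name, route in lsp_db.items()
            if overloaded_router_id in route[1:]]   # skip the head end itself
```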
Generally speaking, when performing a path computation for any new TE-LSP, the head-end router should, if possible, avoid a router advertising an MPLS-TE overload condition. In this manner, for existing or new MPLS-TE LSPs associated with an overloaded router, one or more head-end routers operate to reduce the RSVP load (resource load) associated with the overloaded router. As a result, the resources of the overloaded router are conserved such that existing RSVP sessions may quickly return to a normal working state.
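By way of example only, the following Python sketch prunes routers advertising an overload condition from the topology before a new path computation, complementing the illustrative CSPF sketch above; the graph representation is the same hypothetical one used there.

```python
# Illustrative sketch only: uses the same hypothetical graph layout as
# the CSPF sketch, node -> [(neighbor, cost, link), ...].
def prune_overloaded(graph: dict, overloaded_nodes: set) -> dict:
    """Return a copy of the TE graph omitting overloaded routers."""
    return {
        node: [(v, cost, link) for v, cost, link in adj
               if v not in overloaded_nodes]
        for node, adj in graph.items()
        if node not in overloaded_nodes
    }
```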
Thus, in various embodiments, an IGP-advertised overload condition operates to inhibit other LSRs from routing new LSPs through an overloaded LSR. Similarly, an IGP-advertised non-overload condition operates to enable other LSRs to route existing and new LSPs through the non-overloaded LSR.
At step 250, in response to a determination that an MPLS-TE overload condition no longer exists or is no longer imminent at a particular node or LSR, the MPLS/RSVP task processing mechanism (or other mechanism) at the node or LSR informs the IGP of the non-overload condition. The IGP advertises the MPLS-TE non-overload condition to routers within the MPLS domain in a manner similar to that described above with respect to step 240.
For example, upon determining that an MPLS-TE overload condition no longer exists or is no longer imminent, an MPLS/RSVP task processing mechanism at the node or LSR may inform the IGP of the non-overload condition. The IGP in turn advertises this condition by adapting or setting to a second state a flag or bit of an OSPF Router Information Capability TLV, an IS-IS Router Capability TLV, another TLV, an existing LSP attribute and the like.
Thus, in one embodiment, a node entering an MPLS-TE overloaded state informs the IGP of this state such that the IGP advertises the overload state to all the nodes in the MPLS-TE domain by, illustratively, setting an MPLS-TE overload flag or bit in a corresponding TLV or sub-TLV. Similarly, a node exiting an MPLS-TE overloaded state (i.e., returning to a normal state) informs the IGP of this state such that the IGP advertises the normal state to all the nodes in the MPLS-TE domain by, illustratively, resetting the MPLS-TE overload flag or bit in the corresponding TLV or sub-TLV.
Thus, the IGP-advertised overload condition is adapted to cause other LSRs to reroute existing LSPs around an overloaded LSR and/or to avoid routing new LSPs through the overloaded LSR. Similarly, an IGP-advertised non-overload condition is adapted to cause other LSRs to again route existing or new LSPs through a previously overloaded LSR if such routing is appropriate in terms of cost constraints, path management criteria and so on.
As depicted in
It will be appreciated that the functions depicted and described herein may be implemented in software and/or in a combination of software and hardware, e.g., using a general purpose computer, one or more application specific integrated circuits (ASIC), and/or any other hardware equivalents. In one embodiment, the cooperating process 305 can be loaded into memory 304 and executed by processor 303 to implement the functions as discussed herein. Thus, cooperating process 305 (including associated data structures) can be stored on a computer readable storage medium, e.g., RAM memory, magnetic or optical drive or diskette, and the like.
It will be appreciated that computing device 300 depicted in
It is contemplated that some of the steps discussed herein as software methods may be implemented within hardware, for example, as circuitry that cooperates with the processor to perform various method steps. Portions of the functions/elements described herein may be implemented as a computer program product wherein computer instructions, when processed by a computing device, adapt the operation of the computing device such that the methods and/or techniques described herein are invoked or otherwise provided. Instructions for invoking the inventive methods may be stored in tangible and non-transitory computer readable medium such as fixed or removable media or memory, transmitted via a tangible or intangible data stream in a broadcast or other signal bearing medium, and/or stored within a memory within a computing device operating according to the instructions.
Although various embodiments which incorporate the teachings of the present invention have been shown and described in detail herein, those skilled in the art can readily devise many other varied embodiments that still incorporate these teachings. Thus, while the foregoing is directed to various embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. As such, the appropriate scope of the invention is to be determined according to the claims.
This application claims the benefit of U.S. Provisional Patent Application Ser. No. 61/653,219, filed May 30, 2012, entitled TE-LSP SYSTEMS AND METHODS (Attorney Docket No. 811458-PSP) which application is incorporated herein by reference in its entirety.