Multiprotocol label switching (MPLS) is a connection-oriented routing technique used in data networks to direct data from one node to the next based on path labels rather than network addresses (e.g., Internet Protocol (IP) addresses). Use of path labels, instead of network addresses, avoids complex routing table lookups.
IP is a connectionless routing technique in which routers route packets based on destination and/or source IP addresses. The forwarding decision at each router is made on a hop-by-hop basis without setting up a connection. In a connectionless routing technique, bandwidth typically cannot be reserved for a given packet session. In an IP network, each router routes each received packet by matching the destination IP address against entries in its routing table using a longest prefix matching algorithm.
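The longest prefix matching described above can be sketched as follows. This is a minimal illustration, not taken from the source document; the route table and next-hop names are hypothetical, and a production router would use a trie rather than a linear scan.

```python
# Minimal sketch of longest prefix matching: among all prefixes that
# contain the destination address, pick the most specific (longest) one.
import ipaddress

# Hypothetical forwarding table: prefix -> next hop.
ROUTES = {
    "10.0.0.0/8": "hop-A",
    "10.1.0.0/16": "hop-B",
    "10.1.2.0/24": "hop-C",
}

def longest_prefix_match(dst_ip):
    """Return the next hop for the most specific matching prefix, or None."""
    dst = ipaddress.ip_address(dst_ip)
    best, best_len = None, -1
    for prefix, next_hop in ROUTES.items():
        net = ipaddress.ip_network(prefix)
        if dst in net and net.prefixlen > best_len:
            best, best_len = next_hop, net.prefixlen
    return best

print(longest_prefix_match("10.1.2.3"))   # most specific match is /24 -> hop-C
print(longest_prefix_match("10.9.9.9"))   # only the /8 matches -> hop-A
```

Because every prefix containing the destination must be considered, this lookup is inherently more expensive per packet than the fixed-label lookup MPLS performs.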
MPLS, being a connection-oriented packet forwarding mechanism, forwards packets based on a fixed-length, short label that corresponds to a label switched path (LSP) previously established via signaling between an ingress node and an egress node in the network, and via the intermediate nodes on the path between them. Forwarding attributes (e.g., bandwidth) for the virtual link (i.e., the label switched path) are typically negotiated during the connection set-up signaling. MPLS therefore introduces signaling overhead to establish the label switched path, but subsequent packet forwarding is simpler and incurs less delay relative to IP.
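The per-hop forwarding simplification described above can be sketched as a single exact-match table lookup. This is an illustrative sketch, not from the source; the label values and interface names are hypothetical.

```python
# Sketch of MPLS label-swap forwarding at one LSR: a single O(1)
# exact-match lookup on the incoming label replaces the longest-prefix
# search an IP router would perform for every packet.

# Hypothetical label forwarding table for one LSR:
# incoming label -> (outgoing label, egress interface)
LABEL_TABLE = {
    100: (200, "eth1"),
    101: (201, "eth2"),
}

def forward(in_label):
    """Swap the incoming label and select the egress interface."""
    out_label, egress = LABEL_TABLE[in_label]
    return out_label, egress

print(forward(100))  # (200, 'eth1')
```

The label values themselves are negotiated per-hop when the LSP is signaled, which is where the set-up overhead mentioned above comes from.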
The following detailed description refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. The following detailed description does not limit the invention.
Exemplary embodiments described herein implement a technique for mid-span manual or automated re-optimization of traffic engineered label switched paths within a network that uses a label switching protocol (e.g., MPLS). Using mid-span re-optimization of label switched paths, the exemplary embodiments enable a network operator to move traffic quickly, in a network using a label switching protocol, to prepare for link or nodal maintenance. Once maintenance is complete, the network operator may move the traffic back, restoring the label switched path(s) to the optimal (e.g., least-latency) path(s). When applied globally to a switch/router, the exemplary embodiments can be used to proactively trigger network re-optimization periodically on a per-switch/router basis.
Network administration node 110 may include a network device (e.g., a server) associated with a network administrator or network operator, such as, for example, a network operations center (NOC) network device. A network administrator may send, via network administration node 110, instructions to provider edge (PE) label edge routers (LERs) 150-1 through 150-2 and intermediate label switch routers (LSRs) 160-1 through 160-q. The instructions may include, for example, instructions for manually setting metrics associated with links between the LSRs of network 140, or for sending link failure simulation instructions, as described further herein, to intermediate LSRs 160-1 through 160-q.
Client devices 120-1 through 120-(x+1) may each include any type of computing device that has communication capabilities. Client devices 120-1 through 120-(x+1) may each include, for example, a telephone (e.g., a smart phone), a computer (e.g., laptop, palmtop, desktop, or tablet computer), a set-top box (STB), a gaming device, or a personal digital assistant (PDA). Client devices 120-1 through 120-(x+1) may connect with local network 130 or network 140 via wired or wireless links.
Network 140 may include one or more networks including, for example, a wireless public land mobile network (PLMN) (e.g., a Code Division Multiple Access (CDMA) 2000 PLMN, a Global System for Mobile Communications (GSM) PLMN, a Long Term Evolution (LTE) PLMN and/or other types of PLMNs), a telecommunications network (e.g., Public Switched Telephone Networks (PSTNs)), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), an intranet, the Internet, or a cable network (e.g., an optical cable network). Network 140 may enable clients 120-1 through 120-(x+1) to communicate with one another, or with other network nodes, by data forwarding through one or more label switched paths.
The configuration of network components of network environment 100 described above is exemplary; other configurations may be implemented.
LSR 200 may include one or more ingress interfaces 210-1 through 210-N (generically and individually referred to herein as an “ingress interface 210”), a switch fabric 220, a routing engine 230, and one or more egress interfaces 240-1 through 240-M (generically and individually referred to herein as an “egress interface 240”). Each ingress interface 210 may receive incoming data units via one or more physical links and may forward the received data units through switch fabric 220 to a respective egress interface. Each ingress interface 210 may forward received data units to a respective egress interface 240 using, for example, a forwarding information base (FIB) table received from routing engine 230.
Routing engine 230 may communicate with other nodes (e.g., other LSRs) connected to LSR 200 to establish LSPs for MPLS data forwarding. Routing engine 230 may create a forwarding table based on signaling received from other LSRs and may download the forwarding table to each ingress interface 210 and each egress interface 240. Routing engine 230 may also perform other general control and monitoring functions for LSR 200.
Switch fabric 220 may include one or more switching planes to facilitate communication between ingress interfaces 210-1 through 210-N and egress interfaces 240-1 through 240-M. In one exemplary implementation, each of the switching planes may include a three-stage switch of crossbar elements. Other types of switching planes may, however, be used in switch fabric 220. Egress interfaces 240-1 through 240-M may receive data units from switch fabric 220 and may forward the data units towards destinations in the network via one or more outgoing physical links.
LSR 200 may include fewer, additional, and/or different components than those described above.
In networks that use MPLS traffic engineered label switched paths, there is no ability, using existing protocols and outside of network failure events, to instantly influence LSPs that merely transit a router/switch.
During normal (i.e., non-failure) network operation, it is often beneficial to move traffic off of particular links, or off of entire routers/switches, in preparation for network maintenance. For traffic engineered LSPs, once the LSP is signaled and established, changes in the cost of links along the LSP's path do not immediately trigger a path update. Therefore, many network operators implement a re-optimization timer, upon the expiration of which the ingress LSR (e.g., PE1) re-computes the optimal path and re-signals the LSP if the computation, based on changes in the costs of links along the LSP's path, results in a different path. While this re-computation allows the network to move traffic off of the LSP (e.g., between P1 and P2), it occurs only upon expiration of the re-optimization timer.
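The timer-driven behavior described above can be sketched as follows. This is an illustrative sketch only; the function names and path representation are hypothetical, and real implementations run this logic inside the ingress LER's control plane on each timer expiry.

```python
# Sketch of one re-optimization timer expiry at the ingress LER:
# re-compute the constrained shortest path and re-signal the LSP
# only if link-cost changes produce a different path.
def reoptimize(current_path, compute_cspf, resignal):
    """Return the (possibly updated) path after one timer expiry."""
    new_path = compute_cspf()          # CSPF over current link costs
    if new_path != current_path:       # path changed -> re-signal LSP
        resignal(new_path)
        return new_path
    return current_path                # no change -> keep existing LSP

# Toy usage: the CSPF computation now prefers a different path.
path = ["PE1", "P1", "P2", "PE2"]
path = reoptimize(path,
                  lambda: ["PE1", "P3", "P4", "PE2"],
                  lambda p: print("re-signaling", p))
print(path)
```

The limitation the text identifies is visible here: nothing happens between timer expiries, so an operator preparing for maintenance cannot force the re-computation on demand from a mid-span node.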
Alternatively, some implementations support a local tool with which an operator may trigger the re-optimization of an LSP. This is, however, only supported at the originating LER (e.g., at PE1).
Implementations described herein propose the use of a "dummy" fast re-route event to trigger path re-optimization from within the path of an LSP. In this "dummy" fast re-route event, no physical outage or failure actually occurs. The network operator/administrator instructs the LSR to simulate a failure at a logical interface, port, link aggregation, or switch level. However, instead of moving the traffic from LSP "A" to the bypass tunnel, the traffic remains on LSP "A".
The exemplary process may include the ingress LER signaling the LSP (e.g., LSP “A1”) on a path from the ingress LER to the egress LER, where the signaling includes a request for link or nodal protection (block 600).
Intermediate LSRs on LSP "A1," based on the link or nodal protection request, establish bypass tunnels to protect against a link failure between each intermediate LSR and the next intermediate LSR on the LSP (block 610). Each intermediate LSR 160 between ingress LER 150-1 and egress LER 150-2, after receiving the request for link or nodal protection in the signaling for LSP "A1," establishes a bypass tunnel between that intermediate LSR and the next intermediate LSR on the LSP.
An LSR on LSP "A1" determines whether it has a link failure (block 615). If so (YES—block 615), the LSR switches the traffic for LSP "A1" onto the bypass tunnel (block 630). The LSR, as a Point of Local Repair (PLR), sends a path error message to the ingress LER to indicate that LSP "A1" has been locally repaired by moving traffic to the bypass tunnel (block 635).
Returning to block 615, if the LSR on LSP "A1" determines that there has been no link failure (NO—block 615), then the LSR on LSP "A1" determines whether it has received a failure simulation instruction (block 620). The LSR may, for example, receive a failure simulation instruction from network administration node 110.
If the LSR on LSP "A1" receives a failure simulation instruction (YES—block 620), then the LSR, as a PLR, sends a path error message to the ingress LER to indicate that LSP "A1" has supposedly been locally repaired; but, since this is a failure simulation, the LSR does not actually move the traffic to the bypass tunnel (block 625). The exemplary process continues at block 640.
Subsequent to block 625 or block 635, the ingress LER, based on receipt of a path error message from an intermediate LSR, executes the Constrained Shortest Path First (CSPF) algorithm to determine an alternate path to the egress LER (block 640).
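The per-LSR decision flow in blocks 615 through 635 above can be sketched as follows. This is an illustrative sketch only (the function and its return values are hypothetical, not from the source): the key distinction is that a real failure both moves traffic to the bypass tunnel and sends a path error, while a simulated failure sends only the path error, leaving traffic on the original LSP while the ingress LER re-computes a new path.

```python
# Sketch of one intermediate LSR's handling of blocks 615-635:
# returns (traffic_moved_to_bypass, path_error_sent).
def handle_event(link_failed, simulate):
    if link_failed:            # block 615: a real link failure
        return True, True      # blocks 630/635: switch to bypass + PathErr
    if simulate:               # block 620: failure simulation instruction
        return False, True     # block 625: PathErr only; traffic stays put
    return False, False        # no failure, no simulation: nothing to do

# Real failure: traffic is locally repaired onto the bypass tunnel.
print(handle_event(link_failed=True, simulate=False))
# "Dummy" event: the ingress LER is told to re-optimize, traffic untouched.
print(handle_event(link_failed=False, simulate=True))
```

Either way, the ingress LER's receipt of the path error message is what triggers the CSPF re-computation in block 640; the simulation simply achieves that trigger without any traffic hit.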
The foregoing description of implementations provides illustration and description, but is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. For example, while a series of blocks has been described with respect to the exemplary process, the order of the blocks may be modified in other implementations.
Certain features described above may be implemented as “logic” or a “unit” that performs one or more functions. This logic or unit may include hardware, such as one or more processors, microprocessors, application specific integrated circuits, or field programmable gate arrays, software, or a combination of hardware and software.
No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise.
In the preceding specification, various preferred embodiments have been described with reference to the accompanying drawings. It will, however, be evident that various modifications and changes may be made thereto, and additional embodiments may be implemented, without departing from the broader scope of the invention as set forth in the claims that follow. The specification and drawings are accordingly to be regarded in an illustrative rather than restrictive sense.
Number | Date | Country
---|---|---
20140328163 A1 | Nov 2014 | US