Segment routing: PCE driven dynamic setup of forwarding adjacencies and explicit path

Information

  • Patent Grant
  • Patent Number
    10,469,325
  • Date Filed
    Thursday, December 29, 2016
  • Date Issued
    Tuesday, November 5, 2019
Abstract
An apparatus and method for path computation element driven dynamic setup of forwarding adjacencies and explicit path. In one embodiment of the method, a node receives an instruction to create a tunnel between the node and another node. The node creates or initiates the creation of the tunnel in response to receiving the instruction, wherein the tunnel comprises a plurality of nodes in data communication between the node and the other node. The node maps a first identifier (ID) to information relating to the tunnel. The node advertises the first ID to other nodes in a network of nodes.
Description
BACKGROUND

Network nodes forward packets using forwarding tables. Network nodes may take form in one or more routers, one or more bridges, one or more switches, one or more servers, or any other suitable communications processing device. A packet is a formatted unit of data that typically contains control information and payload data. Control information may include: source and destination IP addresses, error detection codes like checksums, sequencing information, etc. Control information is typically found in packet headers and trailers, with payload data in between.


Packet forwarding requires a decision process that, while simple in concept, can be complex. Because these forwarding decisions are handled by network nodes, the total time required to make them can become a major limiting factor in overall network performance.


Multiprotocol Label Switching (MPLS) is one packet forwarding mechanism. MPLS nodes make packet forwarding decisions based on Label Distribution Protocol (LDP) distributed labels attached to packets and LDP forwarding tables. LDP is a protocol in which network nodes capable of MPLS exchange LDP labels (hereinafter labels). Packet forwarding based on labels stands in stark contrast to traditional Internet Protocol (IP) routing, in which packet forwarding decisions are made by nodes using IP addresses contained within the packet.





BRIEF DESCRIPTION OF THE DRAWINGS

The present disclosure may be better understood, and its numerous objects, features, and advantages made apparent to those skilled in the art by referencing the accompanying drawings.



FIG. 1 is a block diagram illustrating certain components of an example network.



FIG. 2 is a block diagram illustrating certain components of an example network.



FIG. 3 is a flow chart illustrating an example process employed by a node of FIG. 2.



FIG. 4 is a flow chart illustrating an example process employed by a node of FIG. 2.



FIG. 5 is a block diagram illustrating certain components of an example network.



FIG. 6 is a block diagram illustrating certain components of an example network.



FIG. 7 is a flow chart illustrating an example process employed by a node of FIG. 6.



FIG. 8 is a block diagram illustrating certain components of an example node that can be employed in the network of FIG. 1, 2, 5 or 6.





DETAILED DESCRIPTION

1. Overview


An apparatus and method is disclosed for path computation element (PCE) driven dynamic setup of forwarding adjacencies and explicit path. In one embodiment of the method, a node receives an instruction to create a tunnel between the node and another node. The node creates or initiates the creation of the tunnel in response to receiving the instruction, wherein the tunnel comprises a plurality of nodes in data communication between the node and the other node. The node maps a first identifier (ID) to information relating to the tunnel. The node advertises the first ID to other nodes in a network of nodes.


2. Packet Forwarding Mechanisms


IP routing and MPLS are distinct packet forwarding mechanisms. IP routing uses IP addresses inside packet headers to make packet forwarding decisions. In contrast, MPLS implements packet forwarding decisions based on short path identifiers called labels attached to packets. Segment routing (SR) is yet another packet forwarding mechanism and can be seen as a modification of MPLS. SR is similar to MPLS in many regards and employs many of the data plane functions thereof. For example, like MPLS, packet forwarding decisions in SR can be based on short path identifiers called segment IDs attached to packets. While similarities exist between MPLS and SR, substantial differences exist between SR and traditional MPLS, as will be more fully described below.

    • 2.1 IP Packet Routing


IP packet routing uses IP forwarding tables, which are created at nodes using routing information distributed between nodes via one or more protocols like the interior gateway protocol (IGP) and/or the border gateway protocol (BGP). In simple terms, IP forwarding tables map destination addresses to the next hops that packets take to reach their destinations. When a node receives a packet, the node can access a forwarding table using the packet's destination IP address and look up a corresponding egress interface to the next hop. The node then forwards the packet through the egress interface. The next hop that receives the packet performs its own forwarding table lookup using the same destination IP address in the packet, and so on.
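
For illustration only, the following Python sketch shows the kind of lookup described above: a destination address is matched against a forwarding table and the longest matching prefix selects the egress interface. The table contents and interface names are hypothetical and not part of the patent.

```python
# Illustrative sketch of an IP forwarding lookup (hypothetical table and interfaces).
import ipaddress

ip_forwarding_table = {
    ipaddress.ip_network("203.0.113.0/24"): "eth1",
    ipaddress.ip_network("198.51.100.0/24"): "eth2",
    ipaddress.ip_network("0.0.0.0/0"): "eth0",   # default route
}

def ip_lookup(destination: str) -> str:
    """Return the egress interface for the longest matching prefix."""
    dest = ipaddress.ip_address(destination)
    matches = [net for net in ip_forwarding_table if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)   # longest prefix wins
    return ip_forwarding_table[best]

print(ip_lookup("203.0.113.7"))   # -> eth1; the next hop repeats the same lookup
```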

    • 2.2 MPLS and LDP


MPLS is commonly employed in provider networks consisting of interconnected LDP nodes. For the purposes of explanation, LDP nodes take form in MPLS enabled nodes that also implement LDP in the control plane. Packets enter an MPLS network via an ingress edge LDP node, travel hop-by-hop along a label-switched path (LSP) that typically includes one or more core LDP nodes, and exit via an egress edge LDP node.


Packets are forwarded along an LSP based on labels and LDP forwarding tables. Labels allow for the use of very fast and simple forwarding engines in the data planes of nodes. Another benefit of MPLS is the elimination of dependence on a particular Open Systems Interconnection (OSI) model data link layer technology to forward packets.


A label is a short, fixed-length, locally significant identifier that can be associated with a forwarding equivalence class (FEC). Packets associated with the same FEC should follow the same LSP through the network. LSPs can be established for a variety of purposes, such as to guarantee a certain level of performance when transmitting packets, to forward packets around network congestion, to create tunnels for such things as network-based virtual private networks, etc. In many ways, LSPs are no different than circuit-switched paths in ATM or Frame Relay networks, except that they are not dependent on a particular Layer 2 technology.


LDP is employed in the control planes of nodes. For purposes of explanation, LDP nodes are those nodes that employ only LDP in their control plane. Two LDP nodes, called LDP peers, can bi-directionally exchange labels on a FEC by FEC basis when setting up an LSP. Some LSPs can be set up using resource reservation protocol traffic engineering (RSVP-TE). LDP can be used in a process of building and maintaining LDP forwarding tables that map labels and next hop egress interfaces. These forwarding tables can be used to forward packets through MPLS networks as more fully described below.


When a packet is received by an ingress edge LDP node of an MPLS network, the ingress node may use information in the packet to determine a FEC corresponding to an LSP the packet should take across the network to reach the packet's destination. In one embodiment, the FEC is an identifier of the egress edge node that is closest to the packet's destination IP address. In this embodiment, the FEC may take form in the egress edge node's loopback address.


Characteristics for determining the FEC for a packet can vary, but typically the determination is based on the packet's destination IP address. Quality of Service for the packet or other information may also be used to determine the FEC. Once determined, the ingress edge LDP node can access a table to select a label that is mapped to the FEC. The table may also map a next hop egress interface to the FEC. Before the ingress edge LDP node forwards the packet to the next hop via the egress interface, the ingress node attaches the label.


When an LDP node receives a packet with an attached label (i.e., the incoming label), the node accesses an LDP forwarding table to read a next hop egress interface and another label (i.e., an outgoing label), both of which are mapped to the incoming label. Before the packet is forwarded via the egress interface, the LDP node swaps the incoming label with the outgoing label. The next hop receives the packet with the label and may perform the same process. This process is often called hop-by-hop forwarding along a non-explicit path (i.e., the LSP). The penultimate node in the LSP may pop or remove the incoming label before forwarding the packet to an egress edge LDP node in the network, which in turn may forward the packet towards its destination using the packet's destination address and an IP forwarding table. In another embodiment, the last node in the LSP may pop the incoming label before forwarding the packet using the destination address and an IP forwarding table.
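
A minimal sketch of the hop-by-hop label-swap behavior described above follows. The table contents, the packet layout, and the send() stub are assumptions made for illustration; they do not represent the patented implementation.

```python
# Illustrative sketch of MPLS label-swap forwarding (hypothetical data).
def send(packet, egress_interface):
    print(f"sending {packet} via {egress_interface}")

# Incoming label -> (outgoing label, egress interface); None means pop (e.g., penultimate hop).
ldp_forwarding_table = {
    1001: (2001, "if-2"),
    1002: (None, "if-3"),
}

def forward_mpls(packet: dict) -> None:
    outgoing, egress = ldp_forwarding_table[packet["label"]]
    if outgoing is None:
        del packet["label"]          # penultimate-hop pop: remove the incoming label
    else:
        packet["label"] = outgoing   # swap the incoming label for the outgoing label
    send(packet, egress)

forward_mpls({"label": 1001, "payload": "..."})   # label swapped to 2001, sent via if-2
```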


To illustrate MPLS aspects, FIG. 1 shows a portion of an example MPLS network 100 that includes LDP nodes 102-122 coupled together via communication links. An LSP from node 102 to node 122 can be created so that all packets of a stream associated with a particular FEC sent from node 102 to node 122 will travel through the same set of nodes. Each LDP node maintains information for the LSP established through it in an LDP forwarding table. Thus, if node 110 knows that node 114 is the next hop along the LSP for all packets received from node 102 that are destined for node 122, node 110 can forward the packets to node 114.

    • 2.3 Segment Routing


Segment routing (SR) is a mechanism in which nodes forward packets using SR forwarding tables and segment IDs. This forwarding mechanism can use the data plane of MPLS with slight modifications as will be more fully described below. As such, the forwarding engines in the data planes of SR nodes are very similar to the forwarding engines in the data planes of LDP nodes. Like MPLS, SR enables very fast and simple forwarding engines in the data plane of nodes. SR is not dependent on a particular Open Systems Interconnection (OSI) model data link layer technology to forward packets.


SR nodes (i.e., nodes employing SR but not LDP) make packet forwarding decisions based on segment IDs as opposed to LDP distributed labels, and as a result SR nodes need not employ LDP in their control planes. In one embodiment, segment IDs are substantially shorter than labels. The range for segment IDs may be distinct from the range for labels. Unless otherwise indicated, the SR nodes lack LDP in their control plane.


Packets can enter an SR enabled network via an ingress edge SR node, travel hop-by-hop along a segment path (SP) that includes one or more core SR nodes, and exit the network via an egress edge SR node.


Like labels, segment IDs are short (relative to the length of an IP address or a FEC), fixed-length identifiers. In one embodiment, segment IDs are shorter than labels. Segment IDs may correspond to topological segments of a network, services provided by network nodes, etc. Topological segments represent one hop or multi hop paths to SR nodes. Topological segments act as sub-paths that can be combined to form an SP. Stacks of segment IDs can represent SPs, including sub paths called SR tunnels that are more fully described below. SPs can be associated with FECs as will be more fully described below.


There are several types of segment IDs including nodal-segment IDs, adjacency-segment IDs, forwarding adjacency (FA)-segment IDs, etc. Nodal-segment IDs are assigned to SR nodes so that no two SR nodes belonging to a network domain are assigned the same nodal-segment ID. Nodal-segment IDs can be mapped to unique node identifiers such as node loopback IP addresses (hereinafter loopbacks). In one embodiment, all assigned nodal-segment IDs are selected from a predefined ID range (e.g., [32, 5000]). A nodal-segment ID corresponds to a one-hop or multi-hop shortest path (SPT) to the SR node assigned the nodal-segment ID, as will be more fully described below.


An adjacency-segment ID represents a direct link between adjacent SR nodes in a network. Links can be uniquely identified. For purposes of explanation only, this disclosure will identify a link using the loopbacks of nodes between which the link is positioned. To illustrate, for a link between two nodes identified by node loopback X and node loopback Y, the link will be identified herein as link XY. Because loopbacks are unique, link IDs are unique. Link IDs should not be confused with adjacency-segment IDs; adjacency-segment IDs may not be unique within a network. This disclosure will presume that only one link exists between nodes in a network, it being understood the present disclosure should not be limited thereto.


Each SR node can assign a distinct adjacency-segment ID for each of the node's links. Adjacency-segment IDs are locally significant; separate SR nodes may assign the same adjacency-segment ID, but the adjacency-segment ID represents distinct links. In one embodiment, adjacency-segment IDs are selected from a predefined range that is outside the predefined range for nodal-segment IDs.


A forwarding adjacency (FA)-segment ID represents an SR tunnel or an MPLS traffic engineering (TE) tunnel between a pair of nodes referred to as the head end and tail end nodes. Head end and/or tail end nodes can create an SR tunnel or a TE tunnel under direction of a path computation (PC) node. The head end and tail end nodes may be SR enabled or both SR and LDP (SR/LDP) enabled as will be more fully described below. Tunnels enable PC nodes to optimize operation of a network.


An SR tunnel consists of several SR nodes that form a path between the SR head end and tail end nodes. In this regard, the FA-segment ID may correspond to a stack of nodal-segment IDs of the respective SR nodes constituting the path between the SR head end and tail end nodes. The FA-segment ID may also correspond to a TE tunnel between SR/LDP head end and tail end nodes. In this latter embodiment, the head end and tail end nodes are both SR and LDP enabled, while the intervening nodes are LDP enabled. All FA-segment IDs are selected from a predefined range of IDs that is outside the ranges for the nodal-segment IDs and adjacency-segment IDs.
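
Because the three kinds of segment IDs come from disjoint ranges, a node can tell what kind of ID it is handling purely from the numeric value. The sketch below illustrates this; the nodal range [32, 5000] follows the example above, while the FA and adjacency ranges are assumed values chosen only to be consistent with the example IDs used later in this description (5500 and 9001-9003).

```python
# Illustrative range-based classification of segment IDs (assumed ranges).
NODAL_RANGE     = range(32, 5001)     # nodal-segment IDs, per the example [32, 5000]
FA_RANGE        = range(5001, 9001)   # FA-segment IDs (assumed range)
ADJACENCY_RANGE = range(9001, 10001)  # adjacency-segment IDs (assumed range)

def classify_segment_id(sid: int) -> str:
    if sid in NODAL_RANGE:
        return "nodal"
    if sid in FA_RANGE:
        return "forwarding-adjacency"
    if sid in ADJACENCY_RANGE:
        return "adjacency"
    raise ValueError(f"segment ID {sid} is outside all configured ranges")

print(classify_segment_id(66))     # nodal
print(classify_segment_id(5500))   # forwarding-adjacency
print(classify_segment_id(9003))   # adjacency
```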


SR or SR/LDP nodes can advertise routing information including nodal-segment IDs bound to loopbacks, adjacency-segment IDs mapped to link IDs, FA-segment IDs bound to loopbacks of nodes constituting the SR or TE tunnel, etc., using protocols such as IGP and/or BGP with an appropriate SR extension. Nodes, including path computation (PC) nodes, may use the routing information they receive in order to create topology maps of the network. SR forwarding tables can be created or updated through use of these maps. To illustrate, a node can use the map it creates to identify next hop egress interfaces for shortest paths (SPTs) to respective node loopbacks. The identified SPT or next hop egress interfaces for the loopbacks are then mapped to respective nodal-segment IDs in the forwarding table.


SR nodes can also map their adjacency-segment IDs to egress interfaces for respective links in SR forwarding tables. Because adjacency-segment IDs are locally significant, however, adjacency-segment IDs should only be mapped in SR forwarding tables of the nodes that advertise the adjacency-segment IDs. In other words, an SR node that advertises an adjacency-segment ID should be the only node in the network area that has an SR forwarding table that maps the adjacency-segment ID to an egress interface.


FA-segment IDs are also locally significant. As a result FA-segment IDs should only be mapped in SR nodes or SR/LDP nodes that advertise the FA-segment IDs. FA-segment IDs are not mapped to egress interfaces. Rather, FA-segment IDs can be mapped in SR nodes to stacks of segment IDs, which in turn correspond to SR nodes and/or links constituting respective SR tunnels. Alternatively, the FA-segment IDs can be mapped to labels in SR/LDP nodes, which in turn correspond to respective TE tunnels.


As noted above, SR enables segment paths (SPs), including SR tunnels, through a network. SPs can be associated with FECs. Packets associated with the same FEC normally traverse the same SP to reach their destination. Nodes in SPs make forwarding decisions based on segment IDs, not based on the contents (e.g., destination IP addresses) of packets. As such, packet forwarding in SPs is not dependent on a particular Layer 2 technology.


As noted above, nodes, including PC nodes, can use advertised routing information to create topological maps, and these maps can be used to create SPs including SR tunnels. The nodes can also use advertised routing information such as nodal-segment IDs bound to loopbacks, adjacency-segment IDs mapped to link IDs, FA-segment IDs bound to SR tunnels or TE tunnels, etc., to generate ordered lists of segment IDs (i.e., segment ID stacks) for the paths or tunnels. Individual segment IDs in a stack may correspond to respective segments or sub paths, including SR tunnels or TE tunnels, of a corresponding path.


When an ingress edge SR node receives a packet, the node, or a PC node in data communication with the node, can select a segment path (SP) for the packet based on information contained in the packet. In one embodiment, a FEC may be determined for the packet using the packet's destination IP address. Like MPLS, this FEC may take form in an identifier (e.g., loopback) of an egress edge node through which the packet's destination can be reached. The FEC can then be used to select a segment ID stack mapped thereto. The ingress edge node can attach the selected segment ID stack to the packet via a header. The packet with attached segment ID stack is forwarded along and traverses the segments of the SP in an order that corresponds to the list order of the segment IDs in the stack. A forwarding engine operating in the data plane of each SR or SR/LDP node can use the top segment ID within the stack to look up the egress interface for the next hop. As the packet and attached segment ID stack are forwarded along the SP in a hop-by-hop fashion, segment IDs can be popped off the top of the stack. In another embodiment, the attached stack of segment IDs remains unchanged as the packet is forwarded along the SP. In this embodiment, a pointer to an active segment ID in the stack can be advanced as the packet is forwarded along the SP. In contrast to MPLS, which uses labels to forward packets, nodal-segment IDs and adjacency-segment IDs are not swapped as the packet and attached segment ID stack are forwarded along the SP. An SR node that advertises an FA-segment ID, however, may swap the FA-segment ID with a stack of segment IDs mapped thereto, or an SR/LDP node that advertises an FA-segment ID may swap the FA-segment ID for a label mapped thereto.
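
The two alternatives described above (popping completed segment IDs versus advancing a pointer to the active segment ID) can be pictured as follows. The packet layout is assumed for illustration only.

```python
# Illustrative sketch of the two embodiments for tracking the active segment ID.
def complete_segment_by_popping(packet: dict) -> None:
    packet["segment_stack"].pop(0)        # index 0 is treated as the top of the stack

def complete_segment_by_pointer(packet: dict) -> None:
    packet["active_index"] += 1           # the stack itself remains unchanged

pkt = {"segment_stack": [66, 9003, 72], "active_index": 0}
complete_segment_by_pointer(pkt)
print(pkt["segment_stack"][pkt["active_index"]])   # active segment ID is now 9003
```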


To illustrate general concepts of SR, FIG. 2 shows an example SR enabled provider network that is in data communication with nodes AE1-AE3. The network consists of SR nodes 204-222 and PC nodes 224 and 226. Nodes 204-210 are assigned unique nodal-segment IDs 64-67, respectively, nodes 212-218 are assigned unique nodal-segment IDs 68-71, respectively, and node 222 is assigned a unique nodal-segment ID of 72. Each of the SR nodes 204-222 has interfaces that are identified as shown. For example, node 204 has three interfaces designated 1-3, respectively. Each of the nodes 204-222 is assigned a unique loopback. Loopbacks A-D are assigned to nodes 204-210, respectively, loopbacks M-P are assigned to nodes 212-218, respectively, loopback R is assigned to PC node 224, loopback S is assigned to node 226, and loopback Z is assigned to node 222. Loopbacks are unique in the network and can be used for several purposes, such as calculating the topology of the network, which in turn can be used to create paths or tunnels, to identify SPTs and thus next hop egress interfaces for SR forwarding tables, etc.


One or more nodes 204-222 can also assign locally significant adjacency-segment IDs and FA-segment IDs. For example, node 208 can assign adjacency-segment IDs 9001-9003 to links CB, CD, and CO, respectively. Node 206 can assign FA-segment ID 5500 to an SR tunnel constituting nodes 206, 208, 216, and 218. Nodes 204-222 and PC nodes 224 and 226 collect all information needed for topology creation via IGP and/or BGP. PC nodes 224 and 226 also collect additional information relevant to the state and performance of the network.


Each SR node can advertise routing and performance information to the other nodes in a network, including PC nodes, using IGP and/or BGP with SR extension. For example, node 208 can generate and send one or more advertisements that include adjacency-segment IDs 9001-9003 bound to link IDs CB, CD, and CO, respectively, and nodal-segment ID 66 bound to loopback C. Node 206 may generate an advertisement that includes FA-segment ID 5500 bound to a list of loopbacks B, C, O, and P, which in turn represent the nodes in an SR tunnel or a TE tunnel.


Using the advertisements they receive, the control planes of nodes 204-222 can generate respective SR forwarding tables for use in the data planes. For example, node 208 can generate example SR forwarding table 240 that maps adjacency-segment IDs 9001-9003 to node interface IDs 1-3, respectively, and nodal-segment IDs such as 64, 65, 67, 70, and 72, to node 208 interfaces 1, 1, 2, 3, and 2, respectively, which are the SPT next hop egress interfaces determined by node 208 for loopbacks A, B, D, O, and Z respectively. In the embodiment shown, only SR forwarding table 240 maps adjacency-segment IDs 9001-9003 to interfaces; SR forwarding tables in the other nodes of network 202 should not map adjacency-segment IDs 9001-9003.
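
The mappings of example SR forwarding table 240 can be written out as a small lookup structure. The dictionary below is only an illustrative representation of the table's contents on node 208; the patent does not prescribe any particular data structure.

```python
# Example SR forwarding table 240 on node 208: segment ID -> egress interface.
sr_forwarding_table_240 = {
    9001: 1, 9002: 2, 9003: 3,   # adjacency-segment IDs for links CB, CD, CO
    64: 1, 65: 1, 67: 2,         # nodal-segment IDs: SPT next hops toward loopbacks A, B, D
    70: 3, 72: 2,                # nodal-segment IDs: SPT next hops toward loopbacks O, Z
}
print(sr_forwarding_table_240[9003])   # packets carrying adjacency segment 9003 leave via interface 3
```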


In addition to creating SR forwarding tables, SR nodes, SR/LDP nodes, or PC nodes 224 and 226 can create segment ID stacks for respective SPs, including SR tunnels. For example, ingress edge node 204, or PC node 226 on request, can create example segment ID stack 228 for an SP between ingress edge node 204 and egress edge node 222. Example segment stack 228 can be created for a particular FEC (e.g., FEC Z). Example stack 228 includes three segment IDs: nodal-segment IDs 66 and 72 advertised by nodes 208 and 222, respectively, and adjacency-segment ID 9003 advertised by node 208. Stack 228 corresponds to an SP in which packets flow in order through nodes 204, 206, 208, 216, 218, and 222. Node 206 can use loopback/nodal-segment ID bindings to create a tunnel segment ID stack consisting of nodal-segment IDs 66, 70, and 71. After creation, node 206 maps the tunnel segment ID stack to FA-segment ID 5500. No other node should map this tunnel segment ID stack to FA-segment ID 5500. Tunnel segment ID stack 66, 70, and 71 corresponds to a tunnel in which packets flow in order through nodes 206, 208, 216, and 218.


To illustrate basic SR, consider SR node 204 receiving a packet P that is destined for a device that can be reached via AE2, which in turn can be reached via node 222. In response, SR node 204 can select a segment ID stack based on information contained in the packet. For example, node 204 can select FEC Z (i.e., the loopback for node 222) for a received packet P based on the destination IP address in packet P and/or other information. FEC Z is mapped to example stack 228 in a table not shown. Node 204 attaches stack 228 to packet P. Example segment stack 228 lists segment IDs that correspond to the one hop and multi hop segments that packets traverse to reach egress edge node 222. The one hop and multi hop segments collectively form the SP corresponding to stack 228. Once the segment stack 228 is attached to packet P, ingress SR enabled node 204 may access an SR forwarding table (not shown) using the top segment ID (e.g., segment ID=66) of the stack to read egress interface identifier 2, which is the next hop egress interface for the SPT to the SR node assigned nodal-segment ID 66.


With continuing reference to FIG. 2, FIG. 3 illustrates an example of a general process of packet forwarding using nodal and adjacency-segment IDs according to one embodiment. More particularly, FIG. 3 illustrates an example method that can be performed by an SR node, including an edge node, in a network like that shown in FIG. 2. In response to receiving a packet with an attached segment ID stack, or in response to attaching a segment ID stack to a packet, the SR node determines in step 304 whether the top segment ID of the stack matches the nodal-segment ID assigned to the SR node. If there is a match, the process proceeds to step 306 where the SR node pops the top segment ID, which may expose an underlying segment ID as the new top segment ID. If there is no new top segment ID (i.e., the segment ID popped in step 306 was the last segment ID of the stack), the packet P has arrived at the egress edge node, and the process ends. If a new top segment ID is exposed, or if there is no match of segment IDs in step 304, the SR node accesses its SR forwarding table in step 314 to read the egress interface that is mapped to the top segment ID. In step 316 the SR node determines whether the top segment ID is an adjacency-segment ID. This determination can be implemented by simply comparing the top segment ID with the designated range of adjacency-segment IDs that are available for assignment within the network. If the top segment ID is found to be within the designated range, the top segment ID is determined to be an adjacency-segment ID and it is popped in step 320. In step 322 the SR node forwards packet P and attached stack to the next node via the egress interface identified in step 314.
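
A minimal Python sketch of this decision flow is given below. It is an illustration of the FIG. 3 logic, not the patented implementation: the list index 0 is treated as the top of the stack, and ADJACENCY_RANGE, the forwarding table, and the send callback are assumed stand-ins for the node's real state. The usage line mirrors how node 208 handles packet P and stack 228.

```python
# Illustrative sketch of the FIG. 3 forwarding flow (step numbers in comments).
ADJACENCY_RANGE = range(9001, 10001)          # assumed designated adjacency-segment ID range

def forward_sr_packet(packet, stack, my_nodal_sid, sr_forwarding_table, send):
    if stack and stack[0] == my_nodal_sid:    # step 304: does the top ID match my nodal ID?
        stack.pop(0)                          # step 306: pop it
        if not stack:                         # no new top ID: packet is at the egress edge node
            return "arrived at egress edge node"
    egress = sr_forwarding_table[stack[0]]    # step 314: read egress interface for the top ID
    if stack[0] in ADJACENCY_RANGE:           # step 316: is the top ID an adjacency-segment ID?
        stack.pop(0)                          # step 320: pop it before forwarding
    send(packet, stack, egress)               # step 322: forward via the egress interface
    return f"forwarded via interface {egress}"

# e.g., node 208 handling packet P with stack [66, 9003, 72]:
table_240 = {9001: 1, 9002: 2, 9003: 3, 64: 1, 65: 1, 67: 2, 70: 3, 72: 2}
print(forward_sr_packet("P", [66, 9003, 72], 66, table_240,
                        lambda p, s, e: print("sent", p, s, "via", e)))
```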


With continuing reference to FIG. 3, FIG. 2 shows packet P and attached stack 228 as it is forwarded by nodes. As shown, nodes 204 and 206 forward packet P and stack 228 without popping a segment ID. However, node 208 pops nodal-segment ID 66 and adjacency-segment ID 9003 in accordance with steps 306 and 320, respectively, before the packet P and stack 228 are forwarded to node 216 in accordance with step 322. Nodes 216 and 218 forward packet P and stack 228 without popping segment IDs. SR egress edge node 222 recognizes itself as the last hop of the SP. Eventually, node 222 may employ traditional IP routing and forward packet P to access node AE2 based on routing table lookup using the destination IP address within packet P.


3. Segment Routing Using SR Tunnels


SR networks, such as the network shown in FIG. 2, are capable of transporting many different streams of packets over distinct SPs. Oftentimes, many of these SPs share a common sub path through two or more core SR nodes. For example, two or more distinct streams of packets may flow through nodes 206, 208, 216 and 218 in order to reach egress node 222. PC nodes such as PC nodes 224 and 226 may leverage use of common sub paths to enhance network operation. To illustrate, PC node 226, using collected state and performance information, recognizes that the sub path consisting of nodes 206, 208, 216, and 218 is commonly used by several packet flows through the network, and as a result PC node 226 can generate and send an instruction to create an SR tunnel from node 206 to node 218 through which packets flow sequentially through nodes 208 and 216. The instruction received by SR node 206 can list the loopbacks (i.e., loopbacks B, C, O, and P) of the nodes that constitute the SR tunnel. Node 206 can create a tunnel segment ID stack corresponding to the SR tunnel. For example, SR node 206 can create a tunnel segment ID stack consisting of nodal-segment IDs 66, 70, and 71. Thereafter, node 206 may generate FA-segment ID 5500 for the tunnel, which is then mapped in memory to the tunnel segment ID stack 66, 70, and 71. Eventually node 206 can advertise the FA-segment ID 5500 bound to the loopbacks of the nodes in the tunnel. The advertisement should list the loopbacks by position within the SR tunnel.
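
The head-end behavior just described can be sketched as follows, using the example values from FIG. 2. The dictionaries and the advertisement format are assumptions made for illustration; the patent does not specify an encoding.

```python
# Illustrative sketch of SR tunnel creation at head end node 206 (assumed data model).
nodal_sid_by_loopback = {"B": 65, "C": 66, "O": 70, "P": 71}

def create_sr_tunnel(instruction_loopbacks, fa_segment_id, fa_map):
    """Build the tunnel segment ID stack for the downstream nodes and map it to an FA-segment ID."""
    head_end, *downstream = instruction_loopbacks
    tunnel_stack = [nodal_sid_by_loopback[lb] for lb in downstream]
    fa_map[fa_segment_id] = tunnel_stack
    # The advertisement binds the FA-segment ID to the ordered list of tunnel loopbacks.
    return {"fa_segment_id": fa_segment_id, "loopbacks": instruction_loopbacks}

fa_map_206 = {}
adv = create_sr_tunnel(["B", "C", "O", "P"], 5500, fa_map_206)
print(fa_map_206)   # {5500: [66, 70, 71]}
print(adv)          # {'fa_segment_id': 5500, 'loopbacks': ['B', 'C', 'O', 'P']}
```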


Nodes, including PC nodes 224 and 226, receive the FA-segment ID advertisements. The nodes can use information from the FA-segment ID advertisements to create SPs and corresponding segment ID stacks. To illustrate, edge node 212 may lack a segment ID stack for a particular FEC. Node 212 can generate and send to PC node 224 a path request that identifies the egress node it is trying to reach in addition to other information. PC node 224, using the information in the request as well as network state and performance information, responds by calculating a path that flows in order through nodes 214, 206, 208, 216, and 218. In the illustrated example, an SR tunnel preexists through nodes 206, 208, 216 and 218 as advertised by node 206. Node 212 receives the response from PC node 224 and calculates a segment ID stack for the SP. Or PC node 224 can respond to the request by also calculating the segment ID stack for the path. Either way, the segment ID stack includes FA-segment ID 5500, as opposed to a segment ID stack that includes all nodal-segment IDs for the nodes in the needed SP. In an alternative embodiment, node 212 can calculate the needed path and the corresponding segment ID stack without involvement of a PC node. When node 212 receives packets associated with the FEC, node 212 can attach the calculated segment ID stack.


Nodes within the SP can use the segment IDs, including FA-segment IDs, to forward the packet. FIG. 4 illustrates a process employed by nodes in a network that employs SR tunnels. The process shown in FIG. 4 is substantially similar to the process shown in FIG. 3. A significant difference exists at step 412. In this step, the SR node determines whether the top segment ID is an FA-segment ID. The node can make this determination by comparing the top segment ID with the predefined range for FA-segment IDs. If the top segment ID falls in the range, the top segment ID is determined to be an FA-segment ID. When the new top segment ID is determined to be an FA-segment ID, the node accesses its memory to read the tunnel segment ID stack mapped to the FA-segment ID. Thereafter, the node swaps the top segment ID with the tunnel segment ID stack and the process proceeds to step 416 where the egress interface is identified for the packet and attached segment stack.
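
The additional step at 412/414 can be sketched as follows: when the top segment ID falls in the FA range, it is replaced by the tunnel segment ID stack mapped to it. The ranges and mappings below are assumed example values, consistent with the FIG. 2/FIG. 5 example, and are not the patented implementation.

```python
# Illustrative sketch of the FA-segment ID expansion described for FIG. 4.
FA_RANGE = range(5001, 9001)                     # assumed FA-segment ID range
fa_to_tunnel_stack = {5500: [66, 70, 71]}        # mapping held only by the advertising node 206

def expand_fa_segment(stack: list) -> list:
    if stack and stack[0] in FA_RANGE:           # step 412: is the top ID an FA-segment ID?
        tunnel_stack = fa_to_tunnel_stack[stack[0]]
        stack = tunnel_stack + stack[1:]         # step 414: swap the FA ID for the tunnel stack
    return stack

print(expand_fa_segment([5500, 72]))             # -> [66, 70, 71, 72]
```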



FIG. 5 illustrates packet forwarding through the SP that includes the example SR tunnel corresponding to FA-segment ID 5500 in accordance with the process shown in FIG. 4. Node 212 attaches the segment ID stack containing FA-segment ID 5500 to packet P. Node 206 receives the packet with attached segment stack from node 214, pops nodal-segment ID 65 in accordance with step 406, and swaps FA-segment ID 5500 with the tunnel segment ID stack consisting of nodal-segment IDs 66, 70, and 71 in accordance with step 414. Node 206 forwards packet P and attached segment stack to node 208 in accordance with step 422.


4. Segment Routing Using TE Tunnels



FIGS. 1, 2, and 5 illustrate example provider networks that contain LDP nodes or SR nodes, but not both. Some providers may employ hybrid networks, or networks that contain both LDP and SR nodes. A hybrid network can successfully implement packet transport using TE tunnels extending between SR/LDP head end and tail end nodes. TE tunnels can be created by head end and tail end SR/LDP nodes in response to receiving a PC node tunnel creation instruction that includes the explicit path definition (i.e., a list of the loopbacks for the nodes constituting the tunnel). Once created, the head end and/or tail end SR/LDP node can associate an FA-segment ID with the TE tunnel.



FIG. 6 illustrates an example hybrid network that can employ segment routing using TE tunnels. The network is similar to that shown in FIG. 2 and includes SR nodes 204, 210, 212, 214, and 222. Nodes 206 and 218 in this embodiment, however, are both SR and LDP enabled, and nodes 208 and 216 are LDP enabled. The SR/LDP nodes 206 and 218 and the LDP nodes 208 and 216 can exchange labels using, for example, resource reservation protocol (RSVP) TE signaling during TE tunnel setup.


All SR and SR/LDP nodes are assigned unique nodal-segment IDs. For example, nodes 214, 206, 218, and 222 are assigned nodal-segment IDs 69, 65, 71, and 72, respectively. LDP nodes 208 and 216 are not assigned nodal-segment IDs. Each of the nodes 204-226 is assigned a unique loopback. Loopbacks A-D are assigned to nodes 204-210, respectively, loopbacks M-P are assigned to nodes 212-218 respectively, loopback R is assigned to PC node 224, loopback S is assigned to node 226, and loopback Z is assigned to node 222. These loopbacks are unique in the network and usable to calculate network topology, which in turn can be used to create paths or tunnels, to identify SPTs and next hop egress interfaces for SR forwarding tables, calculate segment ID stacks, etc.


As noted above SR/LDP nodes can create TE tunnels in response to receiving tunnel creation instructions from PC nodes. To illustrate, SR/LDP node 206 may receive an instruction from PC node 226 to create a TE tunnel constituting, in explicit order, nodes 206, 208, 216 and 218. The instruction may define the tunnel in terms of the loopbacks for the nodes constituting the tunnel. Nodes 206, 208, 216 and 218 can employ RSVP-TE messaging to create the tunnel. In this process the nodes may exchange labels. For example, SR/LDP node 206 receives label L1 from LDP node 208, and LDP node 208 receives label L2 from LDP node 216. The labels and egress interfaces are mapped appropriately in LDP forwarding tables. Node 206 generates FA-segment ID 5500 for the example tunnel. Once generated, the head end SR/LDP node 206 can map FA-segment ID 5500 to label L1 in memory. No other node should map label L1 to FA-segment ID 5500. This mapping links label L1 to the TE tunnel in which packets flow in order through nodes 206, 208, 216, and 218.
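
The head-end mapping just described can be sketched as follows: after RSVP-TE signaling returns a label from the downstream LDP node, the SR/LDP head end binds its locally generated FA-segment ID to that label and advertises the FA-segment ID bound to the tunnel loopbacks. The dictionary and the numeric label value standing in for "L1" are assumptions for illustration.

```python
# Illustrative sketch of FA-segment ID to LDP label binding at SR/LDP head end node 206.
fa_to_ldp_label = {}

def bind_fa_to_te_tunnel(fa_segment_id, label_from_downstream, tunnel_loopbacks):
    """Map the FA-segment ID to the TE tunnel's first-hop label and return the advertisement."""
    fa_to_ldp_label[fa_segment_id] = label_from_downstream
    return {"fa_segment_id": fa_segment_id, "loopbacks": tunnel_loopbacks}

adv = bind_fa_to_te_tunnel(5500, 301, ["B", "C", "O", "P"])  # 301 stands in for label L1
print(fa_to_ldp_label)   # {5500: 301}
print(adv)
```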


Each SR and SR/LDP node can advertise routing and performance information to the other nodes in a network, including PC nodes, using IGP and/or BGP with SR extension. For example, node 206 may generate an advertisement that includes FA-segment ID 5500 bound to a list of loopbacks B, C, O, and P for the example TE tunnel. Edge nodes and/or PC nodes can use the FA-segment ID advertisements to create SPs and corresponding segment ID stacks in much the same manner as described above. For example, edge node 212 may lack a segment ID stack for a particular FEC. Node 212 can generate and send to PC node 224 a path request that identifies the egress node for the FEC it is trying to reach in addition to other information. PC node 224, using the information in the request as well as network state and performance information, responds by calculating a path that flows in order through nodes 214, 206, 208, 216, and 218. In the illustrated example, the TE tunnel preexists through nodes 206, 208, 216 and 218 as advertised by node 206, which in turn can be leveraged for more efficient network operation. Node 212 receives the response from PC node 224 and calculates a segment ID stack for the needed SP. Or PC node 224 can respond to the request by also calculating the segment ID stack for the needed SP. Either way, the segment ID stack includes FA-segment ID 5500. In an alternative embodiment, node 212 can calculate the needed path and the corresponding segment ID stack without involvement of a PC node. When node 212 receives packets associated with the FEC, node 212 can attach the calculated segment ID stack.



FIG. 7 illustrates an example forwarding process employed within nodes of a network such as that shown in FIG. 6. The process shown in FIG. 7 is similar to the process shown in FIG. 4. However, significant differences exist. More particularly, as shown in FIG. 7 if the top segment ID is determined to be an FA-segment ID in step 712, the node that receives the packet P determines whether the FA-segment ID is mapped to a tunnel segment ID stack or a label. If it is mapped to a tunnel segment ID stack, the process proceeds to step 716 where the node swaps the FA-segment ID with the SR tunnel segment ID stack mapped to the FA-segment ID. However, if it is determined the FA-segment ID is not mapped to a tunnel segment ID stack, the process proceeds to step 726 where the SR/LDP node swaps the FA-segment ID with the label mapped in memory to the FA-segment ID. Thereafter, the node accesses its LDP forwarding table to read the next hop egress interface mapped to the label. The node then forwards the packet P with the attached label or attached label and segment ID stack via the egress interface identified in step 730. Although not shown in FIG. 7, the tail end SR/LDP node will receive the packet P with a segment list attached to it, and the tail end SR/LDP node will process or forward the packet in accordance with the process shown within FIG. 3 after the penultimate node in the TE tunnel pops the label and forwards the packet to the SR/LDP tail end.
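
The branch described above can be sketched as follows: an FA-segment ID at the top of the stack is swapped either for a tunnel segment ID stack (pure SR tunnel) or for an LDP label (TE tunnel), depending on which mapping the advertising node holds. The structures and values are assumed for illustration and follow the FIG. 6 example.

```python
# Illustrative sketch of the FIG. 7 FA-segment ID branch (assumed data model).
FA_RANGE = range(5001, 9001)
fa_to_tunnel_stack = {}            # on an SR head end this might hold e.g. {5500: [66, 70, 71]}
fa_to_ldp_label = {5500: 301}      # on SR/LDP node 206; 301 stands in for label L1

def process_top_segment(packet):
    stack = packet["segment_stack"]
    top = stack[0]
    if top in FA_RANGE:                                                    # step 712
        if top in fa_to_tunnel_stack:                                      # mapped to a tunnel stack?
            packet["segment_stack"] = fa_to_tunnel_stack[top] + stack[1:]  # step 716: swap for stack
        else:                                                              # otherwise mapped to a label
            packet["label"] = fa_to_ldp_label[top]                         # step 726: swap for label
            packet["segment_stack"] = stack[1:]
    return packet

print(process_top_segment({"segment_stack": [5500, 72]}))   # label 301 attached, stack becomes [72]
```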



FIG. 6 illustrates packet forwarding through the hybrid network that includes the example TE tunnel corresponding to FA-segment ID 5500 in accordance with the process shown in FIG. 7. Node 212 attaches the segment ID stack containing FA-segment ID 5500 to packet P. Node 206 receives the packet with attached segment stack from node 214, pops nodal-segment ID 65 in accordance with step 706, and swaps FA-segment ID 5500 with label L1 in accordance with step 726. Node 206 forwards packet P and attached label and segment stack to node 208 in accordance with step 732.



FIG. 8 is a block diagram illustrating certain additional and/or alternative components of nodes that can be employed in the networks shown in FIG. 1, 2, 5, or 6. In this depiction, node 800 includes a number of line cards (line cards 802(1)-(N)) that are communicatively coupled to a forwarding engine or packet forwarder 810 and a processor 820 via a data bus 830 and a result bus 840. Line cards 802(1)-(N) include a number of port processors 850(1,1)-(N,N) which are controlled by port processor controllers 860(1)-(N). It will also be noted that forwarding engine 810 and processor 820 are not only coupled to one another via data bus 830 and result bus 840, but are also communicatively coupled to one another by a communications link 870.


The processors 850 and 860 of each line card 802 may be mounted on a single printed circuit board. When a packet or packet and header are received, the packet or packet and header may be identified and analyzed by router 800 in the following manner. Upon receipt, a packet (or some or all of its control information) or packet and header is sent from the one of port processors 850(1,1)-(N,N) at which the packet or packet and header was received to one or more of those devices coupled to data bus 830 (e.g., others of port processors 850(1,1)-(N,N), forwarding engine 810 and/or processor 820). Handling of the packet or packet and header can be determined, for example, by forwarding engine 810. For example, forwarding engine 810 may determine that the packet or packet and header should be forwarded to one or more of port processors 850(1,1)-(N,N). This can be accomplished by indicating to corresponding one(s) of port processor controllers 860(1)-(N) that the copy of the packet or packet and header held in the given one(s) of port processors 850(1,1)-(N,N) should be forwarded to the appropriate one of port processors 850(1,1)-(N,N). In addition, or alternatively, once a packet or packet and header has been identified for processing, forwarding engine 810, processor 820 or the like can be used to process the packet or packet and header in some manner or add packet security information, in order to secure the packet. On a node sourcing such a packet or packet and header, this processing can include, for example, encryption of some or all of the packet's or packet and header's information, the addition of a digital signature or some other information or processing capable of securing the packet or packet and header. On a node receiving such a processed packet or packet and header, the corresponding process is performed to recover or validate the packet's or packet and header's information that has been thusly protected.


Although the present invention has been described in connection with several embodiments, the invention is not intended to be limited to the specific forms set forth herein. On the contrary, it is intended to cover such alternatives, modifications, and equivalents as can be reasonably included within the scope of the invention as defined by the appended claims.

Claims
  • 1. A method comprising: a node receiving an instruction to create a tunnel in a network between the node and another node via a plurality of nodes in data communication between the node and the other node, wherein the node is segment routing enabled, wherein each of the plurality of nodes are not segment routing enabled, and wherein each of the plurality of nodes is multiprotocol label switching (MPLS) enabled; in response to receiving the instruction, the node mapping a first segment identifier (ID) to a label distribution protocol (LDP) label relating to the tunnel; the node creating an advertisement that comprises the first segment ID bound to a plurality of node identifiers, wherein the plurality of node identifiers uniquely identify the plurality of nodes, respectively, in the network; the node transmitting the advertisement to another node in the network.
  • 2. The method of claim 1 further comprising an act of receiving the LDP label from one of the plurality of nodes.
  • 3. The method of claim 1 wherein the instruction comprises the plurality of node identifiers.
  • 4. The method of claim 3 wherein each of the plurality of node identifiers comprises a loopback address.
  • 5. The method of claim 1 further comprising: transmitting a resource reservation protocol (RSVP) tunnel engineering (TE) message to one of the plurality of nodes; receiving the LDP label from the one of the plurality of nodes in response to transmitting the RSVP-TE message.
  • 6. The method of claim 1 further comprising: the node receiving a packet with the first segment ID attached thereto; the node swapping the first segment ID with the LDP label; the node forwarding the packet with the LDP label attached thereto.
  • 7. The method of claim 6 wherein the node forwards the packet with the attached LDP label to one of the plurality of nodes.
  • 8. The method of claim 1 wherein the instruction was generated by a third node that is configured to calculate paths through the network of nodes.
  • 9. An apparatus comprising: a first node, the first node comprising: a circuit configured to receive an instruction to create a multiprotocol label switching (MPLS) traffic engineering tunnel in a network between the first node and a second node via a plurality of nodes in data communication between the first node and the second node, wherein the first node is segment routing enabled, wherein each of the plurality of nodes are not segment routing enabled, wherein each of the plurality of nodes is MPLS enabled; a circuit for mapping a forwarding adjacency (FA) segment identifier (ID) to a label distribution protocol (LDP) label relating to the tunnel in response to the first node receiving the instruction; a circuit for advertising the FA segment ID to other nodes in a network of nodes; wherein the FA segment ID corresponds to the MPLS traffic engineering tunnel.
  • 10. The apparatus of claim 9 wherein the LDP label was received by the first node from one of the plurality of nodes.
  • 11. The apparatus of claim 9 wherein the instruction comprises identifiers for the plurality of nodes, respectively.
  • 12. The apparatus of claim 9 wherein the first node further comprises: a circuit for transmitting a resource reservation protocol (RSVP) tunnel engineering (TE) message to one of the plurality of nodes; a circuit for receiving the LDP label from the one of the plurality of nodes.
  • 13. The apparatus of claim 9 wherein the first node further comprises: a circuit for receiving a packet with the FA segment ID attached thereto; a circuit for swapping the FA ID with the LDP label; a circuit for forwarding the packet with the LDP label attached thereto.
  • 14. An apparatus comprising: a node comprising one or more processors for executing instructions, and one or more memories for storing instructions executable by the one or more processors, wherein the one or more processors implement a method in response to executing the instructions, the method comprising: receiving an instruction to create a tunnel in a network between the node and another node via a plurality of nodes in data communication between the node and the other node, wherein the node is segment routing enabled, wherein each of the plurality of nodes are not segment routing enabled, and wherein each of the plurality of nodes is multiprotocol label switching (MPLS) enabled; in response to receiving the instruction, mapping a first segment identifier (ID) to a label distribution protocol (LDP) label relating to the tunnel; advertising the first segment ID and identities of the plurality of nodes to other nodes in the network; wherein a range in which the first segment ID is contained is different from a range in which the LDP label is contained.
  • 15. The apparatus of claim 14 wherein the method further comprises an act of receiving the LDP label from one of the plurality of nodes.
  • 16. The apparatus of claim 14 wherein the method further comprises: receiving a packet with the first segment ID attached thereto; swapping the first segment ID with the LDP label; forwarding the packet with the LDP label attached thereto.
  • 17. The apparatus of claim 14 wherein the instruction comprises identifiers for the plurality of nodes, respectively.
RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 14/211,101, entitled “Segment Routing: PCE Driven Dynamic Setup of Forwarding Adjacencies and Explicit Path,” filed Mar. 14, 2014, which claims the domestic benefit under Title 35 of the United States Code § 119(e) of U.S. Provisional Patent Application Ser. No. 61/791,242, entitled “Segment Routing,” filed Mar. 15, 2013, which are hereby incorporated by reference in their entirety and for all purposes as if completely and fully set forth herein.

US Referenced Citations (217)
Number Name Date Kind
5764624 Endo Jun 1998 A
6032197 Birdwell Feb 2000 A
6147976 Shand Nov 2000 A
6374303 Armitage et al. Apr 2002 B1
6577600 Bare Jun 2003 B1
6647428 Bannai et al. Nov 2003 B1
6963570 Agarwal Nov 2005 B1
7023846 Andersson et al. Apr 2006 B1
7031253 Katukam et al. Apr 2006 B1
7031607 Aswood Smith Apr 2006 B1
7061921 Sheth Jun 2006 B1
7068654 Joseph et al. Jun 2006 B1
7072346 Hama Jul 2006 B2
7088721 Droz et al. Aug 2006 B1
7154416 Savage Dec 2006 B1
7174387 Shand et al. Feb 2007 B1
7180887 Schwaderer Feb 2007 B1
7260097 Casey Aug 2007 B2
7286479 Bragg Oct 2007 B2
7330440 Bryant Feb 2008 B1
7359377 Kompella et al. Apr 2008 B1
7373401 Azad May 2008 B1
7420992 Fang Sep 2008 B1
7430210 Havala et al. Sep 2008 B2
7463639 Rekhter Dec 2008 B1
7466661 Previdi et al. Dec 2008 B1
7471669 Sabesan et al. Dec 2008 B1
7564803 Minei et al. Jul 2009 B1
7577143 Kompella Aug 2009 B1
7602778 Guichard et al. Oct 2009 B2
7610330 Quinn Oct 2009 B1
7773630 Huang et al. Aug 2010 B2
7817667 Frederiksen et al. Oct 2010 B2
7885259 Filsfils Feb 2011 B2
7885294 Patel Feb 2011 B2
7894352 Kompella et al. Feb 2011 B2
7894458 Jiang Feb 2011 B2
7940695 Bahadur et al. May 2011 B1
7983174 Monaghan et al. Jul 2011 B1
8064441 Wijnands et al. Nov 2011 B2
8339973 Pichumani Dec 2012 B1
8422514 Kothari et al. Apr 2013 B1
8542706 Wang et al. Sep 2013 B2
8611335 Wu Dec 2013 B1
8619817 Everson Dec 2013 B1
8630167 Ashwood Smith Jan 2014 B2
8711883 Kang Apr 2014 B2
8792384 Banerjee et al. Jul 2014 B2
8848728 Revah Sep 2014 B1
8923292 Friskney Dec 2014 B2
8953590 Aggarwal Feb 2015 B1
9036474 Dibirdi et al. May 2015 B2
9049233 Frost et al. Jun 2015 B2
9094337 Bragg Jul 2015 B2
9112734 Edwards et al. Aug 2015 B2
9118572 Sajassi Aug 2015 B2
9319312 Filsfils et al. Apr 2016 B2
9571349 Previdi Feb 2017 B2
9660897 Gredler May 2017 B1
9749227 Frost et al. Aug 2017 B2
20010037401 Soumiya Nov 2001 A1
20010055311 Trachewsky Dec 2001 A1
20020103732 Bundy et al. Aug 2002 A1
20030016678 Maeno Jan 2003 A1
20030026271 Erb et al. Feb 2003 A1
20030126272 Corl et al. Jul 2003 A1
20030133412 Iyer Jul 2003 A1
20030142674 Casey Jul 2003 A1
20030142685 Bare Jul 2003 A1
20030231634 Henderson Dec 2003 A1
20040160958 Oh Aug 2004 A1
20040174879 Basso et al. Sep 2004 A1
20040190527 Okura Sep 2004 A1
20040196840 Amrutur et al. Oct 2004 A1
20040202158 Takeno Oct 2004 A1
20040240442 Grimminger Dec 2004 A1
20050073958 Atlas Apr 2005 A1
20050105515 Reed May 2005 A1
20050157724 Montuno Jul 2005 A1
20050213513 Ngo Sep 2005 A1
20050259655 Cuervo et al. Nov 2005 A1
20060002304 Ashwood-Smith Jan 2006 A1
20060013209 Somasundaram Jan 2006 A1
20060056397 Aizu Mar 2006 A1
20060075134 Aalto Apr 2006 A1
20060080421 Hu Apr 2006 A1
20060092940 Ansari May 2006 A1
20060146696 Li Jul 2006 A1
20060187817 Charzinski Aug 2006 A1
20060262735 Guichard Nov 2006 A1
20060274716 Oswal et al. Dec 2006 A1
20070019647 Roy et al. Jan 2007 A1
20070041345 Yarvis Feb 2007 A1
20070053342 Sierecki Mar 2007 A1
20070058638 Guichard et al. Mar 2007 A1
20070189291 Tian Aug 2007 A1
20070245034 Retana Oct 2007 A1
20080002699 Rajsic Jan 2008 A1
20080049610 Linwong Feb 2008 A1
20080075016 Ashwood-Smith Mar 2008 A1
20080075117 Tanaka Mar 2008 A1
20080084881 Dharwadkar et al. Apr 2008 A1
20080101227 Fujita et al. May 2008 A1
20080101239 Goode May 2008 A1
20080172497 Mohan et al. Jul 2008 A1
20080189393 Wagner Aug 2008 A1
20080192762 Kompella et al. Aug 2008 A1
20080212465 Yan Sep 2008 A1
20080225864 Aissaoui et al. Sep 2008 A1
20080253367 Ould-Brahim Oct 2008 A1
20080259820 White et al. Oct 2008 A1
20080316916 Tazzari Dec 2008 A1
20090041038 Martini et al. Feb 2009 A1
20090049194 Csaszar Feb 2009 A1
20090067445 Diguet Mar 2009 A1
20090080431 Rekhter Mar 2009 A1
20090135815 Pacella May 2009 A1
20090196289 Shankar Aug 2009 A1
20090247157 Yoon Oct 2009 A1
20090296710 Agrawal Dec 2009 A1
20100063983 Groarke et al. Mar 2010 A1
20100088717 Candelore et al. Apr 2010 A1
20100124231 Kompella May 2010 A1
20100142548 Sheth Jun 2010 A1
20100220739 Ishiguro Sep 2010 A1
20100232435 Jabr Sep 2010 A1
20100272110 Allan et al. Oct 2010 A1
20100284309 Allan et al. Nov 2010 A1
20110021193 Hong Jan 2011 A1
20110060844 Allan et al. Mar 2011 A1
20110063986 Denecheau Mar 2011 A1
20110090913 Kim Apr 2011 A1
20110149973 Esteve Rothenberg Jun 2011 A1
20110228780 Ashwood-Smith Sep 2011 A1
20110261722 Awano Oct 2011 A1
20110268114 Wijnands et al. Nov 2011 A1
20110280123 Wijnands et al. Nov 2011 A1
20110286452 Balus Nov 2011 A1
20120044944 Kotha et al. Feb 2012 A1
20120063526 Xiao Mar 2012 A1
20120069740 Lu et al. Mar 2012 A1
20120069845 Carney et al. Mar 2012 A1
20120075988 Lu Mar 2012 A1
20120082034 Vasseur Apr 2012 A1
20120099861 Zheng Apr 2012 A1
20120106560 Gumaste May 2012 A1
20120120808 Nandagopal et al. May 2012 A1
20120170461 Long Jul 2012 A1
20120179796 Nagaraj Jul 2012 A1
20120213225 Subramanian et al. Aug 2012 A1
20120218884 Kini Aug 2012 A1
20120236860 Kompella et al. Sep 2012 A1
20120243539 Keesara Sep 2012 A1
20120287818 Corti et al. Nov 2012 A1
20120307629 Vasseur Dec 2012 A1
20130003728 Kwong et al. Jan 2013 A1
20130051237 Ong Feb 2013 A1
20130077476 Enyedi Mar 2013 A1
20130077624 Keesara et al. Mar 2013 A1
20130077625 Khera Mar 2013 A1
20130077626 Keesara et al. Mar 2013 A1
20130114402 Ould-Brahim May 2013 A1
20130142052 Burbidge Jun 2013 A1
20130188634 Magee Jul 2013 A1
20130219034 Wang et al. Aug 2013 A1
20130258842 Mizutani Oct 2013 A1
20130266012 Dutta et al. Oct 2013 A1
20130266013 Dutta et al. Oct 2013 A1
20130308948 Swinkels Nov 2013 A1
20130343204 Geib et al. Dec 2013 A1
20140044036 Kim Feb 2014 A1
20140160925 Xu Jun 2014 A1
20140169370 Filsfils et al. Jun 2014 A1
20140177638 Bragg et al. Jun 2014 A1
20140189156 Morris Jul 2014 A1
20140192677 Chew Jul 2014 A1
20140254596 Filsfils et al. Sep 2014 A1
20140269266 Filsfils et al. Sep 2014 A1
20140269421 Previdi et al. Sep 2014 A1
20140269422 Filsfils et al. Sep 2014 A1
20140269698 Filsfils et al. Sep 2014 A1
20140269699 Filsfils et al. Sep 2014 A1
20140269721 Bashandy et al. Sep 2014 A1
20140269725 Filsfils et al. Sep 2014 A1
20140269727 Filsfils et al. Sep 2014 A1
20140286195 Fedyk Sep 2014 A1
20140317259 Previdi et al. Oct 2014 A1
20140369356 Bryant et al. Dec 2014 A1
20150023328 Thubert et al. Jan 2015 A1
20150030020 Kini Jan 2015 A1
20150109902 Kumar Apr 2015 A1
20150249587 Kozat Sep 2015 A1
20150256456 Previdi et al. Sep 2015 A1
20150263940 Kini Sep 2015 A1
20150326675 Kini Nov 2015 A1
20150334006 Shao Nov 2015 A1
20150381406 Francois Dec 2015 A1
20160006614 Zhao Jan 2016 A1
20160021000 Previdi et al. Jan 2016 A1
20160034209 Nanduri Feb 2016 A1
20160034370 Nanduri Feb 2016 A1
20160119159 Zhao Apr 2016 A1
20160127142 Tian May 2016 A1
20160173366 Saad Jun 2016 A1
20160191372 Zhang Jun 2016 A1
20160352654 Filsfils et al. Aug 2016 A1
20160254987 Eckert et al. Sep 2016 A1
20160254988 Eckert et al. Sep 2016 A1
20160254991 Eckert et al. Sep 2016 A1
20170019330 Filsfils et al. Jan 2017 A1
20170104673 Bashandy et al. Apr 2017 A1
20170302561 Filsfils et al. Oct 2017 A1
20170302571 Frost et al. Oct 2017 A1
20170346718 Psenak et al. Nov 2017 A1
20170346737 Previdi et al. Nov 2017 A1
20170366453 Previdi et al. Dec 2017 A1
20180083871 Filsfils Mar 2018 A1
Foreign Referenced Citations (15)
Number Date Country
1726 679 Jan 2006 CN
101247 253 Aug 2008 CN
101399 688 Apr 2009 CN
101496 357 Jul 2009 CN
101616 466 Dec 2009 CN
101803 293 Aug 2010 CN
101841 442 Sep 2010 CN
101931 548 Dec 2010 CN
102098 222 Jun 2011 CN
102132 533 Jul 2011 CN
102299 852 Dec 2011 CN
102498 694 Jun 2012 CN
102714 625 Oct 2012 CN
WO 2008012295 Feb 2008 WO
WO2011032430 Mar 2011 WO
Non-Patent Literature Citations (72)
Entry
Psenak, Peter et al., “Enforcing Strict Shortest Path Forwarding Using Strict Segment Identifiers”; U.S. Appl. No. 15/165,794, filed May 26, 2016; consisting of Specification, Claims, Abstract, and Drawings (52 pages).
Nainar, Nagendra Kumar et al., “Reroute Detection in Segment Routing Data Plane”; U.S. Appl. No. 15/266,498, filed Sep. 15, 2016; consisting of Specification, Claims, Abstract, and Drawings (61 pages).
Frost, Daniel C. et al., “MPLS Segment Routing”; U.S. Appl. No. 15/637,744, filed Jun. 29, 2017; consisting of Specification, Claims, Abstract, and Drawings (26 pages).
Filsfils, Clarence et al., “Seamless Segment Routing”; U.S. Appl. No. 15/639,398, filed Jun. 30, 2017; consisting of Specification, Claims, Abstract, and Drawings (31 pages).
Aggarwal, R., et al., Juniper Networks; E. Rosen, Cisco Systems, Inc.; “MPLS Upstream Label Assignment and Context Specific Label Space;” Network Working Group; Internet Draft; Jan. 2005; pp. 1-8.
Akiya, N. et al., “Seamless Bidirectional Forwarding Detection (BFD) for Segment Routing (SR)”; draft-akiya-bfd-seamless-sr-00; Internet Engineering Task Force; Internet-Draft; Jun. 7, 2013; 7 pages.
Akiya, N. et al., “Seamless Bidirectional Forwarding Detection (BFD) for Segment Routing (SR)”; draft-akiya-bfd-seamless-sr-01; Internet Engineering Task Force; Internet-Draft; Dec. 5, 2013; 7 pages.
Akiya, N. et al., “Seamless Bidirectional Forwarding Detection (BFD) for Segment Routing (SR)”; draft-akiya-bfd-seamless-sr-02; Internet Engineering Task Force; Internet-Draft; Jun. 7, 2014; 7 pages.
Akiya, N. et al., “Seamless Bidirectional Forwarding Detection (BFD) for Segment Routing (SR)”; draft-akiya-bfd-seamless-sr-03; Internet Engineering Task Force; Internet-Draft; Aug. 23, 2014; 7 pages.
Akiya, N. et al., “Seamless Bidirectional Forwarding Detection (BFD) for Segment Routing (SR)”; draft-akiya-bfd-seamless-sr-04; Internet Engineering Task Force; Internet-Draft; Feb. 23, 2015; 7 pages.
Akiya, N., “Segment Routing Implications on BFD”; Sep. 9, 2013; 3 pages.
Alcatel-Lucent, “Segment Routing and Path Computation Element—Using Traffic Engineering to Optimize Path Placement and Efficiency in IP/MPLS Networks”; Technology White Paper; 2015; 28 pages.
Aldrin, S., et al., “Seamless Bidirectional Forwarding Detection (S-BFD) Use Cases”; draft-ietf-bfd-seamless-use-case-08; Network Working Group; Internet-Draft; May 6, 2016; 15 pages.
Awduche, Daniel O., et al., “RSVP-TE: Extensions to RSVP for LSP Tunnels,” Network Working Group, Internet-Draft, Aug. 2000, pp. 1-12.
Awduche, Daniel O., et al., “RSVP-TE: Extensions to RSVP for LSP Tunnels,” Network Working Group, Request for Comments 3209, Dec. 2001, pp. 1-61.
Awduche, D. et al., “Requirements for Traffic Engineering Over MPLS”; Network Working Group; Request for Comments: 2702; Sep. 1999; pp. 1-29.
Awduche, D. et al., “Overview and Principles of Internet Traffic Engineering”; Network Working Group; Request for Comments: 3272; May 2002; pp. 1-71.
Backes, P. and Rudiger Geib, “Deutsche Telekom AG's Statement About IPR Related to Draft-Geib-Spring-OAM-Usecase-01,” Feb. 5, 2014, pp. 1-2.
Bryant, S. et al., Cisco Systems, “IP Fast Reroute Using Tunnels-draft-bryant-ipfrr-tunnels-03”, Network Working Group, Internet-Draft, Nov. 16, 2007, pp. 1-30.
Bryant, S., et al., Cisco Systems, “Remote LFA FRR,” draft-ietf-rtgwg-remote-lfa-04, Network Working Group, Internet-Draft, Nov. 22, 2013, pp. 1-24.
Cisco Systems, Inc., “Introduction to Intermediate System-to-Intermediate System Protocol,” published 1992-2002; pp. 1-25.
Crabbe, E., et al., “PCEP Extensions for MPLS-TE LSP Protection With Stateful PCE Draft-Crabbe-PCE-Stateful-PCE-Protection-00,” Network Working Group, Internet-Draft, Oct. 2012, pp. 1-12.
Crabbe, E., et al., Stateful PCE Extensions for MPLS-TE LSPs, draft-crabbe-pce-stateful-pce-mpls-te-00; Network Working Group, Internet-Draft, Oct. 15, 2012, pp. 1-15.
Deering, S., et al., Cisco, Internet Protocol, Version 6 (IPv6) Specification, Network Working Group, Request for Comments 2460, Dec. 1998, pp. 1-39.
Eckert, T., “Traffic Engineering for Bit Index Explicit Replication BIER-TE, draft-eckert-bier-te-arch-00,” Network Working Group, Internet-Draft, Mar. 5, 2015, pp. 1-21.
Eckert, T., et al., “Traffic Engineering for Bit Index Explicit Replication BIER-TE, draft-eckert-bier-te-arch-01,” Network Working Group, Internet-Draft, Jul. 5, 2015, pp. 1-23.
Farrel, A., et al., Old Dog Consulting, A Path Computation Element (PCE)—Based Architecture, Network Working Group, Request for Comments 4655, Aug. 2006, pp. 1-80.
Farrel, A., et al., Old Dog Consulting, Inter-Domain MPLS and GMPLS Traffic Engineering—Resource Reservation Protocol—Traffic Engineering (RSVP-TE) Extensions, Network Working Group, Request for Comments 5151, Feb. 2008, pp. 1-25.
Fedyk, D., et al., Alcatel-Lucent, Generalized Multiprotocol Label Switching (GMPLS) Control of Ethernet Provider Backbone Traffic Engineering (PBB-TE), Internet Engineering Task Force (IETF), Request for Comments 6060, Mar. 2011, pp. 1-20.
Filsfils, C., et al., Cisco Systems, Inc., “Segment Routing Architecture,” draft-filsfils-rtgwg-segment-routing-00, Jun. 28, 2013; pp. 1-28.
Filsfils, C., et al., Cisco Systems, Inc., “Segment Routing Architecture”; draft-filsfils-rtgwg-segment-routing-01, Network Working Group, Internet-Draft, Oct. 21, 2013, pp. 1-28.
Filsfils, C. et al., Cisco Systems, Inc., “Segment Routing Interoperability with LDP”; draft-filsfils-spring-segment-routing-ldp-interop-01.txt; Apr. 18, 2014, pp. 1-16.
Filsfils, C. et al., “Segment Routing Architecture”; draft-ietf-spring-segment-routing-07; Network Working Group, Internet-Draft; Dec. 15, 2015; pp. 1-24.
Filsfils, C. et al.; “Segment Routing Use Cases”; draft-filsfils-rtgwg-segment-routing-use-cases-01; Network Working Group; Internet-Draft; Jul. 14, 2013; pp. 1-46.
Filsfils, C. et al., “Segment Routing Use Cases”, draft-filsfils-rtgwg-segment-routing-use-cases-02; Network Working Group; Internet-Draft; Oct. 21, 2013; pp. 1-36.
Filsfils, C. et al., “Segment Routing with MPLS Data Plane”, draft-ietf-spring-segment-routing-mpls-05; Network Working Group; Internet-Draft; Jul. 6, 2016; 15 pages.
Frost, D., et al., Cisco Systems, Inc., “MPLS Generic Associated Channel (G-Ach) Advertisement Protocol,” draft-ietf-mpls-gach-adv-00, Internet-Draft, Jan. 27, 2012, pp. 1-17.
Frost, D., et al., Cisco Systems, Inc., “MPLS Generic Associated Channel (G-Ach) Advertisement Protocol,” draft-ietf-mpls-gach-adv-08, Internet-Draft, Jun. 7, 2013, pp. 1-22.
Frost, D., et al., Cisco Systems, Inc., “MPLS Generic Associated Channel (G-Ach) Advertisement Protocol,” Request for Comments 7212, Jun. 2014, pp. 1-23.
Geib, R., “Segment Routing Based OAM Use Case,” IETF 87, Berlin, Jul./Aug. 2013, pp. 1-3.
Geib, R., Deutsche Telekom, “Use Case for a Scalable and Topology Aware MPLS data plane monitoring System,” draft-geib-spring-oam-usecase-00; Internet-Draft, Oct. 17, 2013, pp. 1-7.
Geib, R., Deutsche Telekom, “Use Case for a Scalable and Topology Aware MPLS Data Plane Monitoring System,” draft-geib-spring-oam-usecase-01; Internet-Draft, Feb. 5, 2014, pp. 1-10.
Gredler, H., et al., Juniper Networks, Inc., “Advertising MPLS Labels in IS-IS draft-gredler-isis-label-advertisement-00,” Internet-Draft; Apr. 5, 2013; pp. 1-13.
Gredler, H. et al., hannes@juniper.net, IETF87, Berlin, “Advertising MPLS LSPs in the IGP,” draft-gredler-ospf-label-advertisement, May 21, 2013; pp. 1-14.
Guilbaud, Nicolas and Ross Cartlidge, “Google - Localizing Packet Loss in a Large Complex Network,” Feb. 5, 2013, pp. 1-43.
Imaizumi, H., et al.; Networks, 2005; “FMEHR: An Alternative Approach to Multi-Path Forwarding on Packet Switched Networks,” pp. 196-201.
Kompella, K. et al, Juniper Networks, “Label Switched Paths (LSP) Hierarchy with Generalized Multi-Protocol Label Switching (GMPLS) Traffic Engineering (TE),” Network Working Group, Request for Comments 4206, Oct. 2005, pp. 1-14.
Kompella, K., et al., Juniper Networks, Inc., “Detecting Multi-Protocol Label Switched (MPLS) Data Plane Failures,” Network Working Group, Request for Comments 4379, Feb. 2006, pp. 1-50.
Kompella, K. et al., Juniper Networks,“Virtual Private LAN Service (VPLS) Using BGP for Auto-Discovery and Signaling,” Network Working Group, Request for Comments 4761, Jan. 2007, pp. 1-28.
Kumar, N. et al., Cisco Systems, Inc., “Label Switched Path (LSP) Ping/Trace for Segment Routing Networks Using MPLS Dataplane,” draft-kumar-mpls-spring-lsp-ping-00, Oct. 21, 2013, pp. 1-12.
Kumar, N. et al., “Label Switched Path (LSP) Ping/Trace for Segment Routing Networks Using MPLS Dataplane,” draft-kumarkini-mpls-spring-lsp-ping-00; Network Working Group; Internet-Draft; Jan. 2, 2014, pp. 1-15.
Kumar, N. et al., “OAM Requirements for Segment Routing Network”; draft-kumar-spring-sr-oam-requirement-00; Spring; Internet-Draft; Feb. 14, 2014; 6 pages.
Kumar, N. et al., “OAM Requirements for Segment Routing Network”; draft-kumar-spring-sr-oam-requirement-01; Spring; Internet-Draft; Jul. 1, 2014; 6 pages.
Kumar, N. et al., “OAM Requirements for Segment Routing Network”; draft-kumar-spring-sr-oam-requirement-02; Spring; Internet-Draft; Dec. 31, 2014; 6 pages.
Kumar, N. et al., “OAM Requirements for Segment Routing Network”; draft-kumar-spring-sr-oam-requirement-03; Spring; Internet-Draft; Mar. 9, 2015; 6 pages.
Kumar, N. et al., “Label Switched Path (LSP) Ping/Trace for Segment Routing Networks Using MPLS Dataplane”, draft-ietf-mpls-spring-lsp-ping-00; Network Working Group; Internet Draft; May 10, 2016; 17 pages.
Pignataro, C. et al., “Seamless Bidirectional Forwarding Detection (S-BFD) for IPv4, IPv6 and MPLS”, draft-ietf-bfd-seamless-ip-06; Internet Engineering Task Force; Internet-Draft; May 6, 2016; 8 pages.
Pignataro, C. et al., “Seamless Bidirectional Forwarding Detection (S-BFD)”; draft-ietf-bfd-seamless-base-11; Internet Engineering Task Force; Internet-Draft; May 6, 2016; 21 pages.
Previdi, S. et al., Cisco Systems, Inc., “Segment Routing with IS-IS Routing Protocol, draft-previdi-filsfils-isis-segment-routing-00,” IS-IS for IP Internets, Internet-Draft, Mar. 12, 2013, pp. 1-27.
Previdi, S. et al., Cisco Systems, Inc., “Segment Routing with IS-IS Routing Protocol, draft-previdi-filsfils-isis-segment-routing-02,” Internet-Draft, Mar. 20, 2013, pp. 1-27.
Previdi, S. et al., “IS-IS Extensions for Segment Routing”; draft-ietf-isis-segment-routing-extensions-05; IS-IS for IP Internets, Internet-Draft; Jun. 30, 2015; pp. 1-37.
Previdi, S. et al., “IS-IS Extensions for Segment Routing”; draft-ietf-isis-segment-routing-extensions-06; IS-IS for IP Internets, Internet-Draft; Dec. 14, 2015; pp. 1-39.
Psenak, P., et al. “OSPF Extensions for Segment Routing”, draft-ietf-ospf-segment-routing-extensions-05; Open Shortest Path First IGP; Internet-Draft; Jun. 26, 2015; pp. 1-29.
Raszuk, R., NTT I3, “MPLS Domain Wide Labels,” draft-raszuk-mpls-domain-wide-labels-00, MPLS Working Group, Internet-Draft, Jul. 14, 2013, pp. 1-6.
Rosen, E. et al., Cisco Systems, Inc., “BGP/MPLS VPNs”, Network Working Group, Request for Comments: 2547; Mar. 1999, pp. 1-26.
Sivabalan, S., et al.; “PCE-Initiated Traffic Engineering Path Setup in Segment Routed Networks; draft-sivabalan-pce-segmentrouting-00.txt,” Internet Engineering Task Force, IETF; Standard Working Draft, Internet Society (ISOC) 4, Rue Des Falaises CH-1205, Geneva, Switzerland, Jun. 2013, pp. 1-16.
Li, T., et al., Redback Networks, Inc., “IS-IS Extensions for Traffic Engineering,” Network Working Group, Request for Comments 5305, Oct. 2008, 17 pages.
Tian, Albert J. et al., Redback Networks, “Source Routed MPLS LSP Using Domain Wide Label, draft-tian-mpls-lsp-source-route-01.txt”, Network Working Group, Internet Draft, Jul. 2004, pp. 1-12.
Vasseur, JP, et al.; Cisco Systems, Inc. “A Link-Type Sub-TLV to Convey the Number of Traffic Engineering Label Switched Paths Signaled with Zero Reserved Bandwidth Across a Link,” Network Working Group, Request for Comments 5330; Oct. 2008, 16 pages.
Vasseur, JP, et al.; Cisco Systems, Inc. Path Computation Element (PCE) Communication Protocol (PCEP): Request for Comments: 5440, Internet Engineering Task Force, IETF; Standard, Internet Society (ISOC) 4, Rue Des Falaises CH-1205, Geneva, Switzerland, chapters 4-8, Mar. 2009; pp. 1-87.
Wijnands, Ijsbrand and Bob Thomas, Cisco Systems, Inc.; Yuji Kamite and Hitoshi Fukuda, NTT Communications; “Multicast Extensions for LDP;” Network Working Group; Internet Draft; Mar. 2005; pp. 1-12.
Rosen, E. et al., Cisco Systems, Inc., “Multiprotocol Label Switching Architecture”, Network Working Group, Request for Comments: 3031; Jan. 2001, pp. 1-61.
Related Publications (1)
Number Date Country
20170111277 A1 Apr 2017 US
Provisional Applications (1)
Number Date Country
61791242 Mar 2013 US
Continuations (1)
Number Date Country
Parent 14211101 Mar 2014 US
Child 15394169 US