The invention relates to computer networks and, more particularly, to engineering traffic flows within computer networks.
Routing devices within a network, often referred to as routers, maintain routing information that describes available routes through the network. Upon receiving an incoming packet, the router examines information within the packet and forwards the packet in accordance with the routing information. In order to maintain an accurate representation of the network, routers exchange routing information in accordance with one or more defined routing protocols, such as the Border Gateway Protocol (BGP).
Multi-Protocol Label Switching (MPLS) is a suite of protocols used to engineer traffic patterns within Internet Protocol (IP) networks. By utilizing MPLS, a source device can request a path through a network to a destination device, i.e., a Label Switched Path (LSP). An LSP defines a distinct path through the network to carry MPLS packets from the source device to the destination device. Each router along an LSP allocates a label and propagates the label to the closest upstream router along the path. Routers along the path cooperatively perform MPLS operations to forward the MPLS packets along the established path. A variety of protocols exist for establishing LSPs, for example, the Label Distribution Protocol (LDP) and the Resource Reservation Protocol with Traffic Engineering extensions (RSVP-TE).
Some implementations make use of Point to Multi-Point (P2MP) LSPs in which a path is established through a network from a source device to multiple destination devices. P2MP LSPs are commonly used, for example, to distribute multicast data or to implement virtual private networks (VPNs). In the case of a P2MP LSP, one or more of the routers along the path may comprise branch routers located at points where the path divides. In addition to performing MPLS operations to forward the MPLS multicast packets along the path, the branch routers perform replication of the packets such that each branch of the P2MP LSP continues to carry copies of the multicast packets.
In general, P2MP LSP construction follows a source-initiated signaling model in which the source device executes a label distribution protocol, such as RSVP-TE, to signal a different point-to-point (P2P) LSP for each destination device (leaf node). The P2P LSPs, referred to as source-to-leaf (S2L) sub-LSPs, each provide a label switched path from the source device to a different, corresponding destination device. For example, the source device may signal the P2P sub-LSPs and combine the sub-LSPs to form the P2MP LSP. Techniques for forming a P2MP LSP using source-to-leaf sub-LSPs are described in RFC 4875, “Extensions to Resource Reservation Protocol—Traffic Engineering (RSVP-TE) for Point-to-Multipoint TE Label Switched Paths (LSPs),” IETF, May 2007, the entire contents of which are incorporated herein by reference.
In general, techniques are described for establishing a point-to-multipoint (P2MP) label switched path (LSP) using a branch node-initiated signaling model in which branch node to leaf (B2L) sub-LSPs are signaled and utilized to form a P2MP LSP. For example, a P2MP LSP may be formed in which the B2L sub-LSPs at each level are said to be ‘attached’ to a sub-LSP at a higher-level branch node or the source node. In general, a branch node to leaf (B2L) sub-LSP refers to a P2P LSP that is signaled from a branch node to a leaf node.
In one example, a centralized path computation element (PCE) may compute explicit route objects (EROs) for the S2L and B2L sub-LSPs, and send the EROs to the source node and branch nodes, respectively, via a Path Computation Element (PCE) Communication Protocol (PCEP). The source node and branch nodes in turn signal the S2L sub-LSPs and B2L sub-LSPs separately. After the sub-LSPs are set up, the source node may merge all S2L sub-LSPs by building a flood next-hop. Each branch node attaches its locally initiated, lower-level sub-LSPs to the associated higher-level sub-LSP by adding a branch next-hop to the flood next-hop of that higher-level sub-LSP.
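For purposes of illustration only, the resulting hierarchy of sub-LSPs can be pictured with a small data structure such as the following Python sketch; the class and field names are assumptions made for this description and do not correspond to any protocol encoding.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SubLSP:
    """One P2P sub-LSP of the P2MP LSP: S2L at level 0, B2L at levels > 0."""
    name: str                          # an LSP identifier
    ingress: str                       # source node (level 0) or initiating branch node
    egress: str                        # leaf node
    level: int                         # 0 for S2L sub-LSPs; M for B2L sub-LSPs attached at level M-1
    attaches_to: Optional[str] = None  # name of the higher-level sub-LSP joined at the ingress

@dataclass
class P2MPHierarchy:
    """P2MP LSP expressed as a hierarchy of P2P sub-LSPs."""
    sub_lsps: List[SubLSP] = field(default_factory=list)

    def at_level(self, level: int) -> List[SubLSP]:
        """Return all sub-LSPs signaled at the given level of the hierarchy."""
        return [s for s in self.sub_lsps if s.level == level]
```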
In one example, a method comprises receiving, with a controller, a request for a point-to-multipoint (P2MP) label switched path (LSP) from a source node through one or more branch nodes to a plurality of leaf nodes within a network. The method comprises determining, with the controller, a hierarchy of point-to-point (P2P) LSPs, wherein a first level of the hierarchy includes at least one source-to-leaf (S2L) P2P LSP from the source node as an ingress for the S2L P2P LSP through one or more of the branch nodes to a first one of the leaf nodes as an egress for the S2L P2P LSP, and wherein each remaining level of the hierarchy includes at least one branch-to-leaf (B2L) P2P LSP from one of the branch nodes as an ingress for the B2L P2P LSP to a different one of the leaf nodes as an egress for the B2L P2P LSP. The method further comprises outputting messages, with the controller, to direct the source node, the one or more branch nodes and the plurality of leaf nodes to signal the hierarchy of P2P LSPs to form the P2MP LSP.
In another example, a device comprises a network interface to receive a request for a point-to-multipoint (P2MP) label switched path (LSP) from a source node through one or more branch nodes to a plurality of leaf nodes within a network. The device includes a path computation module executing on one or more processors. The path computation module determines a hierarchy of point-to-point (P2P) sub-LSPs. A first level of the hierarchy includes at least one P2P source-to-leaf (S2L) sub-LSP from the source node as an ingress for the P2P S2L sub-LSP through one or more of the branch nodes to a first one of the leaf nodes as an egress for the P2P S2L sub-LSP. Each remaining level of the hierarchy of P2P sub-LSPs includes at least one P2P branch-to-leaf (B2L) sub-LSP from one of the branch nodes as an ingress for the P2P B2L sub-LSP to a different one of the leaf nodes as an egress for the P2P B2L sub-LSP. A path provisioning module of the device outputs messages to direct the source node, the one or more branch nodes and the plurality of leaf nodes to signal the hierarchy of P2P sub-LSPs to form the P2MP LSP.
In another example, a computer-readable storage medium comprises instructions that cause a network device to receive a request for a point-to-multipoint (P2MP) label switched path (LSP) from a source node through one or more branch nodes to a plurality of leaf nodes within a network. The instructions cause the device to determine a hierarchy of point-to-point (P2P) sub-LSPs. A first level of the hierarchy includes at least one P2P source-to-leaf (S2L) sub-LSP from the source node as an ingress for the P2P S2L sub-LSP through one or more of the branch nodes to a first one of the leaf nodes as an egress for the P2P S2L sub-LSP. Each remaining level of the hierarchy includes at least one P2P branch-to-leaf (B2L) sub-LSP from one of the branch nodes as an ingress for the P2P B2L sub-LSP to a different one of the leaf nodes as an egress for the P2P B2L sub-LSP. The instructions cause the device to output messages to direct the source node, the one or more branch nodes and the plurality of leaf nodes to signal the hierarchy of P2P sub-LSPs to form the P2MP LSP.
The techniques may provide certain advantages. For example, the techniques described herein provide a scalable solution in which the number of sub-LSPs for which the source node or any given branch node need maintain state is equal to the number of physical data flows output from that node to downstream nodes, i.e., the number of output interfaces used for the P2MP LSP by that node to output data flows to downstream nodes. As such, unlike the conventional source node-initiated model in which each node maintains state for sub-LSPs that service each of the leaf nodes downstream from the device, the size and scalability of a P2MP LSP are no longer bound to the number of leaves that are downstream from that node. Hence, the signaling efficiency and scalability of the techniques described herein may be significantly higher than with the source node-initiated signaling model. Further, the chance of any remerge condition and cross-over may be significantly reduced.
The details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention will be apparent from the description and drawings, and from the claims.
Source network 11 may comprise any public or private network or the Internet. Subscriber networks 18 may include local area networks (LANs) or wide area networks (WANs) that comprise a plurality of subscriber devices. The subscriber devices may include personal computers, laptops, workstations, personal digital assistants (PDAs), wireless devices, network-ready appliances, file servers, print servers or other devices that access source network 11 via source router 12A. In some cases, the subscriber devices request multicast streams, such as IPTV channels, from source network 11.
As described herein, routers 14A-14L (“routers 14”) establish P2MP LSP 16 using a branch node-initiated signaling model in which branch node to leaf (B2L) sub-LSPs are signaled and utilized to form the P2MP LSP. For example, P2MP LSP 16 having a plurality of “levels” may be formed in which the B2L sub-LSPs at each level are “attached” to a sub-LSP at a higher-level branch node or the source node (router 14A). In general, a branch to leaf (B2L) sub-LSP refers to a point-to-point (P2P) LSP that is signaled from a branch node to a leaf node. In the example of
In one example, controller 25 operates as a centralized path computation element to compute paths for all of the S2L and B2L sub-LSPs used to form P2MP LSP 16. Controller 25 may send explicit route objects (EROs) to each of source router 14A and branch routers 14B, 14C, and 14G via a Path Computation Element (PCE) Communication Protocol (PCEP), where each of the EROs specifies a particular route from the source router or the branch router to one of the leaf nodes. The source node (router 14A) and the branch nodes (routers 14B, 14C, and 14G) in turn signal the P2P S2L sub-LSPs and B2L sub-LSPs separately along the routes specified by the EROs. In this example, controller 25 may output an ERO directing source router 14A to signal a P2P S2L sub-LSP 18A from source router 14A to leaf router 14D and an ERO directing source router 14A to signal a P2P S2L sub-LSP 18B from source router 14A to leaf router 14L. In addition, controller 25 may output an ERO directing branch router 14B to signal a branch-initiated P2P B2L sub-LSP 18C from branch router 14B to leaf router 14F, an ERO directing branch router 14C to signal a branch-initiated P2P B2L sub-LSP 18D from branch router 14C to leaf router 14H, and an ERO directing branch router 14G to signal a branch-initiated P2P B2L sub-LSP 18E from branch router 14G to leaf router 14J.
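Conceptually, each such directive from controller 25 amounts to “signal one P2P sub-LSP along this explicit route.” The Python sketch below illustrates that idea only; it is not the PCEP object format, and the identifier and field names are assumptions (the hops shown for sub-LSPs 18A and 18C follow the example described later in this disclosure).

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SubLSPRequest:
    """Directive sent by the controller to the ingress of one P2P sub-LSP."""
    p2mp_lsp_id: str      # shared identifier later used to stitch the sub-LSPs together
    ingress: str          # node that must signal the sub-LSP (source node or branch node)
    leaf: str             # egress leaf node of the sub-LSP
    ero_hops: List[str]   # explicit route the sub-LSP must follow

# Hypothetical directives corresponding to two of the sub-LSPs in the example:
requests = [
    SubLSPRequest("P2MP-16", "14A", "14D", ["14B", "14C", "14D"]),  # S2L sub-LSP 18A
    SubLSPRequest("P2MP-16", "14B", "14F", ["14E", "14F"]),         # B2L sub-LSP 18C
]
```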
After the sub-LSPs are set up, source node 14A internally merges all S2L sub-LSPs (i.e., S2L sub-LSPs 18A, 18B) by constructing a flood next-hop that floods traffic from source network 11 to the S2L sub-LSPs. Separately, each branch node attaches its branch-initiated B2L sub-LSPs to the associated higher-level S2L or B2L sub-LSPs that traverse the branch node by adding a branch next-hop to the flood next-hop of that higher-level sub-LSP.
In this way, routers 14 establish P2MP LSP 16 using a branch node-initiated signaling model in which branch node to leaf (B2L) sub-LSPs are signaled by the branch nodes (routers 14B, 14C and 14G in this example) and utilized to form the P2MP LSP. The techniques may provide certain advantages. For example, the number of sub-LSPs for which each node within P2MP LSP 16 need maintain state is equal to the number of physical data flows output from that node to downstream nodes, i.e., the number of output interfaces used for the P2MP LSP by that node to output data flows to downstream nodes. As such, unlike the conventional source node-initiated model in which the source device maintains state for sub-LSPs for each of the leaf nodes, the size and scalability of a P2MP LSP are no longer bound to the number of leaves that are downstream from that node.
That is, in this example, source node 14A need only maintain control plane state information associated with signaling two P2P sub-LSPs, i.e., P2P S2L sub-LSPs 18A and 18B. Branch nodes 14B, 14C and 14G maintain control plane state information associated with B2L sub-LSPs 18C, 18D, and 18E, respectively, for which the node operates as an ingress and the higher-level sub-LSP to which the sub-LSP attaches. For example, router 14B need only maintain state for B2L sub-LSP 18C for which the router operates as an ingress and S2L sub-LSP 18A to which sub-LSP 18C attaches. As such, the source node (router 14A) for P2MP LSP 16 need not signal and maintain state for five separate sub-LSPs to the leaf nodes (routers 14F, 14J, 14H, 14D and 14L). Similarly, router 14B need not maintain state for B2L sub-LSPs 18D, 18E even though those sub-LSPs service leaf nodes downstream from router 14B. As another example, router 14C need not maintain state for B2L sub-LSP 18E even though the sub-LSP services the leaf node (router 14J) downstream from router 14C. Thus, the size and scalability of P2MP LSP 16 are no longer bound to the number of leaves that are downstream from the source node or any given branch node. Hence, the signaling efficiency and scalability of the techniques described herein may be significantly higher than with the source node-initiated signaling model. Further, since controller 25 may be able to perform path computation for all sub-LSPs based on a global view of network 10 and provides centralized coordination between those sub-LSPs, the chance of any remerge condition and cross-over may be significantly reduced.
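The per-node state described in this example can be tallied directly from the sub-LSP list and the attachment relationships, as in the following illustrative Python sketch (the attachment of sub-LSP 18D to sub-LSP 18A is inferred from the topology described above; the representation itself is an assumption for this description).

```python
def state_per_node(sub_lsp_ingress, attaches_to):
    """sub_lsp_ingress: sub-LSP name -> node that signals it.
    attaches_to: branch-initiated sub-LSP name -> higher-level sub-LSP it joins.
    Each node keeps state for the sub-LSPs it originates plus, for each
    branch-initiated one, the higher-level sub-LSP it attaches to."""
    state = {}
    for name, ingress in sub_lsp_ingress.items():
        state.setdefault(ingress, set()).add(name)
        if name in attaches_to:
            state[ingress].add(attaches_to[name])
    return {node: sorted(names) for node, names in state.items()}

sub_lsp_ingress = {"18A": "14A", "18B": "14A", "18C": "14B", "18D": "14C", "18E": "14G"}
attaches_to = {"18C": "18A", "18D": "18A", "18E": "18D"}
print(state_per_node(sub_lsp_ingress, attaches_to))
# {'14A': ['18A', '18B'], '14B': ['18A', '18C'], '14C': ['18A', '18D'], '14G': ['18D', '18E']}
```

Each node ends up with state for exactly two sub-LSPs, matching the two output interfaces it uses for P2MP LSP 16.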
In this example, controller 25 computes a first level (Level 0) of S2L P2P sub-LSPs 18A and 18B originating from source node 14A. Controller 25 computes a second level (Level 1) of B2L sub-LSPs 18C and 18D that are to be attached to one of the level 0 sub-LSPs. In the example of
To complete P2MP LSP 16, controller 25 computes a third level (Level 2) having a single B2L sub-LSP 18E that attaches to Level 1 sub-LSP 18D. In this way, controller 25 computes N levels of P2P sub-LSPs, where at any level M, except Level 0, the P2P sub-LSPs are branch-node initiated and attach to a sub-LSP of level M−1. Controller 25 may determine the set of B2L LSPs at each level so as to reduce the maximum state to be maintained at any given source or branch node.
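One way to picture this level-by-level decomposition is as a walk over the desired P2MP distribution tree: at the source, one S2L sub-LSP is created per outgoing branch, and at every other node with multiple outgoing branches, one branch continues the sub-LSP already passing through while each additional branch becomes a new B2L sub-LSP initiated at that node, one level deeper. The Python sketch below shows this idea under the assumption that the desired tree is already known; it is only an illustration, not the computation actually performed by controller 25, and the intermediate hops in the example tree are assumed.

```python
def decompose(tree, source):
    """tree: {node: [children]} describing the desired P2MP distribution tree.
    Returns (ingress, leaf, level) tuples, one per P2P sub-LSP: level 0 sub-LSPs
    are S2L sub-LSPs signaled by the source; deeper levels are B2L sub-LSPs
    signaled by the branch node at which they attach."""
    sub_lsps = []

    def follow(node, ingress, level):
        # Follow one branch of the tree until a leaf; extra branches met along
        # the way become new, branch-initiated sub-LSPs one level deeper.
        while tree.get(node):
            children = tree[node]
            for extra in children[1:]:
                follow(extra, node, level + 1)
            node = children[0]
        sub_lsps.append((ingress, node, level))

    for child in tree.get(source, []):   # one S2L sub-LSP per branch at the source
        follow(child, source, 0)
    return sub_lsps

# Tree loosely matching the example (intermediate hops are assumed):
tree = {"14A": ["14B", "14K"], "14B": ["14C", "14E"], "14C": ["14D", "14G"],
        "14E": ["14F"], "14G": ["14H", "14I"], "14I": ["14J"], "14K": ["14L"]}
print(decompose(tree, "14A"))
# Five sub-LSPs corresponding to 18C, 18E, 18D, 18A and 18B:
# [('14B', '14F', 1), ('14G', '14J', 2), ('14C', '14H', 1),
#  ('14A', '14D', 0), ('14A', '14L', 0)]
```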
Controller 25 includes a control unit 27 coupled to a network interface 29 to exchange packets with routers 14 and other network devices of network system 10. Control unit 27 may include one or more processors (not shown in
Control unit 27 provides an operating environment for network services applications 30 and path computation element 32, which includes topology module 42, path computation module 44, and path provisioning module 46. In one example, these modules may be implemented as one or more processes executing on one or more virtual machines of one or more servers. Moreover, while generally illustrated and described as executing on a single controller 25, aspects of these modules may be delegated to other computing devices.
Network services applications 30 represent one or more processes that manage and coordinate services provided to clients or customers of network system 10. Network services applications 30 may include, for instance, Voice-over-IP (VoIP), Video-on-Demand (VOD), bulk transport, walled/open garden, IP Mobility Subsystem (IMS) and other mobility services, and Internet services to clients of the service provider network. In response to the needs of the subscriber devices, network services applications 30 may require services provided by path computation element 32, such as node management, session management, and policy enforcement. Moreover, network services applications 30 may require path computation element 32 to establish transport LSPs, such as P2MP LSP 16, through network system 10 for delivery of the services.
Network services applications 30 issue path requests to path computation element 32 to request transport LSPs (e.g., P2MP LSP 16) in a path computation domain (network system 10) controlled by controller 25. In general, a path request may specify a required bandwidth or other constraint and endpoints representing a source node and one or more edge nodes that communicate over the path computation domain managed by controller 25. Path requests may further specify time/date during which paths must be operational and CoS parameters (for instance, bandwidth required per class for certain paths).
Path computation element 32 accepts path requests from network services applications 30 to establish paths between the endpoints over the path computation domain. Paths may be requested for different times and dates and with disparate bandwidth requirements. Path computation element 32 may reconcile path requests from network services applications 30 to multiplex requested paths onto the path computation domain based on requested path parameters and anticipated network resource availability.
To intelligently compute and establish paths through the path computation domain, path computation element 32 may include a topology module 42 to receive and store topology information describing available resources of the path computation domain, including routers 14 and interconnecting communication links.
Path computation module 44 of path computation element 32 computes requested paths through the path computation domain. As explained herein, path computation module 44 may establish P2MP LSP 16 by computing and directing routers 14 to signal branch node to leaf (B2L) sub-LSPs to form the P2MP LSP. For example, path computation module 44 may compute paths for all of the S2L and B2L sub-LSPs used to form P2MP LSP 16. In response, path provisioning module 46 may send explicit route objects (EROs) to each of source router 14A and branch routers 14B, 14C, and 14G via a Path Computation Element (PCE) Communication Protocol (PCEP), where each of the EROs specifies a particular route from the source router or the branch router to one of the leaf nodes. The source node (router 14A) and the branch nodes (routers 14B, 14C, and 14G) in turn signal the P2P S2L sub-LSPs and B2L sub-LSPs separately along the routes specified by the EROs. In this example, path provisioning module 46 may output an ERO directing source router 14A to signal a P2P S2L sub-LSP 18A from source router 14A to leaf router 14D and an ERO directing source router 14A to signal a P2P S2L sub-LSP 18B from source router 14A to leaf router 14L. In addition, path provisioning module 46 may output an ERO directing branch router 14B to signal a branch-initiated P2P B2L sub-LSP 18C from branch router 14B to leaf router 14F, an ERO directing branch router 14C to signal a branch-initiated P2P B2L sub-LSP 18D from branch router 14C to leaf router 14H, and an ERO directing branch router 14G to signal a branch-initiated P2P B2L sub-LSP 18E from branch router 14G to leaf router 14J.
Path computation module 44 includes data structures to store path information for computing and establishing requested paths. These data structures include constraints 54, path requirements 56, operational configuration 58, and path export 60. Network services applications 30 may invoke northbound API 50 to install/query data from these data structures. Constraints 54 represent a data structure that describes external constraints upon path computation. Constraints 54 allow network services applications 30 to, e.g., modify link attributes before path computation module 44 computes a set of paths. For example, Radio Frequency (RF) modules (not shown) may edit links to indicate that resources are shared between a group and that resources must be allocated accordingly. Network services applications 30 may modify attributes of a link to affect resulting traffic engineering computations in accordance with CCP. In such instances, link attributes may override attributes received from topology indication module 64 and remain in effect for the duration of the node/attendant port in the topology. A link edit message to constraints 54 may include a link descriptor specifying a node identifier and port index, together with link attributes specifying a bandwidth, expected time to transmit, shared link group, and fate shared group, for instance. The link edit message may be sent by the PCE.
Operational configuration 58 represents a data structure that provides configuration information to path computation element 32 to configure the path computation algorithm with respect to, for example, class of service (CoS) descriptors and detour behaviors. Operational configuration 58 may receive operational configuration information in accordance with CCP. An operational configuration message specifies CoS value, queue depth, queue depth priority, scheduling discipline, over provisioning factors, detour type, path failure mode, and detour path failure mode, for instance. A single CoS profile may be used for the entire path computation domain.
Path export 60 represents an interface that stores path descriptors for all paths currently committed or established in the path computation domain. In response to queries received via northbound API 50, path export 60 returns one or more path descriptors. Queries received may request paths between any two edge and access nodes terminating the path(s). Path descriptors may be used by network services applications 30 to set up forwarding configuration at the edge and access nodes terminating the path(s). A path descriptor may include an Explicit Route Object (ERO). A path descriptor or “path information” may be sent, responsive to a query from an interested party, in accordance with CCP. A path export message delivers path information including path type (primary or detour); bandwidth for each CoS value; and, for each node in the ordered path from ingress to egress, a node identifier, ingress label, and egress label.
Path requirements 56 represent an interface that receives path requests for paths to be computed by path computation module 44 and provides these path requests (including path requirements) to path engine 62 for computation. Path requirements 56 may be received in accordance with CCP, or may be handled by the PCE. In such instances, a path requirement message may include a path descriptor having an ingress node identifier and egress node identifier for the nodes terminating the specified path, along with request parameters including CoS value and bandwidth. A path requirement message may add to or delete from existing path requirements for the specified path.
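For illustration, the messages described in the preceding paragraphs (link edit, operational configuration, path export, and path requirement) might be modeled as simple records like the following Python sketch; the field names and types are assumptions made for this description and do not define a CCP or PCEP wire format.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class LinkEdit:                       # link edit message to constraints 54
    node_id: str
    port_index: int
    bandwidth: float
    expected_time_to_transmit: float
    shared_link_group: int
    fate_shared_group: int

@dataclass
class OperationalConfig:              # operational configuration 58
    cos_value: int
    queue_depth: int
    queue_depth_priority: int
    scheduling_discipline: str
    over_provisioning_factors: List[float]
    detour_type: str
    path_failure_mode: str
    detour_path_failure_mode: str

@dataclass
class PathHop:                        # one node of an ordered path, ingress to egress
    node_id: str
    ingress_label: Optional[int]
    egress_label: Optional[int]

@dataclass
class PathExport:                     # path export 60
    path_type: str                    # "primary" or "detour"
    bandwidth_per_cos: Dict[int, float]
    hops: List[PathHop] = field(default_factory=list)

@dataclass
class PathRequirement:                # path requirements 56
    ingress_node_id: str
    egress_node_id: str
    cos_value: int
    bandwidth: float
    delete: bool = False              # add to, or delete from, existing requirements
```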
Topology module 42 includes topology indication module 64 to handle topology discovery and, where needed, to maintain control channels between path computation element 32 and nodes of the path computation domain. Topology indication module 64 may include an interface to describe received topologies to path computation module 44.
Topology indication module 64 may use CCP topology discovery or some other topology discovery protocol to describe the path computation domain topology to path computation module 44. Using CCP topology discovery, topology indication module 64 may receive a list of node neighbors, with each neighbor including a node identifier, local port index, and remote port index, as well as a list of link attributes each specifying a port index, bandwidth, expected time to transmit, shared link group, and fate shared group, for instance.
Topology indication module 64 may communicate with a topology server, such as a routing protocol route reflector, to receive topology information for a network layer of the network. Topology indication module 64 may include a routing protocol process that executes a routing protocol to receive routing protocol advertisements, such as Open Shortest Path First (OSPF) or Intermediate System-to-Intermediate System (IS-IS) link state advertisements (LSAs) or Border Gateway Protocol (BGP) UPDATE messages. Topology indication module 64 may in some instances be a passive listener that neither forwards nor originates routing protocol advertisements. In some instances, topology indication module 64 may alternatively, or additionally, execute a topology discovery mechanism such as an interface for an Application-Layer Traffic Optimization (ALTO) service. Topology indication module 64 may therefore receive a digest of topology information collected by a topology server, e.g., an ALTO server, rather than executing a routing protocol to receive routing protocol advertisements directly.
In some examples, topology indication module 64 receives topology information that includes traffic engineering (TE) information. Topology indication module 64 may, for example, execute Intermediate System-to-Intermediate System with TE extensions (IS-IS-TE) or Open Shortest Path First with TE extensions (OSPF-TE) to receive TE information for advertised links. Such TE information includes one or more of the link state, administrative attributes, and metrics such as bandwidth available for use at various LSP priority levels of links connecting routers of the path computation domain. In some instances, topology indication module 64 executes BGP-TE to receive advertised TE information for inter-autonomous system and other out-of-network links. Additional details regarding executing BGP to receive TE information are found in U.S. patent application Ser. No. 13/110,987, filed May 19, 2011 and entitled “DYNAMICALLY GENERATING APPLICATION-LAYER TRAFFIC OPTIMIZATION PROTOCOL MAPS,” which is incorporated herein by reference in its entirety.
Traffic engineering database (TED) 72 stores, to a computer-readable storage medium (not shown), topology information received by topology indication module 64 for a network that constitutes a path computation domain for controller 25. TED 72 may include one or more link-state databases (LSDBs), where link and node data is received in routing protocol advertisements, received from a topology server, and/or discovered by link-layer entities such as an overlay controller and then provided to topology indication module 64. In some instances, an operator may configure traffic engineering or other topology information within TED 72 via a client interface.
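As a rough, illustrative picture of how such a database might be organized in memory, the sketch below folds per-link advertisements into a per-node adjacency map (the same shape consumed by the constrained-SPF sketch that follows); the representation and the advertisement values are assumptions for this description, not an actual LSDB format.

```python
from collections import defaultdict

def build_ted(advertisements):
    """advertisements: iterable of (node, neighbor, metric, available_bandwidth)
    tuples, e.g. extracted from IGP-TE link-state advertisements.
    Returns a per-node adjacency map usable for constrained path computation."""
    ted = defaultdict(list)
    for node, neighbor, metric, bandwidth in advertisements:
        ted[node].append((neighbor, metric, bandwidth))
    return dict(ted)

# Hypothetical advertisements for a few of the links in the example network:
ted = build_ted([
    ("14A", "14B", 10, 1000.0),
    ("14B", "14C", 10, 1000.0),
    ("14B", "14E", 20, 500.0),
])
```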
Path engine 62 accepts the current topology snapshot of the path computation domain in the form of TED 72 and computes, using TED 72, CoS-aware traffic-engineered paths between nodes as indicated by configured node-specific policy (constraints 54) and/or through dynamic networking with external modules via APIs. Path engine 62 may further compute detours for all primary paths on a per-CoS basis according to configured failover and capacity requirements (as specified in operational configuration 58 and path requirements 56, respectively).
In general, to compute a requested path, path engine 62 determines, based on TED 72 and all specified constraints, whether there exists a path in the layer that satisfies the TE specifications for the requested path for the duration of the requested time. Path engine 62 may use the Dijkstra constrained SPF (CSPF) path computation algorithm for identifying satisfactory paths through the path computation domain. If there are no TE constraints, path engine 62 may revert to SPF. If a satisfactory computed path for the requested path exists, path engine 62 provides a path descriptor for the computed path to path manager 76 to establish the path using path provisioning module 46. A path computed by path engine 62 may be referred to as a “computed” path, until such time as path provisioning module 46 programs the scheduled path into the network, whereupon the scheduled path becomes an “active” or “committed” path. A scheduled or active path is a temporarily dedicated bandwidth channel for the scheduled time in which the path is, or is to become, operational to transport flows.
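A minimal sketch of the CSPF step, assuming bandwidth as the single constraint: prune links that cannot satisfy the constraint and run Dijkstra's algorithm over what remains. This is a simplified stand-in for path engine 62, not its actual implementation, and it uses the illustrative TED shape shown earlier.

```python
import heapq

def cspf(ted, src, dst, min_bandwidth):
    """ted: {node: [(neighbor, metric, available_bandwidth), ...]}.
    Returns the lowest-metric path from src to dst that uses only links
    satisfying the bandwidth constraint, or None if no such path exists."""
    queue = [(0, src, [src])]          # (accumulated metric, node, path so far)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, metric, bandwidth in ted.get(node, []):
            # Constrained SPF: ignore links that cannot carry the requested bandwidth.
            if bandwidth >= min_bandwidth and neighbor not in visited:
                heapq.heappush(queue, (cost + metric, neighbor, path + [neighbor]))
    return None

# Example: cspf(ted, "14A", "14D", min_bandwidth=100) returns the cheapest path
# whose links all offer at least 100 units of available bandwidth, if one exists.
```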
Path manager 76 establishes computed scheduled paths using path provisioning module 46, which in this instance includes forwarding information base (FIB) configuration module 66 (illustrated as “FIB CONFIG. 66”), policer configuration module 68 (illustrated as “POLICER CONFIG. 68”), and CoS scheduler configuration module 70 (illustrated as “COS SCHEDULER CONFIG. 70”).
FIB configuration module 66 may program forwarding information to data planes of nodes of the path computation domain, e.g., network system 10. For example, in the event controller 25 allocates MPLS labels for the entire P2MP LSP 16, FIB configuration module 66 of path provisioning module 46 may construct and install a flooding next hop at each of the branch nodes to forward traffic from a higher level one of the P2P S2L sub-LSPs or P2P B2L sub-LSPs to one of the P2P B2L sub-LSPs for which the branch node operates as an ingress. Alternatively, the flooding next hop may be constructed locally at each of the branch nodes during the signaling of P2MP LSP 16. As a result, the forwarding information (FIB) of a node within network system 10 may include the MPLS switching tables including the necessary flooding next hop(s), a detour path for each primary LSP, a CoS scheduler per-interface and policers at LSP ingress.
FIB configuration module 66 may implement, for instance, a PCEP protocol or a software-defined networking (SDN) protocol, such as the OpenFlow protocol, to provide and direct the nodes to install forwarding information to their respective data planes. Accordingly, the “FIB” may refer to forwarding tables in the form of, for instance, one or more OpenFlow flow tables each comprising one or more flow table entries that specify handling of matching packets. FIB configuration module 66 may in addition, or alternatively, implement other interface types, such as a Simple Network Management Protocol (SNMP) interface, path computation element protocol (PCEP) interface, a Device Management Interface (DMI), a CLI, Interface to the Routing System (IRS), or any other node configuration interface. FIB configuration module 66 may establish communication sessions with nodes to install forwarding information to receive path setup event information, such as confirmation that received forwarding information has been successfully installed or that received forwarding information cannot be installed (indicating FIB configuration failure). Additional details regarding PCEP may be found in J. Medved et al., U.S. patent application Ser. No. 13/324,861, “PATH COMPUTATION ELEMENT COMMUNICATION PROTOCOL (PCEP) EXTENSIONS FOR STATEFUL LABEL SWITCHED PATH MANAGEMENT,” filed Dec. 13, 2011, and in “Path Computation Element (PCE) Communication Protocol (PCEP),” Network Working Group, Request for Comment 5440, March 2009, the entire contents of each of which being incorporated by reference herein. Additional details regarding IRS are found in “Interface to the Routing System Framework,” Network Working Group, Internet-draft, Jul. 30, 2012, which is incorporated by reference as if fully set forth herein.
FIB configuration module 66 may add, change (i.e., implicit add), or delete forwarding table entries in accordance with information received from path computation module 44. A FIB configuration message from path computation module 44 to FIB configuration module 66 may specify an event type (add or delete); a node identifier; a path identifier; one or more forwarding table entries each including an ingress port index, ingress label, egress port index, and egress label; and a detour path specifying a path identifier and CoS mode.
Policer configuration module 68 may be invoked by path computation module 44 to request a policer be installed on a particular aggregation node or access node for a particular LSP ingress. As noted above, the FIBs for aggregation nodes or access nodes include policers at LSP ingress. Policer configuration module 68 may receive policer configuration requests according to CCP. A CCP policer configuration request message may specify an event type (add, change, or delete); a node identifier; an LSP identifier; and, for each class of service, a list of policer information including CoS value, maximum bandwidth, burst, and drop/remark. Policer configuration module 68 configures the policers in accordance with the policer configuration requests.
CoS scheduler configuration module 70 may be invoked by path computation module 44 to request configuration of CoS scheduler on the aggregation nodes or access nodes. CoS scheduler configuration module 70 may receive the CoS scheduler configuration information in accordance with CCP. A CCP scheduling configuration request message may specify an event type (change); a node identifier; a port identity value (port index); and configuration information specifying bandwidth, queue depth, and scheduling discipline, for instance.
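The provisioning-side messages described above (FIB configuration, policer configuration, and CoS scheduler configuration) could similarly be sketched as records, as in the following illustration; the field names are assumptions for this description rather than a defined CCP or PCEP encoding.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ForwardingEntry:                # one forwarding table entry
    ingress_port_index: int
    ingress_label: int
    egress_port_index: int
    egress_label: int

@dataclass
class FIBConfigMessage:               # handled by FIB configuration module 66
    event_type: str                   # "add" or "delete" (change is an implicit add)
    node_id: str
    path_id: str
    entries: List[ForwardingEntry] = field(default_factory=list)
    detour_path_id: Optional[str] = None
    detour_cos_mode: Optional[str] = None

@dataclass
class PolicerEntry:                   # per-class policer information
    cos_value: int
    maximum_bandwidth: float
    burst: int
    drop_or_remark: str

@dataclass
class PolicerConfigMessage:           # handled by policer configuration module 68
    event_type: str                   # "add", "change", or "delete"
    node_id: str
    lsp_id: str
    policers: List[PolicerEntry] = field(default_factory=list)

@dataclass
class SchedulerConfigMessage:         # handled by CoS scheduler configuration module 70
    event_type: str                   # "change"
    node_id: str
    port_index: int
    bandwidth: float
    queue_depth: int
    scheduling_discipline: str
```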
In the example of
Router 90 includes interface cards 92A-92N (“IFCs 92”) for receiving packets via input links 93A-93N (“input links 93”) and sending packets via output links 94A-94N (“output links 94”). IFCs 92 are interconnected by a high-speed switch 111 provided by forwarding component 106. In one example, the switch comprises switch fabric, switchgear, a configurable network switch or hub, and the like. Links 93, 94 comprise any form of communication path, such as electrical paths within an integrated circuit, external data busses, optical links, network connections, wireless connections, or other type of communication path. IFCs 92 are coupled to input links 93 and output links 94 via a number of interface ports (not shown).
Routing component 104 provides an operating environment for protocols 98, which are typically implemented as executable software instructions. As illustrated, protocols 98 include RSVP-TE 98A, Intermediate System to Intermediate System (IS-IS) 98B and PCEP 98C.
By executing the routing protocols, routing component 104 may identify existing routes through the network and determine new routes through the network. Routing component 104 stores routing information in a routing information base (RIB) 96 that includes, for example, known routes through the network. Forwarding component 106 stores forwarding information base (FIB) 110 that includes destinations of output links 94. FIB 110 may be generated in accordance with RIB 96.
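A minimal sketch of the relationship between RIB 96 and FIB 110, assuming a simple lowest-metric selection: the FIB keeps only the preferred next hop per destination chosen from the RIB's candidate routes. Router 90's actual structures are more involved; this is illustration only.

```python
def build_fib(rib):
    """rib: {destination_prefix: [(next_hop, metric), ...]} with all known routes.
    Returns a FIB mapping each destination to the next hop of its best (lowest
    metric) route, which the forwarding component uses to pick output links."""
    return {prefix: min(routes, key=lambda route: route[1])[0]
            for prefix, routes in rib.items() if routes}

fib = build_fib({"10.0.0.0/8": [("14B", 10), ("14C", 20)]})
# {'10.0.0.0/8': '14B'}
```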
As described herein, router 90 may receive commands from a centralized controller, such as controller 25, via PCEP 98C. In response, router 90 invokes RSVP-TE 98A to signal P2P sub-LSPs for merging into a P2MP LSP. Protocols 98 may include other routing protocols in addition to or instead of RSVP-TE 98A, IS-IS 98B and PCEP 98C, such as other Multi-protocol Label Switching (MPLS) protocols including LDP, or routing protocols, such as the Internet Protocol (IP), Open Shortest Path First (OSPF), Routing Information Protocol (RIP), Border Gateway Protocol (BGP), interior routing protocols, or other network protocols.
The architecture of router 90 illustrated in
Routing component 104 and forwarding component 106 may be implemented solely in software, or hardware, or may be implemented as a combination of software, hardware, or firmware. For example, routing component 104 and forwarding component 106 may include one or more processors which execute program code in the form of software instructions. In that case, the various software modules of router 90 may comprise executable instructions stored on a computer-readable storage medium, such as computer memory or hard disk.
Initially, controller 25 receives a request or other message indicative of a need for a P2MP LSP (120). For example, controller 25 may receive a request from one or more nodes within network system 10 for a service (122), such as a request to join an L2 or L3 VPN or to join a particular multicast group. As another example, controller 25 may receive a request in the form of configuration data from an administrator specifying a service or a specific P2MP LSP that is required.
In response and based upon the particular request, controller 25 may determine a source node associated with a source of the service (e.g., source network 11) and one or more end nodes associated with destinations of the service (e.g., one or more of subscriber networks 18) (123). In the example described above, controller 25 may identify router 14A as a source node for a requested service and routers 14D, 14F, 14H, 14J and 14L as leaf nodes for delivering the service to subscribers of subscriber networks 18.
Upon determining the source node and the one or more end nodes, controller 25 computes a set of P2P sub-LSPs that may be used to form a P2MP LSP from the source node to the end nodes (124). Moreover, rather than identify a set of P2P LSPs that all originate from the source node, controller 25 identifies the set of P2P LSPs to include one or more P2P LSPs that originate from a branch node of the P2MP LSP and, therefore, are to be signaled by the branch node. Further, controller 25 may determine the set of branch-node initiated P2P LSPs to be used so as to reduce or minimize the state maintained at any given node of the P2MP LSP. In the example above, controller 25 determines the P2P sub-LSPs 18A-18E where the source node (router 14A) need only maintain state for two sub-LSPs (18A and 18B). As sub-LSPs 18C-18E will be branch-initiated LSPs, the source node for the P2MP LSP need not maintain state for those LSPs. Moreover, the number of sub-LSPs for which the source node and any given branch node need maintain state will be equal to the number of physical data flows output from that node to downstream nodes, i.e., the number of output interfaces used for the P2MP LSP to output data flows from that node to downstream nodes.
For example, source router 14A need only maintain state for sub-LSPs 18A and 18B. Moreover, branch router 14B need only maintain state for two sub-LSPs: (1) branch P2P sub-LSP 18C that router 14B initiated, and (2) P2P sub-LSP 18A to which the branch LSP 18C will be stitched. As another example, branch router 14G need only maintain state for two sub-LSPs: (1) branch P2P sub-LSP 18E that router 14G initiated and for which the router operates as the ingress, and (2) P2P sub-LSP 18D to which the branch LSP 18E will be attached. In this way, the source node and each branch node need only maintain state for a number of sub-LSPs that is equal to the number of physical output interfaces at that node that are used to carry traffic for the P2MP LSP. As such, the signaling efficiency and scalability for each of the nodes may be significantly improved.
Upon determining the P2P sub-LSPs, controller 25 directs the appropriate nodes of network system 10 to initiate signaling and establishment of the P2P sub-LSPs (126, 132). For example, controller 25 may output EROs to routers 14A, 14B, 14C and 14G to signal P2P sub-LSPs 18 since these routers operate as ingresses to the sub-LSPs, as shown in
Further, routers 14A, 14B, 14C and 14G stitch the P2P sub-LSPs to form the desired P2MP LSP (130). For example, router 14A may initiate signaling of P2P sub-LSP 18A with routers 14B, 14C and 14D. In addition, router 14B may initiate signaling of P2P sub-LSP 18C with routers 14E and 14F. At this time, router 14B determines that P2P sub-LSPs 18A and 18C share a common identifier for P2MP LSP 16, in this example. As such, router 14B stitches level 1 P2P sub-LSP 18C to level 0 P2P sub-LSP 18A. For example, RSVP-TE 98A of router 14B may build a flood next hop within forwarding information 110 of forwarding component 106 so that incoming traffic associated with P2P sub-LSP 18A is flooded to the output interfaces associated with downstream routers 14C and 14E. At this time, RSVP-TE 98A may construct the flood next hop as a chained next hop that directs forwarding component 106 to perform different label operations for each output interface. For purposes of example, routing component 104 of router 14B may construct the flood next hop as 100 {200, 300}, which indicates that inbound traffic received from router 14A with label 100 (as allocated to router 14A by router 14B for P2P sub-LSP 18A) is to be swapped and output with label 200 on the output interface associated with router 14C and output with label 300 on the output interface associated with router 14E, where label 200 was signaled by router 14C for sub-LSP 18A and label 300 was signaled by router 14E for sub-LSP 18C. Further examples of construction of a chained next hop are described in U.S. Pat. No. 7,990,993, entitled “PLATFORM-INDEPENDENT CONTROL PLANE AND LOWER-LEVEL DERIVATION OF FORWARDING STRUCTURES,” incorporated herein by reference.
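The flood next hop of 100 {200, 300} described above can be pictured as a per-incoming-label list of (output interface, outgoing label) pairs over which a packet is replicated. The sketch below illustrates that forwarding behavior only; the interface names are invented for this description and the code is not router 90's actual forwarding structures.

```python
# Flood next hop at router 14B for P2MP LSP 16: traffic arriving with label 100
# (sub-LSP 18A) is replicated, with a different swapped label per branch.
flood_next_hops = {
    100: [("if-to-14C", 200),   # continue S2L sub-LSP 18A toward router 14C
          ("if-to-14E", 300)],  # branch onto B2L sub-LSP 18C toward router 14E
}

def forward(in_label, payload):
    """Replicate the packet on every branch of the flood next hop, swapping labels."""
    copies = []
    for out_interface, out_label in flood_next_hops.get(in_label, []):
        copies.append((out_interface, out_label, payload))
    return copies

print(forward(100, b"multicast data"))
# [('if-to-14C', 200, b'multicast data'), ('if-to-14E', 300, b'multicast data')]
```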
Controller 25 continues to direct routers 14 to establish the P2P sub-LSPs (126), which the routers establish and stitch (128, 130), until the entire P2MP forwarding tree is established and P2MP LSP 16 is operational (134). Upon establishing the P2MP LSP, routers 14 operate to forward traffic from the source node (router 14A) to the leaf nodes (routers 14D, 14F, 14H, 14J and 14L) via the P2MP LSP.
Various embodiments of the invention have been described. These and other embodiments are within the scope of the following claims.
Number | Name | Date | Kind |
---|---|---|---|
5600642 | Pauwels et al. | Feb 1997 | A |
6374303 | Armitage et al. | Apr 2002 | B1 |
6477166 | Sanzi et al. | Nov 2002 | B1 |
6493349 | Casey | Dec 2002 | B1 |
6501754 | Ohba et al. | Dec 2002 | B1 |
6553028 | Tang et al. | Apr 2003 | B1 |
6571218 | Sadler | May 2003 | B1 |
6597703 | Li et al. | Jul 2003 | B1 |
6611528 | Farinacci et al. | Aug 2003 | B1 |
6625773 | Boivie et al. | Sep 2003 | B1 |
6731652 | Ramfelt et al. | May 2004 | B2 |
6778531 | Kodialam et al. | Aug 2004 | B1 |
6807182 | Dolphin et al. | Oct 2004 | B1 |
6879594 | Lee et al. | Apr 2005 | B1 |
6920503 | Nanji et al. | Jul 2005 | B1 |
6968389 | Menditto et al. | Nov 2005 | B1 |
7035226 | Enoki et al. | Apr 2006 | B2 |
7039687 | Jamieson et al. | May 2006 | B1 |
7082102 | Wright | Jul 2006 | B1 |
7133928 | McCanne | Nov 2006 | B2 |
7251218 | Joregensen | Jul 2007 | B2 |
7269135 | Frick et al. | Sep 2007 | B2 |
7281058 | Shepherd et al. | Oct 2007 | B1 |
7296090 | Jamieson et al. | Nov 2007 | B2 |
7330468 | Tse-Au | Feb 2008 | B1 |
7333491 | Chen et al. | Feb 2008 | B2 |
7359328 | Allan | Apr 2008 | B1 |
7359393 | Nalawade et al. | Apr 2008 | B1 |
7360084 | Hardjono | Apr 2008 | B1 |
7366894 | Kalimuthu et al. | Apr 2008 | B1 |
7418003 | Alvarez et al. | Aug 2008 | B1 |
7463591 | Kompella et al. | Dec 2008 | B1 |
7477642 | Aggarwal et al. | Jan 2009 | B2 |
7483439 | Shepherd et al. | Jan 2009 | B2 |
7489695 | Ayyangar | Feb 2009 | B1 |
7519010 | Aggarwal et al. | Apr 2009 | B1 |
7522599 | Aggarwal et al. | Apr 2009 | B1 |
7522600 | Aggarwal et al. | Apr 2009 | B1 |
7532624 | Ikegami et al. | May 2009 | B2 |
7545735 | Shabtay et al. | Jun 2009 | B1 |
7558219 | Aggarwal et al. | Jul 2009 | B1 |
7558263 | Aggarwal et al. | Jul 2009 | B1 |
7564803 | Minei et al. | Jul 2009 | B1 |
7564806 | Aggarwal et al. | Jul 2009 | B1 |
7570604 | Aggarwal et al. | Aug 2009 | B1 |
7570605 | Aggarwal et al. | Aug 2009 | B1 |
7570638 | Shimizu et al. | Aug 2009 | B2 |
7590115 | Aggarwal et al. | Sep 2009 | B1 |
7593405 | Shirazipour et al. | Sep 2009 | B2 |
7602702 | Aggarwal | Oct 2009 | B1 |
7633859 | Filsfils et al. | Dec 2009 | B2 |
7742482 | Aggarwal | Jun 2010 | B1 |
7768925 | He et al. | Aug 2010 | B2 |
7787380 | Aggarwal et al. | Aug 2010 | B1 |
7797382 | Bou-Diab | Sep 2010 | B2 |
7804790 | Aggarwal et al. | Sep 2010 | B1 |
7830787 | Wijnands et al. | Nov 2010 | B1 |
7839862 | Aggarwal | Nov 2010 | B1 |
7856509 | Kodeboyina | Dec 2010 | B1 |
7860104 | Aggarwal | Dec 2010 | B1 |
7925778 | Wijnands et al. | Apr 2011 | B1 |
7933267 | Aggarwal et al. | Apr 2011 | B1 |
7940698 | Minei et al. | May 2011 | B1 |
7957386 | Aggarwal et al. | Jun 2011 | B1 |
7983261 | Aggarwal et al. | Jul 2011 | B1 |
7990963 | Aggarwal et al. | Aug 2011 | B1 |
7990965 | Aggarwal et al. | Aug 2011 | B1 |
7990993 | Ghosh et al. | Aug 2011 | B1 |
8068492 | Aggarwal et al. | Nov 2011 | B1 |
8111633 | Aggarwal et al. | Feb 2012 | B1 |
8121056 | Aggarwal et al. | Feb 2012 | B1 |
8160076 | Aggarwal et al. | Apr 2012 | B1 |
20020071390 | Reeves et al. | Jun 2002 | A1 |
20020109879 | Wing So | Aug 2002 | A1 |
20020118644 | Moir | Aug 2002 | A1 |
20020181477 | Mo et al. | Dec 2002 | A1 |
20020186664 | Gibson et al. | Dec 2002 | A1 |
20020191584 | Korus et al. | Dec 2002 | A1 |
20030012215 | Novaes | Jan 2003 | A1 |
20030016672 | Rosen et al. | Jan 2003 | A1 |
20030021282 | Hospodor | Jan 2003 | A1 |
20030031175 | Hayashi et al. | Feb 2003 | A1 |
20030043772 | Mathis et al. | Mar 2003 | A1 |
20030056007 | Katsube et al. | Mar 2003 | A1 |
20030063591 | Leung et al. | Apr 2003 | A1 |
20030087653 | Leung et al. | May 2003 | A1 |
20030088696 | McCanne | May 2003 | A1 |
20030099235 | Shin et al. | May 2003 | A1 |
20030108047 | Mackiewich et al. | Jun 2003 | A1 |
20030112748 | Puppa et al. | Jun 2003 | A1 |
20030123446 | Muirhead et al. | Jul 2003 | A1 |
20030172114 | Leung | Sep 2003 | A1 |
20030177221 | Ould-Brahim et al. | Sep 2003 | A1 |
20030191937 | Balissat et al. | Oct 2003 | A1 |
20030210705 | Seddigh et al. | Nov 2003 | A1 |
20040032856 | Sandstrom | Feb 2004 | A1 |
20040034702 | He | Feb 2004 | A1 |
20040037279 | Zelig et al. | Feb 2004 | A1 |
20040042406 | Wu et al. | Mar 2004 | A1 |
20040047342 | Gavish et al. | Mar 2004 | A1 |
20040081154 | Kouvelas | Apr 2004 | A1 |
20040151180 | Hu et al. | Aug 2004 | A1 |
20040151181 | Chu et al. | Aug 2004 | A1 |
20040165600 | Lee | Aug 2004 | A1 |
20040190517 | Gupta et al. | Sep 2004 | A1 |
20040213160 | Regan et al. | Oct 2004 | A1 |
20040218536 | Yasukawa et al. | Nov 2004 | A1 |
20040240445 | Shin et al. | Dec 2004 | A1 |
20040240446 | Compton | Dec 2004 | A1 |
20050001720 | Mason et al. | Jan 2005 | A1 |
20050013295 | Regan et al. | Jan 2005 | A1 |
20050018693 | Dull | Jan 2005 | A1 |
20050025156 | Smathers | Feb 2005 | A1 |
20050027782 | Jalan et al. | Feb 2005 | A1 |
20050097203 | Unbehagen et al. | May 2005 | A1 |
20050108419 | Eubanks | May 2005 | A1 |
20050111351 | Shen | May 2005 | A1 |
20050129001 | Backman et al. | Jun 2005 | A1 |
20050169270 | Mutou et al. | Aug 2005 | A1 |
20050220132 | Oman et al. | Oct 2005 | A1 |
20050232193 | Jorgensen | Oct 2005 | A1 |
20050262232 | Cuervo et al. | Nov 2005 | A1 |
20050265308 | Barbir et al. | Dec 2005 | A1 |
20050271035 | Cohen et al. | Dec 2005 | A1 |
20050271036 | Cohen et al. | Dec 2005 | A1 |
20050281192 | Nadeau et al. | Dec 2005 | A1 |
20060013141 | Mutoh et al. | Jan 2006 | A1 |
20060039364 | Wright | Feb 2006 | A1 |
20060047851 | Voit et al. | Mar 2006 | A1 |
20060088031 | Nalawade | Apr 2006 | A1 |
20060126496 | Filsfils et al. | Jun 2006 | A1 |
20060147204 | Yasukawa et al. | Jul 2006 | A1 |
20060153067 | Vasseur et al. | Jul 2006 | A1 |
20060164975 | Filsfils et al. | Jul 2006 | A1 |
20060182034 | Klinker et al. | Aug 2006 | A1 |
20060221958 | Wijnands et al. | Oct 2006 | A1 |
20060262735 | Guichard et al. | Nov 2006 | A1 |
20060262786 | Shimizu et al. | Nov 2006 | A1 |
20070025276 | Zwiebel et al. | Feb 2007 | A1 |
20070025277 | Sajassi et al. | Feb 2007 | A1 |
20070036162 | Tingle et al. | Feb 2007 | A1 |
20070076709 | Mattson et al. | Apr 2007 | A1 |
20070091891 | Zwiebel et al. | Apr 2007 | A1 |
20070098003 | Boers et al. | May 2007 | A1 |
20070104119 | Sarkar et al. | May 2007 | A1 |
20070124454 | Watkinson | May 2007 | A1 |
20070129291 | Tian | Jun 2007 | A1 |
20070140107 | Eckert et al. | Jun 2007 | A1 |
20070177593 | Kompella | Aug 2007 | A1 |
20070177594 | Kompella | Aug 2007 | A1 |
20070189177 | Zhai | Aug 2007 | A1 |
20080056258 | Sharma et al. | Mar 2008 | A1 |
20080112330 | He et al. | May 2008 | A1 |
20080123524 | Vasseur et al. | May 2008 | A1 |
20080123654 | Tse-Au | May 2008 | A1 |
20080291921 | Du et al. | Nov 2008 | A1 |
20090028149 | Yasukawa et al. | Jan 2009 | A1 |
20090175274 | Aggarwal et al. | Jul 2009 | A1 |
20090225650 | Vasseur et al. | Sep 2009 | A1 |
20090268731 | Narayanan et al. | Oct 2009 | A1 |
20100111086 | Tremblay et al. | May 2010 | A1 |
20100208733 | Zhao et al. | Aug 2010 | A1 |
20120044936 | Bellagamba et al. | Feb 2012 | A1 |
20120057505 | Xue | Mar 2012 | A1 |
20130016605 | Chen | Jan 2013 | A1 |
20140003229 | Gandhi et al. | Jan 2014 | A1 |
20140029418 | Jain et al. | Jan 2014 | A1 |
Number | Date | Country |
---|---|---|
2005-086222 | Mar 2005 | JP |
2005-130258 | May 2005 | JP |
2005-167482 | Jun 2005 | JP |
2005-252385 | Sep 2005 | JP |
2005-323266 | Nov 2005 | JP |
2004001206 | Jan 2004 | KR |
02091670 | Nov 2002 | WO |
2004071032 | Aug 2004 | WO |
Entry |
---|
Satyanarayana et al., “Extensions to GMPLS RSVP Graceful Restart”, draft-aruns-ccamp-restart-ext-01.txt, Jul. 2004, Network Working Group Internet Draft, 23 pgs. |
Aggarwal et al., “MPLS Upstream Label Assignment and Context Specific Label Space,” Network Working Group Internet Draft, Jan. 2005, draft-raggarwa-mpls-upstream-label-00.txt, 9 pgs. |
Wijnands et al., “Multicast Extensions for LDP,” Network Working Group Internet Draft, Mar. 2005, draft-wijnands-mpls-ldp-mcast-ext-00.txt, 13 pgs. |
Aggarwal et al., “MPLS Upstream Label Assignment for RSVP-TE and LDP,” Aug. 24, 2005, http://www.tla-group.com/˜mpls/ietf-63-mpls-upstream-rsvp-ldp.ppt, 8 pgs. |
Fujita, N., “Dynamic Selective Replication Schemes for Content Delivery Networks,” IPSJ SIG Notes, vol. 2001, No. 111, Information Processing Society of Japan, Nov. 21, 2001, 2 pgs. |
Awduche et al., “RFC 3209—RSVP-TE: Extensions to RSVP for LSP Tunnels,” Network Working Group, Dec. 2001, 64 pgs. http://rfc.sunsite.dk/rfc/rfc3209.html. |
RSVP-TE: Resource Reservation Protocol—Traffic Extension, Javvin Company, 2 pgs, printed Apr. 18, 2005. http://www.javvin.com/protocolRSVPTE.html. |
Zhang et al., “A Destination-initiated Multicast Routing Protocol for Shortest Path Tree Constructions,” GLOBECOM 2003, IEEE Global Telecommunications Conference, XP010677629, pp. 2840-2844. |
Yasukawa et al., “Requirements for point to multipoint extension to RSVP-TE,” Oct. 2003. |
Wei et al., “Establishing point to multipoint MPLS TE LSPs,” Aug. 2004. |
Aggarwal et al., “Establishing Point to Multipoint MPLS TE LSPs,” submitted to Internet Engineering Task Force (IETF) Feb. 11, 2007, pp. 1-15. |
Rekhter et al., “A Border Gateway Protocol 4 (BGP-4),” Mar. 1995, 93 pp. |
Atlas et al., “MPLS RSVP-TE Interoperability for Local Protection/Fast Reroute,” IETF, Jul. 2001, pp. 1-14. |
Rosen et al., “Multicast in MPLS/BGP IP VPNs,” draft-rosen-vpn-mcast-07.txt, May 2004, 27 pgs. |
Deering et al., “Protocol Independent Multicast-Sparse Mode (PIM-SM): Motivation and Architecture,” draft-ietf-idmr-pim-arch-05.txt, Aug. 4, 1998, 30 pgs. |
Kompella et al., “Virtual Private LAN Service,” draft-ietf-12vpn-vpls-bgp-00.txt, May 2003, 22 pgs. |
Martini et al., “Transport of Layer 2 Frames Over MPLS,” Network Working Group Internet Draft, draft-martini-12circuit-trans-mpls-08.txt, Nov. 2001, 18 pgs. |
Martini et al., “Encapsulation Methods for Transport of Layer 2 Frames Over IP and MPLS Networks,” Network Working Group Internet Draft, draft-martini-12circuit-encap-mpls-04.txt, Nov. 2001, 17 pgs. |
Decraene et al., “LDP Extension for Inter-Area Label Switched Paths (LSPs),” Network Working Group RFC 5283, Jul. 2008, 13 pp. |
Decraene et al., “LDP extension for Inter-Area LSP”, Network Working Group, draft-ietf-mpls-ldp-interarea-03.txt, Feb. 2008, 12 pp. |
Decraene et al., “LDP extensions for Inter-Area LSP”, Network Working Group, draft-decraene-mpls-ldp-interarea-02.txt, Jun. 2006, 8 pp. |
Kompella, “Layer 2 VPNs Over Tunnels,” Network Working Group, Jan. 2006, 27 pp. |
Rekhter et al., “Carrying Label Information in BGP-4”, Network Working Group, RFC 3107, May 2001, 9 pp. |
Andersson et al., “LDP Specification”, Network Working Group, RFC 3036, Jan. 2001, (118 pages). |
Swallow et al., “Network Scaling with Aggregated IP LSPs”, Network Working Group, draft-swallow-mpls-aggregated-fec-00.txt, Jul. 2007, 10 pp. |
Shah et al., “Extensions to MPLS-based Layer 2 VPNs,” Network Working Group, Sep. 2001, 14 pp. |
U.S. Appl. No. 12/574,428, by Rahul Aggarwal, filed Oct. 6, 2009. |
U.S. Appl. No. 12/871,784, by Rahul Aggarwal, filed Aug. 30, 2010. |
U.S. Appl. No. 13/448,085, by Rahul Aggarwal, filed Apr. 16, 2012. |
U.S. Appl. No. 12/951,885, by Rahul Aggarwal, filed Nov. 22, 2010. |
Aggarwal et al. “Extensions to Resource Reservation Protocol—Traffic Engineering (RSVP-TE) for Point-to-Multipoint TE Label Switched Paths (LSPs)” Network Working Group, RFC 4875, May 2007, 54 pgs. |
Atlas et al. “Interface to the Routing System Framework,” Network Working Group, Internet-draft, Jul. 30, 2012, 22 pgs. |
Vasseur et al. “Path Computation Element (PCE) Communication Protocol (PCEP),” Network Working Group, Request for Comment 5440, Mar. 2009, 88 pgs. |
U.S. Appl. No. 13/110,987, filed May 19, 2011 and entitled “Dynamically Generating Application-Layer Traffic Optimization Protocol Maps.” |
U.S. Appl. No. 13/324,861, “Path Computation Element Communication Protocol (PCEP) Extensions for Stateful Label Switched Path Management,” filed Dec. 13, 2011. |