The present disclosure relates generally to communication networks. More particularly, the present disclosure relates to multiple fault recovery mechanisms under G.8032 “Ethernet ring protection switching” (G.8032v1-2008 and G.8032v2-2010).
The Ethernet Ring Protection Switching (ERPS) protocol is an industry standard and is specified within International Telecommunication Union ITU SG15 Q9, under G.8032 “Ethernet ring protection switching” (G.8032v1-2008, and G.8032v2-2010), the contents of which are incorporated by reference. ERPS specifies protection switching mechanisms and a protocol for Ethernet layer network (ETH) rings. Each Ethernet Ring Node is connected to adjacent Ethernet Ring Nodes participating in the same Ethernet Ring, using two independent links. A ring link is bounded by two adjacent Ethernet Ring Nodes, and a port for a ring link is called a ring port. The minimum number of Ethernet Ring Nodes in an Ethernet Ring is two. Two fundamental principles of G.8032 include a) loop avoidance and b) utilization of learning, forwarding, and Filtering Database (FDB) mechanisms defined in the Ethernet flow forwarding function (ETH_FF). Loop avoidance in an Ethernet Ring is achieved by guaranteeing that, at any time, traffic may flow on all but one of the ring links. This particular link is called the Ring Protection Link (RPL), and under normal conditions this ring link is blocked, i.e. not used for service traffic. One designated Ethernet Ring Node, the RPL Owner Node, is responsible for blocking traffic at one end of the RPL. Under an Ethernet ring failure condition, the RPL Owner Node is responsible for unblocking its end of the RPL (unless the RPL has failed) allowing the RPL to be used for traffic. The other Ethernet Ring Node adjacent to the RPL, the RPL Neighbor Node, may also participate in blocking or unblocking its end of the RPL. The event of an Ethernet Ring failure results in protection switching of the traffic. This is achieved under the control of the ETH_FF functions on all Ethernet Ring Nodes. An Automatic Protection Switching (APS) protocol is used to coordinate the protection actions over the ring.
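The loop-avoidance principle above can be illustrated with a short sketch: blocking the single Ring Protection Link turns the physical ring into a loop-free logical topology that still connects every node. This is an illustrative model only; the node names, link representation, and helper functions are hypothetical and are not taken from the recommendation.

```python
# Illustrative sketch of G.8032 loop avoidance: a physical ring of N nodes
# has N links; blocking the RPL leaves N-1 active links, which (if still
# connected) is a tree, i.e., loop-free. All names here are hypothetical.

def ring_links(nodes):
    """Return the links of a physical ring as (node, next_node) pairs."""
    return [(nodes[i], nodes[(i + 1) % len(nodes)]) for i in range(len(nodes))]

def active_topology(nodes, rpl):
    """All ring links except the blocked Ring Protection Link (RPL)."""
    return [link for link in ring_links(nodes) if link != rpl]

def is_loop_free_and_connected(nodes, links):
    """A connected graph with N nodes and N-1 undirected links is a tree."""
    if len(links) != len(nodes) - 1:
        return False
    reached = {nodes[0]}
    frontier = [nodes[0]]
    while frontier:  # breadth-first walk over undirected links
        n = frontier.pop()
        for a, b in links:
            for other in ((b,) if a == n else (a,) if b == n else ()):
                if other not in reached:
                    reached.add(other)
                    frontier.append(other)
    return reached == set(nodes)

nodes = ["A", "B", "C", "D"]
rpl = ("D", "A")  # the RPL Owner Node D blocks its end of link D-A
links = active_topology(nodes, rpl)
```

With the RPL blocked, the check confirms the active topology is loop-free yet fully connected; with all ring links active, the same check fails, reflecting why one link must always be blocked under normal conditions.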
G.8032v2 introduced additional features, such as: multi-ring/ladder network support; revertive/non-revertive modes of operation after the condition causing the switch is cleared; administrative commands, namely Forced Switch (FS) and Manual Switch (MS), for blocking a particular ring port; flush FDB (Filtering Database) logic, which significantly reduces the number of flush FDB operations in the ring; and support for multiple ERP instances on a single ring. With respect to multi-ring/ladder network support, G.8032 specifies support for a network of interconnected rings. The recommendation defines basic terminology for interconnected rings, including interconnection nodes, major ring, and sub-ring. Interconnection nodes are ring nodes that are common to both interconnected rings. The major ring is an Ethernet ring that controls a full physical ring and is connected to the interconnection nodes on two ports. The sub-ring is an Ethernet ring that is connected to a major ring at the interconnection nodes. By itself, the sub-ring does not constitute a closed ring and is closed through connections to the interconnection nodes. In interconnected rings, G.8032 was not designed nor specified to gracefully handle (i.e., recover from) concurrent or simultaneous multiple faults on the major ring. Whenever a [single] fault occurs on a network element blade that supports ring links, there is the potential for end-to-end network “black holing” of [client] traffic being transported by G.8032 ring interconnections. This leads to increased service unavailability and customer dissatisfaction due to network impairments.
In an exemplary embodiment, a method includes detecting a failure on both ports of a major ring at a network element that has an interconnecting sub-ring terminating thereon; causing a block at an associated sub-ring termination port of the interconnecting sub-ring responsive to the failure on both the ports of the major ring; and monitoring the failure and clearing the block responsive to a recovery of one or both ports from the failure. The block causes the failure on the major ring to be visible on the interconnecting sub-ring for implementing G.8032 Ethernet Ring Protection Switching thereon. The method can further include implementing G.8032 Ethernet Ring Protection Switching in the major ring responsive to the failure. The method can further include implementing G.8032 protection in the interconnecting sub-ring responsive to the block. The method can further include monitoring virtual local area network (VLAN) membership between the major ring and interconnecting sub-ring with the block thereon; and, responsive to a change in the VLAN membership such that the major ring and the interconnecting sub-ring are no longer interconnected, releasing the block. Both ports of the major ring at the network element can be contained in a same module. The method can further include causing the block through one of a forced switch or manual switch applied to the associated sub-ring termination port or administratively disabling the associated sub-ring termination port.
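The core of the method above (detect a failure on both major-ring ports, block the sub-ring termination port, clear the block on recovery) can be sketched as a small state machine. This is a hedged illustration of the described behavior, not an implementation of the disclosure; the class, method, and attribute names are hypothetical.

```python
# Sketch of the dual-fault recovery behavior at an interconnection node:
# when both major-ring ports fail, block the sub-ring termination port so
# the fault becomes visible to the sub-ring's own G.8032 protection; clear
# the block once one or both major-ring ports recover. Names are hypothetical.

class InterconnectionNode:
    def __init__(self):
        self.west_ok = True            # west major-ring port state
        self.east_ok = True            # east major-ring port state
        self.sub_ring_blocked = False  # block on the sub-ring termination port

    def port_event(self, port, up):
        """Apply a port up/down event, then re-evaluate the block."""
        if port == "west":
            self.west_ok = up
        elif port == "east":
            self.east_ok = up
        self._evaluate()

    def _evaluate(self):
        both_failed = not self.west_ok and not self.east_ok
        if both_failed and not self.sub_ring_blocked:
            # e.g., via a forced switch applied to the sub-ring termination port
            self.sub_ring_blocked = True
        elif not both_failed and self.sub_ring_blocked:
            # one or both major-ring ports recovered: release the block
            self.sub_ring_blocked = False
```

Note that a single failed port never triggers the block; that case is already handled by ordinary G.8032 protection on the major ring itself.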
In another exemplary embodiment, a network element includes a west major ring port and an east major ring port each associated with a major ring; a sub-ring termination port associated with a sub-ring interconnected with the major ring; and circuitry configured to: detect a failure affecting both the west major ring port and the east major ring port; cause a block at the sub-ring termination port; and monitor the failure and clear the block responsive to a recovery of one or both of the west major ring port and the east major ring port from the failure. The block causes the failure on the major ring to be visible on the interconnecting sub-ring for implementing G.8032 Ethernet Ring Protection Switching thereon. The circuitry can be further configured to implement G.8032 Ethernet Ring Protection Switching in the major ring responsive to the failure. The sub-ring can be configured to implement G.8032 Ethernet Ring Protection Switching responsive to the block. The circuitry can be further configured to monitor virtual local area network (VLAN) membership between the major ring and interconnecting sub-ring with the block thereon; and, responsive to a change in the VLAN membership such that the major ring and the interconnecting sub-ring are no longer interconnected, release the block. The network element can include an interconnection node. The block can be caused through one of a forced switch or manual switch applied to the sub-ring termination port or administratively disabling the sub-ring termination port.
In yet another exemplary embodiment, a network includes a major ring including a first interconnection node and a second interconnection node; and a sub-ring interconnected to the major ring through the first interconnection node and the second interconnection node; wherein, responsive to a multiple concurrent failure at one of the first interconnection node and the second interconnection node affecting traffic between the major ring and the sub-ring, the one of the first interconnection node and the second interconnection node is configured to cause a block on a port terminating the sub-ring. The block causes the multiple concurrent failure on the major ring to be visible on the sub-ring for implementing G.8032 Ethernet Ring Protection Switching thereon. Responsive to the multiple concurrent failure, the major ring can be configured to implement G.8032 Ethernet Ring Protection Switching. Responsive to the multiple concurrent failure and the block, the sub-ring can be configured to implement G.8032 Ethernet Ring Protection. The one of the first interconnection node and the second interconnection node is configured to monitor virtual local area network (VLAN) membership between the major ring and sub-ring with the block thereon; and, responsive to a change in the VLAN membership such that the major ring and the sub-ring are no longer interconnected, release the block. The block can be caused through one of a forced switch or manual switch applied to the port terminating the sub-ring or administratively disabling the port terminating the sub-ring.
The present disclosure is illustrated and described herein with reference to the various drawings, in which like reference numbers are used to denote like system components/method steps, as appropriate, and in which:
In various exemplary embodiments, G.8032 multiple concurrent fault recovery mechanisms are disclosed that allow [client] traffic being transported between an interconnected major ring and a sub-ring to be successfully delivered in the event of dual concurrent or simultaneous faults on the interconnection node of the major ring. Advantageously, the G.8032 multiple concurrent fault recovery mechanisms improve a Service Provider's network and service availability when G.8032 (ring interconnections) is being used to transport client traffic. Although G.8032 was specified/defined to provide resiliency for single fault scenarios only, a [single] network element blade (e.g., line module, card, blade, etc.) fault can actually result in multiple faults on a ring (e.g., on the ring east and west ports). From a technical protocol perspective, this is a multiple failure scenario; however, from a customer perspective, it is a single fault scenario, i.e., a single failed module. The G.8032 multiple concurrent fault recovery mechanisms address the Service Provider's (network) perspective.
Referring to
Referring to
Referring to
Referring to
Referring to
Referring to
The interconnected network 100 includes three distinct rings, i.e. the rings 102, 104, 106. The ring 102 includes the network elements 12-1, 12-2, 12-3 and a channel block 120 is located on a port of the module 110 in the network element 12-2 facing the network element 12-1 for the ring 102. The ring 104 includes the network elements 12-1, 12-2, 12-3, 12-4 and a channel block 122 is located on a port of the network element 12-4 facing the network element 12-2. The ring 106 includes the network elements 12-1, 12-2, 12-3, 12-5 and a channel block 124 is located on a port of the network element 12-5 facing the network element 12-2. The channel blocks 120, 122, 124 can also be referred to as RPL blocks. For illustration purposes,
Referring to
Referring to
Referring to
Referring to
The sub-ring 206 encounters a similar problem; due to the location of a block 224, the SR 12-31, 12-32 both send traffic destined for any of the MR 12-11, 12-12, 12-13 towards the port SP4 of the IC 12-22, and that traffic is dropped because both of the major ring ports MP1, MP2 have failed. Essentially, after the failure occurs on the major ring ports MP1, MP2, all traffic between any of the MR 12-11, 12-12, 12-13 and any of the SR 12-31, 12-32, 12-33 would be disrupted.
Referring to
With the block in place, the interconnecting sub-rings 204, 206 now see the failure due to the failure on both ports of the major ring, and the interconnecting sub-rings implement normal G.8032 protection mechanisms therein. Concurrently, the ring node monitors the state of the failure on both ports of the major ring (step 255). If one or both of the major ring ports recovers (step 256), the ring node clears the block from the sub-ring termination ports (step 257). If the VLAN membership of the major ring/sub-ring changes such that the two are no longer interconnected (step 258), the ring node also clears the block from the sub-ring termination ports (step 257). The G.8032 multiple concurrent fault recovery method 250 ends (step 253) or continues to monitor the state (step 255). Similarly, if the VLAN membership of a major ring/sub-ring changes such that the two become interconnected (when they were previously not sharing any traffic), a check should be made to determine whether a dual-port failure condition already exists on the major ring. If so, then as soon as the VLAN membership change causes the two to become interconnected, the block must be immediately applied to the sub-ring's termination port.
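The monitoring logic of steps 255-258, together with the VLAN membership conditions described above, reduces to a single predicate: the sub-ring termination port should be blocked only while both major-ring ports are failed and the major ring and sub-ring still share VLAN membership (i.e., remain interconnected). The sketch below illustrates this under those assumptions; the function names and state dictionary are hypothetical.

```python
# Sketch of the monitor pass: block held only while (a) both major-ring
# ports are failed and (b) the major ring and sub-ring share at least one
# VLAN, i.e., are still interconnected. Names are hypothetical.

def desired_block(west_failed, east_failed, shared_vlans):
    """Return True when the sub-ring termination port should stay blocked."""
    interconnected = len(shared_vlans) > 0
    return west_failed and east_failed and interconnected

def reevaluate(state):
    """One pass of the monitor: set or clear the block as conditions change."""
    state["blocked"] = desired_block(
        state["west_failed"],
        state["east_failed"],
        state["major_vlans"] & state["sub_vlans"],  # shared VLAN membership
    )
    return state["blocked"]
```

The same predicate covers both directions of the VLAN case: a membership change that removes the interconnection clears an existing block, and a change that creates an interconnection while a dual-port failure is already present causes the block to be applied immediately.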
Referring to
This forces blocks 270, 272 onto the ports facing the ports SP1, SP4, respectively (thereby removing the original RPL blocks 222, 224). The traffic 230 from the MR 12-11 to the SR 12-33 will now go through the IC 12-21 and arrive on SP3 as before, but will now be let through. The block on SP1 will also cause all sub-ring traffic from the SR 12-33 to go out of SP3 to the IC 12-21, and traffic between the SR 12-33 and the MR 12-11 will be restored. Similarly, the placement of the block on SP4 for the sub-ring 206 will cause all sub-ring traffic to be directed towards the IC 12-21 and restore connectivity to the major ring 202 for the SR 12-31, 12-32. Advantageously, the G.8032 multiple concurrent fault recovery method 250 leverages the existing G.8032 state machine with actions to address a multiple concurrent fault affecting interconnected sub-rings.
Referring to
The control blades 304 include a microprocessor 310, memory 312, software 314, and a network interface 316 to operate within the networks 100, 200. Specifically, the microprocessor 310, the memory 312, and the software 314 may collectively control, configure, provision, monitor, etc. the network element 12. The network interface 316 may be utilized to communicate with an element manager, a network management system, etc. Additionally, the control blades 304 may include a database 320 that tracks and maintains provisioning, configuration, operational data, and the like. The database 320 may include a forwarding database (FDB) 322. In this exemplary embodiment, the network element 12 includes two control blades 304, which may operate in a redundant or protected configuration such as 1:1, 1+1, etc. In general, the control blades 304 maintain dynamic system information including Layer two forwarding databases, protocol state machines, and the operational status of the ports 308 within the network element 12. In an exemplary embodiment, the blades 302, 304 are configured to implement G.8032 rings, such as the major ring and/or sub-rings, and to implement the various processes, algorithms, methods, mechanisms, etc. described herein for implementing the G.8032 multiple concurrent fault recovery method 250. For example, a multiple concurrent or simultaneous fault can occur when one of the blades 302 includes both east and west ports of a major ring.
It will be appreciated that some exemplary embodiments described herein may include one or more generic or specialized processors (“one or more processors”) such as microprocessors, digital signal processors, customized processors, and field programmable gate arrays (FPGAs) and unique stored program instructions (including both software and firmware) that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the methods and/or systems described herein. Alternatively, some or all functions may be implemented by a state machine that has no stored program instructions, or in one or more application specific integrated circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic. Of course, a combination of the aforementioned approaches may be used. Moreover, some exemplary embodiments may be implemented as a non-transitory computer-readable storage medium having computer readable code stored thereon for programming a computer, server, appliance, device, etc. each of which may include a processor to perform methods as described and claimed herein. Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, an optical storage device, a magnetic storage device, a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory), Flash memory, and the like. When stored in the non-transitory computer readable medium, software can include instructions executable by a processor that, in response to such execution, cause a processor or any other circuitry to perform a set of operations, steps, methods, processes, algorithms, etc.
Although the present disclosure has been illustrated and described herein with reference to preferred embodiments and specific examples thereof, it will be readily apparent to those of ordinary skill in the art that other embodiments and examples may perform similar functions and/or achieve like results. All such equivalent embodiments and examples are within the spirit and scope of the present disclosure, are contemplated thereby, and are intended to be covered by the following claims.
Number | Name | Date | Kind |
---|---|---|---|
7103807 | Bosa et al. | Sep 2006 | B2 |
7499407 | Holness et al. | Mar 2009 | B2 |
7505466 | Rabie et al. | Mar 2009 | B2 |
7633968 | Haran et al. | Dec 2009 | B2 |
8018841 | Holness et al. | Sep 2011 | B2 |
8144586 | McNaughton et al. | Mar 2012 | B2 |
8149692 | Holness et al. | Apr 2012 | B2 |
8305938 | Holness et al. | Nov 2012 | B2 |
8509061 | Holness et al. | Aug 2013 | B2 |
8588060 | Holness | Nov 2013 | B2 |
8625410 | Abdullah et al. | Jan 2014 | B2 |
20050207348 | Tsurumi | Sep 2005 | A1 |
20070268817 | Smallegange et al. | Nov 2007 | A1 |
20090175176 | Mohan | Jul 2009 | A1 |
20100135291 | Martin et al. | Jun 2010 | A1 |
20100177635 | Figueira | Jul 2010 | A1 |
20100250733 | Turanyi et al. | Sep 2010 | A1 |
20100260196 | Holness et al. | Oct 2010 | A1 |
20100260197 | Martin et al. | Oct 2010 | A1 |
20100284413 | Abdullah et al. | Nov 2010 | A1 |
20110110359 | Cooke et al. | May 2011 | A1 |
20120033666 | Holness et al. | Feb 2012 | A1 |
20120106360 | Sajassi et al. | May 2012 | A1 |
20120147735 | Wang et al. | Jun 2012 | A1 |
20120155246 | Wang | Jun 2012 | A1 |
20120195233 | Wang et al. | Aug 2012 | A1 |
20120224471 | Vinod et al. | Sep 2012 | A1 |
20120230214 | Kozisek et al. | Sep 2012 | A1 |
20120243405 | Holness et al. | Sep 2012 | A1 |
20120250695 | Jia et al. | Oct 2012 | A1 |
20120281710 | Holness et al. | Nov 2012 | A1 |
20130258840 | Holness et al. | Oct 2013 | A1 |
Number | Date | Country |
---|---|---|
1575221 | Sep 2005 | EP |
Entry |
---|
Recommendation ITU-T G.8032/Y.1344, Ethernet ring protection switching, Jun. 2008. |
Recommendation ITU-T G.8032/Y.1344, Ethernet ring protection switching, Mar. 2010. |
Marc Holness, G.8032 Ethernet Ring Protection Overview, Mar. 2008 ITU-T Q9—SG 15. |
Marc Holness, ITU-T G-Series Supplement 52 Overview, G.8032 Usage and Operational Considerations, Joint IEEE-SA and ITU Workshop on Ethernet, Geneva, Switzerland, Jul. 13, 2013. |
Jeong-Dong Ryoo et al., Ethernet Ring Protection for Carrier Ethernet Networks, IEEE Communications Magazine • Sep. 2008. |
Marc Holness, Metro Ethernet—History and Overview, The Greater Chicago Chapter SCTE, May 22, 2013. |
Apr. 9, 2015 International Search Report issued in International Patent Application PCT/US2014/072940. |
“Ethernet ring protection switching,” International Telecommunication Union, G.8032/Y.1344 (Feb. 2012), pp. 1-104. |
Number | Date | Country | |
---|---|---|---|
20150207668 A1 | Jul 2015 | US |