System and method for managing topology changes in a network environment

Information

  • Patent Grant
  • Patent Number
    8,982,733
  • Date Filed
    Friday, March 4, 2011
  • Date Issued
    Tuesday, March 17, 2015
Abstract
A method is provided in one example embodiment and includes receiving a spanning tree protocol topology change notification (STP TCN) in a network; removing topology data for a first plurality of gateways associated with a first network segment ID that is shared by a particular gateway that communicated the STP TCN; and communicating an edge TCN to a second plurality of gateways associated with a second network segment ID and for which topology data has not been removed based on the STP TCN.
Description
TECHNICAL FIELD

This disclosure relates in general to the field of communications and, more particularly, to managing topology changes in a network environment.


BACKGROUND

Ethernet architectures have grown in complexity in recent years. This is due, at least in part, to diverse technologies that have emerged to accommodate a plethora of end users. For example, Data Center Ethernet (DCE) represents an extension to Classical Ethernet (CE), and it can offer a lower cost, lower latency, high-bandwidth configuration. The forwarding methodology adopted by DCE networks is generally scalable and, further, provides forwarding paths with equal-cost multipathing with support for different forwarding topologies. In certain network scenarios, topology information may not be current, accurate, and/or consistent. Optimally managing changes in network topologies presents a significant challenge to system designers, network operators, and service providers alike.





BRIEF DESCRIPTION OF THE DRAWINGS

To provide a more complete understanding of the present disclosure and features and advantages thereof, reference is made to the following description, taken in conjunction with the accompanying figures, wherein like reference numerals represent like parts, in which:



FIG. 1 is a simplified schematic diagram of a communication system for managing topology changes in a network environment in accordance with one embodiment of the present disclosure;



FIGS. 2A-E are simplified schematic diagrams illustrating details related to example implementations of the communication system in accordance with one embodiment;



FIG. 3 is a simplified schematic diagram illustrating additional details related to the communication system in accordance with one embodiment; and



FIG. 4 is a simplified flowchart illustrating a series of example steps for a flow associated with the communication system.





DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

Overview


A method is provided in one example embodiment and includes receiving a spanning tree protocol topology change notification (STP TCN) in a network; removing topology data for a first plurality of gateways associated with a first network segment ID that is shared by a particular gateway that communicated the STP TCN; and communicating an edge TCN to a second plurality of gateways associated with a second network segment ID and for which topology data has not been removed based on the STP TCN.


In more specific implementations, the network includes a Data Center Ethernet (DCE) network, a first Classical Ethernet (CE) network, and a second CE network, which collectively form a layer-2 (L2) domain. The method can further include executing an intermediate system to intermediate system (IS-IS) protocol on a first set of network links in the network; and executing a spanning tree protocol (STP) on a second set of network links in the network. In yet other embodiments, removing of the topology data comprises removing Media Access Control (MAC) addresses from a MAC table, where the first and second network segment IDs are encoded in respective MAC addresses.


In more detailed instances, the edge TCN contains a generate bit, which causes a receiving network element to remove MAC addresses learned from a communicating network element that provided the edge TCN. Furthermore, the edge TCN can contain a propagate bit, where a receiving network element processes the edge TCN if it belongs to a same CE segment and removes MAC addresses on STP designated ports. A manual configuration mechanism can be used to associate the network segment IDs with respective gateways. Alternatively, STP hello messages can be used to associate the network segment IDs with respective gateways.


Example Embodiments


Turning to FIG. 1, FIG. 1 is a simplified block diagram of a communication system 10 for managing topology changes in a network environment in accordance with one embodiment. FIG. 1 may include a Data Center Ethernet (DCE) network segment 12, a Classical Ethernet (CE) network segment 14 (e.g., CE-1), and a CE network segment 16 (e.g., CE-2). Additionally, FIG. 1 may include a set of CE-DCE gateway switches 20a-b operating in DCE network segment 12 and CE-1 network segment 14, along with a set of CE-DCE gateway switches 22a-b operating in DCE network segment 12 and CE-2 network segment 16. Further, FIG. 1 may include a plurality of CE switches 24a-e operating in CE-1 network segment 14 and a plurality of CE switches 28a-e operating in CE-2 network segment 16. Although CE-1 network segment 14 and CE-2 network segment 16 are part of the same communication system 10, each is an independent CE cloud using an independent forwarding protocol. That is, in this example implementation, network elements within CE-1 are not directly linked to any network elements within CE-2.


DCE network segment 12, CE-1 network segment 14, and CE-2 network segment 16 represent a series of points or nodes of interconnected communication paths for receiving and transmitting packets of information that propagate through communication system 10. These networks offer a communicative interface between network elements (e.g., switches, bridges, gateways, etc.) and may be any IP network, local area network (LAN), virtual LAN (VLAN), wireless LAN (WLAN), metropolitan area network (MAN), wide area network (WAN), extranet, Intranet, virtual private network (VPN), or any other appropriate architecture or system that facilitates communications in a network environment. The networks can support a transmission control protocol (TCP)/IP, or a user datagram protocol (UDP)/IP in particular embodiments of the present disclosure; however, these networks may alternatively implement any other suitable communication protocol for transmitting and receiving data packets within communication system 10.


CE switches 24a-e and 28a-e, and CE-DCE gateway switches 20a-b and 22a-b are network elements that route (or that cooperate with each other in order to route) traffic and/or packets in a network environment. As used herein in this Specification, the term ‘network element’ is used interchangeably with the term ‘gateway’ and these terms are meant to encompass switches, routers, bridges, loadbalancers, firewalls, inline service nodes, proxies, servers, processors, modules, or any other suitable device, component, element, or object operable to exchange information in a network environment. The network elements may include any suitable hardware, software, components, modules, interfaces, or objects that facilitate the operations thereof. This may be inclusive of appropriate algorithms and communication protocols that allow for the effective exchange (reception and/or transmission) of data or information.


DCE networks commonly use a routing protocol (e.g., intermediate system to intermediate system (IS-IS)) for forwarding purposes, whereas CE networks commonly use a spanning tree protocol (STP) as their forwarding protocol. DCE and CE may form the same layer-2 (L2) domain. Loop prevention within hybrid CE-DCE networks operating different forwarding protocols is an important issue. One example of a network protocol ensuring loop-free hybrid CE-DCE networks is the L2 Gateway Spanning Tree Protocol (L2G-STP), as discussed in the pending U.S. patent application having Ser. No. 12/941,881 and entitled “SYSTEM AND METHOD FOR PROVIDING A LOOP FREE TOPOLOGY IN A NETWORK ENVIRONMENT,” which is incorporated in its entirety into this Specification. An aspect of loop prevention within a network implicates the availability of network elements. Thus, a mechanism is needed to efficiently and effectively manage changes in topology within a CE network portion of a hybrid CE-DCE network.


Returning to FIG. 1, in one particular example, DCE network segment 12 is representative of an L2 multi-pathing (L2MP) network, which may be executing the IS-IS forwarding protocol. CE-DCE gateway switches 20a-b and 22a-b can employ the IS-IS protocol on DCE links and STP on the CE links. In this particular configuration of FIG. 1, CE switches 24a-e and 28a-e may be executing an STP. Further, as none of the switches in CE-1 network segment 14 (e.g., CE switches 24a-e) are directly connected to any switches in CE-2 network segment 16 (e.g., CE switches 28a-e), CE-1 and CE-2 may execute independent instances of the STP.


Note that before turning to the example flows and infrastructure of example embodiments of the present disclosure, a brief overview of the switching environment is provided for purposes of context and explanation. Link state routing is a protocol that allows a node in a network to determine network topology by sharing information about transmission cost to each of its neighboring nodes. Link state routing packets are transmitted to (and received from) neighbors. The least expensive path to various destinations can be determined using the link state information. Link state information can be used to generate network topology information at various network nodes for creating forwarding tables. The forwarding tables allow network nodes (such as switches and bridges) to forward the received traffic on an appropriate output interface. In order to generate a network topology map and a forwarding table at a specific node, link state information is distributed from various network nodes. Each network node is configured to create a link state packet having information about the distance, delay, or cost to each of its neighbors. A link state record (LSR) can then be transmitted to neighboring nodes.
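
By way of illustration only (this sketch is not part of the claimed subject matter), the following Python fragment shows how link state information of the kind described above can be reduced to a forwarding table with a shortest-path computation; the graph encoding, function name, and example topology are assumptions made for this sketch:

    import heapq

    def build_forwarding_table(links, source):
        # links: {node: {neighbor: cost}} as gathered from link state records.
        # Returns {destination: next hop} for the source node's forwarding table.
        dist = {source: 0}
        next_hop = {}
        heap = [(0, source, None)]  # (total cost, node, first hop taken from source)
        while heap:
            cost, node, hop = heapq.heappop(heap)
            if cost > dist.get(node, float("inf")):
                continue  # stale queue entry
            for neighbor, link_cost in links.get(node, {}).items():
                new_cost = cost + link_cost
                if new_cost < dist.get(neighbor, float("inf")):
                    dist[neighbor] = new_cost
                    # Leaving the source, the first hop is the neighbor itself.
                    next_hop[neighbor] = neighbor if node == source else hop
                    heapq.heappush(heap, (new_cost, neighbor, next_hop[neighbor]))
        return next_hop

    # Three switches in a triangle: the cheap path from A to C runs through B.
    topology = {"A": {"B": 1, "C": 5}, "B": {"A": 1, "C": 1}, "C": {"A": 5, "B": 1}}
    print(build_forwarding_table(topology, "A"))  # {'B': 'B', 'C': 'B'}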


Transient loops arise when network topology changes because neighboring nodes may not be forwarding transmissions using the same generated network topology. Transient and permanent loops waste network bandwidth and, further, may burden end nodes with duplicate copies of topology information. One mechanism for preventing loops is STP. STP commonly runs on a switch and, further, operates to maintain a loop-free topology in an L2 switched network. The term spanning tree protocol (STP) as used herein includes any version of STP, including, for example, traditional STP (IEEE 802.1d), rapid spanning tree protocol (RSTP) (IEEE 802.1w), multiple spanning tree protocol (MSTP) (IEEE 802.1s), or any other spanning tree protocol. CE switches may use STP to prevent loops, whereas other devices such as DCE switches may be configured to use protocols other than STP (e.g., IS-IS) to provide loop-free operations. While STP and other protocols work well for a standalone network comprising switches that utilize one protocol for preventing loops, the different protocols may not interoperate with each other and, therefore, cannot effectively be used in a combined (i.e., a hybrid) network.



FIG. 1 depicts clouds that form a switching network, where a cloud is defined as a set of one or more network switches/bridges and end hosts, all of which may be interconnected. At the edge of a DCE cloud and a CE cloud, a model for control plane interaction between the two clouds is commonly defined. Specifically, DCE and CE use different protocols to construct their respective forwarding topology (IS-IS versus STP). Thus, even though a single L2 domain would span the clouds, two different protocols govern the determination of the forwarding topology. One immediate issue surfaces as to how a change of topology (e.g., a link failure) in one cloud (e.g., CE-1 network segment 14) is managed in the other two clouds (e.g., DCE network segment 12 and CE-2 network segment 16). Updating network elements after a change in topology can create a significant increase in traffic, as new link state information needs to be communicated and forwarding tables need to be adjusted. The increased traffic can waste bandwidth and burden network elements.


For example, regarding FIG. 1, a change in the topology (e.g., a change in the availability of a network element) of CE-1 network segment 14 would need to be reported and managed throughout the hybrid CE-DCE network, including for DCE network segment 12 and for CE-2 network segment 16. In CE networks executing STP forwarding protocols, network elements recognizing a change in topology (e.g., a link failure) propagate an STP topology change notification (TCN) to other network elements within the CE network. Upon receiving an STP TCN, each CE network element updates or relearns the topology of the network. The network elements continue to propagate STP TCNs until the network elements have been updated and, therefore, contain the same understanding of the CE network topology.


In hybrid CE-DCE networks, topology changes within one network segment (e.g., CE-1 network segment 14) do not need to be addressed in other network segments (e.g., DCE network segment 12 and CE-2 network segment 16) in the same manner as in the network segment that experiences the change. To optimize the handling of a topology change, it becomes important to keep needless link state information from being communicated and to update only the forwarding tables of network elements impacted by the topology change.


In accordance with the teachings of the present disclosure, communication system 10 addresses the aforementioned shortcomings (and others) in optimally managing topology changes. In logistical terms, the general approach of communication system 10 is to assign each L2 gateway switch (i.e., an L2 switch connected to CE and L2MP) an L2 gateway domain ID. This can be accomplished through user configuration, where, by default, the L2 gateway switches belong to the same STP domain ID. The L2 gateway domain ID can be encoded in an L2G-STP reserved MAC address and, subsequently, flooded to the L2 gateway switches using the L2MP IS-IS protocol. Note that a block of 1000 MAC addresses is generally reserved for the L2G-STP protocol.
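
For purposes of illustration only, the following sketch shows one way a domain ID could be packed into a MAC address drawn from a reserved block; the base address and field layout here are assumptions made for this sketch, not the actual L2G-STP reservation:

    # Hypothetical reserved base address for illustration only; the actual
    # L2G-STP block is not specified here. The low-order bits give room for
    # a block on the order of 1000 reserved addresses.
    RESERVED_BASE = 0x0180C2000400  # assumption, not the real reserved block

    def domain_id_to_mac(domain_id):
        # Encode an L2 gateway domain ID as an address inside the reserved block.
        if not 0 <= domain_id < 1000:
            raise ValueError("domain ID outside the reserved block")
        value = RESERVED_BASE + domain_id
        return ":".join(f"{(value >> shift) & 0xFF:02x}" for shift in range(40, -8, -8))

    def mac_to_domain_id(mac):
        # Recover the domain ID from a reserved-block MAC address.
        return int(mac.replace(":", ""), 16) - RESERVED_BASE

    mac = domain_id_to_mac(1)  # domain ID 1, the default in this example
    print(mac)                 # 01:80:c2:00:04:01
    assert mac_to_domain_id(mac) == 1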


Hence, the L2 gateway switches would then have the identities of their peer L2 gateway switches, along with the associated domain IDs, before certain activities occur in the network. Therefore, whenever an L2 gateway switch receives an STP TCN, it can flush the learned MAC addresses on designated ports. This network element is configured to then generate an edge TCN using the IS-IS protocol. For example, when an L2 gateway switch receives an edge TCN (where the message includes the sender switch ID), it can identify the switches belonging to the same L2 gateway domain as the sender switch. The network element is configured to then flush the MAC addresses learned from those switches. In addition, if the domain ID carried in the edge TCN differs from the receiving switch's own L2 gateway domain ID, the receiving switch would not send an STP TCN into its CE cloud.
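
A minimal sketch of this edge TCN handling, assuming simple dictionary shapes for the MAC table and the IS-IS-flooded switch-to-domain mapping (all names here are illustrative, not part of the disclosure):

    def on_edge_tcn(mac_table, domain_by_switch, my_domain_id, sender_id):
        # mac_table: {MAC: switch the address was learned from};
        # domain_by_switch: {switch ID: L2 gateway domain ID}, flooded via IS-IS.
        sender_domain = domain_by_switch[sender_id]
        same_domain = {s for s, d in domain_by_switch.items() if d == sender_domain}
        # Flush only the MAC addresses learned from switches in the sender's domain.
        pruned = {mac: sw for mac, sw in mac_table.items() if sw not in same_domain}
        # An STP TCN enters the local CE cloud only for a same-domain sender.
        send_stp_tcn = sender_domain == my_domain_id
        return pruned, send_stp_tcn

    table = {"00:aa": "S1", "00:bb": "S2", "00:cc": "S3"}
    domains = {"S1": 1, "S2": 1, "S3": 2, "S4": 2}
    print(on_edge_tcn(table, domains, my_domain_id=2, sender_id="S2"))
    # ({'00:cc': 'S3'}, False): domain-1 entries flushed, no STP TCN into CE-2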


Note that such an approach may (in certain circumstances) shift responsibility to a customer to configure STP domain IDs based on the L2 network topologies. In order to address the case where there is connectivity between two L2 gateway domains, the architecture can employ an L2 gateway domain discovery protocol, which can send messages in the CE cloud to detect its peer L2 gateway switches. This optional feature can be provided based on particular configuration environments, or particular operator needs. Details relating to the possible signaling and interactions between the components of communication system 10 are provided below with reference to FIGS. 2A-4.


Turning to FIGS. 2A-2E, these FIGURES illustrate example topology change management activities associated with communication system 10. FIGS. 2A-2E reflect example flows for particular topology change events and, therefore, these related illustrations are discussed together. Note that the term ‘topology data’ as used herein in this Specification is meant to include any information relating to network elements and/or network characteristics (e.g., port information, MAC address information, configuration data, etc.). FIG. 2A illustrates one example path that data information can travel from host ‘e’ 30 in CE-2 network segment 16 to host ‘b’ 26 in CE-1 network segment 14. Data information may travel within CE-2 network segment 16 from host ‘e’ 30 to a CE switch 28e, then from CE switch 28e to another CE switch 28b, and finally from CE switch 28b to CE-DCE gateway switch S3 22a (as indicated by the arrow of FIG. 2A). CE-DCE gateway switch S3 22a may communicate data information from host ‘e’ 30 into DCE network segment 12 (as indicated by the arrow).


CE-DCE gateway switch S1 20a is configured to receive data information from host ‘e’ 30 in CE-2 network segment 16. CE-DCE gateway switch S1 20a may communicate the data from host ‘e’ 30 to CE switch 24b, which may communicate the data information to another CE switch 24e, which in turn communicates the data information to host ‘b’ 26 (as indicated by the arrow of FIG. 2A). Thus, in the example in FIG. 2A, data information communicated from host ‘e’ 30 to host ‘b’ 26 travels the path of: host ‘e’ 30 to CE switch 28e, to CE switch 28b, to CE-DCE gateway switch S3 22a, into DCE network segment 12, to CE-DCE gateway switch S1 20a, to CE switch 24b, to CE switch 24e, and finally to host ‘b’ 26. Each MAC address table for CE-DCE gateway switches S2 20b, S3 22a, and S4 22b may include table entries indicating that data information addressed to host ‘b’ 26 should be transmitted through CE-DCE gateway switch S1 20a.


CE-DCE gateway switches are configured to learn the CE network segment they are operating in (e.g., CE-1 network segment 14 or CE-2 network segment 16). A CE-DCE gateway switch may learn which CE network it operates in by manual configuration before deployment of the network element in the hybrid CE-DCE network. A network administrator may assign each CE-DCE gateway switch a domain identifier (ID), which can be any suitable data segment that can distinguish domains. By default, each CE-DCE gateway switch could be part of domain ID 1. The CE domain ID may be encoded in a reserved MAC address (e.g., a block of MAC addresses reserved in L2G-STP). A CE-DCE gateway switch may communicate the CE domain ID with which it is associated to other CE-DCE gateway switches through DCE network segment 12 using IS-IS. Another method of associating CE-DCE gateway switches with a CE domain ID is to have CE-DCE gateway switches communicate an STP hello message from ports operating in the CE network segment, as sketched below. STP hello messages transmitted in a CE network segment (e.g., CE-1 network segment 14 or CE-2 network segment 16) enable other CE-DCE gateway switches to know they are operating in the same network (e.g., the same CE network).
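
As a hedged illustration of the hello-based alternative, gateways that hear one another's STP hellos on CE-facing ports can be grouped under a shared segment ID; the sketch below assumes hello adjacencies are reported as simple pairs (names and data shapes are assumptions) and runs a union-find pass over them:

    def assign_segment_ids(hello_adjacencies):
        # hello_adjacencies: iterable of (gateway_a, gateway_b) pairs, one per
        # STP hello heard on a CE-facing port. Returns {gateway: segment ID}.
        parent = {}

        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        for a, b in hello_adjacencies:
            parent[find(a)] = find(b)  # union the two gateways' groups

        roots = {}
        return {g: roots.setdefault(find(g), len(roots) + 1) for g in parent}

    # S1/S2 exchange hellos inside CE-1; S3/S4 inside CE-2.
    print(assign_segment_ids([("S1", "S2"), ("S3", "S4")]))
    # {'S1': 1, 'S2': 1, 'S3': 2, 'S4': 2}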



FIG. 2B is a simplified schematic diagram illustrating the development of a topology change in communication system 10. In this particular example, a link failure develops between CE switch 24b and CE switch 24e, as indicated by the ‘X’. Once the link failure occurs, data information traveling from host ‘e’ 30 to host ‘b’ 26 can no longer reach its destination. CE switch 24e may recognize the link failure and determine a new network path to host ‘b’ 26 (e.g., using link state information). When a CE switch identifies a new path to host ‘b’, it communicates an STP topology change notification (TCN). In one example, CE switch 24e determines that the new path to host ‘b’ 26 can propagate through CE switch 24c and CE-DCE gateway switch S2 20b (e.g., a new STP uplink). CE switch 24e transmits an STP TCN into CE-1 network segment 14 on the link between itself and CE switch 24c. The port associated with that link can change from a blocking port to a forwarding port. Upon receiving the STP TCN, CE switch 24c may flush its MAC address table and communicate an STP TCN towards CE-DCE gateway switch S2 20b. CE-DCE gateway switch S2 20b may receive the STP TCN and flush from its MAC address table the MAC addresses it learned from STP ports (e.g., any MAC addresses learned from network elements in CE-1). CE-DCE gateway switch S2 20b further removes (from its MAC table) the MAC addresses learned from ports connected to DCE network segment 12. Moreover, upon receiving an STP TCN, CE-DCE gateway switch S2 20b transmits an edge TCN into DCE network segment 12 (as illustrated in FIG. 2C). The edge TCN may include generate and propagate bits, as well as the domain ID of the CE-DCE gateway switch that sent the edge TCN (i.e., CE-DCE gateway switch S2 20b).
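
For concreteness, the edge TCN described above can be pictured as a small record; the field names below are assumptions chosen for this sketch rather than a wire format defined by the disclosure:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class EdgeTcn:
        sender_switch_id: str  # e.g., "S2", the gateway that received the STP TCN
        domain_id: int         # CE segment/domain ID of the sender (CE-1 here)
        generate: bool         # receivers flush MAC addresses learned from the sender
        propagate: bool        # same-domain receivers flush STP ports and propagate

    # The edge TCN that CE-DCE gateway switch S2 20b floods into the DCE cloud:
    tcn = EdgeTcn(sender_switch_id="S2", domain_id=1, generate=True, propagate=True)
    print(tcn)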



FIG. 2C is a simplified schematic diagram that depicts an example topology change in communication system 10. An edge TCN sent by CE-DCE gateway switch S2 20b may be transmitted through DCE network segment 12 using IS-IS, eventually reaching a plurality of CE-DCE gateway switches. CE-DCE gateway switches associated with the same network segment ID as CE-DCE gateway switch S2 20b (e.g., S1 20a) may process edge TCNs if the propagate bit is set. CE-DCE gateway switch S1 20a may remove (from its internally stored MAC tables) the MAC addresses for the STP designated ports (e.g., ports attached to CE-1 network segment 14) and the MAC addresses learned from DCE network segment 12. Essentially, CE-DCE gateway switch S1 20a flushes the MAC addresses in its MAC table. CE-DCE gateway switch S1 20a further communicates an STP TCN on the ports connected to CE-1 network segment 14 (e.g., the STP designated ports). Moreover, CE-DCE gateway switch S1 20a may transmit an edge TCN into DCE network segment 12 with the generate bit set (as explained further below).


Returning to FIG. 2C, CE-DCE gateway switches not associated with the same network segment ID as CE-DCE gateway switch S2 20b process an edge TCN with a generate bit set in this example. Upon receiving an edge TCN from CE-DCE gateway switch S2 20b with the generate bit set, CE-DCE gateway switches S3 22a and S4 22b may remove from their respective MAC address tables the MAC addresses learned from the network element that sent the edge TCN (i.e., S2 20b). Because CE-1 and CE-2 are separate CE domains (i.e., no network elements in CE-1 and CE-2 are directly connected), a change in topology of CE-1 network segment 14 does not require the network elements within CE-2 network segment 16 to transmit an STP TCN. Therefore, CE-DCE gateway switches S3 22a and S4 22b do not flush their entire MAC tables and do not send STP TCNs into CE-2 network segment 16, thereby avoiding sub-optimal handling of a topology change within CE-1. Flooding a CE network segment with STP TCNs creates significant amounts of overhead traffic for the network elements to process. Further, because only the MAC addresses learned from the CE-DCE gateway switches that send out edge TCNs with generate bits set are flushed, the receiving CE-DCE gateway switches do not have to relearn the topology of the entire hybrid communication system (e.g., update all of the MAC addresses in their MAC tables); CE-DCE gateway switches S3 22a and S4 22b need only relearn the MAC addresses previously learned from CE-DCE gateway switch S2 20b. Minimizing MAC table flushing within CE-DCE gateway switches and reducing needless STP TCNs optimizes the time and resources necessary to manage topology changes in the hybrid CE-DCE communication system.
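
The two receive-side behaviors of FIGS. 2C-2D reduce to a short decision rule. The sketch below is a non-authoritative illustration (action strings and parameter names are assumptions) of what a receiving gateway would do:

    def edge_tcn_actions(my_domain_id, sender_domain_id, generate, propagate):
        # Returns the action list for a gateway receiving an edge TCN;
        # field semantics follow the description above.
        actions = []
        if sender_domain_id == my_domain_id and propagate:
            actions += ["flush MACs on STP designated ports",
                        "flush MACs learned from the DCE cloud",
                        "send STP TCN on CE-facing ports",
                        "re-flood edge TCN with generate bit set"]
        elif generate:
            # Different CE segment: prune only what was learned from the sender;
            # do not disturb the local CE cloud with an STP TCN.
            actions.append("flush MACs learned from the sending gateway")
        return actions

    print(edge_tcn_actions(my_domain_id=1, sender_domain_id=1, generate=True, propagate=True))
    print(edge_tcn_actions(my_domain_id=2, sender_domain_id=1, generate=True, propagate=False))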



FIG. 2D is a simplified schematic diagram, which continues to illustrate a change of topology within communication system 10. As noted above, upon receiving an edge TCN from a CE-DCE gateway switch with the same network segment ID, CE-DCE gateway switch S1 20a transmits an edge TCN into DCE network segment 12 with a generate bit set. Similar to the process described in FIG. 2C, when CE-DCE gateway switches S3 22a and S4 22b receive edge TCNs with a generate bit set from CE-DCE gateway switches that do not have the same network segment ID, they remove (from their respective MAC tables) the MAC addresses learned from the CE-DCE gateway switch that sent the edge TCN. Similar to above, by not flushing their entire MAC tables and by not transmitting STP TCNs within CE-2 network segment 16, CE-DCE gateway switches S3 22a and S4 22b minimize inefficient STP TCN propagation in CE-2 that would otherwise waste network bandwidth. Further, by flushing only the MAC addresses learned from CE-DCE gateway switch S1 20a, CE-DCE gateway switches S3 22a and S4 22b need only relearn the MAC addresses previously learned from S1 20a; that is, they do not have to relearn the entire topology of the hybrid communication system.



FIG. 2E is a simplified schematic diagram, which continues to illustrate an example of a change in topology of communication system 10. After flushing their respective MAC tables, CE-DCE gateway switches S1 20a, S3 22a, and S4 22b learn that data information addressed to host ‘b’ 26 propagates through CE-DCE gateway switch S2 20b. As illustrated in FIG. 2E, after the change in topology (e.g., a failed link), data information sent from host ‘e’ 30 to host ‘b’ 26 still travels through CE switch 28e, CE switch 28b, CE-DCE gateway switch S3 22a, and into DCE network segment 12 (as indicated by the arrows). Because there was no topology change within DCE network segment 12 and CE-2 network segment 16, data information from host ‘e’ 30 to host ‘b’ 26 follows the same path through CE-2 network segment 16 and DCE network segment 12. However, traffic from host ‘e’ 30 to host ‘b’ 26 now travels from DCE network segment 12 through CE-DCE gateway switch S2 20b, to CE switch 24c, to CE switch 24e, and finally to host ‘b’ 26 (as indicated by the arrow).


As indicated in FIG. 2E, the respective MAC tables of CE-DCE gateway switches S1 20a, S3 22a, and S4 22b include entries that indicate that data information addressed to host ‘b’ 26 should be transmitted through CE-DCE gateway switch S2 20b. Thus, FIGS. 2A-2E illustrate a method of efficiently and effectively managing a topology change in a CE network segment of a hybrid CE-DCE network that does not require CE-DCE gateway switches associated with different CE network segments to needlessly flush their respective MAC addresses and flood their respective CE network segments with STP TCNs.



FIG. 3 is a simplified block diagram illustrating potential details associated with communication system 10. In this particular example, CE-DCE gateway switches 20a-b (i.e., S1 and S2) and 22a-b (i.e., S3 and S4) include respective processors 44a-d, respective memory elements 46a-d, respective L2G-STP modules 40a-d, and respective routing modules 42a-d (e.g., IS-IS routing modules). Note also that L2G-STP modules 40a-d and routing modules 42a-d can be part of hybrid configurations in which any of the modules of FIG. 3 are suitably consolidated, combined, removed, added to, etc.


In operation, L2G-STP modules 40a-d may be configured to coordinate topology changes in a CE-DCE hybrid network. The L2G-STP modules 40a-d can learn (or associate with) a network domain ID to which each CE-DCE gateway switch is connected (e.g., operating in). L2G-STP modules 40a-d can coordinate receiving edge and STP TCNs, as well as communicating edge and STP TCNs. Further, L2G-STP modules 40a-d may be configured to facilitate updating MAC address tables stored in memory elements 46a-d to add or to remove MAC addresses. Processors 44a-d may execute code stored in memory elements 46a-d.


Note that CE-DCE switches 20a-b and 22a-b may share (or coordinate) certain processing operations. Using a similar rationale, their respective memory elements may store, maintain, and/or update data in any number of possible manners. In a general sense, the arrangement depicted in FIG. 3 may be more logical in its representations, whereas a physical architecture may include various permutations/combinations/hybrids of these elements. In one example implementation, CE-DCE gateway switches 20a-b and 22a-b include software (e.g., as part of L2G-STP modules 40a-d and/or routing modules 42a-d) to achieve the topology change management activities, as outlined herein in this document. In other embodiments, this feature may be provided externally to any of the aforementioned elements, or included in some other network element to achieve this intended functionality. Alternatively, several elements may include software (or reciprocating software) that can coordinate in order to achieve the operations, as outlined herein. In still other embodiments, any of the devices of FIGS. 1-3 may include any suitable algorithms, hardware, software, components, modules, interfaces, or objects that facilitate these topology change management operations.


Turning to FIG. 4, FIG. 4 is a simplified flowchart illustrating one example flow 100 that could be accommodated by communication system 10. This particular flow may begin at 110, where a network domain ID is associated with a given set of CE-DCE gateway switches. At 120, an STP TCN can be received at a CE-DCE gateway switch. At 130, an edge TCN with its generate and propagate bits set is communicated in the DCE cloud. At 140, the CE-DCE gateway switches receive edge TCNs.


At 150, a CE-DCE gateway switch (having the same domain ID as the CE-DCE gateway switch that sent an edge TCN with a propagate bit set) updates its MAC table. Note that such a table may be provisioned in any given memory element of the gateway. The objective of this operation is to remove the MAC addresses learned from the STP designated ports and the DCE cloud. Additionally at 150, STP TCNs are communicated on STP designated ports. Furthermore, an edge TCN (with a generate bit set in the DCE cloud) is communicated. At 160, a CE-DCE gateway switch with a different domain (in comparison to the CE-DCE gateway that sent an edge TCN with the generate bit set) is configured to update its MAC table to remove MAC addresses learned from the CE-DCE gateway switch that sent the edge TCN with the generate bit set.


Note that in certain example implementations, the topology change management activities outlined herein may be implemented by logic encoded in one or more tangible media (e.g., embedded logic provided in an application specific integrated circuit (ASIC), digital signal processor (DSP) instructions, software (potentially inclusive of object code and source code) to be executed by a processor, or other similar machine, etc.). In some of these instances, a memory element (as shown in FIG. 3) can store data used for the operations described herein. This includes the memory element being able to store software, logic, code, or processor instructions that can be executed to carry out the activities described in this Specification. A processor can execute any type of instructions associated with the data to achieve the operations detailed herein in this Specification. In one example, the processor (as shown in FIG. 3) could transform an element or an article (e.g., data) from one state or thing to another state or thing. In another example, the activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by a processor) and the elements identified herein could be some type of a programmable processor, programmable digital logic (e.g., a field programmable gate array (FPGA), an erasable programmable read only memory (EPROM), an electrically erasable programmable ROM (EEPROM)) or an ASIC that includes digital logic, software, code, electronic instructions, or any suitable combination thereof.


In one example implementation, L2G-STP modules 40a-d and/or routing modules 42a-d include software in order to achieve the topology change management outlined herein. These activities can be facilitated by CE-DCE gateway switches 20a-b and 22a-b, and/or any of the elements of FIGS. 1-3. CE-DCE gateway switches 20a-b and 22a-b can include memory elements for storing information to be used in achieving the intelligent switching control, as outlined herein. Additionally, CE-DCE gateway switches 20a-b and 22a-b may include a processor that can execute software or an algorithm to perform the switching activities, as discussed in this Specification. These devices may further keep information in any suitable memory element (random access memory (RAM), ROM, EPROM, EEPROM, ASIC, etc.), software, hardware, or in any other suitable component, device, element, or object where appropriate and based on particular needs. Any possible memory items (e.g., database, table, cache, etc.) should be construed as being encompassed within the broad term ‘memory element.’ Similarly, any of the potential processing elements, modules, and machines described in this Specification should be construed as being encompassed within the broad term ‘processor.’


Note that with the examples provided herein, interaction may be described in terms of two or three elements. However, this has been done for purposes of clarity and example only. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by only referencing a limited number of network elements. It should be appreciated that communication system 10 (and its teachings) is readily scalable and can accommodate a large number of clouds, networks, and/or switches, as well as more complicated/sophisticated arrangements and configurations. Accordingly, the examples provided herein should not limit the scope or inhibit the broad teachings of communication system 10 as potentially applied to a myriad of other architectures. Additionally, although described with reference to particular scenarios where L2G-STP modules 40a-d and/or routing modules 42a-d are provided separately, these modules can be consolidated or combined in any suitable fashion, or provided in a single proprietary unit.


It is also important to note that the steps discussed with reference to FIGS. 1-4 illustrate only some of the possible scenarios that may be executed by, or within, communication system 10. Some of these steps may be deleted or removed where appropriate, or these steps may be modified or changed considerably without departing from the scope of the present disclosure. In addition, a number of these operations have been described as being executed concurrently with, or in parallel to, one or more additional operations. However, the timing of these operations may be altered considerably. The preceding operational flows have been offered for purposes of example and discussion. Substantial flexibility is provided by communication system 10 in that any suitable arrangements, chronologies, configurations, and timing mechanisms may be provided without departing from the teachings of the present disclosure.


Although the present disclosure has been described in detail with reference to particular embodiments, it should be understood that various other changes, substitutions, and alterations may be made hereto without departing from the spirit and scope of the present disclosure. For example, although the present disclosure has been described as operating in networking environments or arrangements, the present disclosure may be used in any communications environment that could benefit from such technology. Virtually any configuration that seeks to intelligently manage network topology changes and/or switch packets could enjoy the benefits of the present disclosure. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained to one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims.

Claims
  • 1. A method, comprising: receiving a spanning tree protocol topology change notification (STP TCN) in a network; removing topology data using an STP for a first plurality of gateways associated with a first network segment ID that is shared by a particular gateway that communicated the STP TCN; communicating an edge TCN over an intermediate system to intermediate system (IS-IS) protocol to a second plurality of gateways associated with a second network segment ID and for which topology data has not been removed based on the STP TCN; and removing topology data from the second plurality of gateways that was learned from a communicating network element that communicated the edge TCN without communicating the STP TCN into a second network associated with the second network segment ID.
  • 2. The method of claim 1, wherein the network includes a Data Center Ethernet (DCE) network, a first Classical Ethernet (CE) network, and a second CE network, which collectively form a layer-2 (L2) domain.
  • 3. The method of claim 1, wherein removing the topology data comprises removing Media Access Control (MAC) addresses from a MAC table, and wherein the first and second network segment IDs are encoded in respective MAC addresses.
  • 4. The method of claim 1, wherein the edge TCN contains a generate bit, which causes a receiving network element to remove MAC addresses learned from a communicating network element that provided the edge TCN.
  • 5. The method of claim 1, wherein the edge TCN contains a propagate bit, and wherein a receiving network element processes the edge TCN if it belongs to a same CE segment and removes MAC addresses on STP designated ports.
  • 6. The method of claim 1, wherein a manual configuration mechanism is used to associate the network segment IDs with respective gateways.
  • 7. The method of claim 1, wherein STP hello messages are used to associate the network segment IDs with respective gateways.
  • 8. Logic encoded in non-transitory media that includes code for execution and when executed by a processor operable to perform operations comprising: receiving a spanning tree protocol topology change notification (STP TCN) in a network; removing topology data using an STP for a first plurality of gateways associated with a first network segment ID that is shared by a particular gateway that communicated the STP TCN; communicating an edge TCN over an intermediate system to intermediate system (IS-IS) protocol to a second plurality of gateways associated with a second network segment ID and for which topology data has not been removed based on the STP TCN; and removing topology data from the second plurality of gateways that was learned from a communicating network element that communicated the edge TCN without communicating the STP TCN into a second network associated with the second network segment ID.
  • 9. The logic of claim 8, wherein the network includes a Data Center Ethernet (DCE) network, a first Classical Ethernet (CE) network, and a second CE network, which collectively form a layer-2 (L2) domain.
  • 10. The logic of claim 8, wherein removing the topology data comprises removing Media Access Control (MAC) addresses from a MAC table, and wherein the first and second network segment IDs are encoded in respective MAC addresses.
  • 11. The logic of claim 8, wherein the edge TCN contains a generate bit, which causes a receiving network element to remove MAC addresses learned from a communicating network element that provided the edge TCN.
  • 12. The logic of claim 8, wherein the edge TCN contains a propagate bit, and wherein a receiving network element processes the edge TCN if it belongs to a same CE segment and removes MAC addresses on STP designated ports.
  • 13. The logic of claim 8, wherein STP hello messages are used to associate the network segment IDs with respective gateways.
  • 14. An apparatus, comprising: a memory element configured to store electronic code; a processor operable to execute instructions associated with the electronic code; and a routing module configured to interface with the processor such that the apparatus is configured for: receiving a spanning tree protocol topology change notification (STP TCN) in a network; removing topology data using an STP for a first plurality of gateways associated with a first network segment ID that is shared by a particular gateway that communicated the STP TCN; communicating an edge TCN over an intermediate system to intermediate system (IS-IS) protocol to a second plurality of gateways associated with a second network segment ID and for which topology data has not been removed based on the STP TCN; and removing topology data from the second plurality of gateways that was learned from a communicating network element that communicated the edge TCN without communicating the STP TCN into a second network associated with the second network segment ID.
  • 15. The apparatus of claim 14, wherein the network includes a Data Center Ethernet (DCE) network, a first Classical Ethernet (CE) network, and a second CE network, which collectively form a layer-2 (L2) domain.
  • 16. The apparatus of claim 14, wherein removing the topology data comprises removing Media Access Control (MAC) addresses from a MAC table, and wherein the first and second network segment IDs are encoded in respective MAC addresses.
  • 17. The apparatus of claim 14, wherein the edge TCN contains a generate bit, which causes a receiving network element to remove MAC addresses learned from a communicating network element that provided the edge TCN.
  • 18. The apparatus of claim 14, wherein the edge TCN contains a propagate bit, and wherein a receiving network element processes the edge TCN if it belongs to a same CE segment and removes MAC addresses on STP designated ports.
  • 19. The apparatus of claim 15, wherein STP hello messages are used to associate the network segment IDs with respective gateways.
Non-Patent Literature Citations (72)
U.S. Appl. No. 13/152,300, filed Jun. 2, 2011, entitled “System and Method for Managing Network Traffic Disruption,” Inventor(s): Shekher Bulusu, et al.
U.S. Appl. No. 13/077,828, filed Mar. 31, 2011 entitled “System and Method for Probing Multiple Paths in a Network Environment,” Inventor(s): Hariharan Balasubramanian, et al.
U.S. Appl. No. 13/160,957, filed Jun. 15, 2011, entitled “System and Method for Providing a Loop Free Topology in a Network Environment,” Inventor(s): Shekher Bulusu, et al.
USPTO Sep. 25, 2012 Non-Final Office Action from U.S. Appl. No. 12/941,881.
USPTO Dec. 11, 2012 Response to Sep. 25, 2012 Non-Final Office Action from U.S. Appl. No. 12/941,881.
USPTO Feb. 27, 2013 Final Office Action from U.S. Appl. No. 12/941,881.
USPTO Mar. 26, 2013 Non-Final Office Action from U.S. Appl. No. 13/077,828.
USPTO Nov. 26, 2012 Non-Final Office Action from U.S. Appl. No. 12/938,237.
USPTO Feb. 22, 2013 Response to Nov. 26, 2012 Non-Final Office Action from U.S. Appl. No. 12/938,237.
USPTO Mar. 26, 2013 Final Office Action from U.S. Appl. No. 12/938,237.
USPTO Jan. 7, 2013 Non-Final Office Action from U.S. Appl. No. 13/160,957.
USPTO Apr. 2, 2013 Response to Non-Final Office Action dated Jan. 7, 2013 from U.S. Appl. No. 13/160,957.
Andreasan et al., “RTP No-Op Payload Format,” Internet Draft <draft-wing-avt-rtp-noop-00.txt>, Internet Engineering Task Force, Feb. 2004, pp. 1-8.
Cheng, Jin et al., “Fast TCP: Motivation, Architecture, Algorithms, Performance,” IEEE INFOCOM 2004, Aug. 2, 2004, 44 pages.
Fedyk, D., et al., “IS-IS Extensions Supporting IEEE 802.1aq Shortest Path Bridging,” Network Working Group Internet Draft, Mar. 8, 2011, 42 pages; http://tools.ietf.org/html/draft-ietf-isis-ieee-aq-05.
IEEE Standards Department, “Virtual Bridged Local Area Networks—Amendment 9: Shortest Path Bridging—IEEE P802.1aq/D2.1,” © 2009, Institute of Electrical and Electronics Engineers, Inc., Aug. 21, 2009; 208 pages.
Niccolini, S., et al., “How to store traceroute measurements and related metrics,” Internet Draft draft-niccolini-ippm-storetraceroutes-02.txt, Oct. 24, 2005.
Woundy et al., “ARIS: Aggregate Route-Based IP Switching,” Internet Draft draft-woundy-aris-ipswitching-00.txt, Nov. 1996.
Perlman, Radia, “Rbridges: Transparent Routing,” in Proc. IEEE INFOCOM, Mar. 2004.
Perlman, et al., “Rbridges: Base Protocol Specification,” IETF Draft <draft-ietf-trill-rbridge-protocol-11.txt>, Jan. 2009.
Touch, et al., Transparent Interconnection of Lots of Links (TRILL): Problem and Applicability Statement, RFC 5556, IETF, May 2009.
USPTO Jul. 5, 2013 Non-Final Office Action from U.S. Appl. No. 13/152,200.
USPTO Jun. 13, 2013 RCE Response to Mar. 26, 2013 Final Office Action from U.S. Appl. No. 12/938,237.
USPTO Jul. 19, 2013 Non-Final Office Action from U.S. Appl. No. 12/938,237.
USPTO Aug. 6, 2013 RCE Response to May 8, 2013 Final Office Action from U.S. Appl. No. 13/160,957.
U.S. Appl. No. 13/152,200, filed Jun. 2, 2011, entitled “System and Method for Managing Network Traffic Disruption,” Inventor(s): Shekher Bulusu, et al.
Kessler, G., “Chapter 2.2 PING of TCP: A Primer on Internet and TCP/IP Tools,” RFC 1739; Dec. 1994; www.ietf.org.
Callon et al., “A Framework for Multiprotocol Label Switching,” IETF Network Working Group, Internet Draft draft-ietf-mpls-framework-02.txt, Nov. 21, 1997.
Deering, S., et al., “Internet Protocol Version 6,” RFC 1883, Dec. 1995.
Feldman, N., “ARIS Specification,” Internet Draft, Mar. 1997.
Gobrial, Margret N., “Evaluation of Border Gateway Protocol (BGP) Version 4(V4) in the Tactical Environment,” Military Communications Conference, Oct. 21-24, 1996; Abstract Only http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=569372&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel3%2F4198%2F12335%2F00569372.pdf%3Farnumber%3D569372.
Halabi, Bassam, Internet Routing Architectures (CISCO), Macmillan Technical Publishing, Apr. 23, 1997; Abstract and Table of Contents only. http://www.ciscopress.com/store/internet-routing-architectures-cisco-9781562056520.
Handley and V. Jacobson, “SDP: Session Description Protocol,” RFC 2327; Apr. 1998, 43pgs.
Heinanen, J., “Multiprotocol Encapsulation over ATM Adaptation Layer 5,” RFC 1483, Jul. 1993.
Jennings, C., “NAT Classification Test Results,” Internet Draft draft-jennings-behave-test-results-02draft-jennings-behave-test-results-02.txt, Jun. 25, 2006.
Katsube, Y. et al., “Toshiba's Router Architecture Extensions for ATM: Overview,” RFC 2098, Feb. 1997.
Laubach, M., “Classical IP and ARP over ATM,” RFC 1577, Jan. 1994.
Laubach, M., “IP over ATM Working Group's Recommendations for the ATM Forum's Multiprotocol BOF Version 1,” RFC 1754, Jan. 1995.
Liao et al., “Adaptive Recovery Techniques for Real-time Audio Streams,” IEEE INFOCOM 2001; Twentieth Annual Joint Conference of the IEEE Computer and Communications Societies Proceedings, Apr. 22-26, 2001, vol. 2, pp. 815-823.
McGovern, M., et al., “CATNIP: Common Architecture for the Internet,” RFC 1707, Oct. 1994.
Nagami, K., et al., “Toshiba's Flow Attribute Notification Protocol (FANP) Specification,” RFC 2129, Apr. 1997.
Newman, P. et al., “Ipsilon Flow Management Protocol Specification for IPv4 Version 1.0,” RFC 1953, May 1996.
Newman, P. et al., “Ipsilon's General Switch Management Protocol Specification Version 1.1,” RFC 1987, Aug. 1996.
PCT Feb. 7, 2008 International Search Report for PCT/US2007/015506.
Perez, M., et al., “ATM Signaling Support for IP over ATM,” RFC 1755, Feb. 1995.
Rosen et al., “A Proposed Architecture for MPLS,” IETF Network Working Group, Internet Draft draft-ietf-mpls-arch-00.txt, Aug. 1997.
Rosen et al., “MPLS Label Stack Encoding,” RFC 3032, Jan. 2001.
Rosenberg et al., “STUN—Simple Traversal of User Datagram Protocol (UDP) Through Network Address Translators (NATs),” Network Working Group, RFC 3489, Mar. 2003, 44 pgs.
Schulzrinne, H., et al., “RTP, A Transport Protocol for Real-Time Applications,” Network Working Group RFC3550, Jul. 2003, 98 pages.
Smith, Bradley R., et al., “Securing the Border Gateway Routing Protocol,” Global Telecommunications Conference, Nov. 18-22, 1996.
Townsley, et al., “Layer Two Tunneling Protocol, L2TP,” Network Working Group, RFC 2661, Aug. 1999, 75 pages.
Ullman, R., “Rap: Internet Route Access Protocol,” RFC 1476, Jun. 1993.
Viswanathan et al., “ARIS: Aggregate Route-Based IP Switching,” Internet Draft, Mar. 1997.
Wang, Q. et al., “TCP-Friendly Congestion Control Schemes in the Internet,” National Key Lab of Switching Technology and Telecommunication Networks, Beijing University of Posts & Telecommunications; 2001, pp. 211-216; http://www.sics.se/~runtong/11.pdf.
USPTO May 24, 2013 RCE Response to Final Office Action dated Feb. 27, 2013 from U.S. Appl. No. 12/941,881.
USPTO May 24, 2013 Supplemental Response to Final Office Action dated Feb. 27, 2013 from U.S. Appl. No. 12/941,881.
USPTO Jun. 14, 2013 Notice of Allowance from U.S. Appl. No. 12/941,881.
USPTO May 8, 2013 Final Office Action from U.S. Appl. No. 13/160,957.
USPTO Jan. 14, 2014 Notice of Allowance from U.S. Appl. No. 13/152,200.
USPTO Oct. 30, 2013 Notice of Allowance from U.S. Appl. No. 13/077,828.
USPTO Feb. 19, 2014 Notice of Allowance from U.S. Appl. No. 12/938,237.
USPTO May 9, 2014 Notice of Allowance from U.S. Appl. No. 13/160,957.
U.S. Appl. No. 12/941,881, filed Nov. 8, 2010, entitled “System and Method for Providing a Loop Free Topology in a Network Environment,” Inventor: Shekher Bulusu.
U.S. Appl. No. 12/938,237, filed Nov. 2, 2010, entitled “System and Method for Providing Proactive Fault Monitoring in a Network Environment,” Inventor(s): Chandan Mishra, et al.
U.S. Appl. No. 12/916,763, filed Nov. 1, 2010, entitled “Probing Specific Customer Flow in Layer-2 Multipath Networks,” Inventor(s): Chandan Mishra et al.
U.S. Appl. No. 12/658,503, filed Feb. 5, 2010, entitled “Fault Isolation in Trill Networks,” Inventor(s): Ali Sajassi et al.
Wikipedia, “IEEE 802.1ag,” Connectivity Fault Management, retrieved and printed Nov. 2, 2010, 4 pages; http://en.wikipedia.org/wiki/IEEE_802.1ag.
G. Malkin, “Traceroute Using an IP Option,” Network Working Group, RFC 1393, Jan. 1993, 8 pages; http://tools.ietf.org/pdf/rfc1393.pdf.
K. Kompella and G. Swallow, “Detecting Multi-Protocol Label Switched (MPLS) Data Plane Failures,” Network Working Group, RFC 4379, Feb. 2006, 52 pages; http://tools.ietf.org/pdf/rfc4379.pdf.
PCT “International Preliminary Report on Patentability, Date of Issuance Jan. 20, 2009 (1 page), Written Opinion of the International Searching Authority, Date of Mailing Feb. 7, 2008 (6 pages) and International Search Report, Date of Mailing Feb. 7, 2008 (2 pages),” for PCT/US2007/015506.
PCT “International Preliminary Report on Patentability (dated Jan. 26, 2010; 1 page) and Written Opinion of the International Searching Authority and International Search Report (dated Oct. 2, 2008; 7 pages),” for PCT International Application PCT/US2008/070243.
IEEE Standards Department, “Virtual Bridged Local Area Networks—Amendment 6: Provider Backbone Bridges—IEEE P802.1ah/D4.2,” © 2008, Institute of Electrical and Electronics Engineers, Inc., Mar. 26, 2008; 116 pages.
Related Publications (1)
Number           Date       Country
20120224510 A1   Sep. 2012  US