Multicast in a TRILL network

Information

  • Patent Number
    9,407,533
  • Date Filed
    Tuesday, January 17, 2012
  • Date Issued
    Tuesday, August 2, 2016
Abstract
One embodiment of the present invention provides a switch. The switch includes a storage and a multicast management mechanism. The storage is configured to store an entry indicating a multicast group membership learned at a remote switch. The multicast management mechanism is coupled to the storage and is configured to suppress flooding of packets destined for the multicast group.
Description

The present disclosure is related to U.S. patent application Ser. No. 13/087,239, titled “Virtual Cluster Switching,” by inventors Suresh Vobbilisetty and Dilip Chatwani, filed 14 Apr. 2011, and to U.S. patent application Ser. No. 13/092,752, titled “Name Services for Virtual Cluster Switching,” by inventors Suresh Vobbilisetty, Phanidhar Koganti, and Jesse B. Willeke, filed 22 Apr. 2011, the disclosures of which are incorporated by reference herein.


BACKGROUND

1. Field


The present disclosure relates to network management. More specifically, the present disclosure relates to a method and system for facilitating multicast in a network.


2. Related Art


The exponential growth of the Internet has made it a popular delivery medium for multimedia applications, such as video on demand and television. Such applications have brought with them an increasing demand for bandwidth. As a result, equipment vendors race to build larger and faster switches with versatile capabilities, such as multicasting, to move more traffic efficiently. However, the size of a switch cannot grow infinitely. It is limited by physical space, power consumption, and design complexity, to name a few factors. Furthermore, switches with higher capability are usually more complex and expensive. More importantly, because an overly large and complex system often does not provide economy of scale, simply increasing the size and capability of a switch may prove economically unviable due to the increased per-port cost.


One way to increase the throughput of a switch system is to use switch stacking. In switch stacking, multiple smaller-scale, identical switches are interconnected in a special pattern to form a larger logical switch. However, the manual configuration required and the topological limitations of switch stacking become prohibitive when the stack reaches a certain size, which precludes switch stacking from being a practical option for building a large-scale switching system.


Meanwhile, layer-2 (e.g., Ethernet) switching technologies continue to evolve. More routing-like functionalities, which have traditionally been the characteristics of layer-3 (e.g., Internet Protocol or IP) networks, are migrating into layer-2. Notably, the recent development of the Transparent Interconnection of Lots of Links (TRILL) protocol allows Ethernet switches to function more like routing devices. TRILL overcomes the inherent inefficiency of the conventional spanning tree protocol, which forces layer-2 switches to be coupled in a logical spanning-tree topology to avoid looping. TRILL allows routing bridges (RBridges) to be coupled in an arbitrary topology without the risk of looping by implementing routing functions in switches and including a hop count in the TRILL header.


While TRILL brings many desirable features to layer-2 networks, some issues remain unsolved when TRILL RBridges manage and maintain multicast group membership.


SUMMARY

One embodiment of the present invention provides a switch. The switch includes a storage and a multicast management mechanism. The storage is configured to store an entry indicating a multicast group membership learned at a remote switch. The multicast management mechanism is coupled to the storage and is configured to suppress flooding of packets destined for the multicast group.


In a variation on this embodiment, the switch further includes a logical switch management mechanism configured to maintain a membership in a logical switch, wherein the logical switch is configured to accommodate a plurality of remotely located switches and operate as a single logical switch.


In a variation on this embodiment, the switch includes a data structure configured to store a non-flooding forwarding entry corresponding to the multicast group and the remote switch.


In a variation on this embodiment, the switch and the remote switch belong to a link aggregation, wherein the entry indicating the multicast group membership learned at the remote switch is treated as a multicast group membership learned locally at the switch.


In a variation on this embodiment, the switch includes a communication mechanism configured to transmit to the remote switch information learned locally on the multicast group membership.


In a variation on this embodiment, the multicast group is formed based on one or more of the following: Internet Group Management Protocol (IGMP) version 1, IGMP version 2, IGMP version 3, Multicast Listener Discovery (MLD) version 1, and MLD version 2.


In a variation on this embodiment, the switch is a TRILL routing bridge, wherein the multicast management mechanism is configured to perform IGMP or MLD snooping.


In a variation on this embodiment, the switch includes a communication mechanism configured to send IGMP state information in response to receiving a first join message or a last leave message from a locally coupled end device.





BRIEF DESCRIPTION OF THE FIGURES


FIG. 1 illustrates an exemplary TRILL network that includes a plurality of RBridges that share multicast group membership information among themselves, in accordance with an embodiment of the present invention.



FIG. 2A illustrates an exemplary network configuration of end devices, including multicast-enabled layer-3 routers, coupled to a TRILL network, in accordance with an embodiment of the present invention.



FIG. 2B illustrates an exemplary network configuration where layer-3-enabled RBridges in a TRILL network support multicast, in accordance with an embodiment of the present invention.



FIG. 3 illustrates an exemplary configuration of end devices belonging to different Virtual Local Area Networks (VLANs) coupled to a TRILL network which shares multicast group membership information among RBridges, in accordance with an embodiment of the present invention.



FIG. 4A presents a flowchart illustrating a first process of an RBridge forwarding a multicast packet in a TRILL network based on shared multicast group membership information, wherein the multicast packet is selectively distributed in the TRILL network, in accordance with an embodiment of the present invention.



FIG. 4B presents a flowchart illustrating a second process of an RBridge forwarding a multicast packet in a TRILL network based on shared multicast group membership information, wherein the multicast packet is distributed to all other RBridges in the TRILL network, in accordance with an embodiment of the present invention.



FIG. 5A illustrates an exemplary database that stores locally learned multicast group membership information associated with each VLAN, in accordance with an embodiment of the present invention.



FIG. 5B illustrates an exemplary database that stores remotely learned multicast group membership information associated with each VLAN, in accordance with an embodiment of the present invention.



FIG. 6A presents a flowchart illustrating the process of an RBridge forwarding multicast control messages and locally learned multicast membership information, and updating a database that stores locally learned multicast group membership information, in accordance with an embodiment of the present invention.



FIG. 6B presents a flowchart illustrating the process of an RBridge updating a database that stores remotely learned multicast group membership information, in accordance with an embodiment of the present invention.



FIG. 7 illustrates an exemplary network where a virtual RBridge identifier is assigned to two physical TRILL RBridges which are coupled to end devices via divided aggregate links, in accordance with an embodiment of the present invention.



FIG. 8A presents a flowchart illustrating the process of an RBridge distributing a multicast control message across a TRILL network with virtual link aggregation support, and accordingly updating databases configured to maintain multicast group membership information, in accordance with an embodiment of the present invention.



FIG. 8B presents a flowchart illustrating the process of an RBridge forwarding a multicast packet in a TRILL network with virtual link aggregation support, in accordance with an embodiment of the present invention.



FIG. 9 illustrates an exemplary architecture of a switch capable of learning multicast group membership information from remote switches and accordingly forwarding multicast packets, in accordance with an embodiment of the present invention.





DETAILED DESCRIPTION

The following description is presented to enable any person skilled in the art to make and use the invention, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims.


Overview


In embodiments of the present invention, the problem of facilitating scalable and flexible multicast in a TRILL network is solved by learning multicast group membership information from remote RBridges and forwarding multicast traffic accordingly. In some embodiments, an RBridge may learn multicast group membership information by examining IGMP packets. The RBridge then shares the multicast group membership information with other RBridges in the TRILL network. All RBridges in the TRILL network use the multicast group membership information learned from local ports and from remote RBridges to collectively make multicast forwarding decisions.


In some embodiments, a multicast-enabled layer-3 (e.g., IP) router may be coupled to the TRILL network. Under such a scenario, the router sends a multicast packet to one or more end devices coupled to the TRILL network. If an end device's multicast group membership is not known to the ingress RBridge coupled to the router, the default behavior of that RBridge is to flood the packet to all edge and TRILL ports (except for the port on which the multicast packet is received). As a result, all end devices coupled to the RBridge, along with all remote RBridges in the TRILL network, receive the multicast packet. Each remote RBridge, in turn, forwards the packet to all of its edge ports.


When an end device decides to join the multicast group, the end device sends a join message to the router. The ingress RBridge coupled to the end device receives the join message, learns the multicast group membership information, and stores the information in a database that stores the locally learned multicast group membership information. The RBridge then forwards the information to all remote RBridges in the TRILL network. Note that VLAN-group membership information learned on local ports is also distributed to other RBridges in the network. Each remote RBridge, including the RBridge coupled to the router, receives the information and stores the information in a database that stores remotely learned multicast group membership information. For any subsequent multicast packet, the ingress RBridge coupled to the router suppresses the default flooding behavior, as the RBridge is aware of the multicast group membership of the end device coupled to the TRILL network. Instead, the ingress RBridge forwards the multicast packet toward the egress RBridge to which the end device is coupled. Note that, since the intermediate RBridges also learn the end device's membership information, they also suppress the default flooding behavior for the multicast packet.


Although the present disclosure is presented using examples based on the TRILL protocol, embodiments of the present invention are not limited to TRILL networks, or networks defined in a particular Open System Interconnection Reference Model (OSI reference model) layer.


The term “RBridge” refers to routing bridges, which are bridges implementing the TRILL protocol as described in IETF Request for Comments (RFC) “Routing Bridges (RBridges): Base Protocol Specification,” available at http://tools.ietf.org/html/rfc6325, which is incorporated by reference herein. Embodiments of the present invention are not limited to the application among RBridges. Other types of switches, routers, and forwarders can also be used.


In this disclosure, the term “edge port” refers to a port on an RBridge which sends/receives data frames in native Ethernet format. The term “TRILL port” refers to a port which sends/receives data frames encapsulated with a TRILL header and outer MAC header.


The term “end device” refers to a network device that is typically not TRILL-capable. “End device” is a relative term with respect to the TRILL network. However, “end device” does not necessarily mean that the network device is an end host. An end device can be a host, a conventional layer-2 switch, or any other type of network device. Additionally, an end device can be coupled to other switches or hosts further away from the TRILL network. In other words, an end device can be an aggregation point for a number of network devices to enter the TRILL network.


The term “RBridge identifier” refers to a group of bits that can be used to identify an RBridge. Note that the TRILL standard uses “RBridge ID” to denote a 48-bit intermediate-system-to-intermediate-system (IS-IS) System ID assigned to an RBridge, and “RBridge nickname” to denote a 16-bit value that serves as an abbreviation for the “RBridge ID.” In this disclosure, “RBridge identifier” is used as a generic term and is not limited to any bit format, and can refer to “RBridge ID” or “RBridge nickname” or any other format that can identify an RBridge.


The term “frame” refers to a group of bits that can be transported together across a network. “Frame” should not be interpreted as limiting embodiments of the present invention to layer-2 networks. “Frame” can be replaced by other terminologies referring to a group of bits, such as “packet,” “cell,” or “datagram.”


In this disclosure, all the terms related to IGMP are used in a generic sense and are not limited to only the IGMP protocol. Other multicast protocols, such as the Multicast Listener Discovery (MLD) protocol, and different versions of such protocols, such as IGMP v1/v2/v3 and MLD v1/v2, etc., can also be used. The term “IGMP packet” refers to any data segment sent over a network containing any multicast control message. The term “IGMP query” refers to any message sent by a multicast-enabled layer-3 router to discover which end device is participating in a particular multicast group. The term “IGMP join” refers to any message sent by an end host requesting to join a particular multicast group. The term “IGMP leave” refers to any message sent by an end host requesting to leave a particular multicast group. In this disclosure, the term “IGMP control message” can refer to both IGMP join and IGMP leave messages. The term “multicast packet” refers to any data traffic associated with a particular multicast group.


In some embodiments, layer-3 processing capability can be enabled in RBridges in a TRILL network. Hence, the term “router” can refer to a stand-alone layer-3 (e.g., IP) router or the layer-3-capable portion of an RBridge. In this disclosure, the terms layer-3 and IP are used interchangeably.


Network Architecture



FIG. 1 illustrates an exemplary TRILL network that includes a plurality of RBridges that share multicast group membership information among themselves, in accordance with an embodiment of the present invention. As illustrated in FIG. 1, a TRILL network 100 includes RBridges 101, 102, 103, 104, 105, 106, and 107. End devices 112, 114, 116, and 118 are coupled to network 100 via ingress RBridges 102, 101, 104, and 102, respectively. A multicast-enabled router 122 is coupled to a layer-3 network 130 and to network 100 via ingress RBridge 104. Note that in some embodiments, TRILL network 100 can support multiple VLANs. An end device can participate in one or more VLANs and its multicast group membership could vary for each of these different VLANs.


RBridges in network 100 use edge ports to communicate with end devices and TRILL ports to communicate with other RBridges. For example, RBridge 104 is coupled to end device 116 via an edge port and to RBridges 105, 101, and 102 via TRILL ports. An end device coupled to an edge port may be a host machine or a network device. For example, end devices 112, 114, and 116 are host machines, while device 122 is a layer-3 router.


During operation, router 122 sends a multicast packet to network 100. If RBridge 104 does not have information on the members of this multicast group, RBridge 104 floods the packet to all edge ports except the port on which the packet is received. As a result, local end device 116 receives the packet.


In some embodiments, the packet is flooded to all TRILL ports as well. Under such a scenario, RBridges 101, 102, and 105 receive the packet. These RBridges, in turn, flood the packet to their edge and TRILL ports. For example, RBridge 102 transmits the packet to end device 112. Note that RBridge 102 does not send the packet to RBridge 104 as the packet was received from RBridge 104. Furthermore, if an RBridge receives multiple copies of the same packet, it uses one copy and discards the rest. For example, though RBridge 101 might receive the packet from both RBridges 104 and 102, it sends the packet to end device 114 only once.


In some embodiments, RBridge 104 creates a multicast distribution tree to all other RBridges and distributes the packet over the tree. The tree topology is maintained at each RBridge to distribute contents in network 100. Under such a scenario, each RBridge receives only one copy of the packet and floods the packet only to the ports that are part of the tree. In some embodiments, the tree is constructed as a breadth-first search (BFS) tree. Note that the multicast distribution tree can be a common tree, which includes all RBridges and can be sub-optimal in some cases, and which can be used by all multicast groups in the TRILL network. The multicast distribution tree can also be more specific to and optimized for a VLAN, which could include only RBridges that are members of a multicast group in a particular VLAN.
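For illustration, the following minimal sketch shows one way such a BFS tree over the RBridge topology might be computed, assuming the topology is available as an adjacency map. The function and variable names (`build_bfs_tree`, `topology`) and the example link set are assumptions for illustration and do not appear in this disclosure; in practice TRILL derives its distribution trees from IS-IS link-state data.

```python
# Minimal sketch: computing a BFS multicast distribution tree rooted at an
# ingress RBridge. The adjacency map and all names are assumptions.
from collections import deque

def build_bfs_tree(topology, root):
    """Return a {rbridge: parent} map describing a BFS tree.

    `topology` maps an RBridge identifier to the set of its TRILL neighbors.
    """
    parent = {root: None}
    queue = deque([root])
    while queue:
        rbridge = queue.popleft()
        for neighbor in sorted(topology.get(rbridge, ())):
            if neighbor not in parent:  # first discovery fixes the parent
                parent[neighbor] = rbridge
                queue.append(neighbor)
    return parent

# Hypothetical fragment of the FIG. 1 topology (the figure description does
# not specify the full link set).
topology = {
    104: {101, 102, 105},
    101: {104, 102},
    102: {104, 101},
    105: {104},
}
print(build_bfs_tree(topology, 104))
# {104: None, 101: 104, 102: 104, 105: 104}
```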


If an end device (e.g., end device 112) decides to join the multicast group (say, with an optimized distribution tree specific to a VLAN), end device 112 sends a join message to router 122. RBridge 102 receives the join message, learns the multicast group membership information, and stores the information in a local database. RBridge 102 then forwards the learned membership information to remote RBridges 101, 103, 104, 105, 106, and 107. Each remote RBridge receives the information and stores the information in its respective database that stores remotely learned multicast group membership information. As RBridge 104 is aware of the multicast group membership of end device 112, default flooding behavior is suppressed for all subsequent multicast packets from router 122 and these packets are only forwarded to RBridge 102. In other words, there is no default flooding on an RBridge's edge ports. When another end device 114 joins the multicast group, ingress RBridge 101 informs all other RBridges regarding the multicast group membership information. As a result, RBridge 104 forwards subsequent multicast packets from router 122 to RBridges 101 and 102, which forward the packets to end devices 114 and 112, respectively.


During operation that does not involve sharing of multicast group membership information among RBridges, when end device 112 joins a multicast group, RBridge 102 does not forward the information to other RBridges. As a result, only RBridge 102 stops flooding subsequent multicast packets for the multicast group to its edge ports. Other RBridges, such as RBridges 101 and 104, continue flooding multicast packets to end devices 114 and 116, respectively. Under such a scenario, RBridges in network 100 continue sending unnecessary multicast packets, which leads to higher bandwidth consumption.


In one embodiment of the present invention, as illustrated in FIG. 1, if end device 118 decides to join the multicast group, end device 118 sends a join message to router 122. As all other RBridges have learned about the multicast group membership of end device 112 (and hence RBridge 102's participation in the multicast group), the multicast group membership information of end device 118 is kept local to RBridge 102 and not forwarded to other RBridges. Hence, a given RBridge updates other RBridges with the membership information of only the first join message learned at the RBridge. As a result, the number of messages associated with multicast group membership updates is reduced in TRILL network 100.


Similarly, if end device 112 leaves the multicast group, RBridge 102 does not forward the information to other RBridges, as end device 118 is still a member of the multicast group. However, if end device 118 leaves the group as well, RBridge 102 forwards the information to other RBridges. Hence, a given RBridge updates other RBridges with the membership information of only the last leave message learned at the RBridge. This configuration further reduces the number of messages associated with multicast group membership updates in network 100.


In some embodiments, when end device 118 sends the join message to router 122 for a multicast group, all RBridges in network 100 suppress sending subsequent join messages to router 122 for the same multicast group. For example, during operation, when end device 114 sends a join message to router 122 for the same multicast group, ingress RBridge 101 receives the message and recognizes that network 100 already has a member of the multicast group. RBridge 101 then suppresses sending the join message to router 122. Similarly, when end device 118 sends a leave message to router 122 for the multicast group, the message is suppressed at RBridge 102, as end device 114 is still a member of the multicast group. When end device 114 sends a leave message for the multicast group, RBridge 101 sends the message to router 122, as it is the last leave message from network 100. In some embodiments, RBridges in network 100 form a logical switch fabric. The scheme of sending out only the first join and the last leave messages can be applicable not only to a particular RBridge but also to the fabric as a whole; i.e., multicast control messages can be suppressed at the switch fabric level.
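This fabric-level first-join/last-leave decision can be sketched in a few lines, assuming each RBridge keeps the local and remote membership databases described later in conjunction with FIGS. 5A and 5B. The function names and database layout here are illustrative, not taken from this disclosure.

```python
# Minimal sketch of fabric-level join/leave suppression. Both databases map
# a (multicast group, VLAN) pair to a set of members: local edge ports in
# `local_db`, remote RBridge identifiers in `remote_db`. All names are
# assumptions for illustration.

def is_first_join(key, local_db, remote_db):
    # Called before recording the new membership: the join escapes the
    # fabric toward the router only if no member exists anywhere yet.
    return not local_db.get(key) and not remote_db.get(key)

def is_last_leave(key, local_db, remote_db):
    # Called after removing the departing membership: the leave escapes
    # the fabric only if no member remains anywhere.
    return not local_db.get(key) and not remote_db.get(key)
```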


During operation that does not involve selective sharing of multicast group membership information among RBridges, when end device 112 joins a multicast group associated with router 122, RBridge 102 forwards the information to its TRILL ports as well as to router 122. When end device 118 joins the multicast group, RBridge 102 forwards the information as well. Similarly, when either of end devices 118 and 112 leaves the multicast group, RBridge 102 forwards the multicast group update information to all RBridges as well as router 122. As a result, even though router 122 and other RBridges are already aware of the membership information of end device 112, they continue to receive membership information for other end devices.


In some embodiments, the TRILL network may be a virtual cluster switch (VCS). In a VCS, any number of RBridges in any arbitrary topology may logically operate as a single switch. Any new RBridge may join or leave the VCS in a “plug-and-play” fashion without manual configuration.


Note that TRILL is only used as a transport between the switches within network 100. This is because TRILL can readily accommodate native Ethernet frames. Also, the TRILL standards provide a ready-to-use forwarding mechanism that can be used in any routed network with arbitrary topology. Embodiments of the present invention should not be limited to using only TRILL as the transport. Other protocols (such as multi-protocol label switching (MPLS)), either public or proprietary, can also be used for the transport.


Flooding Suppression of Multicast Packets



FIG. 2A illustrates an exemplary network configuration of end devices, including multicast-enabled layer-3 routers, coupled to a TRILL network, in accordance with an embodiment of the present invention. In this example, a TRILL network 200 includes a number of TRILL RBridges 202, 204, and 206. Network 200 also includes RBridges 216, 218, 222, and 224, each with a number of edge ports which can be coupled to external networks. For example, RBridges 216 and 218 are coupled with end devices 252 and 254 via 10GE edge ports. RBridges in network 200 are interconnected with each other using TRILL ports. RBridges 222 and 224 are coupled to multicast-enabled layer-3 routers 232 and 234, respectively. In this example, router 232 is associated with multicast group 262, and router 234 is associated with multicast groups 262 and 264 (denoted using dotted lines). Routers 232 and 234 are coupled to a layer-3 network 240.


Router 232 sends a multicast packet for multicast group 262 to network 200 via ingress RBridge 222. RBridge 222 forwards the packet to all other RBridges in network 200. If an end device coupled to network 200 (e.g., end device 252) joins multicast group 262, all RBridges in network 200 receive the multicast group membership information via ingress RBridge 216. Consequently, RBridge 222 suppresses flooding of all subsequent multicast packets for multicast group 262 from router 232. Note that, under such a scenario, all multicast packets for multicast group 262 are suppressed regardless of which router they come from. For example, RBridge 224 is aware of the membership of end device 252 in multicast group 262. Hence, if a multicast packet for multicast group 262 arrives from another router 234 to TRILL network 200 at ingress RBridge 224, the default flooding behavior is suppressed.


However, when router 234 sends a multicast packet for a different multicast group 264 to network 200 via ingress RBridge 224, the packet is flooded across network 200. Only when an end device coupled to network 200 (e.g., end device 254) joins multicast group 264 are subsequent multicast packets for multicast group 264 suppressed. Hence, the flooding of multicast packets in a TRILL network is suppressed based on membership in a specific multicast group.



FIG. 2B illustrates an exemplary network configuration where layer-3-enabled RBridges in a TRILL network support multicast, in accordance with an embodiment of the present invention. In this example, a TRILL network 201 includes a number of TRILL RBridges 203, 205, and 207. Network 201 also includes RBridges 217 and 219, each with a number of edge ports which can be coupled to external networks. For example, RBridges 217 and 219 are coupled with end devices 253 and 255 via 10GE edge ports. RBridges in network 201 are interconnected with each other using TRILL ports. Also included in network 201 are RBridges 223 and 225, which are layer-3 capable and coupled to an IP network 241 as IP routers 233 and 235, respectively. In this example, router 233 is associated with multicast group 263 and router 235 is associated with multicast groups 263 and 265 (denoted using dotted lines). Note that RBridge 223 and router 233 are the same physical device. In this scenario, routers 233 and 235 are connected to a TRILL network and could potentially have membership in a common multicast group 263 for TRILL network 201 (assuming TRILL network 201 has one single VLAN). By virtue of the layer-3 multicast protocol (e.g., Protocol-Independent Multicast, PIM), a designated-router election process would result in only one of the two routers being able to forward data from an upstream source (such as network 241) downstream into network 201. Similarly, only one router would be elected to forward multicast traffic from network 201 to network 241.


Similar to a layer-3 router participating in a multicast group, the layer-3-enabled portion of RBridge 223, router 233, sends a multicast packet for multicast group 263 to network 201 via the corresponding RBridge 223. RBridge 223 operates as a regular TRILL RBridge and floods the packet in network 201. When an end device 253 joins multicast group 263, RBridges 223 and 225 suppress flooding of all subsequent multicast packets for multicast group 263. Similarly, when router 235 sends a multicast packet for a different multicast group 265 to network 201 via the corresponding RBridge 225, the packet is flooded across network 201. When end device 255 joins multicast group 265, the flooding of subsequent multicast packets for multicast group 265 is suppressed.


Multicast Groups Across VLANs



FIG. 3 illustrates an exemplary configuration of end devices belonging to different VLANs coupled to a TRILL network which shares multicast group membership information among RBridges, in accordance with an embodiment of the present invention. In this example, a TRILL network 300 includes TRILL RBridges 312, 314, 316, and 318. End devices 342, 324, and 344 are coupled to RBridge 318, and end devices 326 and 346 are coupled to RBridge 316. RBridges 312 and 314 are coupled to layer-3 routers 352 and 354. Router 352 is associated with multicast groups 362 and 364, and router 354 is associated with multicast group 362. End devices 342, 344, and 346 belong to VLAN 304, and end devices 324 and 326 belong to VLAN 302. TRILL network 300 exposes all underlying VLANs to layer-3 routers connected to network 300. Consequently, both VLANs 302 and 304 are visible at both routers 352 and 354.


During operation, an end device belonging to VLAN 302 (e.g., end device 324) sends an IGMP join message for multicast group 362. The message is forwarded via ingress RBridge 318 to both routers 352 and 354. RBridge 318 updates its database for locally learned multicast group membership information and notifies all other RBridges in network 300 about the membership. In some embodiments, RBridge 318 sends the membership information to each RBridge in network 300 using VCS update messages.


Upon receiving the membership information, all other RBridges update their databases for remotely learned multicast group membership information. Each RBridge suppresses flooding of any subsequent multicast packet from both routers for multicast group 362 to VLAN 302. However, RBridges in network 300 continue to flood subsequent multicast packets for multicast group 362 to VLAN 304.


Similarly, router 352 sends multicast packets for multicast group 364. The packets are flooded to both VLANs 302 and 304. If an end device belonging to VLAN 304 (e.g., end device 346) joins multicast group 364, then the flooding is suppressed for VLAN 304. However, subsequent multicast packets for multicast group 364 are still flooded to VLAN 302. If an end device belonging to VLAN 302 (e.g., end device 326) joins multicast group 364, then the flooding is suppressed for VLAN 302 as well.


IGMP Packet Processing



FIG. 4A presents a flowchart illustrating a first process of an RBridge forwarding a multicast packet in a TRILL network based on shared multicast group membership information, wherein the multicast packet is selectively distributed in the TRILL network, in accordance with an embodiment of the present invention. During operation, an RBridge receives a multicast packet from an edge port for a multicast group in a VLAN (operation 402). Receiving the packet on an edge port indicates that the packet was sent from an end device coupled to the RBridge.


The RBridge then checks whether it has learned multicast group membership information for any local or remote end device belonging to the VLAN (operation 406). If so, flooding of the packet is suppressed at the RBridge. The RBridge then forwards the packet to each RBridge coupled to end devices with membership in the multicast group (operation 410). For example, in FIG. 1, assume end devices 112 and 114 are members of a multicast group. In this first process, RBridge 104 forwards the packet only to RBridges 101 and 102. The RBridge also checks whether any local end device coupled to a local port belongs to the VLAN and has membership in the multicast group (operation 408). If so, the packet is transmitted to each such local end device (operation 424). If no end device coupled to the TRILL network belonging to the VLAN has membership in the multicast group, the packet is discarded. If the RBridge has not learned any multicast group membership information (operation 406), the RBridge transmits the packet to all edge ports belonging to the VLAN (operation 412) and sends the packet to all other RBridges in the TRILL network over a multicast distribution tree (operation 414).
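The decision logic of this first process can be sketched as follows, reusing the (multicast group, VLAN)-keyed databases described in conjunction with FIGS. 5A and 5B below. The callback parameters (`send_to_rbridge`, `flood_tree`, and so on) are placeholders for the switch's transmit paths and are not identifiers from this disclosure.

```python
# Minimal sketch of the selective forwarding process of FIG. 4A. The
# callbacks stand in for the actual transmit paths and are assumptions.

def forward_multicast(packet, group, vlan, local_db, remote_db,
                      send_to_rbridge, send_to_edge_port,
                      flood_edge_ports, flood_tree):
    key = (group, vlan)
    local_ports = local_db.get(key, set())
    remote_rbridges = remote_db.get(key, set())
    if local_ports or remote_rbridges:
        # Membership is known: suppress flooding and deliver selectively
        # (operations 410 and 424). An empty set on either side simply
        # means nothing is sent that way.
        for rbridge in remote_rbridges:
            send_to_rbridge(rbridge, packet)
        for port in local_ports:
            send_to_edge_port(port, packet)
    else:
        # No membership learned: default flooding (operations 412 and 414).
        flood_edge_ports(vlan, packet)
        flood_tree(packet)
```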



FIG. 4B presents a flowchart illustrating a second process of an RBridge forwarding a multicast packet received from an edge port in a TRILL network based on shared multicast group membership information, wherein the multicast packet is distributed to all other RBridges in the TRILL network, in accordance with an embodiment of the present invention. In this second process, upon receiving a multicast packet in a VLAN for a multicast group (operation 452), even if the RBridge has learned the multicast group membership information (operation 456), the RBridge still forwards the packet to all other RBridges in the TRILL network (operation 454), instead of distributing the packet only to those RBridges that are coupled to end devices with membership in the multicast group (operation 410 in FIG. 4A). For example, in FIG. 1, assume end devices 112 and 114 are members of a multicast group. In this second process, RBridge 104 forwards the packet to all other RBridges instead of only to RBridges 101 and 102. If the RBridge has not learned any membership information for the multicast group, the packet is transmitted to all edge ports associated with the VLAN (operation 462). Otherwise, the RBridge checks whether any local end device belonging to the VLAN has membership in the group (operation 458). If so, the packet is forwarded to each local member of the group (operation 474).


Multicast Membership Management


In one embodiment, each RBridge maintains two databases to store multicast group membership information for local and remote end devices belonging to specific VLANs. FIG. 5A illustrates an exemplary database that stores locally learned multicast group membership information associated with each VLAN, in accordance with an embodiment of the present invention. Local multicast database 502 in FIG. 5A stores records for each multicast group for each VLAN (a multicast group and VLAN pair) for local end devices. For example, database 502 stores records 512 and 513 for different multicast groups and VLAN pairs, each containing identifiers for the multicast group and the VLAN. Each such pair can be different from another pair in three ways: 1) different multicast groups but same VLAN, 2) same multicast group but different VLANs, and 3) different multicast groups and different VLANs.


In database 502, records 512 and 513 are for different such pairs. Records 522 and 524 store edge port information of the local end devices associated with the multicast group and VLAN pair corresponding to record 512. Similarly, records 532 and 534 store edge port information of the local end devices associated with the multicast group and VLAN pair corresponding to record 513.



FIG. 5B illustrates an exemplary database that stores remotely learned multicast group membership information associated with each VLAN, in accordance with an embodiment of the present invention. Remote multicast database 504 stores records for multicast group and VLAN pairs in remote RBridges. For example, database 504 includes records 514 and 515 that store identifiers for two different remote multicast group and VLAN pairs. Note that storing only identifiers for remote multicast group and VLAN pairs provides a scalable solution for maintaining awareness of remote multicast group membership. As an RBridge is only responsible for forwarding multicast traffic to its local edge ports, it only needs to store the identifiers required to suppress flooding for the specific multicast group and VLAN pair.


In some embodiments, records 514 and 515 further include RBridge identifiers. For example, records 542 and 544 store the RBridge IDs of RBridges that are coupled to end devices associated with the multicast group and VLAN pair corresponding to record 514. RBridge IDs in records 542 and 544 indicate that the corresponding RBridges have at least one end device coupled to them that belongs to the VLAN and has a membership in the multicast group. Similarly, records 552 and 554 store the RBridge IDs of the RBridges that are coupled to end devices associated with the multicast group and VLAN pair corresponding to record 515.
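One plausible in-memory layout for these two databases is sketched below. The variable names and the choice of dictionaries keyed by (multicast group, VLAN) pairs are assumptions for illustration, not structures mandated by this disclosure.

```python
# Minimal sketch of the databases of FIGS. 5A and 5B, keyed by
# (multicast group, VLAN) pairs. All names are illustrative.
from collections import defaultdict

# Local database (FIG. 5A): which local edge ports have members.
local_multicast_db = defaultdict(set)   # (group, vlan) -> {edge port, ...}

# Remote database (FIG. 5B): which remote RBridges have members.
remote_multicast_db = defaultdict(set)  # (group, vlan) -> {RBridge ID, ...}

# A join received on local edge port 3 for a hypothetical group/VLAN pair:
local_multicast_db[("239.1.1.1", 10)].add(3)

# A notification that remote RBridge 102 has a member of the same pair:
remote_multicast_db[("239.1.1.1", 10)].add(102)
```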



FIG. 6A presents a flowchart illustrating the process of an RBridge forwarding multicast control messages and locally learned multicast membership information, and updating a database that stores locally learned multicast group membership information, in accordance with an embodiment of the present invention. Upon receiving a control message for a multicast group in a VLAN from an edge port (operation 602), the RBridge examines the message type (operation 606). If the control message is a join message, the RBridge adds a record for the edge port to a local multicast database for the multicast group and VLAN pair corresponding to the join message (operation 608). The RBridge then checks whether the added record is the first record for the multicast group and VLAN pair (operation 610). If so, the RBridge creates a notification message containing the multicast group membership information (operation 616) and sends the notification message to each RBridge in the TRILL network (operation 618). In some embodiments, the notification message is a VCS update message.


If the control message is a leave message (operation 606), the RBridge removes the record from the local multicast database for the multicast group and VLAN pair corresponding to the leave message (operation 612). The RBridge then checks whether there is any record left for the multicast group and VLAN pair in the database (operation 614). No record in the database indicates that the leave message is a last leave message. The RBridge then creates a notification message containing the membership information (operation 616) and forwards the information to each RBridge in the TRILL network (operation 618).


After forwarding the information about the control message to all RBridges (operation 618), the RBridge checks whether the remote multicast database contains any record for the multicast group and VLAN pair (operation 622). If so, then another RBridge has already forwarded the message to layer-3 devices associated with the multicast group and VLAN pair. Hence, the RBridge recognizes that the control message is not the first join or last leave message for the VLAN from the TRILL network to which the RBridge is coupled, and does not forward the message. If the remote multicast database does not contain any record for the multicast group and VLAN pair, then the control message is the first join or the last leave message for the VLAN from the TRILL network. Hence, the RBridge forwards the message to all layer-3 devices coupled to the TRILL network and associated with the multicast group and VLAN pair (operation 624). Note that the RBridge notifies other RBridges about the membership information only if the received control message is a first join or a last leave message from a local end device for the multicast group and VLAN pair. Otherwise, the RBridge does not send any notification to other RBridges, to reduce the number of IGMP messages in the TRILL network. Similarly, the RBridge forwards the control message to layer-3 devices associated with the multicast group and VLAN pair only if the received control message is a first join or a last leave message for the VLAN from the TRILL network. Otherwise, the RBridge does not send the control message, to reduce the number of IGMP messages from the TRILL network. In some embodiments, the RBridge is a member switch of a VCS and suppresses multicast control messages at both the RBridge and VCS fabric levels.
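A minimal sketch tying the steps of FIG. 6A together follows, reusing the database layout and the first-join/last-leave helpers sketched earlier; `notify_all_rbridges` and `forward_to_routers` are placeholder callbacks, not APIs from this disclosure.

```python
# Minimal sketch of the edge-port control-message processing of FIG. 6A.
# The databases map (group, vlan) to sets, as sketched above; the two
# callbacks are assumptions standing in for operations 618 and 624.

def handle_edge_control_message(msg_type, group, vlan, edge_port,
                                local_db, remote_db,
                                notify_all_rbridges, forward_to_routers):
    key = (group, vlan)
    if msg_type == "join":
        boundary = not local_db.get(key)          # first local join?
        local_db.setdefault(key, set()).add(edge_port)
    else:  # "leave"
        local_db.get(key, set()).discard(edge_port)
        boundary = not local_db.get(key)          # last local leave?
        if boundary:
            local_db.pop(key, None)
    if boundary:
        # Operation 618: notify every RBridge in the TRILL network.
        notify_all_rbridges(msg_type, key)
        # Operations 622/624: the message escapes to layer-3 devices only
        # if no remote RBridge already has a member for this pair.
        if not remote_db.get(key):
            forward_to_routers(msg_type, key)
```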



FIG. 6B presents a flowchart illustrating the process of an RBridge updating a database that stores remotely learned multicast group membership information, in accordance with an embodiment of the present invention. Upon receiving a notification message from a TRILL port (operation 632), the RBridge retrieves the control message for a multicast group and VLAN pair from the notification message (operation 634). The RBridge then examines the message type (operation 636). If the control message is a join message, the RBridge adds a record to a remote multicast database for the multicast group and VLAN pair corresponding to the join message (operation 642). If the control message is a leave message, the RBridge removes the record from the remote multicast database for the multicast group and VLAN pair corresponding to the leave message (operation 644). The record added to or removed from the remote multicast database can be the RBridge identifier of the ingress RBridge of the notification message, as described in conjunction with FIG. 5B.
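The corresponding remote-database update can be sketched in a few lines. The notification is assumed to carry the message type, the (group, VLAN) pair, and the ingress RBridge identifier; the wire format is not specified here.

```python
# Minimal sketch of the notification processing of FIG. 6B. The parameter
# names are assumptions; only the add/remove behavior follows the flowchart.

def handle_notification(msg_type, group, vlan, ingress_rbridge, remote_db):
    key = (group, vlan)
    if msg_type == "join":
        remote_db.setdefault(key, set()).add(ingress_rbridge)   # op. 642
    else:  # "leave"
        remote_db.get(key, set()).discard(ingress_rbridge)      # op. 644
        if not remote_db.get(key):
            remote_db.pop(key, None)
```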


Virtual Link Aggregation



FIG. 7 illustrates an exemplary network where a virtual RBridge identifier is assigned to two physical TRILL RBridges which are coupled to end devices via divided aggregate links, in accordance with an embodiment of the present invention. As illustrated in FIG. 7, a TRILL network 700 includes seven RBridges, 701, 702, 703, 704, 705, 706, and 707. End devices 722 and 724 are both dual-homed and coupled to RBridges 701 and 702. The goal is to allow a dual-homed end station to use both physical links to two separate TRILL RBridges as a single, logical aggregate link, with the same media access control (MAC) address. Such a configuration would achieve true redundancy and facilitate fast protection switching.


RBridges 701 and 702 are configured to operate in a special “trunked” mode for end devices 722 and 724. End devices 722 and 724 view RBridges 701 and 702 as a common virtual RBridge 710, with a corresponding virtual RBridge identifier. Dual-homed end devices 722 and 724 are considered to be logically coupled to virtual RBridge 710 via logical links represented by dotted lines. Virtual RBridge 710 is considered to be logically coupled to both RBridges 701 and 702, optionally with zero-cost links (also represented by dotted lines). Among the links in a link trunk, one link is selected to be a primary link. For example, the primary link for end device 722 can be the link to RBridge 701. RBridges which participate in link aggregation and form a virtual RBridge are referred to as “partner RBridges.” Operation of virtual RBridges for multi-homed end devices is specified in U.S. patent application Ser. No. 12/725,249, entitled “Redundant Host Connection in a Routed Network,” the disclosure of which is incorporated herein in its entirety.


A layer-3 multicast-enabled router 732 is coupled to network 700 via ingress RBridge 704. Router 732 is also coupled to a layer-3 network 730. When end device 722 sends a join message to router 732 for a multicast group, it is received by ingress RBridge 701. RBridge 701 notifies the partner RBridge 702 about the membership, and both RBridges 701 and 702 add the record to their local multicast database. RBridge 701 encapsulates the message in a TRILL packet with the virtual RBridge identifier as the ingress RBridge identifier and forwards the packet to all other RBridges in network 700. All other RBridges add a record to their remote multicast database for the multicast group using the virtual RBridge identifier. Similarly, for leave messages for multicast groups corresponding to the virtual RBridge identifier, partner RBridges remove corresponding records from their local multicast database, and all other RBridges remove corresponding records from their remote multicast database.
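One way the join handling described above might look in code is sketched below, assuming the databases and notification callbacks from the earlier sketches. The use of the virtual RBridge identifier as the ingress identifier follows the paragraph above, while all names remain illustrative.

```python
# Minimal sketch of join handling for a multi-homed end device behind a
# virtual RBridge (FIG. 7). The callbacks are assumptions for illustration.

def handle_multihomed_join(group, vlan, edge_port, virtual_rbridge_id,
                           local_db, notify_partner, notify_all_rbridges):
    key = (group, vlan)
    # Both partner RBridges record the membership in their local databases.
    local_db.setdefault(key, set()).add(edge_port)
    notify_partner("join", key, edge_port)
    # The TRILL-encapsulated notification carries the virtual RBridge
    # identifier as its ingress identifier, so remote RBridges file the
    # membership under the virtual RBridge.
    notify_all_rbridges("join", key, virtual_rbridge_id)
```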



FIG. 8A presents a flowchart illustrating the process of an RBridge distributing a multicast control message across a TRILL network with virtual link aggregation support, and accordingly updating databases configured to maintain multicast group membership information, in accordance with an embodiment of the present invention. Upon receiving a multicast control message (operation 802), the RBridge determines the port type from which the frame was received (operation 806). If the control message is received at an edge port, the RBridge checks whether the message is from a multi-homed end device (operation 810). If not, the RBridge updates records in the local multicast database (operation 814) and checks whether the control message is a first join or a last leave message for a multicast group and VLAN pair (operation 816), as described in conjunction with FIG. 6A. If the packet is from a multi-homed end device (operation 810), the RBridge sends a notification message to each partner RBridge that shares a virtual RBridge (operation 812). The RBridge then updates records in the local multicast database (operation 814) and checks whether the control message is a first join or a last leave message for a multicast group and VLAN pair (operation 816). If the control message is either a first join or a last leave message, the RBridge sends a notification message to each RBridge in the TRILL network (operation 818). Otherwise, the RBridge does not send any notification to other RBridges, to reduce the number of IGMP messages in the TRILL network. In some embodiments, the notification message is a VCS notification message.


If the packet is received on a TRILL port (operation 806), the RBridge checks whether the packet is a notification from a partner RBridge (operation 820). If so, the RBridge updates records in the local multicast database based on the partner RBridge information (operation 822). For example, in FIG. 7, as end device 722 is multi-homed, it is coupled to partner RBridges 701 and 702 via respective edge ports. If the control message is received at RBridge 701, the partner RBridge 702 is notified. RBridge 702 then adds to its local multicast database the local edge port that couples end device 722. However, as the primary link for end device 722 is from RBridge 701, partner RBridge 702 does not forward the multicast packets; they are forwarded by RBridge 701. If the message is not a notification from a partner RBridge, the RBridge updates records in the remote multicast database for the multicast group and VLAN pair (operation 824).



FIG. 8B presents a flowchart illustrating the process of an RBridge forwarding a multicast packet in a TRILL network with virtual link aggregation support, in accordance with an embodiment of the present invention. Upon receiving a multicast packet in a VLAN for a multicast group (operation 852), the RBridge determines the port type from which the frame was received (operation 854). If the frame is received from an edge port, the RBridge further determines whether any local end device belonging to the VLAN has membership in the multicast group (operation 856). If so, the RBridge transmits the packet to each local end station that belongs to the VLAN and has membership in the multicast group (operation 862). The RBridge then checks whether any remote end device belonging to the VLAN has membership in the multicast group (operation 864). If there is such an end device, the RBridge sends the packet to each RBridge in the TRILL network which has at least one such remote end device coupled to it (operation 866).


If the packet is received from a TRILL port (operation 854), the RBridge determines whether any local end device belonging to the VLAN is in the multicast group (operation 858). If so, the RBridge further determines whether the local end device is dual-homed (operation 860). If not, the RBridge transmits the payload to each local end station that belongs to the VLAN and is in the multicast group (operation 862). If the end device is dual-homed, the RBridge then determines whether the frame's ingress RBridge identifier is the same as the identifier of the virtual RBridge associated with the dual-homed end station (operation 870). If they are the same, the frame is discarded. Otherwise, the RBridge further determines whether its link to the dual-homed end station is the primary link (operation 872). If the link is the primary link, the RBridge forwards the frame to the dual-homed end station via the link (operation 874). Otherwise, the frame is discarded.
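The dual-homed checks of operations 870 through 874 can be sketched as follows; `frame.ingress_rbridge`, `is_primary_link`, and `send_on_link` are illustrative names standing in for switch internals this disclosure does not name.

```python
# Minimal sketch of the dual-homed delivery checks of FIG. 8B for a frame
# arriving on a TRILL port. All names are assumptions for illustration.

def deliver_to_dual_homed(frame, end_device, virtual_rbridge_id,
                          is_primary_link, send_on_link):
    # Operation 870: discard if the frame entered the TRILL network at the
    # same virtual RBridge the end device is attached to.
    if frame.ingress_rbridge == virtual_rbridge_id:
        return
    # Operations 872/874: only the partner RBridge holding the primary link
    # forwards, so the dual-homed device receives exactly one copy.
    if is_primary_link(end_device):
        send_on_link(end_device, frame)
```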


Exemplary Switch System



FIG. 9 illustrates an exemplary architecture of a switch capable of learning multicast group membership information from remote switches and accordingly forwarding multicast packets, in accordance with an embodiment of the present invention. In this example, an RBridge 900 includes a number of TRILL ports 904, a TRILL management module 920, a multicast module 930, an Ethernet frame processor 910, and a storage 940. TRILL management module 920 further includes a TRILL header processing module 922. Multicast module 930 further includes a multicast header processing module 936 and a multicast configuration module 938.


TRILL ports 904 include inter-switch communication channels for communication with one or more RBridges. This inter-switch communication channel can be implemented via a regular communication port and based on any open or proprietary format. Furthermore, the inter-switch communication between RBridges is not required to be direct port-to-port communication.


During operation, TRILL ports 904 receive TRILL frames from (and transmit frames to) other RBridges. TRILL header processing module 922 processes TRILL header information of the received frames and performs routing on the received frames based on their TRILL headers. TRILL management module 920 forwards frames in the TRILL network toward other RBridges and frames destined to a layer-3 node toward the multicast module 930. Multicast header processing module 936 determines whether the frame contains a multicast packet. Multicast configuration module 938 processes the content of the multicast control message and updates the remote multicast database 934 residing in storage 940. Storage 940 can also include TRILL and IP routing information.


In some embodiments, RBridge 900 may form a virtual RBridge, wherein TRILL management module 920 further includes a virtual RBridge configuration module 924. TRILL header processing module 922 generates the TRILL header and outer Ethernet header for ingress frames corresponding to the virtual RBridge. Virtual RBridge configuration module 924 manages the communication with RBridges associated with a virtual RBridge and handles various inter-switch communications, such as link and node failure notifications. Virtual RBridge configuration module 924 allows a user to configure and assign the identifier for the virtual RBridges.


In some embodiments, RBridge 900 may include a number of edge ports 902, as described in conjunction with FIG. 1. Edge ports 902 receive frames from (and transmit frames to) end devices. Ethernet frame processor 910 extracts and processes header information from the received frames. Ethernet frame processor 910 forwards the frames to TRILL management module 920 and multicast module 930. If multicast header processing module 936 determines that a frame contains a multicast control message, multicast configuration module 938 processes the content of the message and updates the local multicast database 932 residing in storage 940.


In some embodiments, RBridge 900 may include a VCS configuration module 950 that includes a virtual switch management module 954 and a logical switch 952, as described in conjunction with FIG. 1. VCS configuration module 950 maintains a configuration database in storage 940 that maintains the configuration state of every switch within the VCS. Virtual switch management module 954 maintains the state of logical switch 952, which is used to join other VCS switches. In some embodiments, logical switch 952 can be configured to operate in conjunction with Ethernet frame processor 910 as a logical Ethernet switch.


Note that the above-mentioned modules can be implemented in hardware as well as in software. In one embodiment, these modules can be embodied in computer-executable instructions stored in a memory which is coupled to one or more processors in RBridge 900. When executed, these instructions cause the processor(s) to perform the aforementioned functions.


In summary, embodiments of the present invention provide a switch, a method, and a system for learning and sharing multicast group information from remote RBridges in a TRILL network. In one embodiment, the switch includes a storage and a multicast management mechanism. The storage is configured to store an entry indicating a multicast group membership learned at a remote switch. The multicast management mechanism is coupled to the storage and is configured to suppress flooding of packets destined for the multicast group.


The methods and processes described herein can be embodied as code and/or data, which can be stored in a computer-readable non-transitory storage medium. When a computer system reads and executes the code and/or data stored on the computer-readable non-transitory storage medium, the computer system performs the methods and processes embodied as data structures and code and stored within the medium.


The methods and processes described herein can be executed by and/or included in hardware modules or apparatus. These modules or apparatus may include, but are not limited to, an application-specific integrated circuit (ASIC) chip, a field-programmable gate array (FPGA), a dedicated or shared processor that executes a particular software module or a piece of code at a particular time, and/or other programmable-logic devices now known or later developed. When the hardware modules or apparatus are activated, they perform the methods and processes included within them.


The foregoing descriptions of embodiments of the present invention have been presented only for purposes of illustration and description. They are not intended to be exhaustive or to limit this disclosure. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. The scope of the present invention is defined by the appended claims.

Claims
  • 1. A switch, comprising: one or more ports; a storage configured to store a first multicast database, wherein a respective entry in the first multicast database includes membership information of a multicast group learned at a remote switch, and wherein the entry is associated with an identifier of the remote switch; and a multicast management module configured to suppress flooding via the one or more ports a packet destined for a multicast group in response to identifying the membership information of the multicast group in an entry in the first multicast database; wherein the switch and the remote switch are member switches of a network of interconnected switches, and wherein a media access control (MAC) address learned via one of the one or more ports is shared with the remote switch.
  • 2. The switch of claim 1, wherein a respective entry in the first multicast database includes a virtual local area network (VLAN) identifier.
  • 3. The switch of claim 1, wherein an entry in the first multicast database is a non-flooding forwarding entry corresponding to the multicast group and the remote switch.
  • 4. The switch of claim 1, wherein the switch and the remote switch participate in a link aggregation, and wherein at least one of the one or more ports participates in the link aggregation; and wherein the membership information of the multicast group learned at the remote switch is treated as membership information of a multicast group learned locally at the switch.
  • 5. The switch of claim 1, further comprising a communication module configured to construct a notification message for the remote switch, wherein the notification message comprises membership information of a multicast group learned locally at the switch.
  • 6. The switch of claim 1, wherein the multicast group is formed based on one or more of the following: Internet Group Management Protocol (IGMP) version 1; IGMP version 2; IGMP version 3; Multicast Listener Discovery (MLD) version 1; and MLD version 2.
  • 7. The switch of claim 1, wherein the multicast management module is configured to perform Internet Group Management Protocol (IGMP) or Multicast Listener Discovery (MLD) snooping.
  • 8. The switch of claim 1, further comprising a communication module configured to construct a notification message for the remote switch in response to receiving a first join message or a last leave message of the multicast group from a locally coupled end device, wherein the notification message comprises state information of the multicast group.
  • 9. A method, comprising: storing a first multicast database in a storage in a switch, wherein a respective entry in the first multicast database includes membership information of a multicast group learned at a remote switch, wherein the entry is associated with an identifier of the remote switch, and wherein the switch includes one or more ports; and suppressing flooding, via the one or more ports, of a packet destined for a multicast group in response to identifying the membership information of the multicast group in an entry in the first multicast database; wherein the switch and the remote switch are member switches of a network of interconnected switches, and wherein a media access control (MAC) address learned via one of the one or more ports is shared with the remote switch.
  • 10. The method of claim 9, wherein a respective entry in the first multicast database includes a virtual local area network (VLAN) identifier.
  • 11. The method of claim 9, wherein an entry in the first multicast database is a non-flooding forwarding entry corresponding to the multicast group and the remote switch in a data structure.
  • 12. The method of claim 9, further comprising aggregating links coupled to the switch and the remote switch, wherein at least one of the one or more ports participates in aggregating the links; and treating the membership information of the multicast group learned at the remote switch as membership information of a multicast group learned locally at the switch.
  • 13. The method of claim 9, further comprising constructing a notification message for the remote switch, wherein the notification message comprises membership information of a multicast group learned locally at the switch.
  • 14. The method of claim 9, wherein the multicast group is formed based on one or more of the following: Internet Group Management Protocol (IGMP) version 1; IGMP version 2; IGMP version 3; Multicast Listener Discovery (MLD) version 1; and MLD version 2.
  • 15. The method of claim 9, wherein the method further comprises performing Internet Group Management Protocol (IGMP) or Multicast Listener Discovery (MLD) snooping.
  • 16. The method of claim 9, further comprising constructing a notification message for the remote switch in response to receiving a first join message or a last leave message of the multicast group from a locally coupled end device, wherein the notification message comprises state information associated with the multicast group.
  • 17. A switching system, comprising a first switch and a second switch; wherein the first switch comprises: one or more ports; a storage configured to store a first multicast database, wherein a respective entry in the first multicast database includes membership information of a multicast group learned at the second switch, and wherein the entry is associated with an identifier of the second switch; and a multicast management module configured to suppress flooding, via the one or more ports, of a packet destined for a multicast group in response to identifying the membership information of the multicast group in an entry in the first multicast database; wherein the switching system is a network of interconnected switches, and wherein a media access control (MAC) address learned via one of the one or more ports is shared with the second switch.
  • 18. The switching system of claim 17, wherein a respective entry in the first multicast database includes a virtual local area network (VLAN) identifier.
  • 19. The switching system of claim 17, wherein an entry in the first multicast database is a non-flooding forwarding entry corresponding to the multicast group and the second switch.
  • 20. The switching system of claim 17, wherein the first switch and the second switch participate in a link aggregation, and wherein at least one of the one or more ports participates in aggregating the links; and wherein the membership information of the multicast group learned at the second switch is treated by the first switch as membership information of a multicast group learned locally at the first switch.
  • 21. The switching system of claim 17, wherein the first switch further comprises a communication module configured to construct a notification message for the second switch, wherein the notification message comprises membership information of a multicast group learned locally at the first switch.
  • 22. The switching system of claim 17, wherein the multicast group is formed based on one or more of the following: Internet Group Management Protocol (IGMP) version 1; IGMP version 2; IGMP version 3; Multicast Listener Discovery (MLD) version 1; and MLD version 2.
  • 23. The switching system of claim 17, wherein the multicast management module is configured to perform Internet Group Management Protocol (IGMP) or Multicast Listener Discovery (MLD) snooping.
  • 24. The switching system of claim 17, wherein the first switch further comprises a communication module configured to construct a notification message for the second switch in response to receiving a first join message or a last leave message of the multicast group from an end device coupled to the first switch, wherein the notification message comprises state information associated with the multicast group.
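
As an illustration of the notification behavior recited in claims 5, 8, 16, and 24, the sketch below (in the same hypothetical Python style as above) sends a notification toward remote switches only on the first join or last leave of a locally snooped group. The SnoopingAgent name and the message format are assumptions for illustration only, not the claimed implementation.

    from collections import defaultdict

    class SnoopingAgent:
        """Tracks locally snooped receivers per (VLAN, group) and notifies
        remote switches only on first-join and last-leave transitions."""
        def __init__(self, send_notification):
            self.receivers = defaultdict(set)  # (vlan, group) -> local ports
            self.send = send_notification      # callback toward remote switches

        def on_join(self, vlan, group, port):
            key = (vlan, group)
            first = not self.receivers[key]
            self.receivers[key].add(port)
            if first:                           # first local member appears
                self.send({"event": "join", "vlan": vlan, "group": group})

        def on_leave(self, vlan, group, port):
            key = (vlan, group)
            self.receivers[key].discard(port)
            if not self.receivers[key]:         # last local member gone
                del self.receivers[key]
                self.send({"event": "leave", "vlan": vlan, "group": group})

    agent = SnoopingAgent(send_notification=print)
    agent.on_join(10, "239.1.1.1", port=1)   # first join: notification sent
    agent.on_join(10, "239.1.1.1", port=2)   # additional join: silent
    agent.on_leave(10, "239.1.1.1", port=1)  # members remain: silent
    agent.on_leave(10, "239.1.1.1", port=2)  # last leave: notification sent

Notifying only on state transitions keeps inter-switch control traffic proportional to changes in group state rather than to the volume of individual join and leave messages from end devices.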
RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 61/502,143, titled “IGMP Snooping in VCS Cluster,” by inventors Nagarajan Venkatesan, Anoop Ghanwani, Shunjia Yu, Phanidhar Koganti, and Rajiv Krishnamurthy, filed 29 Jun. 2011, which is incorporated by reference herein.

Related Publications (1)
Number Date Country
20130003733 A1 Jan 2013 US
Provisional Applications (1)
Number Date Country
61502143 Jun 2011 US