Method and system for link aggregation across multiple switches

Information

  • Patent Grant
  • Patent Number
    9,143,445
  • Date Filed
    Wednesday, May 8, 2013
  • Date Issued
    Tuesday, September 22, 2015
Abstract
One embodiment of the present invention provides a switch. The switch includes a forwarding mechanism and a control mechanism. During operation, the forwarding mechanism forwards frames based on their Ethernet headers. The control mechanism operates the switch in conjunction with a separate physical switch as a single logical switch and assigns a virtual switch identifier to the logical switch, wherein the virtual switch identifier is associated with a link aggregation group.
Description
BACKGROUND

1. Field


The present disclosure relates to network management. More specifically, the present disclosure relates to a method and system for link aggregation across multiple switches.


2. Related Art


As more mission-critical applications are being implemented in data communication networks, high-availability operation is becoming progressively more important as a value proposition for network architects. It is often desirable to divide a conventional aggregated link (from one device to another) among multiple network devices, such that a node failure or link failure would not affect the operation of the multi-homed device.


Meanwhile, layer-2 (e.g., Ethernet) networking technologies continue to evolve. More routing-like functionalities, which have traditionally been the characteristics of layer-3 (e.g., IP) networks, are migrating into layer-2. Notably, the recent development of the Transparent Interconnection of Lots of Links (TRILL) protocol allows Ethernet switches to function more like routing devices. TRILL overcomes the inherent inefficiency of the conventional spanning tree protocol, which forces layer-2 switches to be coupled in a logical spanning-tree topology to avoid looping. TRILL allows routing bridges (RBridges) to be coupled in an arbitrary topology without the risk of looping by implementing routing functions in switches and including a hop count in the TRILL header.


While TRILL brings many desirable features to layer-2 networks, some issues remain unsolved when TRILL-capable devices are coupled with non-TRILL devices. Particularly, when a non-TRILL device is coupled to multiple TRILL devices using link aggregation, existing technologies do not provide a scalable and flexible solution that takes full advantage of the TRILL network.


SUMMARY

One embodiment of the present invention provides a switch. The switch includes a forwarding mechanism and a control mechanism. During operation, the forwarding mechanism forwards frames based on their Ethernet headers. The control mechanism operates the switch in conjunction with a separate physical switch as a single logical switch and assigns a virtual switch identifier to the logical switch, wherein the virtual switch identifier is associated with a link aggregation group.


In a variation on this embodiment, the switch is a layer-2 switch capable of routing without requiring the network topology to be based on a spanning tree.


In a variation on this embodiment, the switch is a routing bridge configured to operate in accordance with the TRILL protocol.


In a variation on this embodiment, the control mechanism derives the virtual switch identifier based on an identifier for the link aggregation group.


In a variation on this embodiment, the switch includes a frame-marking mechanism configured to mark an ingress-switch field of a frame with the virtual switch identifier, wherein the frame is received from a device coupled to the switch.


In a variation on this embodiment, the virtual switch identifier is a virtual RBridge identifier in compliance with the TRILL protocol.


In a variation on this embodiment, the link aggregation group is identified by a LAG ID in accordance with the IEEE 802.1ax standard.





BRIEF DESCRIPTION OF THE FIGURES


FIG. 1 illustrates an exemplary network where a virtual RBridge identifier is assigned to two physical TRILL RBridges which are coupled to a non-TRILL device via a divided aggregate link, in accordance with an embodiment of the present invention.



FIG. 2 presents a flowchart illustrating the process of configuring the TRILL header of an ingress frame from a dual-homed end station at an ingress physical RBridge, in accordance with an embodiment of the present invention.



FIG. 3 illustrates an exemplary header configuration of an ingress TRILL frame which contains a virtual RBridge nickname in its ingress RBridge nickname field, in accordance with an embodiment of the present invention.



FIG. 4 presents a flowchart illustrating the process of forwarding a unicast TRILL frame at a partner RBridge which participates in link aggregation, in accordance with an embodiment of the present invention.



FIG. 5 illustrates a scenario where one of the physical links of a dual-homed end station experiences a failure, in accordance with an embodiment of the present invention.



FIG. 6 presents a flowchart illustrating the process of handling a link failure that affects an end station associated with a virtual RBridge, in accordance with an embodiment of the present invention.



FIG. 7 illustrates an exemplary architecture of a switch that facilitates assignment of a virtual RBridge ID, in accordance with an embodiment of the present invention.





DETAILED DESCRIPTION

The following description is presented to enable any person skilled in the art to make and use the invention, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims.


Overview


In embodiments of the present invention, the problem of providing a scalable and flexible way of provisioning multi-switch link aggregation in a TRILL network is solved by forming a logical, virtual RBridge corresponding to a link aggregation group across multiple RBridges and assigning a virtual RBridge identifier based on the link aggregation group (LAG) identifier. For example, in a TRILL network, when an end station is coupled to two separate RBridges and the links to these RBridges form a LAG, a virtual TRILL RBridge identifier (ID) is generated based on the LAG ID, and the end station is considered to be logically coupled to the virtual RBridge. An incoming frame from the end-station is marked with a virtual RBridge nickname as its ingress RBridge nickname and routed through the rest of the TRILL network. To the rest of the TRILL network, such a dual-homed end station appears to be coupled to the virtual RBridge. When one of the aggregated links fails, the affected end station is no longer considered coupled to the virtual RBridge. Instead, the end station would be considered to be coupled to the physical RBridge with an operational link. This configuration allows fast protection switching and timely topology convergence.


Although the present disclosure is presented using examples based on the TRILL protocol, embodiments of the present invention are not limited to TRILL networks, or networks defined in a particular Open System Interconnection Reference Model (OSI reference model) layer.


The term “RBridge” refers to routing bridges, which are bridges implementing the TRILL protocol as described in IETF draft “RBridges: Base Protocol Specification,” available at http://tools.ietf.org/html/draft-ietf-trill-rbridge-protocol-16, which is incorporated by reference herein. Embodiments of the present invention are not limited to the application among RBridges. Other types of switches, routers, and forwarders can also be used.


The term “end station” refers to a network device that is not TRILL-capable. “End station” is a relative term with respect to the TRILL network. However, “end station” does not necessarily mean that the network device is an end host. An end station can be a host, a conventional layer-2 switch, an IP router, or any other type of network device. Additionally, an end station can be coupled to other switches, routers, or hosts further away from the TRILL network. In other words, an end station can be an aggregation point for a number of network devices to enter the TRILL network.


The term “dual-homed end station” refers to an end station that has an aggregate link to two or more TRILL RBridges, where the aggregate link includes multiple physical links to the different RBridges. The aggregate link, which includes multiple physical links, functions as one logical link to the end station. Although the term “dual” is used here, the term “dual-homed end station” does not limit the number of physical RBridges sharing the aggregate link to two. In various embodiments, other numbers of physical RBridges can share the same aggregate link. Where “dual-homed end station” is used in the present disclosure, the term “multi-homed end station” can also be used.


The term “frame” refers to a group of bits that can be transported together across a network. “Frame” should not be interpreted as limiting embodiments of the present invention to layer-2 networks. “Frame” can be replaced by other terminologies referring to a group of bits, such as “packet,” “cell,” or “datagram.”


The term “RBridge identifier” refers to a group of bits that can be used to identify an RBridge. Note that the TRILL standard uses “RBridge ID” to denote a 48-bit intermediate-system-to-intermediate-system (IS-IS) System ID assigned to an RBridge, and “RBridge nickname” to denote a 16-bit value that serves as an abbreviation for the “RBridge ID.” In this disclosure, “RBridge identifier” is used as a generic term, is not limited to any bit format, and can refer to an “RBridge ID,” an “RBridge nickname,” or any other format that can identify an RBridge.


Network Architecture



FIG. 1 illustrates an exemplary network where a virtual RBridge identifier is assigned to two physical TRILL RBridges which are coupled to a non-TRILL device via a divided aggregate link, in accordance with an embodiment of the present invention. As illustrated in FIG. 1, a TRILL network includes six RBridges, 101, 102, 103, 104, 105, and 106. End station 113 is coupled to RBridge 102; end station 114 is coupled to RBridge 103; and end station 115 is coupled to RBridge 105. End station 112 is dual-homed and coupled to RBridges 104 and 105. The goal is to allow a dual-homed end station to use both physical links to two separate TRILL RBridges as a single, logical aggregate link. Such a configuration would achieve true redundancy and facilitate fast protection switching.


However, in a conventional TRILL network, the dual-home-style connectivity would not provide the desired result, because the TRILL protocol depends on MAC address learning to determine the location of end stations (i.e., to which ingress RBridge an end station is coupled) based on a frame's ingress TRILL RBridge ID. As such, an end station can only appear to be reachable via a single physical RBridge. For example, assume that end station 112 is in communication with end station 113. The ingress RBridge would be RBridges 105 and 104, and the egress RBridge would be RBridge 102. The incoming frames from end station 112 would have either RBridge 104 or RBridge 105 marked as their ingress RBridge ID. When RBridge 102 receives these frames and performs MAC address learning, RBridge 102 would assume that end station 112 is moving and is either coupled to RBridge 104 or RBridge 105 (but not both). RBridge 102 would send the frames from end station 113 to either RBridge 104 or RBridge 105. Consequently, only one of the physical links leading to end station 112 is used, which defeats the purpose of having redundant links between end station 112 and RBridges 104 and 105.


In embodiments of the present invention, as illustrated in FIG. 1, RBridges 104 and 105 are configured to operate in a special “trunked” mode for end station 112. End station 112 views RBridges 104 and 105 as part of a single logical switch. Dual-homed end station 112 is considered to be logically coupled to virtual RBridge 108 via logical links represented by the dotted line. Virtual RBridge 108 is considered to be logically coupled to both RBridges 104 and 105, optionally with zero-cost links (also represented by dotted lines).


From end station 112's point of view, it forms a link aggregation group (LAG) with the single logical switch represented by RBridges 105 and 104. During link bring-up, the link layer discovery protocol (LLDP) instances on both end station 112 and RBridges 105 and 104 negotiate the LAG ID for LAG 107, which includes the two physical links between end station 112 and RBridges 105 and 104. The corresponding ports on RBridges 105 and 104 are mapped to the same LAG ID. More details about the LAG ID negotiation process can be found in the IEEE 802.1AX standard, available at http://standards.ieee.org/getieee802/download/802.1AX-2008.pdf, which is incorporated by reference herein.


In one embodiment, virtual RBridge 108's identifier can be derived from the LAG ID. That is, once the LAG negotiation is complete, the virtual RBridge ID can be determined. This configuration allows a one-to-one mapping relationship between the virtual RBridge ID and the LAG ID.
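For illustration only, the following Python sketch shows one way such a one-to-one derivation could be realized; the base value 0xF000, the function name, and the range check are assumptions made for this example and are not specified in this disclosure.

```python
# Hypothetical sketch: one possible one-to-one mapping from a LAG ID to a
# 16-bit virtual RBridge nickname. The base value and range check are
# assumptions, not values specified in this disclosure.

VIRTUAL_NICKNAME_BASE = 0xF000  # assumed block set aside for virtual RBridges

def virtual_rbridge_nickname(lag_id: int) -> int:
    """Derive a deterministic 16-bit virtual RBridge nickname from a LAG ID."""
    nickname = VIRTUAL_NICKNAME_BASE + lag_id
    if not VIRTUAL_NICKNAME_BASE <= nickname <= 0xFFFF:
        raise ValueError("LAG ID falls outside the assumed nickname block")
    return nickname

# LAG 107 always maps to the same virtual nickname (0xF06B in this example).
print(hex(virtual_rbridge_nickname(107)))
```

Because the mapping is deterministic, both partner RBridges compute the same virtual RBridge identifier from the negotiated LAG ID without further coordination.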


Incoming frames from end station 112 are marked with virtual RBridge 108's nickname as their ingress RBridge nickname. As a result, other RBridges in the TRILL network can learn that end station 112 is reachable via virtual RBridge 108. Furthermore, RBridges 104 and 105 can advertise their respective connectivity (optionally via zero-cost links) to virtual RBridge 108. Hence, multi-pathing can be achieved when other RBridges choose to send frames to virtual RBridge 108 (which is marked as the egress RBridge in the frames) via RBridges 104 and 105. In the following description, RBridges which participate in link aggregation and form a virtual RBridge are referred to as “partner RBridges.”


Since the two partner RBridges function as a single logical RBridge, the MAC address reachability learned by each RBridge is shared with the other partner RBridge. For example, during normal operation, end station 112 may choose to send its outgoing frames only via the link to RBridge 105. As a result, only RBridge 105 would learn end station 112's MAC address (and the corresponding port on RBridge 105 to which end station 112 is coupled). This information is then shared by RBridge 105 with RBridge 104. Since the frames coming from end station 112 would have virtual RBridge 108's nickname as their ingress RBridge nickname, when other devices in the network send frames back to end station 112, these frames would have virtual RBridge 108's nickname as their egress RBridge nickname, and these frames might be sent to either RBridge 104 or 105. When RBridge 104 receives such a frame, it can determine that this frame should be sent to its partner RBridge 105, based on the MAC reachability information shared by RBridge 105.
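The following Python sketch illustrates, under assumed class and table names, how locally learned MAC addresses could be merged with reachability shared by the partner RBridge so that either partner can resolve a local output port; it is a simplified illustration, not the disclosed implementation.

```python
# Minimal sketch of MAC-reachability sharing between partner RBridges.
# Class and field names are illustrative assumptions, not the patent's API.

class PartnerRBridge:
    def __init__(self, nickname: int):
        self.nickname = nickname
        self.local_macs = {}    # MAC -> local output port (learned on this RBridge)
        self.partner_macs = {}  # MAC -> LAG ID (shared by the partner RBridge)

    def learn_local(self, mac: str, port: int, lag_id: int, partner: "PartnerRBridge"):
        """Learn a MAC locally and share its reachability with the partner."""
        self.local_macs[mac] = port
        partner.partner_macs[mac] = lag_id   # stand-in for an inter-switch message

    def resolve(self, mac: str, lag_ports: dict):
        """Find an output port: prefer local learning, else the shared LAG info."""
        if mac in self.local_macs:
            return self.local_macs[mac]
        if mac in self.partner_macs:
            return lag_ports[self.partner_macs[mac]]  # local member port of that LAG
        return None  # unknown destination: flood

# Example: RBridge 105 learns end station 112; RBridge 104 can still resolve it.
rb105, rb104 = PartnerRBridge(105), PartnerRBridge(104)
rb105.learn_local("00:11:22:33:44:55", port=7, lag_id=107, partner=rb104)
print(rb104.resolve("00:11:22:33:44:55", lag_ports={107: 3}))  # -> 3
```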


An end station is not required to change the way it is configured for link aggregation. A dual-homed end station only needs to be configured to have a LAG to the RBridges, as would be the case with a conventional, physical RBridge, using an existing link aggregation method. Hence, the dual-homed end station does not need to be aware that the virtual RBridge on the other end of the aggregate link is actually two physical RBridges. Furthermore, the rest of the TRILL network (apart from RBridges 104 and 105) is also not required to be aware that virtual RBridge 108 is actually not a physical RBridge. That is, to the rest of the TRILL network, virtual RBridge 108 is indistinguishable from any of the physical RBridges. Therefore, the present invention does not require extra configuration to the rest of the TRILL network.


Frame Processing



FIG. 2 presents a flowchart illustrating the process of configuring the TRILL header of an ingress frame from a dual-homed end station at an ingress physical RBridge, in accordance with an embodiment of the present invention. During operation, an RBridge participating in link aggregation receives an ingress Ethernet frame from an end station (operation 202). The RBridge then determines the port on which the frame arrives (operation 204). Based on the determined port, the RBridge further determines the LAG ID associated with the port (operation 206). Note that the port-to-LAG ID association is established during the link discovery process.


Subsequently, the RBridge determines the virtual RBridge ID based on the LAG ID (operation 208). (Note that the virtual RBridge ID can be directly derived from the LAG ID.) The RBridge then identifies the destination MAC address of the received frame (operation 210). Based on the destination MAC address, the RBridge performs a lookup on the egress TRILL RBridge nickname (operation 212). Next, the RBridge determines the next-hop TRILL RBridge based on the egress TRILL RBridge nickname (operation 214). (It is assumed that the routing function in the TRILL protocol or other routing protocol is responsible for populating the forwarding information base at each RBridge.)


Subsequently, the RBridge sets the TRILL header of the frame (operation 216). In doing so, the RBridge sets the virtual RBridge as the ingress RBridge for the frame. The egress RBridge of the TRILL header is set based on the result of operation 212.


The RBridge then sets the outer Ethernet header of the frame (operation 218). In doing so, the RBridge sets the MAC address of the next-hop RBridge (the result of operation 214) as the destination MAC address in the outer Ethernet header. The RBridge further sets the MAC address of the local transmitting RBridge as the source MAC address in the outer Ethernet header. After setting the outer Ethernet header, the RBridge transmits the TRILL-encapsulated frame to the next-hop RBridge (operation 220).
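The sketch below walks through operations 202–220 in Python using hypothetical lookup tables (port-to-LAG, LAG-to-virtual-nickname, MAC-to-egress, and egress-to-next-hop maps); the table contents and field names are assumptions chosen only to make the flow concrete.

```python
# Illustrative sketch of the ingress processing in FIG. 2 (operations 202-220).
# Table contents and helper names are assumptions for this example.

PORT_TO_LAG = {7: 107}                                       # operation 206
LAG_TO_VIRTUAL_NICKNAME = {107: 0xF06B}                      # operation 208
MAC_TO_EGRESS_NICKNAME = {"aa:bb:cc:dd:ee:02": 102}          # operation 212
EGRESS_TO_NEXT_HOP = {102: ("rb106", "00:00:00:00:01:06")}   # operation 214

def encapsulate_ingress(frame: dict, in_port: int, local_mac: str) -> dict:
    lag_id = PORT_TO_LAG[in_port]                            # op 206
    ingress_nick = LAG_TO_VIRTUAL_NICKNAME[lag_id]           # op 208
    egress_nick = MAC_TO_EGRESS_NICKNAME[frame["dst_mac"]]   # ops 210-212
    next_hop, next_hop_mac = EGRESS_TO_NEXT_HOP[egress_nick] # op 214
    return {
        "outer_ethernet": {"dst_mac": next_hop_mac, "src_mac": local_mac},  # op 218
        "trill": {"ingress_nickname": ingress_nick,                          # op 216
                  "egress_nickname": egress_nick, "hop_count": 0x3F},
        "inner_frame": frame,
        "next_hop": next_hop,                                                # op 220
    }

print(encapsulate_ingress({"dst_mac": "aa:bb:cc:dd:ee:02", "payload": b"..."},
                          in_port=7, local_mac="00:00:00:00:01:05"))
```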



FIG. 3 illustrates an exemplary header configuration of an ingress TRILL frame which contains a virtual RBridge nickname in its ingress RBridge nickname field, in accordance with an embodiment of the present invention. In this example, a TRILL-encapsulated frame includes an outer Ethernet header 302, a TRILL header 303, an inner Ethernet header 308, an Ethernet payload 310, and an Ethernet frame check sequence (FCS) 312.


TRILL header 303 includes a version field (denoted as “V”), a reserved field (denoted as “R”), a multi-destination indication field (denoted as “M”), an option-field-length indication field (denoted as “OP-LEN”), and a hop-count field (denoted as “HOP CT”). Also included are an egress RBridge nickname field 304 and an ingress RBridge nickname field 306.
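As a concrete illustration, the Python sketch below packs the fixed portion of TRILL header 303 using the bit widths defined in the TRILL base protocol specification (2-bit version, 2-bit reserved, 1-bit multi-destination, 5-bit option length, 6-bit hop count, followed by the 16-bit egress and ingress RBridge nicknames); the options area is omitted and the example nickname values are assumptions.

```python
import struct

def pack_trill_header(version: int, multi_dest: bool, op_len: int,
                      hop_count: int, egress_nick: int, ingress_nick: int) -> bytes:
    """Pack the 6-byte fixed TRILL header (options area omitted for brevity)."""
    # V (2 bits) | R (2 bits) | M (1 bit) | OP-LEN (5 bits) | HOP CT (6 bits)
    first16 = ((version & 0x3) << 14) | (0 << 12) | (int(multi_dest) << 11) \
              | ((op_len & 0x1F) << 6) | (hop_count & 0x3F)
    return struct.pack("!HHH", first16, egress_nick & 0xFFFF, ingress_nick & 0xFFFF)

# Unicast frame from end station 112: the ingress nickname is the virtual RBridge's.
header = pack_trill_header(version=0, multi_dest=False, op_len=0,
                           hop_count=0x3F, egress_nick=102, ingress_nick=0xF06B)
print(header.hex())
```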


In some embodiments, in addition to carrying the virtual RBridge's nickname in the ingress RBridge nickname field, it is possible to include the physical ingress RBridge nickname in the TRILL option field. This configuration can facilitate end-to-end congestion notification and help with multicast pruning scenarios.


Furthermore, it is also possible to carry the virtual RBridge identifier in the TRILL option field, instead of the source RBridge nickname field. In this case, the ingress RBridge nickname field of an incoming frame is used to carry the nickname of the physical ingress RBridge (which is one of the partner RBridges forming the virtual RBridge). This configuration allows other RBridges in the TRILL network to identify the actual, physical ingress RBridge as well as the virtual ingress RBridge.


After a partner RBridge encapsulates an ingress frame with the proper TRILL and outer Ethernet headers and transmits the frame to its destination, it is expected to receive frames in the reverse direction from the destination in response to the transmission. FIG. 4 presents a flowchart illustrating the process of receiving and forwarding a unicast TRILL frame at a partner RBridge which participates in link aggregation, in accordance with an embodiment of the present invention.


During operation, a partner RBridge receives a TRILL frame (operation 402). The RBridge then determines whether the frame's egress RBridge nickname corresponds to the local RBridge or a virtual RBridge associated with the local RBridge (operation 403). If the frame's egress RBridge nickname matches neither the local RBridge nor a virtual RBridge associated with the local RBridge (i.e., the frame is not destined to the local RBridge), the RBridge transmits the frame to the next-hop RBridge based on the frame's egress RBridge nickname (operation 405).


On the other hand, if the condition in operation 403 is met and the egress RBridge nickname matches the local physical RBridge ID, the RBridge performs a lookup in its MAC-address table to identify an output port corresponding to the frame's destination MAC address in its inner Ethernet header (operation 404). If the frame's egress RBridge nickname instead corresponds to the virtual RBridge, the RBridge can determine the LAG ID corresponding to the virtual RBridge, and determine the output port associated with that LAG ID.


Note that the MAC reachability information is shared between the two partner RBridges forming the virtual RBridge. Hence, even if the RBridge has not received an ingress frame with the same source MAC address (i.e., the RBridge has not learned the MAC address locally), the RBridge can still determine that the destination MAC address is reachable via a local link based on the MAC reachability information shared from the partner RBridge. Subsequently, the RBridge transmits the frame to the local output port corresponding to the frame's destination MAC address in its inner Ethernet header (operation 408).
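A minimal Python sketch of the decision in operations 402–408 follows; the nicknames, table contents, and helper names are assumptions used only to illustrate the branching described above.

```python
# Illustrative sketch of the forwarding decision in FIG. 4 (operations 402-408).
# Identifiers and table contents are assumptions for this example.

LOCAL_NICKNAME = 104
VIRTUAL_NICKNAMES = {0xF06B: 107}        # virtual RBridge nickname -> LAG ID
LAG_TO_PORT = {107: 3}                   # local member port of each LAG
MAC_TABLE = {"00:11:22:33:44:55": 3}     # includes entries shared by the partner
NEXT_HOP_BY_EGRESS = {102: "rb106"}

def forward_trill(frame: dict) -> str:
    egress = frame["trill"]["egress_nickname"]
    if egress != LOCAL_NICKNAME and egress not in VIRTUAL_NICKNAMES:
        return f"forward to next hop {NEXT_HOP_BY_EGRESS[egress]}"      # op 405
    dst_mac = frame["inner_frame"]["dst_mac"]
    if egress in VIRTUAL_NICKNAMES:                       # virtual-RBridge case
        port = LAG_TO_PORT[VIRTUAL_NICKNAMES[egress]]
    else:                                                 # op 404: local nickname
        port = MAC_TABLE[dst_mac]
    return f"decapsulate and transmit on local port {port}"             # op 408

print(forward_trill({"trill": {"egress_nickname": 0xF06B},
                     "inner_frame": {"dst_mac": "00:11:22:33:44:55"}}))
```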


Failure Handling



FIG. 5 illustrates a scenario in which one of the physical links of a dual-homed end station experiences a failure, in accordance with an embodiment of the present invention. In this example, assume that an end station 512 is dual-homed with RBridges 505 and 504 via aggregate links. In particular, end station 512 is coupled to RBridge 505 via link 520, and coupled to RBridge 504 via link 522. Links 520 and 522 form a LAG that corresponds to a virtual RBridge 508. RBridge 508's identifier is derived from the LAG ID. Suppose that link 522 fails during operation. RBridge 504 can detect this failure and notify RBridge 505.


As a result, RBridge 505 discontinues marking frames coming from end station 512 with the nickname of virtual RBridge 508. Instead, frames from end station 512 are marked with RBridge 505's nickname as their ingress RBridge nickname. In other words, since end station 512 no longer has the aggregate link to both RBridges 505 and 504, virtual RBridge 508 no longer exists for end station 512. After the TRILL-encapsulated frames from end station 512 reach other egress RBridges in the network, these RBridges will learn that the MAC address corresponding to end station 512 is associated with RBridge 505, instead of virtual RBridge 508. Consequently, future frames destined to end station 512 will be sent to RBridge 505. Note that, during the topology convergence process, RBridge 504 may continue to receive frames destined to end station 512. RBridge 504 can flood these frames to all the ports (except the ports from which the frames are received), or optionally forward these frames to RBridge 505 so there is minimal data loss.



FIG. 6 presents a flowchart illustrating the process of handling a link failure that affects an end station associated with a virtual RBridge, in accordance with an embodiment of the present invention. During operation, a partner RBridge detects a physical link failure to an end station associated with the virtual RBridge (operation 602). The RBridge then disassociates the end station from the virtual RBridge (operation 604), and returns to the normal forwarding and/or flooding operation as for non-trunked ports. Furthermore, the RBridge places its own nickname (i.e., the physical ingress RBridge's nickname) in the source RBridge field in the TRILL header of ingress frames from the end station (operation 606). Optionally, the RBridge can broadcast the MAC reachability of the end station via its own RBridge identifier to other RBridges in the TRILL network (operation 608).
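The following Python sketch models this failover from the perspective of the RBridge that retains an operational link (RBridge 505 in FIG. 5); the class name, state fields, and callback arguments are assumptions for illustration.

```python
# Sketch of the link-failure handling described above (FIGS. 5 and 6).
# State layout, names, and notification callbacks are illustrative assumptions.

class LagState:
    def __init__(self, lag_id: int, virtual_nick: int, physical_nick: int):
        self.lag_id = lag_id
        self.virtual_nick = virtual_nick
        self.physical_nick = physical_nick
        self.aggregate_intact = True     # both member links of the LAG are up

    def ingress_nickname(self) -> int:
        """Nickname placed in the ingress-RBridge field of frames from the end station."""
        return self.virtual_nick if self.aggregate_intact else self.physical_nick

    def on_partner_link_failure(self, advertise_reachability):
        """Partner RBridge reports that its link to the end station failed
        (e.g., link 522 in FIG. 5); the end station is then disassociated
        from the virtual RBridge."""
        self.aggregate_intact = False
        advertise_reachability(self.physical_nick)   # optional, cf. operation 608

# RBridge 505's view: after RBridge 504 reports the failure of link 522,
# ingress frames from end station 512 carry RBridge 505's own nickname.
state = LagState(lag_id=107, virtual_nick=0xF06B, physical_nick=505)
state.on_partner_link_failure(lambda nick: None)
print(state.ingress_nickname())   # -> 505
```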


Multi-Pathing


Embodiments of the present invention can also facilitate equal-cost or nearly-equal-cost multi-pathing. Take the network topology in FIG. 1 for example. Assume that end station 112 is in communication with end station 114. The shortest path traverses RBridge 104 and RBridge 103. As a result, traffic from end station 114 to end station 112 (which is destined to virtual RBridge 108) would always go through RBridge 104, instead of being split between RBridge 105 and RBridge 104.


In one embodiment, if traffic splitting is desired, the partner RBridges can advertise to the rest of the TRILL network that virtual RBridge 108 is equivalent to both RBridge 104 and RBridge 105, e.g., via a message indicating RBx→{RB1, RB2}, where RBx denotes the virtual RBridge nickname, and RB1 and RB2 denote the physical RBridge nicknames. This can be done using control messages supported by existing routing protocols, such as the IS-IS protocol. As a result, for a given set of data flows, RBridge 103 can select RBridge 104 as the egress RBridge, whereas for other flows RBridge 103 can select RBridge 105 as the egress RBridge.
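A simple Python sketch of such per-flow splitting at RBridge 103 is shown below; the CRC-based flow hash is an assumed load-balancing choice, not a mechanism specified in this disclosure.

```python
# Sketch of per-flow path selection once RBridge 103 learns that virtual
# RBridge RBx resolves to physical RBridges RB1 and RB2 (104 and 105 here).
# The hash-based flow pinning is an assumption about one way to split flows.

import zlib

VIRTUAL_TO_PHYSICAL = {0xF06B: [104, 105]}   # RBx -> {RB1, RB2}

def egress_for_flow(virtual_nick: int, src_mac: str, dst_mac: str) -> int:
    """Pick one physical egress RBridge per flow."""
    members = VIRTUAL_TO_PHYSICAL[virtual_nick]
    flow_hash = zlib.crc32(f"{src_mac}->{dst_mac}".encode())
    return members[flow_hash % len(members)]

print(egress_for_flow(0xF06B, "aa:bb:cc:dd:ee:14", "00:11:22:33:44:55"))
```

Hashing on the flow identity keeps frames of a single flow on one path, so splitting traffic across RBridges 104 and 105 does not reorder frames within a flow.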


Exemplary Switch System



FIG. 7 illustrates an exemplary architecture of a switch that facilitates assignment of a virtual RBridge ID, in accordance with an embodiment of the present invention. In this example, an RBridge 700 includes a number of communication ports 701, a packet processor 702, a virtual RBridge management module 704, a virtual RBridge configuration module 705, a storage device 706, and a TRILL header generation module 708. During operation, communication ports 701 receive frames from (and transmit frames to) the end stations. Packet processor 702 extracts and processes the header information from the received frames. Packet processor 702 further performs routing on the received frames based on their Ethernet headers, as described in conjunction with FIG. 2. Note that communication ports 701 include at least one inter-switch communication channel for communication with one or more partner RBridges. This inter-switch communication channel can be implemented via a regular communication port and based on any open or proprietary format. Furthermore, the inter-switch communication between partner RBridges is not required to be direct port-to-port communication. Virtual RBridge management module 704 manages the communication with the partner RBridges and handles various inter-switch communication, such as MAC address information sharing and link failure notification.


Virtual RBridge configuration module 705 allows a user to configure and assign the identifier for the virtual RBridges. In one embodiment, virtual RBridge configuration module 705 derives a virtual RBridge ID from a LAG ID which is obtained during the link discovery and configuration process. It is also responsible for communicating with the partner RBridge(s) to share each other's MAC address reachability information, which is stored in storage 706. Furthermore, TRILL header generation module 708 generates the TRILL header for ingress frames corresponding to the virtual RBridge. Note that the above-mentioned modules can be implemented in hardware as well as in software. In one embodiment, these modules can be embodied in computer-executable instructions stored in a memory which is coupled to one or more processors in RBridge 700. When executed, these instructions cause the processor(s) to perform the aforementioned functions.
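For orientation, the Python skeleton below mirrors the module composition of FIG. 7; the class and method names are assumptions, and the method bodies are placeholders rather than the disclosed implementation.

```python
# Skeleton mirroring the modules of FIG. 7; names and composition are
# illustrative assumptions, and method bodies are placeholders.

class PacketProcessor:
    def process(self, frame):                 # extract headers; route on Ethernet header
        ...

class VirtualRBridgeConfig:
    def derive_virtual_id(self, lag_id):      # LAG ID -> virtual RBridge ID
        ...

class VirtualRBridgeManager:
    def share_mac_reachability(self, mac, lag_id):  # inter-switch communication
        ...
    def notify_link_failure(self, lag_id):
        ...

class TrillHeaderGenerator:
    def build(self, ingress_nick, egress_nick):
        ...

class RBridge700:
    """Composition of FIG. 7: ports, packet processor, virtual-RBridge
    management and configuration modules, storage, and TRILL header generation."""
    def __init__(self, ports):
        self.ports = ports
        self.packet_processor = PacketProcessor()
        self.vrb_config = VirtualRBridgeConfig()
        self.vrb_manager = VirtualRBridgeManager()
        self.trill_headers = TrillHeaderGenerator()
        self.storage = {}   # shared MAC reachability information

rb = RBridge700(ports=list(range(24)))
```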


In summary, embodiments of the present invention provide a method and system for facilitating link aggregation across different switches in a routed network. In one embodiment, a virtual RBridge is formed to accommodate an aggregate link from an end station to multiple physical RBridges. The virtual RBridge is used as the ingress RBridge for ingress frames from the end station. Such a configuration provides a scalable and flexible solution to link aggregation across multiple switches.


The methods and processes described herein can be embodied as code and/or data, which can be stored in a computer-readable nontransitory storage medium. When a computer system reads and executes the code and/or data stored on the computer-readable nontransitory storage medium, the computer system performs the methods and processes embodied as data structures and code and stored within the medium.


The methods and processes described herein can be executed by and/or included in hardware modules or apparatus. These modules or apparatus may include, but are not limited to, an application-specific integrated circuit (ASIC) chip, a field-programmable gate array (FPGA), a dedicated or shared processor that executes a particular software module or a piece of code at a particular time, and/or other programmable-logic devices now known or later developed. When the hardware modules or apparatus are activated, they perform the methods and processes included within them.


The foregoing descriptions of embodiments of the present invention have been presented only for purposes of illustration and description. They are not intended to be exhaustive or to limit this disclosure. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. The scope of the present invention is defined by the appended claims.

Claims
  • 1. A switch, comprising: processing circuitry; a storage device coupled to the processing circuitry and storing instructions which when executed by the processing circuitry cause the processing circuitry to perform a method, the method comprising: operating the switch in conjunction with a separate physical switch as a single logical switch; deriving a virtual switch identifier based on an identifier for a link aggregation group; assigning the virtual switch identifier to the logical switch; and marking an ingress-switch field of a frame with the virtual switch identifier; wherein the switch is capable of routing layer-2 frames without requiring a spanning-tree network topology.
  • 2. The switch of claim 1, wherein the switch is configurable to operate in accordance with at least one of: a Transparent Interconnection of Lots of Links (TRILL) protocol; an Internet Protocol (IP); and a Multiprotocol Label Switching (MPLS) protocol.
  • 3. The switch of claim 1, wherein the marking the ingress-switch field of the frame comprises determining the virtual switch identifier based on an input port on which the frame is received.
  • 4. The switch of claim 1, wherein the virtual switch identifier is a virtual RBridge identifier in compliance with a TRILL protocol.
  • 5. The switch of claim 1, wherein the link aggregation group is identified by a link aggregation group (LAG) identifier in accordance with the IEEE 802.1ax standard.
  • 6. The switch of claim 1, wherein the method further comprises advertising to a neighbor a zero-cost link from the switch to the logical switch.
  • 7. The switch of claim 6, wherein the method further comprises: detecting a failure of a link between the device and the separate physical switch; and disassociating the device from the virtual switch.
  • 8. The switch of claim 6, wherein the method further comprises: detecting a failure of a link between the device and the switch; and notifying the separate physical switch of the failure.
  • 9. The switch of claim 1, wherein the method further comprises notifying the separate physical switch about the reachability of a device coupled to the switch.
  • 10. The switch of claim 1, wherein the method further comprises advertising that the virtual switch is equivalent to both the switch and the separate physical switch, thereby facilitating multi-path routing to or from a device coupled to both switches.
  • 11. A non-transitory storage medium storing instructions which when executed by a computer system within a switch cause the computer system to perform a method, the method comprising: operating the switch in conjunction with a separate physical switch as a single logical switch; deriving a virtual switch identifier based on an identifier for a link aggregation group; assigning the virtual switch identifier to the logical switch; and marking an ingress-switch field of a frame with the virtual switch identifier; wherein the switch is capable of routing layer-2 frames without requiring a spanning-tree network topology.
  • 12. The non-transitory storage medium of claim 11, wherein the method further comprises operating the switch in accordance with at least one of: a Transparent Interconnection of Lots of Links (TRILL) protocol; an Internet Protocol (IP); and a Multiprotocol Label Switching (MPLS) protocol.
  • 13. The non-transitory storage medium of claim 11, wherein the marking the ingress-switch field of the frame comprises determining the virtual switch identifier based on an input port on which the frame is received.
  • 14. The non-transitory storage medium of claim 11, wherein the virtual switch identifier is a virtual RBridge identifier in compliance with the TRILL protocol.
  • 15. The non-transitory storage medium of claim 11, wherein the link aggregation group is identified by a link aggregation group (LAG) identifier in accordance with the IEEE 802.1ax standard.
  • 16. The non-transitory storage medium of claim 11, wherein the method further comprises advertising to a neighbor a zero-cost link from the switch to the logical switch.
  • 17. The non-transitory storage medium of claim 16, wherein the method further comprises: detecting a failure of a link between the device and the separate physical switch; and disassociating the device from the virtual switch.
  • 18. The non-transitory storage medium of claim 16, wherein the method further comprises: detecting a failure of a link between the device and the switch; and notifying the separate physical switch of the failure.
  • 19. The non-transitory storage medium of claim 11, wherein the method further comprises notifying the separate physical switch about the reachability of a device coupled to the switch.
  • 20. The non-transitory storage medium of claim 11, wherein the method further comprises advertising that the virtual switch is equivalent to both the switch and the separate physical switch, thereby facilitating multi-path routing to or from a device coupled to both switches.
RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No. 13/092,864, entitled “METHOD AND SYSTEM FOR LINK AGGREGATION ACROSS MULTIPLE SWITCHES,” by inventors Joseph Juh-En Cheng, Wing Cheung, John Michael Terry, Suresh Vobbilisetty, Surya P. Varanasi, and Parviz Ghalambor, filed 22 Apr. 2011, which claims the benefit of U.S. Provisional Application No. 61/352,720, entitled “Method and Apparatus For Link Aggregation Across Multiple Switches,” by inventors Joseph Juh-En Cheng, Wing Cheung, John Michael Terry, Suresh Vobbilisetty, Surya P. Varanasi, and Parviz Ghalambor, filed 8 Jun. 2010, the disclosure of which is incorporated by reference herein. The present disclosure is related to U.S. patent application Ser. No. 12/725,249, entitled “REDUNDANT HOST CONNECTION IN A ROUTED NETWORK,” by inventors Somesh Gupta, Anoop Ghanwani, Phanidhar Koganti, and Shunjia Yu, filed 16 Mar. 2010; and U.S. patent application Ser. No. 13/087,239, entitled “VIRTUAL CLUSTER SWITCHING,” by inventors Suresh Vobbilisetty and Dilip Chatwani, filed 14 Apr. 2011; the disclosures of which are incorporated by reference herein.

Non-Patent Literature Citations (171)
Entry
“Switched Virtual Internetworking moves beyond bridges and routers”, 8178 Data Communications 23(Sep. 1994) No. 12, New York, pp. 66-70, 72, 74, 76, 78, 80.
Perlman, Radia et al., “RBridge VLAN Mapping”, Dec. 4, 2003.
Perlman, Radia et al., “Challenges and Opportunities in the Design of TRILL: a Routed layer 2 Technology”, 2009.
Perlman, Radia et al., “RBridges: Base Protocol Specification” <draft-ietf-trill-rbridge-protocol-16.txt>, 2010.
S. Nada, Ed et al., “Virtual Router Redundancy Protocol (VRRP) Version 3 for IPv4 and IPv6”, Mar. 2010.
Lapuh, Roger et al., “Split Multi-link Trunking (SMLT)”, Oct. 2002.
Knight, S. et al., “Virtual Router Redundancy Protocol”, Apr. 1998.
Eastlake 3rd., Donald et al., “RBridges: TRILL Header Options”, <draft-ietf-trill-rbridge-options-00.txt>, Dec. 2009.
Lapuh, Roger et al., “Split Multi-link Trunking (SMLT)”, draft-lapuh-network-smlt-08, Jul. 2008.
Brocade Fabric OS (FOS) 6.2 Virtual Fabrics Feature Frequently Asked Questions, 2009.
Touch, J. et al., “Transparent Interconnection of Lots of Links (TRILL): Problem and Applicability Statement”, May 2009.
Christensen, M. et al., “Considerations for Internet Group Management Protocol (IGMP) and Multicast Listener Discovery (MLD) Snooping Switches”, May 2006.
Office Action for U.S. Appl. No. 13/533,843, filed Jun. 26, 2012, dated Oct. 21, 2013.
Office Action for U.S. Appl. No. 13/044,326, filed Mar. 9, 2011, dated Oct. 2, 2013.
Office Action for U.S. Appl. No. 13/312,903, filed Dec. 6, 2011, dated Nov. 12, 2013.
Office Action for U.S. Appl. No. 13/042,259, filed Mar. 7, 2011, dated Jan. 16, 2014.
Office Action for U.S. Appl. No. 13/092,580, filed Apr. 22, 2011, dated Jan. 10, 2014.
Office Action for U.S. Appl. No. 13/092,877, filed Apr. 22, 2011, dated Jan. 6, 2014.
Office Action for U.S. Appl. No. 13/598,204, filed Aug. 29, 2012, from Pascual Peguero, Natali, dated Feb. 20, 2014.
U.S. Appl. No. 12/312,903 Office Action dated Jun. 13, 2013.
U.S. Appl. No. 13/365,808 Office Action dated Jul. 18, 2013.
U.S. Appl. No. 13/365,993 Office Action dated Jul. 23, 2013.
U.S. Appl. No. 13/092,873 Office Action dated Jun. 19, 2013.
U.S. Appl. No. 13/184,526 Office Action dated May 22, 2013.
U.S. Appl. No. 13/184,526 Office Action dated Jan. 28, 2013.
U.S. Appl. No. 13/050,102 Office Action dated May 16, 2013.
U.S. Appl. No. 13/050,102 Office Action dated Oct. 26, 2012.
U.S. Appl. No. 13/044,301 Office Action dated Feb. 22, 2013.
U.S. Appl. No. 13/044,301 Office Action dated Jun. 11, 2013.
U.S. Appl. No. 13/030,688 Office Action dated Apr. 25, 2013.
U.S. Appl. No. 13/030,806 Office Action dated Dec. 13, 2012.
U.S. Appl. No. 13/030,806 Office Action dated Jun. 11, 2013.
U.S. Appl. No. 13/098,360 Office Action dated May 31, 2013.
U.S. Appl. No. 13/092,864 Office Action dated Sep. 19, 2012.
U.S. Appl. No. 12/950,968 Office Action dated Jun. 7, 2012.
U.S. Appl. No. 12/950,968 Office Action dated Jan. 4, 2013.
U.S. Appl. No. 13/092,877 Office Action dated Mar. 4, 2013.
U.S. Appl. No. 12/950,974 Office Action dated Dec. 20, 2012.
U.S. Appl. No. 12/950,974 Office Action dated May 24, 2012.
U.S. Appl. No. 13/092,752 Office Action dated Feb. 5, 2013.
U.S. Appl. No. 13/092,752 Office Action dated Jul. 18, 2013.
U.S. Appl. No. 13/092,701 Office Action dated Jan. 28, 2013.
U.S. Appl. No. 13/092,701 Office Action dated Jul. 3, 2013.
U.S. Appl. No. 13/092,460 Office Action dated Jun. 21, 2013.
U.S. Appl. No. 13/042,259 Office Action dated Mar. 18, 2013.
U.S. Appl. No. 13/042,259 Office Action dated Jul. 31, 2013.
U.S. Appl. No. 13/092,580 Office Action dated Jun. 10, 2013.
U.S. Appl. No. 13/092,724 Office Action dated Jul. 16, 2013.
U.S. Appl. No. 13/092,724 Office Action dated Feb. 5, 2013.
U.S. Appl. No. 13/098,490 Office Action dated Dec. 21, 2012.
U.S. Appl. No. 13/098,490 Office Action dated Jul. 9, 2013.
U.S. Appl. No. 13/087,239 Office Action dated May 22, 2013.
U.S. Appl. No. 13/087,239 Office Action dated Dec. 5, 2012.
U.S. Appl. No. 12/725,249 Office Action dated Apr. 26, 2013.
U.S. Appl. No. 12/725,249 Office Action dated Sep. 12, 2012.
Office Action for U.S. Appl. No. 13/092,887, dated Jan. 6, 2014.
Brocade Unveils “The Effortless Network”, http://newsroom.brocade.com/press-releases/brocade-unveils-the-effortless-network--nasdaq-brcd-0859535, 2012.
Foundry Fastlron Configuration Guide, Software Release FSX 04.2.00b, Software Release FWS 04.3.00, Software Release FGS 05.0.00a, Sep. 26, 2008.
Fastlron and Turbolron 24X Configuration Guide Supporting FSX 05.1.00 for FESX, FWSX, and FSX; FGS 04.3.03 for FGS, FLS and FWS; FGS 05.0.02 for FGS-STK and FLS-STK, FCX 06.0.00 for FCX; and TIX 04.1.00 for TI24X, Feb. 16, 2010.
Fastlron Configuration Guide Supporting Ironware Software Release 07.0.00, Dec. 18, 2009.
“The Effortless Network: HyperEdge Technology for the Campus LAN”, 2012.
Narten, T. et al. “Problem Statement: Overlays for Network Virtualization”, draft-narten-nvo3-overlay-problem-statement-01, Oct. 31, 2011.
Knight, Paul et al., “Layer 2 and 3 Virtual Private Networks: Taxonomy, Technology, and Standardization Efforts”, IEEE Communications Magazine, Jun. 2004.
“An Introduction to Brocade VCS Fabric Technology”, Brocade white paper, http://community.brocade.com/docs/Doc-2954, Dec. 3, 2012.
Kreeger, L. et al., “Network Virtualization Overlay Control Protocol Requirements”, Draft-kreeger-nvo3-overlay-cp-00, Jan. 30, 2012.
Knight, Paul et al., “Network based IP VPN Architecture using Virtual Routers”, May 2003.
Louati, Wajdi et al., “Network-based virtual personal overlay networks using programmable virtual routers”, IEEE Communications Magazine, Jul. 2005.
U.S. Appl. No. 13/092,877 Office Action dated Sep. 5, 2013.
U.S. Appl. No. 13/044,326 Office Action dated Oct. 2, 2013.
Zhai F. Hu et al. “RBridge: Pseudo-Nickname; draft-hu-trill-pseudonode-nickname-02.txt”, May 15, 2012.
Huang, Nen-Fu et al., “An Effective Spanning Tree Algorithm for a Bridged LAN”, Mar. 16, 1992.
Office Action dated Jun. 6, 2014, U.S. Appl. No. 13/669,357, filed Nov. 5, 2012.
Office Action dated Feb. 20, 2014, U.S. Appl. No. 13/598,204, filed Aug. 29, 2012.
Office Action dated May 14, 2014, U.S. Appl. No. 13/533,843, filed Jun. 26, 2012.
Office Action dated May 9, 2014, U.S. Appl. No. 13/484,072, filed May 30, 2012.
Office Action dated Feb. 28, 2014, U.S. Appl. No. 13/351,513, filed Jan. 17, 2012.
Office Action dated Jun. 18, 2014, U.S. Appl. No. 13/440,861, filed Apr. 5, 2012.
Office Action dated Mar. 6, 2014, U.S. Appl. No. 13/425,238, filed Mar. 20, 2012.
Office Action dated Jun. 20, 2014, U.S. Appl. No. 13/092,877, filed Apr. 22, 2011.
Office Action dated Apr. 9, 2014, U.S. Appl. No. 13/092,752, filed Apr. 22, 2011.
Rosen, E. et al., “BGP/MPLS VPNs”, Mar. 1999.
Office Action for U.S. Appl. No. 14/577,785, filed Dec. 19, 2014, dated Apr. 13, 2015.
Office Action for U.S. Appl. No. 13/786,328, filed Mar. 5, 2013, dated Mar. 13, 2015.
Office Action for U.S. Appl. No. 13/425,238, filed Mar. 20, 2012, dated Mar. 12, 2015.
Abawajy J. “An Approach to Support a Single Service Provider Address Image for Wide Area Networks Environment” Centre for Parallel and Distributed Computing, School of Computer Science Carleton University, Ottawa, Ontario, K1S 5B6, Canada.
Perlman, Radia et al., ‘RBridge VLAN Mapping’, TRILL Working Group, Dec. 4, 2009, pp. 1-12.
Office action dated Apr. 26, 2012, U.S. Appl. No. 12/725,249, filed Mar. 16, 2010.
Office action dated Sep. 12, 2012, U.S. Appl. No. 12/725,249, filed Mar. 16, 2010.
Office action dated Dec. 21, 2012, U.S. Appl. No. 13/098,490, filed May 2, 2011.
Office action dated Mar. 27, 2014, U.S. Appl. No. 13/098,490, filed May 2, 2011.
Office action dated Jul. 9, 2013, U.S. Appl. No. 13/098,490, filed May 2, 2011.
Office action dated May 22, 2013, U.S. Appl. No. 13/087,239, filed Apr. 14, 2011.
Office action dated Dec. 5, 2012, U.S. Appl. No. 13/087,239, filed Apr. 14, 2011.
Office action dated Apr. 9, 2014, U.S. Appl. No. 13/092,724, filed Apr. 22, 2011.
Office action dated Feb. 5, 2013, U.S. Appl. No. 13/092,724, filed Apr. 22, 2011.
Office action dated Jun. 10, 2013, U.S. Appl. No. 13/092,580, filed Apr. 22, 2011.
Office action dated Mar. 18, 2013, U.S. Appl. No. 13/042,259, filed Mar. 7, 2011.
Office action dated Aug. 29, 2014, U.S. Appl. No. 13/042,259, filed Mar. 7, 2011.
Office action dated Mar. 14, 2014, U.S. Appl. No. 13/092,460, filed Apr. 22, 2011.
Office action dated Jun. 21, 2013, U.S. Appl. No. 13/092,460, filed Apr. 22, 2011.
Office action dated Jan. 28, 2013, U.S. Appl. No. 13/092,701, filed Apr. 22, 2011.
Office action dated Mar. 26, 2014, U.S. Appl. No. 13/092,701, filed Apr. 22, 2011.
Office action dated Jul. 3, 2013, U.S. Appl. No. 13/092,701, filed Apr. 22, 2011.
Office action dated Jul. 18, 2013, U.S. Appl. No. 13/092,752, filed Apr. 22, 2011.
Office action dated Dec. 20, 2012, U.S. Appl. No. 12/950,974, filed Nov. 19, 2010.
Office action dated May 24, 2012, U.S. Appl. No. 12/950,974, filed Nov. 19, 2010.
Office action dated Sep. 5, 2013, U.S. Appl. No. 13/092,877, filed Apr. 22, 2011.
Office action dated Mar. 4, 2013, U.S. Appl. No. 13/092,877, filed Apr. 22, 2011.
Office action dated Jan. 4, 2013, U.S. Appl. No. 12/950,968, filed Nov. 19, 2010.
Office action dated Jun. 7, 2012, U.S. Appl. No. 12/950,968, filed Nov. 19, 2010.
Office action dated Sep. 19, 2012, U.S. Appl. No. 13/092,864, filed Apr. 22, 2011.
Office action dated May 31, 2013, U.S. Appl. No. 13/098,360, filed Apr. 29, 2011.
Office action dated Dec. 3, 2012, U.S. Appl. No. 13/030,806, filed Feb. 18, 2011.
Office action dated Apr. 22, 2014, U.S. Appl. No. 13/030,806, filed Feb. 18, 2011.
Office action dated Jun. 11, 2013, U.S. Appl. No. 13/030,806, filed Feb. 18, 2011.
Office action dated Apr. 25, 2013, U.S. Appl. No. 13/030,688, filed Feb. 18, 2011.
Office action dated Feb. 22, 2013, U.S. Appl. No. 13/044,301, filed Mar. 9, 2011.
Office action dated Jun. 11, 2013, U.S. Appl. No. 13/044,301, filed Mar. 9, 2011.
Office action dated Oct. 26, 2012, U.S. Appl. No. 13/050,102, filed Mar. 17, 2011.
Office action dated May 16, 2013, U.S. Appl. No. 13/050,102, filed Mar. 17, 2011.
Office action dated Aug. 4, 2014, U.S. Appl. No. 13/050,102, filed Mar. 17, 2011.
Office action dated Jan. 28, 2013, U.S. Appl. No. 13/148,526, filed Jul. 16, 2011.
Office action dated May 22, 2013, U.S. Appl. No. 13/148,526, filed Jul. 16, 2011.
Office action dated Aug. 21, 2014, U.S. Appl. No. 13/184,526, filed Jul. 16, 2011.
Office action dated Jun. 19, 2013, U.S. Appl. No. 13/092,873, filed Apr. 22, 2011.
Office action dated Jul. 18, 2013, U.S. Appl. No. 13/365,808, filed Feb. 3, 2012.
Office action dated Jun. 13, 2013, U.S. Appl. No. 13/312,903, filed Dec. 6, 2011.
Brocade ‘An Introduction to Brocade VCS Fabric Technology’, Dec. 3, 2012.
Lapuh, Roger et al., ‘Split Multi-link Trunking (SMLT) draft-lapuh-network-smlt-08’, Jan. 2009.
Office Action for U.S. Appl. No. 13/351,513, filed Jan. 17, 2012, dated Feb. 28, 2014.
Perlman, Radia et al., ‘RBridges: Base Protocol Specification; Draft-ietf-trill-rbridge-protocol-16.txt’, Mar. 3, 2010, pp. 1-117.
Office Action for U.S. Appl. No. 13/365,993, filed Feb. 3, 2012, from Cho, Hong Sol., dated Jul. 23, 2013.
Office Action for U.S. Appl. No. 13/030,806, filed Feb. 18, 2011, dated Dec. 3, 2012.
Office Action for U.S. Appl. No. 13/098,490, filed May 2, 2011, dated Mar. 27, 2014.
Office Action for U.S. Appl. No. 13/312,903, filed Dec. 6, 2011, dated Jun. 13, 2013.
Office Action for U.S. Appl. No. 13/092,873, filed Apr. 22, 2011, dated Nov. 29, 2013.
Office Action for U.S. Appl. No. 13/184,526, filed Jul. 16, 2011, dated Dec. 2, 2013.
Office Action for U.S. Appl. No. 13/030,688, filed Feb. 18, 2011, dated Jul. 17, 2014.
Office Action for U.S. Appl. No. 13/044,326, filed Mar. 9, 2011, dated Jul. 7, 2014.
Office Action for U.S. Appl. No. 13/092,752, filed Apr. 22, 2011, dated Apr. 9, 2014.
Office Action for U.S. Appl. No. 13/092,873, filed Apr. 22, 2011, dated Jul. 25, 2014.
Office Action for U.S. Appl. No. 13/092,877, filed Apr. 22, 2011, dated Jun. 20, 2014.
Office Action for U.S. Appl. No. 13/312,903, filed Dec. 6, 2011, dated Aug. 7, 2014.
Office Action for U.S. Appl. No. 13/351,513, filed Jan. 17, 2012, dated Jul. 24, 2014.
Office Action for U.S. Appl. No. 13/425,238, filed Mar. 20, 2012, dated Mar. 6, 2014.
Office Action for U.S. Appl. No. 13/556,061, filed Jul. 23, 2012, dated Jun. 6, 2014.
Office Action for U.S. Appl. No. 13/742,207 dated Jul. 24, 2014, filed Jan. 15, 2013.
Office Action for U.S. Appl. No. 13/950,974, filed Nov. 19, 2010, dated Dec. 2, 2012.
Office Action for U.S. Appl. No. 13/087,239, filed Apr. 14, 2011, dated Dec. 5, 2012.
Perlman R: ‘Challenges and opportunities in the design of TRILL: a routed layer 2 technology’, 2009 IEEE Globecom Workshops, Honolulu, HI, USA, Piscataway, NJ, USA, Nov. 30, 2009, pp. 1-6, XP002649647, DOI: 10.1109/GLOBECOM.2009.5360776 ISBN: 1-4244-5626-0 [retrieved on Jul. 19, 2011].
TRILL Working Group Internet-Draft Intended status: Proposed Standard RBridges: Base Protocol Specification Mar. 3, 2010.
Office action dated Aug. 14, 2014, U.S. Appl. No. 13/092,460, filed Apr. 22, 2011.
Office action dated Jul. 7, 2014, for U.S. Appl. No. 13/044,326, filed Mar. 9, 2011.
Office Action dated Dec. 19, 2014, for U.S. Appl. No. 13/044,326, filed Mar. 9, 2011.
Office Action for U.S. Appl. No. 13/092,873, filed Apr. 22, 2011, dated Nov. 7, 2014.
Office Action for U.S. Appl. No. 13/092,877, filed Apr. 22, 2011, dated Nov. 10, 2014.
Office Action for U.S. Appl. No. 13/157,942, filed Jun. 10, 2011.
Mckeown, Nick et al. “OpenFlow: Enabling Innovation in Campus Networks”, Mar. 14, 2008, www.openflow.org/documents/openflow-wp-latest.pdf.
Office Action for U.S. Appl. No. 13/044,301, dated Mar. 9, 2011.
Office Action for U.S. Appl. No. 13/184,526, filed Jul. 16, 2011, dated Jan. 5, 2015.
Office Action for U.S. Appl. No. 13/598,204, filed Aug. 29, 2012, dated Jan. 5, 2015.
Office Action for U.S. Appl. No. 13/669,357, filed Nov. 5, 2012, dated Jan. 30, 2015.
Office Action for U.S. Appl. No. 13/851,026, filed Mar. 26, 2013, dated Jan. 30, 2015.
Office Action for U.S. Appl. No. 13/092,460, filed Apr. 22, 2011, dated Mar. 13, 2015.
Office Action for U.S. Appl. No. 13/425,238, dated Mar. 12, 2015.
Office Action for U.S. Appl. No. 13/092,752, filed Apr. 22, 2011, dated Feb. 27, 2015.
Office Action for U.S. Appl. No. 13/042,259, filed Mar. 7, 2011, dated Feb. 23, 2015.
Office Action for U.S. Appl. No. 13/044,301, filed Mar. 9, 2011, dated Jan. 29, 2015.
Office Action for U.S. Appl. No. 13/050,102, filed Mar. 17, 2011, dated Jan. 26, 2015.
Office action dated Oct. 2, 2014, for U.S. Appl. No. 13/092,752, filed Apr. 22, 2011.
Kompella, Ed K. et al., ‘Virtual Private LAN Service (VPLS) Using BGP for Auto-Discovery and Signaling’ Jan. 2007.
Related Publications (1)
Number Date Country
20130308649 A1 Nov 2013 US
Provisional Applications (1)
Number Date Country
61352720 Jun 2010 US
Continuations (1)
Number Date Country
Parent 13092864 Apr 2011 US
Child 13889637 US