Method and system for extending routing domain to non-routing end stations

Information

  • Patent Grant
  • Patent Number
    8,995,444
  • Date Filed
    Monday, February 4, 2013
  • Date Issued
    Tuesday, March 31, 2015
Abstract
A system is provided for facilitating assignment of a virtual routing node identifier to a non-routing node. During operation, the system assigns to a non-routing node coupled to a switch a virtual routing node identifier unique to the non-routing node. In addition, the system communicates reachability information corresponding to the virtual routing node identifier to other switches in the network.
Description
BACKGROUND

1. Field


The present disclosure relates to network management. More specifically, the present disclosure relates to a method and system for extending the routing domain in a network to a non-routing node.


2. Related Art


Currently, end stations in layer-2 networks have not been able to take advantage of the routing functionalities available in such networks. End stations can only operate as leaf nodes and are constrained to interfacing with only one of the routing nodes. Consequently, an end station is required to exchange all its data with that node, and no node redundancy (e.g., for protection against routing node failures) is available to end stations. Furthermore, other routing nodes in the network can send data to that end station only via the one routing switch to which the end station is connected.


Meanwhile, layer-2 networking technologies continue to evolve. More routing functionalities, which have traditionally been characteristic of layer-3 (e.g., IP) networks, are migrating into layer-2. Notably, the recent development of the Transparent Interconnection of Lots of Links (TRILL) protocol allows Ethernet switches to function more like routing nodes. TRILL overcomes the inherent inefficiency of the conventional spanning tree protocol, which forces layer-2 switches to be coupled in a logical spanning-tree topology to avoid looping. TRILL allows routing bridges (RBridges) to be coupled in an arbitrary topology without the risk of looping by implementing routing functions in switches and including a hop count in the TRILL header.


However, a switch or end host not running the TRILL protocol cannot benefit from the rich set of functionalities provided in TRILL networks.


SUMMARY

One embodiment of the present invention provides a system for facilitating assignment of a virtual routing node identifier to a non-routing node. During operation, the system assigns to a non-routing node coupled to a switch a virtual routing node identifier unique to the non-routing node. In addition, the system communicates reachability information corresponding to the virtual routing node identifier to other switches in the network.


In a variation on this embodiment, the switch is a routing bridge capable of routing data frames without requiring the network topology to be a spanning tree topology.


In a variation on this embodiment, the virtual routing node identifier is a routing bridge identifier in accordance with the Transparent Interconnection of Lots of Links (TRILL) protocol.


In a variation on this embodiment, the system performs a routing function based on an intermediate system to intermediate system (IS-IS) routing protocol. The system further advertises reachability associated with the virtual routing node identifier in IS-IS link state protocol data units.


In a variation on this embodiment, the system marks an ingress-switch field of a frame received from the non-routing node with the virtual routing node identifier.


In a variation on this embodiment, the non-routing node is also coupled to at least a second routing switch capable of performing a routing function. The first routing switch communicates to the second routing switch, and optionally to other routing switches, the media access control (MAC) address(es) learned from a port to which the non-routing node is coupled.


In a variation on this embodiment, in response to detecting a failure of a link between the non-routing node and the switch, the system discontinues communicating the reachability information to other switches and/or provides an explicit message to inform other switch(es) of the non-reachability of the non-routing node.


In a variation on this embodiment, the system is precluded from forwarding a multi-destination frame to the non-routing node when an ingress-switch field of the frame matches the virtual routing node identifier assigned to the non-routing node.





BRIEF DESCRIPTION OF THE FIGURES


FIG. 1A illustrates an exemplary network where virtual RBridge identifiers are assigned to non-TRILL nodes, in accordance with an embodiment of the present invention.



FIG. 1B illustrates an exemplary network that facilitates virtual RBridge identifier assignment to a non-TRILL node coupled to multiple TRILL RBridges via link aggregation, in accordance with an embodiment of the present invention.



FIG. 2 presents a flowchart illustrating the process of configuring the TRILL header of an ingress frame from a non-TRILL node at an ingress physical RBridge, in accordance with an embodiment of the present invention.



FIG. 3 illustrates an exemplary header configuration of an ingress TRILL frame which contains a virtual RBridge nickname in its ingress RBridge nickname field, in accordance with an embodiment of the present invention.



FIG. 4 presents a flowchart illustrating the process of forwarding a TRILL frame at an ingress RBridge which participates in the link aggregation in conjunction with a non-TRILL node which has a virtual RBridge ID, in accordance with an embodiment of the present invention.



FIG. 5 illustrates a scenario where one of the physical links of a link aggregation coupled to a non-TRILL node experiences a failure, in accordance with an embodiment of the present invention.



FIG. 6 presents a flowchart illustrating the process of handling a link failure that affects an end station which is assigned a virtual RBridge ID, in accordance with an embodiment of the present invention.



FIG. 7 illustrates an exemplary architecture of a switch that facilitates assignment of a virtual RBridge ID, in accordance with an embodiment of the present invention.





DETAILED DESCRIPTION

The following description is presented to enable any person skilled in the art to make and use the invention, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims.


Overview


In embodiments of the present invention, the problem of extending a layer-2 routing domain to a non-routing node is solved by assigning a virtual routing node identifier to the non-routing node. The neighbor routing node(s) of the non-routing node can then advertise the connectivity to the non-routing node as if the non-routing node were part of the routed network.


For example, in a layer-2 network running the TRILL protocol, when an end station which does not participate in the TRILL network is coupled to one or more routing bridges (RBridges), a virtual TRILL RBridge identifier is assigned to this end station. The end station is then considered to be a virtual RBridge capable of running the TRILL protocol. An incoming frame from the end station is marked by the ingress physical RBridge with the virtual RBridge nickname as its ingress RBridge nickname and routed through the rest of the TRILL network. An end station can be coupled to multiple physical RBridges and retain the same virtual RBridge ID. Consequently, other RBridges in the TRILL network can use multiple paths to reach this end station via its virtual RBridge ID, because to the rest of the TRILL network, this end station appears just like any other RBridge in the network.


The assignment of a virtual RBridge identifier allows a non-TRILL-capable end station to participate in the routing domain of a TRILL network, and to be coupled to multiple RBridges in an arbitrary topology. Such a configuration provides tremendous flexibility and facilitates redundancy against both link and node failures. For instance, an end station with a virtual RBridge identifier can be coupled to two or more physical RBridges using link aggregation. The physical RBridges can advertise connectivity to the virtual RBridge to their neighbor RBridges. When one of the aggregated links fails, the affected end station can continue operating via the remaining link(s). To the rest of the TRILL network, the virtual RBridge remains reachable.


Although this disclosure is presented using examples based on the TRILL protocol, embodiments of the present invention are not limited to TRILL networks, or to networks defined in a particular layer of the Open System Interconnection Reference Model (OSI reference model). In particular, although the term “layer-2” is mentioned several times in the examples, embodiments of the present invention are not limited to layer-2 networks. Other networking environments, whether defined in OSI layers or not defined with a particular layering model, can also use the disclosed embodiments. For instance, these embodiments can apply to Multiprotocol Label Switching (MPLS) networks as well as Storage Area Networks (e.g., Fibre Channel networks).


Furthermore, although the intermediate-system-to-intermediate-system (IS-IS) routing protocol is used in the examples, embodiments of the present invention are not limited to a particular routing protocol. Other routing protocols, such as Open Shortest Path First (OSPF), Routing Information Protocol (RIP), Interior Gateway Routing Protocol (IGRP), Enhanced IGRP (EIGRP), Border Gateway Protocol (BGP), or other open or proprietary protocols can also be used. In addition, embodiments of the present invention are not limited to the TRILL frame encapsulation format. Other open or proprietary encapsulation formats and methods can also be used.


The term “RBridge” refers to routing bridges, which are bridges implementing the TRILL protocol as described in IETF draft “RBridges: Base Protocol Specification,” available at http://tools.ietf.org/html/draft-ietf-trill-rbridge-protocol-14, which is incorporated by reference herein. Embodiments of the present invention are not limited to the application among RBridges. Other types of switches, routers, and forwarders can also be used.


The term “physical RBridge” refers to an RBridge running the TRILL protocol, as opposed to a “virtual RBridge,” which refers to a non-TRILL end station with a virtual RBridge ID.


The term “virtual RBridge” refers to a non-TRILL end station with a virtual RBridge ID. The physical RBridge(s) to which the non-TRILL end station is coupled can advertise the connectivity to this end station as if it were a regular RBridge.


The term “end station” refers to a network device that is not TRILL-capable. “End station” is a relative term with respect to the TRILL network. “End station” does not necessarily mean that the network device is an end host. An end station can be a host, a conventional layer-2 switch, or any other type of network device. Additionally, an end station can be coupled to other switches, routers, or hosts further away from the TRILL network. In other words, an end station can be an aggregation point for a number of network devices to enter the TRILL network.


The term “multi-homed end station” refers to an end station that has an aggregate link to two or more TRILL RBridges, where the aggregate link includes multiple physical links to the different RBridges. The aggregate link functions as one logical link to the end station. “Multi-homed end station” may also refer to an end station that connects to TRILL RBridges which do not form a logical link aggregation and do not form an association with each other. This could be the case where an end station has multiple logical networking entities (for example, a virtualized server where different virtual servers may be coupled to different networks through different network ports in the system). A single end station can have multiple virtual RBridge identifier assignments.


The term “frame” refers to a group of bits that can be transported together across a network. “Frame” should not be interpreted as limiting embodiments of the present invention to layer-2 networks. “Frame” can be replaced by other terms referring to a group of bits, such as “packet,” “cell,” or “datagram.”


The term “RBridge identifier” refers to a group of bits that can be used to identify an RBridge. Note that the TRILL standard uses “RBridge ID” to denote the 48-bit intermediate-system-to-intermediate-system (IS-IS) System ID assigned to an RBridge, and “RBridge nickname” to denote the 16-bit value that serves as an abbreviation for the “RBridge ID.” The “RBridge identifier” used in this disclosure is not limited to any bit format, and can refer to the “RBridge ID,” the “RBridge nickname,” or any other format that can identify an RBridge.
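

For illustration only, the following minimal Python sketch contrasts the two identifier formats mentioned above: the 48-bit IS-IS System ID and the 16-bit nickname that abbreviates it. The function names and sample values are invented for this sketch and are not drawn from the TRILL specification.

    def format_system_id(system_id: int) -> str:
        """Render a 48-bit IS-IS System ID in dotted-hex notation."""
        assert 0 <= system_id < 2**48
        raw = system_id.to_bytes(6, "big").hex()
        return ".".join(raw[i:i + 4] for i in range(0, 12, 4))

    def is_valid_nickname(nickname: int) -> bool:
        """Check that a value fits the 16-bit RBridge nickname field."""
        return 0 <= nickname < 2**16

    print(format_system_id(0x00005E0053AB))  # -> 0000.5e00.53ab
    print(is_valid_nickname(0xBEEF))         # -> True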


Network Architecture



FIG. 1A illustrates an exemplary network where virtual RBridge identifiers are assigned to non-TRILL nodes, in accordance with an embodiment of the present invention. In this example, the network is divided into three layers: a core layer, an aggregation layer, and an access layer. The core layer includes core switches 122 and 124. These switches can be director-class high-throughput switches interconnecting multiple networks. The aggregation layer includes a number of RBridges 101, 102, 103, and 104. These RBridges aggregate traffic from the access layer and feed it to the core layer. The access layer also includes a number of RBridges 105, 106, and 107. The RBridges in the access layer interface with end stations, such as end stations 111, 112, 113, 114, 115, and 116. RBridges 101, 102, 103, 104, 105, 106, and 107 form the TRILL network. An end station can be a host, a switch, or any type of network device. For example, end stations 112 and 113 are both servers, and end station 114 is a switch. Furthermore, the TRILL network can be configured in any arbitrary topology, which can be a mesh and is not required to be a spanning tree.


In embodiments of the present invention, an end station is assigned a virtual RBridge ID. This configuration allows the end station to be part of the routed TRILL network, and hence take advantage of the topology flexibility. For example, as illustrated in FIG. 1A, servers 112 and 113, and switch 114 are all multi-homed with two physical RBridges. Server 112 is coupled to RBridges 105 and 106; server 113 is coupled to RBridges 105 and 107; and switch 114 is coupled to RBridges 106 and 107. Furthermore, switch 114 is also an aggregation point for hosts 120. Servers 112 and 113, as well as switch 114, each can be assigned a virtual RBridge ID.


Assume that end stations 112, 113, and 114 are assigned respective virtual RBridge IDs. Their neighbor RBridges broadcast their connectivity as virtual RBridges 144, 142, and 140, respectively. As a result, the rest of the TRILL network can view these end stations just like any other RBridge and route traffic toward them via any available path.


Take switch 114 for example. Without virtual RBridge identifier assignment, switch 114 would be “transparent” to the rest of the TRILL network. The frames forwarded by switch 114 to the TRILL network are native Ethernet frames from hosts 120. An RBridge in the TRILL network would associate the Media Access Control (MAC) addresses for hosts 120 with an ingress RBridge (i.e., the first RBridge in the TRILL network that receives these Ethernet frames).


In addition, without virtual RBridge identifier assignment, the multi-homing-style connectivity would not provide the desired result, because the TRILL protocol depends on MAC address learning to determine the location of end stations (i.e., to which ingress RBridge an end station is coupled) based on a frame's ingress TRILL RBridge ID. As such, an end station can appear to be reachable via only a single physical RBridge. For example, assume that end station 111 is in communication with one of the hosts 120. Since switch 114 is coupled to RBridges 106 and 107, the ingress RBridge would be either RBridge 106 or RBridge 107, and the egress RBridge would be RBridge 105. The incoming frames forwarded by switch 114 would have either RBridge 106 or RBridge 107 marked as their ingress RBridge ID. When RBridge 105 receives these frames and performs MAC address learning, RBridge 105 would assume that the host is moving and is coupled to either RBridge 106 or RBridge 107 (but not both). RBridge 105 would send the frames from end station 111 to either RBridge 106 or RBridge 107. Consequently, only one of the physical links leading to switch 114 is used, which defeats the purpose of having redundant links between switch 114 and RBridges 106 and 107.
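

The MAC-move behavior described above can be illustrated with a small Python sketch (the data structures and values are invented for illustration): without a shared virtual nickname, an egress RBridge such as RBridge 105 sees the same inner source MAC arrive with alternating ingress nicknames and keeps flipping its learned location.

    mac_location: dict[str, int] = {}

    def learn(inner_src_mac: str, ingress_nickname: int) -> None:
        """Plain TRILL MAC learning: bind the MAC to the frame's ingress nickname."""
        prev = mac_location.get(inner_src_mac)
        mac_location[inner_src_mac] = ingress_nickname
        if prev is not None and prev != ingress_nickname:
            print(f"{inner_src_mac} appears to move: {prev} -> {ingress_nickname}")

    # Switch 114 load-balances one host's frames across RBridges 106 and 107:
    for ingress in (106, 107, 106):
        learn("aa:bb:cc:dd:ee:01", ingress)
    # With a shared virtual nickname, the ingress field would be constant and
    # RBridge 105 could use both paths toward switch 114.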


The multi-homing configuration of an end station is explained in more detail in the example illustrated in FIG. 1B. In this example, the TRILL network includes five physical RBridges: 161, 162, 163, 164, and 165. A non-TRILL switch 168 is coupled to RBridges 162, 164, and 165, and end stations 172 and 174. During operation, a virtual RBridge identifier is assigned to non-TRILL switch 168, either manually or automatically by one of the coupled RBridges using Link Layer Discovery Protocol (LLDP) or any other configuration/discovery protocol.


Non-TRILL switch 168 has its links to RBridges 162, 164, and 165 configured as a link aggregation (LAG) 170. In other words, switch 168 can distribute ingress traffic (i.e., traffic from end stations 172 and 174 to the TRILL network) among the three links using link aggregation techniques. Such techniques can include any multi-chassis trunking techniques. In addition, RBridges 162, 164, and 165 are configured to process ingress frames from switch 168 such that these frames will have the virtual RBridge nickname in their TRILL header as the ingress RBridge. When these frames are forwarded to the rest of the TRILL network with their respective TRILL headers, other RBridges in the network treat them as originating from the virtual RBridge.


During operation, each physical RBridge sends TRILL HELLO messages to its neighbors to confirm its liveness. Each RBridge also sends link state protocol data units (LSPs) to its neighbors, so that link state information can be exchanged and propagated throughout the TRILL network. As illustrated in FIG. 1B, RBridge 162 regularly transmits TRILL HELLO messages to its neighboring RBridges. Since switch 168 is not TRILL enabled, RBridge 162 does not send TRILL HELLO messages to switch 168. Instead, RBridge 162 maintains a static link state entry for the virtual RBridge associated with switch 168, and periodically announces the reachability of this virtual RBridge in its LSPs to other RBridges. Similarly, RBridges 164 and 165 also maintain static link state entries for the virtual RBridge and announce its reachability in their respective LSPs.
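

As a rough illustration of this behavior, the following Python sketch shows how a physical RBridge might fold a static link state entry for the virtual RBridge into the reachability it advertises in its LSPs. The class and field names, and the nickname values, are invented for the sketch and are not drawn from the patent or the TRILL specification.

    from dataclasses import dataclass, field

    @dataclass
    class Lsp:
        source_nickname: int
        reachable_nicknames: list[int] = field(default_factory=list)

    class PhysicalRBridge:
        def __init__(self, nickname: int):
            self.nickname = nickname
            self.static_virtual_neighbors: set[int] = set()

        def add_virtual_rbridge(self, virtual_nickname: int) -> None:
            # Static entry: no TRILL HELLOs are exchanged with the non-TRILL switch.
            self.static_virtual_neighbors.add(virtual_nickname)

        def build_lsp(self, learned_neighbors: set[int]) -> Lsp:
            # The virtual RBridge is advertised alongside real IS-IS adjacencies.
            return Lsp(self.nickname,
                       sorted(learned_neighbors | self.static_virtual_neighbors))

    rb162 = PhysicalRBridge(nickname=162)
    rb162.add_virtual_rbridge(virtual_nickname=1000)
    print(rb162.build_lsp({161, 163, 164}).reachable_nicknames)  # [161, 163, 164, 1000]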


In a TRILL network, the reachability information associated with a MAC address is learned by an RBridge based on the source MAC address in the internal Ethernet header of a received TRILL frame. It is possible that the three RBridges coupled to switch 168 may learn different MAC reachability information about the end stations behind switch 168. To avoid unnecessary flooding (which is the default behavior of a layer-2 switch when it encounters a frame with a destination MAC address that it does not know how to forward), in one embodiment, the MAC address reachability for end stations 172 and 174 via switch 168 is shared among all the RBridges coupled to switch 168. That is, RBridges 162, 164, and 165 share the MAC reachability information of the devices behind switch 168. For example, assume that RBridge 162 receives all the frames originated from end station 172, while RBridge 164 receives all the frames originated from end station 174. RBridge 162 then shares its knowledge about the MAC of end station 172 (i.e., the knowledge that the MAC address for end station 172 is associated with the virtual RBridge and that end station 172 is behind switch 168) with RBridges 164 and 165. Similarly, RBridge 164 shares its knowledge about end station 174's MAC address with RBridges 162 and 165. This way, when a frame destined for, say, end station 172 reaches RBridge 165, RBridge 165 would know that it can forward the frame to switch 168, instead of flooding it to multiple ports.
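The following Python sketch illustrates this sharing of MAC reachability among the link-aggregation partners. In an actual deployment the updates would travel over an inter-switch protocol; here the partner RBridges are simply in-process objects with invented names.

    class LagRBridge:
        def __init__(self, name: str):
            self.name = name
            self.partners: list["LagRBridge"] = []
            self.mac_table: dict[str, str] = {}  # MAC -> output ("local-lag" = LAG port)

        def learn_local(self, mac: str) -> None:
            """Learn a MAC on the local LAG port and share it with the partners."""
            self.mac_table[mac] = "local-lag"
            for partner in self.partners:
                # Each partner also has its own LAG link to the non-TRILL switch,
                # so the shared entry points at the partner's local LAG port too.
                partner.mac_table.setdefault(mac, "local-lag")

    rb162, rb164, rb165 = LagRBridge("RB162"), LagRBridge("RB164"), LagRBridge("RB165")
    rb162.partners = [rb164, rb165]
    rb162.learn_local("00:11:22:33:44:55")       # frame from end station 172 hits RB162
    print(rb165.mac_table["00:11:22:33:44:55"])  # RB165 forwards directly, no flooding
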


It should be noted that a non-TRILL switch (such as switch 168) is not required to change the way it is configured for handling link aggregation. A multi-homed end station only needs to be configured as would be the case with a regular link aggregation scheme. Hence, the multi-homed end station does not need to be aware that a virtual RBridge nickname is assigned to all the frames it forwards to the physical RBridge(s). Furthermore, the rest of the TRILL network (apart from RBridges 162, 164, and 165) is also not required to be aware that the virtual RBridge is not actually a physical RBridge. In other words, to the rest of the TRILL network, the virtual RBridge associated with switch 168 is indistinguishable from any of the physical RBridges. Therefore, the present invention does not require extra configuration in the rest of the TRILL network.


Frame Processing



FIG. 2 presents a flowchart illustrating the process of configuring the TRILL header of an ingress frame from a non-TRILL node at an ingress physical RBridge, in accordance with an embodiment of the present invention. During operation, a physical RBridge participating in link aggregation receives an ingress Ethernet frame from a non-TRILL switch which has been assigned a virtual RBridge identifier (operation 202). The RBridge then identifies the destination MAC address of the received frame (operation 204). Based on the destination MAC address, the RBridge performs a lookup on the egress TRILL RBridge nickname (operation 206). Next, the RBridge determines the next-hop TRILL RBridge based on the egress TRILL RBridge nickname (operation 208). (It is assumed that the routing function in the TRILL protocol or other routing protocol is responsible for populating the forwarding information base at each RBridge.)


Subsequently, the RBridge sets the TRILL header of the frame (operation 210). In doing so, the RBridge sets the virtual RBridge as the ingress RBridge for the frame. The egress RBridge of the TRILL header is set based on the result of operation 206.


The RBridge then sets the outer Ethernet header of the frame (operation 212). In doing so, the RBridge sets the MAC address of the next-hop RBridge (the result of operation 208) as the destination MAC address in the outer Ethernet header. The RBridge further sets the MAC address of the local transmitting RBridge as the source MAC address in the outer Ethernet header. After setting the outer Ethernet header, the RBridge transmits the TRILL-encapsulated frame to the next-hop RBridge (operation 214).
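

The ingress processing of FIG. 2 can be summarized by the following Python sketch, in which the lookups of operations 206 and 208 are modeled as dictionaries assumed to be populated by the routing protocol; all structure and function names are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class TrillFrame:
        outer_dst_mac: str     # outer Ethernet: next-hop RBridge
        outer_src_mac: str     # outer Ethernet: transmitting RBridge
        egress_nickname: int   # TRILL header
        ingress_nickname: int  # TRILL header (the virtual RBridge)
        inner_frame: bytes     # original Ethernet frame from the end station

    def encapsulate_ingress(inner_dst_mac: str, inner_frame: bytes,
                            virtual_nickname: int, local_mac: str,
                            mac_to_egress: dict[str, int],
                            egress_to_next_hop_mac: dict[int, str]) -> TrillFrame:
        egress = mac_to_egress[inner_dst_mac]          # operation 206
        next_hop_mac = egress_to_next_hop_mac[egress]  # operation 208
        return TrillFrame(outer_dst_mac=next_hop_mac,  # operation 212
                          outer_src_mac=local_mac,
                          egress_nickname=egress,      # operation 210
                          ingress_nickname=virtual_nickname,
                          inner_frame=inner_frame)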



FIG. 3 illustrates an exemplary header configuration of an ingress TRILL frame which contains a virtual RBridge nickname in its ingress RBridge nickname field, in accordance with an embodiment of the present invention. In this example, a TRILL-encapsulated frame includes an outer Ethernet header 302, a TRILL header 303, an inner Ethernet header 308, an Ethernet payload 310, and an Ethernet frame check sequence (FCS) 312.


TRILL header 303 includes a version field (denoted as “V”), a reserved field (denoted as “R”), a multi-destination indication field (denoted as “M”), an option-field-length indication field (denoted as “OP-LEN”), and a hop-count field (denoted as “HOP CT”). Also included are an egress RBridge nickname field 304 and an ingress RBridge nickname field 306.
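

For illustration, the following Python sketch packs these fields into the 6-byte TRILL header layout (2-bit V, 2-bit R, 1-bit M, 5-bit OP-LEN, 6-bit HOP CT, followed by the 16-bit egress and ingress RBridge nickname fields). The field widths follow the TRILL base specification; the function name and sample values are invented.

    import struct

    def pack_trill_header(version: int, multi_dest: bool, op_len: int,
                          hop_count: int, egress: int, ingress: int) -> bytes:
        # First 16 bits: V (2) | R (2, zero) | M (1) | OP-LEN (5) | HOP CT (6)
        first16 = (version << 14) | (int(multi_dest) << 11) | (op_len << 6) | hop_count
        return struct.pack("!HHH", first16, egress, ingress)

    # Unicast ingress frame marked with the virtual RBridge (nickname 1000)
    # as ingress and RBridge 105 as egress; M bit clear, hop count 10.
    hdr = pack_trill_header(version=0, multi_dest=False, op_len=0,
                            hop_count=10, egress=105, ingress=1000)
    print(hdr.hex())  # -> 000a006903e8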


After an ingress physical RBridge encapsulates an ingress frame with the proper TRILL and outer Ethernet headers and transmits the frame to its destination, it is expected to receive frames in the reverse direction from the destination in response to the transmission. FIG. 4 presents a flowchart illustrating the process of receiving and forwarding a TRILL frame at a physical RBridge which participates in the link aggregation in conjunction with a non-TRILL node which has a virtual RBridge ID, in accordance with an embodiment of the present invention.


During operation, a physical RBridge receives a TRILL frame (operation 402). The RBridge then determines whether the frame's egress RBridge nickname corresponds to the local RBridge or a virtual RBridge associated with the local RBridge (operation 403). If the frame's egress RBridge nickname matches neither the local RBridge nor a virtual RBridge identifier associated with a non-TRILL switch coupled to the local RBridge (i.e., the frame is not destined to the local RBridge), the RBridge transmits the frame to the next-hop RBridge based on the frame's egress RBridge nickname (operation 405).


On the other hand, if the condition in operation 403 is met, the RBridge then performs a lookup in its MAC-address table to identify an output port corresponding to the frame's destination MAC address in its inner Ethernet header (operation 404). In the case where the frame's destination MAC address corresponds to an end station coupled to the local RBridge, the frame is processed locally. If the frame's destination MAC address corresponds to an output port leading to the non-TRILL switch coupled to the local RBridge, the local RBridge removes the outer Ethernet header and TRILL header from the frame, and forwards the frame to the identified output port leading to the non-TRILL switch. Note that the MAC reachability information is shared between the RBridges participating in the link aggregation. Hence, even if the local RBridge has not received an ingress frame with the same source MAC address (i.e., the RBridge has not learned the MAC address locally), the RBridge can still determine that the destination MAC address is reachable via a local link based on the MAC reachability information shared from the other RBridge(s). Subsequently, the RBridge transmits the frame to the output port corresponding to the frame's destination MAC address in its inner Ethernet header (operation 408).
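

This receive-and-forward decision can be sketched in Python as follows (structures and names invented for illustration). Note that the MAC-address table already includes entries shared by the link-aggregation partner RBridges, which is what allows operation 404 to succeed for MAC addresses that were learned remotely.

    from dataclasses import dataclass

    @dataclass
    class ReceivedTrillFrame:
        egress_nickname: int
        inner_dst_mac: str

    def handle_trill_frame(frame: ReceivedTrillFrame, local_nicknames: set[int],
                           mac_table: dict[str, str],
                           next_hop_for: dict[int, str]) -> str:
        if frame.egress_nickname not in local_nicknames:
            # Operation 405: not destined here; forward toward the egress RBridge.
            return f"forward to {next_hop_for[frame.egress_nickname]}"
        # Operation 404: look up the output port by inner destination MAC.
        port = mac_table[frame.inner_dst_mac]
        # Operation 408: strip the outer Ethernet and TRILL headers, then deliver.
        return f"decapsulate and send on port {port}"

    # local_nicknames holds the RBridge's own nickname plus the virtual nickname:
    frame = ReceivedTrillFrame(egress_nickname=1000, inner_dst_mac="00:11:22:33:44:55")
    print(handle_trill_frame(frame, {165, 1000},
                             {"00:11:22:33:44:55": "lag-to-switch-168"},
                             {105: "rb163"}))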


Failure Handling


One advantage of assigning a virtual RBridge identifier to a non-TRILL switch is to facilitate link aggregation across multiple physical RBridges, which in turn provides protection against both link and node failures. FIG. 5 illustrates a scenario where one of the physical links of a link aggregation coupled to a non-TRILL node experiences a failure, in accordance with an embodiment of the present invention. In this example, a non-TRILL switch 568 is coupled to three physical RBridges 562, 564, and 566 via a link aggregation 570. Non-TRILL switch 568 is assigned a virtual RBridge ID. Suppose the link between switch 568 and RBridge 566 fails. As a result, RBridge 566 will notify its neighbor RBridges about the non-reachability of switch 568. Meanwhile, LAG 570 remains effective with RBridges 562 and 564. Switch 568's virtual RBridge nickname can still be used as the ingress RBridge nickname in the TRILL headers of its ingress frames.


RBridge 566 may still receive some frames destined to switch 568 before the TRILL network topology converges. Since RBridges 562 and 564 can both be used to reach switch 568, RBridge 566 can forward these frames to RBridge 562 or 564. Thus, minimal service interruption occurs during link failure. Similarly, in the case of node failure (e.g., when RBridge 566 fails), switch 568 can continue operation with LAG 570.



FIG. 6 presents a flowchart illustrating the process of handling a link failure that affects an end station which is assigned a virtual RBridge ID, in accordance with an embodiment of the present invention. During operation, a physical RBridge detects a failure of a physical link to an end station associated with the virtual RBridge (operation 602). The physical RBridge then updates its TRILL forwarding information base to reflect this topology change (operation 604). Subsequently, the RBridge sends link state PDUs to its neighbor RBridges to update the link state (operation 606). Note that the non-TRILL end station corresponding to the virtual RBridge identifier does not need to be re-configured. It only needs to re-distribute the outgoing frames to the remaining links within the LAG coupled to the other physical RBridges.
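

A minimal Python sketch of this failure handling follows. The class and method names are invented, and a real implementation would send actual link state PDUs (and possibly the explicit non-reachability message mentioned in the Summary) rather than return a dictionary.

    class FailureAwareRBridge:
        def __init__(self, nickname: int, virtual_nickname: int):
            self.nickname = nickname
            self.advertised = {virtual_nickname}

        def on_lag_link_down(self, virtual_nickname: int) -> dict:
            # Operation 604: update the local forwarding information base.
            self.advertised.discard(virtual_nickname)
            # Operation 606: advertise the change in link state PDUs.
            return {"source": self.nickname, "reachable": sorted(self.advertised)}

    rb566 = FailureAwareRBridge(nickname=566, virtual_nickname=1000)
    print(rb566.on_lag_link_down(1000))  # {'source': 566, 'reachable': []}
    # Switch 568 is unaffected: it simply rehashes traffic onto the LAG links
    # that remain toward RBridges 562 and 564.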


Multi-Destination Scenario


In conventional TRILL networks, a multi-destination frame is delivered using one or more distribution trees. Each tree has a root RBridge. Typically, an ingress RBridge marks a multi-destination TRILL frame's egress RBridge nickname field with the nickname of the root RBridge for the distribution tree, and sets the M bit of the frame to “1.” As a result, the frame is distributed through the TRILL network based on the tree topology associated with the root RBridge.


It is possible that a multi-destination frame is generated by a non-TRILL switch which is coupled to a number of physical RBridges and is assigned a virtual RBridge identifier. Since the non-TRILL switch is regarded as an RBridge by the rest of the TRILL network, its virtual RBridge identifier will be included in the distribution trees. Take the network illustrated in FIG. 1B for example. A distribution tree can have RBridge 162 as its root. The tree can include the following links: (162, 164), (162, 161), (162, 163), (163, 165), (165, 168). In this case, since non-TRILL switch 168 does not follow the TRILL multi-destination distribution protocol, when it forwards a multi-destination frame (which may be generated by end station 172 or 174), its default behavior is to send the frame onto one of the links within LAG 170. The link used by non-TRILL switch 168 may or may not be part of the desired distribution tree. This default behavior, if not mitigated, could cause unwanted loops in the network. For instance, suppose non-TRILL switch 168 sends a multi-destination frame onto the link toward RBridge 164. Since RBridge 165 is adjacent to RBridge 164 in the distribution tree, RBridge 164 would forward this frame to RBridge 165. Upon receiving this frame, RBridge 165 would by default forward it to non-TRILL switch 168, which is part of the distribution tree. As a result, a loop occurs.


One way to solve this looping problem is to provide proper encapsulation at the ingress physical RBridge (such as RBridges 162, 164, and 165). In one embodiment, at the ingress physical RBridge, the multi-destination frame is encapsulated with a multi-destination TRILL header whose M bit is set to “1.” In this TRILL header, the ingress RBridge nickname field contains the virtual RBridge nickname assigned to non-TRILL switch 168. The egress RBridge nickname field contains the nickname of RBridge 162, which is the root of the distribution tree. The ingress physical RBridge can then forward this frame on behalf of non-TRILL switch 168. In addition, when a physical RBridge receives a multi-destination frame with an ingress RBridge nickname matching the virtual RBridge nickname assigned to a non-TRILL switch connected to the physical RBridge, the physical RBridge is precluded from forwarding this frame to the non-TRILL network. For example, in the scenario described above, RBridge 165 would be precluded from forwarding the multi-destination frame to non-TRILL switch 168, because the frame's ingress RBridge nickname is the same as the virtual RBridge nickname assigned to non-TRILL switch 168. This constraint ensures that no loop can occur from these multi-destination frames.
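

The loop-avoidance rule can be captured by a single check, sketched below in Python with invented names: a multi-destination frame whose ingress nickname matches the virtual RBridge nickname of an attached non-TRILL switch must not be forwarded to that switch.

    def may_forward_to_non_trill(frame_m_bit: bool, frame_ingress_nickname: int,
                                 attached_virtual_nicknames: set[int]) -> bool:
        """Return False when forwarding would loop the frame back to its origin."""
        if frame_m_bit and frame_ingress_nickname in attached_virtual_nicknames:
            # The frame originated behind this very switch; dropping avoids a loop.
            return False
        return True

    # RBridge 165 fronts the virtual RBridge (nickname 1000) for switch 168:
    print(may_forward_to_non_trill(True, 1000, {1000}))  # False: precluded
    print(may_forward_to_non_trill(True, 999, {1000}))   # True: safe to deliver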


Exemplary Switch System



FIG. 7 illustrates an exemplary architecture of a switch that facilitates assignment of a virtual RBridge ID, in accordance with an embodiment of the present invention. In this example, an RBridge 700 includes a number of communication ports 701, a packet processor 702, a virtual RBridge management module 704, a virtual RBridge configuration module 705, a storage device 706, and a TRILL header generation module 708. During operation, communication ports 701 receive frames from (and transmit frames to) the end stations. Packet processor 702 extracts and processes the header information from the received frames. Note that communication ports 701 include at least one inter-switch port for communication with one or more RBridges participating in a link aggregation. Virtual RBridge management module 704 manages the communication with the partner RBridges and handles various inter-switch communication, such as MAC address information sharing and link failure notification.


Virtual RBridge configuration module 705 allows a user to configure and assign the identifier for the virtual RBridges. It is also responsible for communicating with the partner RBridge(s) to share each other's MAC address reachability information, which is stored in storage device 706. Furthermore, TRILL header generation module 708 generates the TRILL header for ingress frames corresponding to the virtual RBridge.
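

As a structural illustration of FIG. 7, the following Python skeleton mirrors this decomposition into modules; the method bodies are placeholders, and the class and method names are invented for the sketch.

    class RBridge700:
        def __init__(self):
            self.communication_ports = []    # must include >= 1 inter-switch port
            self.mac_reachability_store = {} # storage device 706

        def process_packet(self, frame):     # packet processor 702
            raise NotImplementedError

        def share_mac_info(self, partner):   # virtual RBridge management module 704
            raise NotImplementedError

        def assign_virtual_id(self, nickname):  # configuration module 705
            raise NotImplementedError

        def build_trill_header(self, frame):    # TRILL header generation module 708
            raise NotImplementedError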


In summary, embodiments of the present invention provide a method and system for facilitating link aggregation, or diverse connections to different networks, across different switches in a routed network. In one embodiment, a virtual RBridge is formed to accommodate an aggregate link from an end station to multiple physical RBridges. The virtual RBridge is used as the ingress RBridge for ingress frames from the end station. Such a configuration provides a scalable and flexible solution for link aggregation across multiple switches.


The methods and processes described herein can be embodied as code and/or data, which can be stored in a computer-readable non-transitory storage medium. When a computer system reads and executes the code and/or data stored on the computer-readable non-transitory storage medium, the computer system performs the methods and processes embodied as data structures and code and stored within the medium.


The methods and processes described herein can be executed by and/or included in hardware modules or apparatus. These modules or apparatus may include, but are not limited to, an application-specific integrated circuit (ASIC) chip, a field-programmable gate array (FPGA), a dedicated or shared processor that executes a particular software module or a piece of code at a particular time, and/or other programmable-logic devices now known or later developed. When the hardware modules or apparatus are activated, they perform the methods and processes included within them.


The foregoing descriptions of embodiments of the present invention have been presented only for purposes of illustration and description. They are not intended to be exhaustive or to limit this disclosure. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. The scope of the present invention is defined by the appended claims.

Claims
  • 1. A switch, comprising: a processor; a storage device coupled to the processor and storing instructions which when executed by the processor cause the processor to perform a method, the method comprising: generating a virtual routing node identifier for a non-routing node, wherein the virtual routing node identifier indicates a multi-switch link aggregation; constructing a frame comprising reachability information corresponding to the virtual routing node identifier; marking an ingress-node field of a frame received from the non-routing node with the virtual routing node identifier; and precluding a received multi-destination frame from being forwarded to the non-routing node when an ingress-node field of the multi-destination frame matches the virtual routing node identifier, thereby avoiding duplication of the multi-destination frame.
  • 2. The switch of claim 1, wherein the switch is configured to route layer-2 data frames without requiring the network topology to be a spanning tree topology.
  • 3. The switch of claim 1, wherein the virtual routing node identifier is a routing bridge identifier in accordance with a Transparent Interconnection of Lots of Links (TRILL) protocol.
  • 4. The switch of claim 1, wherein the method further comprises communicating to a second switch which is part of the multi-switch link aggregation a MAC address learned from a port to which the non-routing node is coupled.
  • 5. The switch of claim 1, wherein in response to detecting a failure of a link between the non-routing node and the switch, the method further comprises discontinuing communicating reachability information to another switch of the multi-switch link aggregation.
  • 6. A non-transitory storage device storing instructions which when executed by a processor cause the processor to perform a method, comprising: generating, at a switch, a virtual routing node identifier for a non-routing node, wherein the virtual routing node identifier indicates a multi-switch link aggregation; constructing a frame comprising reachability information corresponding to the virtual routing node identifier; marking an ingress-node field of a frame received from the non-routing node with the virtual routing node identifier; and precluding a received multi-destination frame from being forwarded to the non-routing node when an ingress-node field of the multi-destination frame matches the virtual routing node identifier, thereby avoiding duplication of the multi-destination frame.
  • 7. The non-transitory storage device of claim 6, wherein the method further comprises routing layer-2 data frames without requiring the network topology to be a spanning tree topology.
  • 8. The non-transitory storage device of claim 6, wherein the virtual routing node identifier is a routing bridge identifier in accordance with a Transparent Interconnection of Lots of Links (TRILL) protocol.
  • 9. The non-transitory storage device of claim 6, wherein the method further comprises communicating to a second switch which is part of the multi-switch link aggregation a MAC address learned from a port to which the non-routing node is coupled.
  • 10. The non-transitory storage device of claim 6, wherein in response to detecting a failure of a link between the non-routing node and the switch, the method further comprises discontinuing communicating reachability information to another switch of the multi-switch link aggregation.
  • 11. A switch, comprising: a virtual-node-management module configured to: store a virtual routing node identifier for a non-routing node, wherein the virtual routing node identifier indicates a multi-switch link aggregation; construct a frame comprising reachability information corresponding to the virtual routing node identifier; and mark an ingress-node field of a frame received from the non-routing node with the virtual routing node identifier; and a packet processor coupled to the virtual-node-management module and precluded from forwarding a received multi-destination frame to the non-routing node when an ingress-node field of the multi-destination frame matches the virtual routing node identifier, thereby avoiding duplication of the multi-destination frame.
  • 12. The switch of claim 11, further comprising a routing mechanism coupled to the virtual-node-management module and configured to route layer-2 data frames without requiring the network topology to be a spanning tree topology.
  • 13. The switch of claim 11, wherein the virtual routing node identifier is a routing bridge identifier in accordance with a Transparent Interconnection of Lots of Links (TRILL) protocol.
  • 14. The switch of claim 11, wherein the virtual-node-management module is further configured to communicate to a second switch which is part of the multi-switch link aggregation a MAC address learned from a port to which the non-routing node is coupled.
  • 15. The switch of claim 11, wherein the virtual-node-management module is configured to discontinue communicating reachability information to another switch of the multi-switch link aggregation in response to detecting a failure of a link between the non-routing node and the switch.
RELATED APPLICATION

This application is a continuation application of application Ser. No. 12/730,749, entitled “METHOD AND SYSTEM FOR EXTENDING ROUTING DOMAIN TO NON-ROUTING END STATIONS,” by inventors Pankaj K. Jha and Mitri Halabi, filed 24 Mar. 2010, the disclosure of which is incorporated by reference herein.

US Referenced Citations (212)
Number Name Date Kind
5390173 Spinney Feb 1995 A
5802278 Isfeld Sep 1998 A
5983278 Chong Nov 1999 A
6041042 Bussiere Mar 2000 A
6085238 Yuasa Jul 2000 A
6104696 Kadambi Aug 2000 A
6185241 Sun Feb 2001 B1
6438106 Pillar Aug 2002 B1
6542266 Phillips Apr 2003 B1
6633761 Singhal Oct 2003 B1
6873602 Ambe Mar 2005 B1
6975581 Medina Dec 2005 B1
6975864 Singhal Dec 2005 B2
7016352 Chow Mar 2006 B1
7173934 Lapuh Feb 2007 B2
7197308 Singhal Mar 2007 B2
7206288 Cometto Apr 2007 B2
7310664 Merchant Dec 2007 B1
7313637 Tanaka Dec 2007 B2
7330897 Baldwin Feb 2008 B2
7380025 Riggins May 2008 B1
7430164 Bare Sep 2008 B2
7453888 Zabihi Nov 2008 B2
7477894 Sinha Jan 2009 B1
7480258 Shuen Jan 2009 B1
7508757 Ge Mar 2009 B2
7558195 Kuo Jul 2009 B1
7558273 Grosser, Jr. Jul 2009 B1
7571447 Ally Aug 2009 B2
7599901 Mital Oct 2009 B2
7688960 Aubuchon Mar 2010 B1
7690040 Frattura Mar 2010 B2
7716370 Devarapalli May 2010 B1
7729296 Choudhary Jun 2010 B1
7787480 Mehta Aug 2010 B1
7792920 Istvan Sep 2010 B2
7796593 Ghosh Sep 2010 B1
7808992 Homchaudhuri Oct 2010 B2
7836332 Hara Nov 2010 B2
7843907 Abou-Emara Nov 2010 B1
7860097 Lovett Dec 2010 B1
7898959 Arad Mar 2011 B1
7924837 Shabtay Apr 2011 B1
7937756 Kay May 2011 B2
7949638 Goodson May 2011 B1
7957386 Aggarwal Jun 2011 B1
8027354 Portolani Sep 2011 B1
8054832 Shukla Nov 2011 B1
8078704 Lee Dec 2011 B2
8102781 Smith Jan 2012 B2
8116307 Thesayi Feb 2012 B1
8125928 Mehta Feb 2012 B2
8134922 Elangovan Mar 2012 B2
8155150 Chung Apr 2012 B1
8160063 Maltz Apr 2012 B2
8160080 Arad Apr 2012 B1
8170038 Belanger May 2012 B2
8194674 Pagel Jun 2012 B1
8195774 Lambeth Jun 2012 B2
8213313 Doiron Jul 2012 B1
8213336 Smith Jul 2012 B2
8230069 Korupolu Jul 2012 B2
8239960 Frattura Aug 2012 B2
8249069 Raman Aug 2012 B2
8270401 Barnes Sep 2012 B1
8295291 Ramanathan Oct 2012 B1
8301686 Appajodu Oct 2012 B1
8351352 Eastlake, III Jan 2013 B1
8369347 Xiong Feb 2013 B2
8392496 Linden Mar 2013 B2
8462774 Page Jun 2013 B2
8520595 Yadav Aug 2013 B2
8599850 Jha Dec 2013 B2
20020021701 Lavian Feb 2002 A1
20020091795 Yip Jul 2002 A1
20030041085 Sato Feb 2003 A1
20030123393 Feuerstraeter Jul 2003 A1
20030174706 Shankar Sep 2003 A1
20030189905 Lee Oct 2003 A1
20040001433 Gram Jan 2004 A1
20040010600 Baldwin Jan 2004 A1
20040117508 Shimizu Jun 2004 A1
20040120326 Yoon Jun 2004 A1
20040165595 Holmgren Aug 2004 A1
20040213232 Regan Oct 2004 A1
20050007951 Lapuh Jan 2005 A1
20050044199 Shiga Feb 2005 A1
20050094568 Judd May 2005 A1
20050094630 Valdevit May 2005 A1
20050169188 Cometto Aug 2005 A1
20050213561 Yao Sep 2005 A1
20050265356 Kawarai Dec 2005 A1
20050278565 Frattura Dec 2005 A1
20060018302 Ivaldi Jan 2006 A1
20060059163 Frattura Mar 2006 A1
20060062187 Rune Mar 2006 A1
20060072550 Davis Apr 2006 A1
20060083254 Ge Apr 2006 A1
20060184937 Abels Aug 2006 A1
20060242311 Mai Oct 2006 A1
20060251067 DeSanti Nov 2006 A1
20060256767 Suzuki Nov 2006 A1
20060265515 Shiga Nov 2006 A1
20060285499 Tzeng Dec 2006 A1
20070036178 Hares Feb 2007 A1
20070097968 Du May 2007 A1
20070116224 Burke May 2007 A1
20070177597 Ju Aug 2007 A1
20070274234 Kubota Nov 2007 A1
20070289017 Copeland, III Dec 2007 A1
20080052487 Akahane Feb 2008 A1
20080065760 Damm Mar 2008 A1
20080101386 Gray May 2008 A1
20080133760 Berkvens et al. Jun 2008 A1
20080159277 Vobbilisetty Jul 2008 A1
20080172492 Raghunath Jul 2008 A1
20080181196 Regan Jul 2008 A1
20080205377 Chao Aug 2008 A1
20080219172 Mohan Sep 2008 A1
20080240129 Elmeleegy Oct 2008 A1
20080267179 LaVigne Oct 2008 A1
20080285555 Ogasahara Nov 2008 A1
20090037607 Farinacci Feb 2009 A1
20090044270 Shelly Feb 2009 A1
20090067422 Poppe Mar 2009 A1
20090079560 Fries Mar 2009 A1
20090080345 Gray Mar 2009 A1
20090083445 Ganga Mar 2009 A1
20090092042 Yuhara Apr 2009 A1
20090106405 Mazarick Apr 2009 A1
20090116381 Kanda May 2009 A1
20090138752 Graham May 2009 A1
20090199177 Edwards Aug 2009 A1
20090204965 Tanaka Aug 2009 A1
20090222879 Kostal Sep 2009 A1
20090245137 Hares Oct 2009 A1
20090245242 Carlson Oct 2009 A1
20090260083 Szeto Oct 2009 A1
20090323708 Ihle et al. Dec 2009 A1
20090327392 Tripathi Dec 2009 A1
20090327462 Adams Dec 2009 A1
20100054260 Pandey Mar 2010 A1
20100061269 Banerjee Mar 2010 A1
20100074175 Banks Mar 2010 A1
20100097941 Carlson Apr 2010 A1
20100103813 Allan et al. Apr 2010 A1
20100103939 Carlson Apr 2010 A1
20100131636 Suri May 2010 A1
20100165877 Shukla Jul 2010 A1
20100165995 Mehta Jul 2010 A1
20100169467 Shukla Jul 2010 A1
20100169948 Budko Jul 2010 A1
20100226381 Mehta Sep 2010 A1
20100246388 Gupta Sep 2010 A1
20100257263 Casado Oct 2010 A1
20100271960 Krygowski Oct 2010 A1
20100281106 Ashwood-Smith Nov 2010 A1
20100284418 Gray Nov 2010 A1
20100287262 Elzur Nov 2010 A1
20100287548 Zhou Nov 2010 A1
20100290473 Enduri Nov 2010 A1
20100303071 Kotalwar Dec 2010 A1
20100303075 Tripathi Dec 2010 A1
20100303083 Belanger Dec 2010 A1
20100309820 Rajagopalan Dec 2010 A1
20110019678 Mehta et al. Jan 2011 A1
20110035498 Shah Feb 2011 A1
20110044339 Kotalwar Feb 2011 A1
20110064086 Xiong Mar 2011 A1
20110072208 Gulati Mar 2011 A1
20110085560 Chawla Apr 2011 A1
20110085563 Kotha Apr 2011 A1
20110134802 Rajagopalan Jun 2011 A1
20110134925 Safrai Jun 2011 A1
20110142053 Van Der Merwe Jun 2011 A1
20110142062 Wang Jun 2011 A1
20110161695 Okita Jun 2011 A1
20110194403 Sajassi Aug 2011 A1
20110194563 Shen Aug 2011 A1
20110228780 Ashwood-Smith Sep 2011 A1
20110231574 Saunderson et al. Sep 2011 A1
20110235523 Jha Sep 2011 A1
20110243133 Villait Oct 2011 A9
20110243136 Raman Oct 2011 A1
20110246669 Kanada Oct 2011 A1
20110255538 Srinivasan Oct 2011 A1
20110255540 Mizrahi Oct 2011 A1
20110261828 Smith Oct 2011 A1
20110268120 Vobbilisetty Nov 2011 A1
20110274114 Dhar Nov 2011 A1
20110286457 Ee Nov 2011 A1
20110296052 Guo Dec 2011 A1
20110299391 Vobbilisetty Dec 2011 A1
20110299532 Yu Dec 2011 A1
20120011240 Hara Jan 2012 A1
20120014261 Salam Jan 2012 A1
20120014387 Dunbar Jan 2012 A1
20120027017 Rai Feb 2012 A1
20120033663 Guichard Feb 2012 A1
20120033665 Da Silva Feb 2012 A1
20120099602 Nagapudi Apr 2012 A1
20120106339 Mishra et al. May 2012 A1
20120131097 Baykal May 2012 A1
20120131289 Taguchi et al. May 2012 A1
20120177039 Berman Jul 2012 A1
20120243539 Keesara Sep 2012 A1
20120294192 Masood Nov 2012 A1
20120320800 Kamble Dec 2012 A1
20130034015 Jaiswal Feb 2013 A1
20130067466 Combs Mar 2013 A1
20130259037 Hari Oct 2013 A1
20140105034 Sun Apr 2014 A1
Foreign Referenced Citations (4)
Number Date Country
102801599 Nov 2012 CN
1916807 Apr 2008 EP
2001167 Dec 2008 EP
2010111142 Sep 2010 WO
Non-Patent Literature Citations (85)
Entry
“Switched Virtual Internetworking moved beyond bridges and routers”, 8178 Data Communications Sep. 23, 1994, No. 12, New York.
S. Knight et al., “Virtual Router Redundancy Protocol”, Network Working Group, XP-002135272, Apr. 1998.
Eastlake 3rd., Donald et al., “RBridges: TRILL Header Options”, Draft-ietf-trill-rbridge-options-00.txt, Dec. 24, 2009.
J. Touch, et al., “Transparent Interconnection of Lots of Links (TRILL): Problem and Applicability Statement”, May 2009.
Perlman, Radia et al., “RBridge VLAN Mapping”, Draft-ietf-trill-rbridge-vlan-mapping-01.txt, Dec. 4, 2009.
Brocade Fabric OS (FOS) 6.2 Virtual Fabrics Feature Frequently Asked Questions.
Perlman, Radia “Challenges and Opportunities in the Design of TRILL: a Routed layer 2 Technology”, XP-002649647, 2009.
Nadas, S. et al., “Virtual Router Redundancy Protocol (VRRP) Version 3 for IPv4 and IPv6”, Mar. 2010.
Perlman, Radia et al., “RBridges: Base Protocol Specification”, draft-ietf-trill-rbridge-protocol-16.txt, Mar. 3, 2010.
Christensen, M. et al., “Considerations for Internet Group Management Protocol (IGMP) and Multicast Listener Discovery (MLD) Snooping Switches”, May 2006.
Lapuh, Roger et al., “Split Multi-link Trunking (SMLT)”, Oct. 2002.
Lapuh, Roger et al., “Split Multi-link Trunking (SMLT) draft-lapuh-network-smlt-08”, 2008.
Zhai F. Hu et al. “RBridge: Pseudo-Nickname; draft-hu-trill-pseudonode-nickname-02.txt”, May 15, 2012.
Huang, Nen-Fu et al., “An Effective Spanning Tree Algorithm for a Bridged LAN”, Mar. 16, 1992.
Office Action dated Jun. 6, 2014, U.S. Appl. No. 13/669,357, filed Nov. 5, 2012.
Office Action dated Feb. 20, 2014, U.S. Appl. No. 13/598,204, filed Aug. 29, 2012.
Office Action dated May 14, 2014, U.S. Appl. No. 13/533,843, filed Jun. 26, 2012.
Office Action dated May 9, 2014, U.S. Appl. No. 13/484,072, filed May 30, 2012.
Office Action dated Feb. 28, 2014, U.S. Appl. No. 13/351,513, filed Jan. 17, 2012.
Office Action dated Jun. 18, 2014, U.S. Appl. No. 13/440,861, filed Apr. 5, 2012.
Office Action dated Mar. 6, 2014, U.S. Appl. No. 13/425,238, filed Mar. 20, 2012.
Office Action dated Apr. 22, 2014, U.S. Appl. No. 13/030,806, filed Feb. 18, 2011.
Office Action dated Jun. 20, 2014, U.S. Appl. No. 13/092,877, filed Apr. 22, 2011.
Office Action dated Apr. 9, 2014, U.S. Appl. No. 13/092,752, filed Apr. 22, 2011.
Office Action dated Mar. 26, 2014, U.S. Appl. No. 13/092,701, filed Apr. 22, 2011.
Office Action dated Mar. 14, 2014, U.S. Appl. No. 13/092,460, filed Apr. 22, 2011.
Office Action dated Apr. 9, 2014, U.S. Appl. No. 13/092,724, filed Apr. 22, 2011.
Office Action for U.S. Appl. No. 13/533,843, filed Jun. 26, 2012, dated Oct. 21, 2013.
Office Action for U.S. Appl. No. 13/044,326, filed Mar. 9, 2011, dated Oct. 2, 2013.
Office Action for U.S. Appl. No. 13/312,903, filed Dec. 6, 2011, dated Nov. 12, 2013.
Office Action for U.S. Appl. No. 13/092,873, filed Apr. 22, 2011, dated Nov. 29, 2013.
Office Action for U.S. Appl. No. 13/184,526, filed Jul. 16, 2011, dated Dec. 2, 2013.
Office Action for U.S. Appl. No. 13/042,259, filed Mar. 7, 2011, dated Jan. 16, 2014.
Office Action for U.S. Appl. No. 13/092,580, filed Apr. 22, 2011, dated Jan. 10, 2014.
Office Action for U.S. Appl. No. 13/092,877, filed Apr. 22, 2011, dated Jan. 6, 2014.
U.S. Appl. No. 12/312,903 Office Action dated Jun. 13, 2013.
U.S. Appl. No. 13/365,808 Office Action dated Jul. 18, 2013.
U.S. Appl. No. 13/365,993 Office Action dated Jul. 23, 2013.
U.S. Appl. No. 13/092,873 Office Action dated Jun. 19, 2013.
U.S. Appl. No. 13/184,526 Office Action dated May 22, 2013.
U.S. Appl. No. 13/184,526 Office Action dated Jan. 28, 2013.
U.S. Appl. No. 13/050,102 Office Action dated May 16, 2013.
U.S. Appl. No. 13/050,102 Office Action dated Oct. 26, 2012.
U.S. Appl. No. 13/044,301 Office Action dated Feb. 22, 2013.
U.S. Appl. No. 13/044,301 Office Action dated Jun. 11, 2013.
U.S. Appl. No. 13/030,688 Office Action dated Apr. 25, 2013.
U.S. Appl. No. 13/030,806 Office Action dated Dec. 3, 2012.
U.S. Appl. No. 13/030,806 Office Action dated Jun. 11, 2013.
U.S. Appl. No. 13/098,360 Office Action dated May 31, 2013.
U.S. Appl. No. 13/092,864 Office Action dated Sep. 19, 2012.
U.S. Appl. No. 12/950,968 Office Action dated Jun. 7, 2012.
U.S. Appl. No. 12/950,968 Office Action dated Jan. 4, 2013.
U.S. Appl. No. 13/092,877 Office Action dated Mar. 4, 2013.
U.S. Appl. No. 12/950,974 Office Action dated Dec. 20, 2012.
U.S. Appl. No. 12/950,974 Office Action dated May 24, 2012.
U.S. Appl. No. 13/092,752 Office Action dated Feb. 5, 2013.
U.S. Appl. No. 13/092,752 Office Action dated Jul. 18, 2013.
U.S. Appl. No. 13/092,701 Office Action dated Jan. 28, 2013.
U.S. Appl. No. 13/092,701 Office Action dated Jul. 3, 2013.
U.S. Appl. No. 13/092,460 Office Action dated Jun. 21, 2013.
U.S. Appl. No. 13/042,259 Office Action dated Mar. 18, 2013.
U.S. Appl. No. 13/042,259 Office Action dated Jul. 31, 2013.
U.S. Appl. No. 13/092,580 Office Action dated Jun. 10, 2013.
U.S. Appl. No. 13/092,724 Office Action dated Jul. 16, 2013.
U.S. Appl. No. 13/092,724 Office Action dated Feb. 5, 2013.
U.S. Appl. No. 13/098,490 Office Action dated Dec. 21, 2012.
U.S. Appl. No. 13/098,490 Office Action dated Jul. 9, 2013.
U.S. Appl. No. 13/087,239 Office Action dated May 22, 2013.
U.S. Appl. No. 13/087,239 Office Action dated Dec. 5, 2012.
U.S. Appl. No. 12/725,249 Office Action dated Apr. 26, 2013.
U.S. Appl. No. 12/725,249 Office Action dated Sep. 12, 2012.
Office Action for U.S. Appl. No. 13/092,887, dated Jan. 6, 2014.
Brocade Unveils “The Effortless Network”, http://newsroom.brocade.com/press-releases/brocade-unveils-the-effortless-network—nasdaq-brcd-0859535, 2012.
Foundry FastIron Configuration Guide, Software Release FSX 04.2.00b, Software Release FWS 04.3.00, Software Release FGS 05.0.00a, Sep. 26, 2008.
FastIron and TurboIron 24X Configuration Guide Supporting FSX 05.1.00 for FESX, FWSX, and FSX; FGS 04.3.03 for FGS, FLS and FWS; FGS 05.0.02 for FGS-STK and FLS-STK, FCX 06.0.00 for FCX; and TIX 04.1.00 for TI24X, Feb. 16, 2010.
FastIron Configuration Guide Supporting Ironware Software Release 07.0.00, Dec. 18, 2009.
“The Effortless Network: HyperEdge Technology for the Campus LAN”, 2012.
Narten, T. et al. “Problem Statement: Overlays for Network Virtualization”, draft-narten-nvo3-overlay-problem-statement-01, Oct. 31, 2011.
Knight, Paul et al., “Layer 2 and 3 Virtual Private Networks: Taxonomy, Technology, and Standardization Efforts”, IEEE Communications Magazine, Jun. 2004.
“An Introduction to Brocade VCS Fabric Technology”, Brocade white paper, http://community.brocade.com/docs/Doc-2954, Dec. 3, 2012.
Kreeger, L. et al., “Network Virtualization Overlay Control Protocol Requirements”, Draft-kreeger-nvo3-overlaycp-00, Jan. 30, 2012.
Knight, Paul et al., “Network based IP VPN Architecture using Virtual Routers”, May 2003.
Louati, Wajdi et al., “Network-based virtual personal overlay networks using programmable virtual routers”, IEEE Communications Magazine, Jul. 2005.
U.S. Appl. No. 13/092,877 Office Action dated Sep. 5, 2013.
U.S. Appl. No. 13/044,326 Office Action dated Oct. 2, 2013.
Related Publications (1)
Number Date Country
20130148491 A1 Jun 2013 US
Continuations (1)
Number Date Country
Parent 12730749 Mar 2010 US
Child 13758902 US