Transparent interconnection of Ethernet fabric switches

Information

  • Patent Grant
  • Patent Number
    9,806,949
  • Date Filed
    Friday, August 29, 2014
  • Date Issued
    Tuesday, October 31, 2017
Abstract
One embodiment of the present invention provides a switch. The switch includes a fabric switch module and a border module. The fabric switch module maintains a membership in a first fabric switch. The first fabric switch includes a plurality of switches and operates as a single logical switch. The border module determines that the egress switch identifier in a first encapsulation header of a first packet is associated with a switch outside of the first fabric switch. The first packet is forwardable in the first fabric switch based on the first encapsulation header. In response to the determination, the border module changes the ingress switch identifier in the first encapsulation header of the first packet to a first virtual switch identifier associated with a first virtual switch. This first virtual switch externally represents the first fabric switch.
Description
BACKGROUND

Field


The present disclosure relates to network design. More specifically, the present disclosure relates to a method for constructing a scalable switching system.


Related Art


The exponential growth of the Internet has made it a popular delivery medium for a variety of applications running on physical and virtual devices. Such applications have brought with them an increasing demand for bandwidth. As a result, equipment vendors race to build larger and faster switches with versatile capabilities. However, the size of a switch cannot grow infinitely. It is limited by physical space, power consumption, and design complexity, to name a few factors. Furthermore, switches with higher capability are usually more complex and expensive. More importantly, because an overly large and complex system often does not provide economy of scale, simply increasing the size and capability of a switch may prove economically unviable due to the increased per-port cost.


A flexible way to improve the scalability of a switch system is to build a fabric switch. A fabric switch is a collection of individual member switches. These member switches form a single, logical switch that can have an arbitrary number of ports and an arbitrary topology. As demands grow, customers can adopt a “pay as you grow” approach to scale up the capacity of the fabric switch.


Meanwhile, layer-2 (e.g., Ethernet) switching technologies continue to evolve. More routing-like functionalities, which have traditionally been the characteristics of layer-3 (e.g., Internet Protocol or IP) networks, are migrating into layer-2. Notably, the recent development of the Transparent Interconnection of Lots of Links (TRILL) protocol allows Ethernet switches to function more like routing devices. TRILL overcomes the inherent inefficiency of the conventional spanning tree protocol, which forces layer-2 switches to be coupled in a logical spanning-tree topology to avoid looping. TRILL allows routing bridges (RBridges) to be coupled in an arbitrary topology without the risk of looping by implementing routing functions in switches and including a hop count in the TRILL header.


While a fabric switch brings many desirable features to a network, some issues remain unsolved in efficiently interconnecting a plurality of fabric switches.


SUMMARY

One embodiment of the present invention provides a switch. The switch includes a fabric switch module and a border module. The fabric switch module maintains a membership in a first fabric switch. The first fabric switch includes a plurality of switches and operates as a single logical switch. The border module determines that the egress switch identifier in a first encapsulation header of a first packet is associated with a switch outside of the first fabric switch. The first packet is forwardable in the first fabric switch based on the first encapsulation header. In response to the determination, the border module changes the ingress switch identifier in the first encapsulation header of the first packet to a first virtual switch identifier associated with a first virtual switch. This first virtual switch externally represents the first fabric switch.


In a variation on this embodiment, the egress switch identifier in the first encapsulation header is a second virtual switch identifier associated with a second virtual switch, which externally represents a second fabric switch.


In a further variation, routing information of the first fabric switch indicates that the second virtual switch is reachable via the switch.


In a further variation, the border module determines that the egress switch identifier in a second encapsulation header of a second packet is the first virtual switch identifier. In response to the determination, the border module changes the egress switch identifier in the second encapsulation header of the second packet to a switch identifier which identifies a member switch in the first fabric switch.


In a further variation, the ingress switch identifier in the second encapsulation header of the second packet is the second virtual switch identifier.


In a further variation, the switch also includes a forwarding module which determines that the egress switch identifier in a third encapsulation header of a third packet is a switch identifier of the switch. The ingress switch identifier in the third encapsulation header of the third packet is the second virtual switch identifier. The switch also includes a learning module which learns a media access control (MAC) address from an inner packet of the third packet and stores the learned MAC address in association with the second virtual switch identifier in a storage device.


In a further variation on this embodiment, the switch also includes a forwarding module which determines an external switch as a next-hop switch for the first packet based on the first encapsulation header. This external switch is not a member switch of the first fabric switch.


In a further variation on this embodiment, the first encapsulation header is one or more of: (i) a Transparent Interconnection of Lots of Links (TRILL) header, wherein the ingress and egress switch identifiers of the first encapsulation header are TRILL routing bridge (RBridge) identifiers; and (ii) an Internet Protocol (IP) header, wherein the ingress and egress switch identifiers of the first encapsulation header are IP addresses.





BRIEF DESCRIPTION OF THE FIGURES


FIG. 1 illustrates exemplary transparent interconnections of fabric switches, in accordance with an embodiment of the present invention.



FIG. 2A illustrates an exemplary forwarding of a packet with an unknown destination between transparently interconnected fabric switches, in accordance with an embodiment of the present invention.



FIG. 2B illustrates an exemplary forwarding of a packet with a known destination between transparently interconnected fabric switches, in accordance with an embodiment of the present invention.



FIG. 3A presents a flowchart illustrating the process of an edge switch forwarding a packet with an unknown destination received from a local end device, in accordance with an embodiment of the present invention.



FIG. 3B presents a flowchart illustrating the process of an egress border switch forwarding a packet with an unknown destination, in accordance with an embodiment of the present invention.



FIG. 3C presents a flowchart illustrating the process of an ingress border switch forwarding a packet with an unknown destination, in accordance with an embodiment of the present invention.



FIG. 4A presents a flowchart illustrating the process of an edge switch forwarding a packet with a known destination received from a local end device, in accordance with an embodiment of the present invention.



FIG. 4B presents a flowchart illustrating the process of an egress border switch forwarding a packet with a known destination, in accordance with an embodiment of the present invention.



FIG. 4C presents a flowchart illustrating the process of an ingress border switch forwarding a packet with a known destination, in accordance with an embodiment of the present invention.



FIG. 5 illustrates an exemplary switch with transparent fabric switch interconnection support, in accordance with an embodiment of the present invention.





In the figures, like reference numerals refer to the same figure elements.


DETAILED DESCRIPTION

The following description is presented to enable any person skilled in the art to make and use the invention, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims.


Overview


In embodiments of the present invention, the problem of efficiently coupling a plurality of fabric switches is solved by representing a fabric switch as a single switch to another fabric switch. For example, the single switch can be a virtual switch. Member switches of this other fabric switch view the virtual switch as another switch with fabric encapsulation support and forward a packet to the virtual switch based on the encapsulation header of the packet. In this way, the packet is transparently forwarded between two fabric switches.


With existing technologies, the member switches of a fabric switch are associated with the same fabric identifier of the fabric switch. When a new member switch joins the fabric switch, the fabric identifier becomes associated with the new member switch. Once a fabric switch is formed, its forwarding information (e.g., the learned media access control (MAC) addresses and corresponding virtual local area network (VLAN) tags) is shared among its member switches. However, when the number of member switches in the fabric switch increases, the performance of the fabric switch may deteriorate. For example, a respective member switch maintains configuration data and forwarding information of a respective other member switch. As the number of member switches becomes large, managing such information can require significant hardware and/or software resources, leading to deterioration of the performance of the fabric switch.


On the other hand, instead of a large fabric switch, the switches can form a plurality of interconnected fabric switches. As a result, a few member switches (which can be referred to as border switches) participate in a plurality of fabric switches, leading to additional hardware and management constraints on those border switches. For example, to ensure proper traffic isolation, a border switch needs to be aware of the VLANs of a respective fabric switch. Furthermore, when this border switch forwards traffic across different fabric switches, the border switch learns MAC addresses of end devices coupled to different fabric switches. Moreover, these border switches may need to participate in multiple instances of routing protocols in different fabric switches. As a result, interconnecting a plurality of fabric switches may not scale well.


To solve this problem, a fabric switch is represented as a virtual switch to its neighbor fabric switches. For example, a fabric switch externally appears as a single virtual switch, which supports the fabric encapsulation, to other fabric switches and/or compliant networks. This allows the other fabric switches and/or compliant networks to forward a packet to that fabric switch based on the encapsulation header of that packet without learning forwarding and configuration information of individual switches. The border member switches of the fabric switch, which adjoin the other fabric switches and/or compliant networks, can translate between the switch identifiers of virtual and physical switches, and perform the corresponding route lookup for the corresponding network. As a result, a large number of switches can form a large network, which is isolated into small, manageable fabric switches.
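

For purposes of illustration only, the following Python sketch models this translation at an egress border switch: the ingress switch identifier of an outbound fabric-encapsulated packet is replaced with the virtual switch identifier that externally represents the local fabric switch. The class, the identifiers (e.g., "RB122", "VS120"), and the function name are hypothetical and are not part of any specific implementation.

```python
from dataclasses import dataclass

@dataclass
class EncapHeader:
    """Minimal model of a fabric encapsulation header (e.g., TRILL or IP)."""
    ingress_switch_id: str
    egress_switch_id: str

# Hypothetical identifiers: RB122/RB124/RB126 are member switches of the local
# fabric switch, and VS120 is the virtual switch that externally represents it.
LOCAL_MEMBER_IDS = {"RB122", "RB124", "RB126"}
LOCAL_VIRTUAL_ID = "VS120"

def egress_border_rewrite(header: EncapHeader) -> EncapHeader:
    """If the packet is destined outside the local fabric switch, hide the
    physical ingress switch behind the fabric's virtual switch identifier."""
    if header.egress_switch_id not in LOCAL_MEMBER_IDS:
        header.ingress_switch_id = LOCAL_VIRTUAL_ID
    return header

hdr = EncapHeader(ingress_switch_id="RB122", egress_switch_id="VS140")
print(egress_border_rewrite(hdr))  # ingress becomes VS120 before the packet leaves
```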


Upon receiving a packet from an adjoining fabric switch and/or compliant network, a border member switch decides how to forward that packet within the local fabric switch. Furthermore, when a member switch of the local fabric switch learns the MAC address of an end device coupled to a remote fabric switch, the member switch stores the learned MAC address in association with the virtual switch representing that remote fabric switch instead of the individual member switch to which that end device is coupled. As a result, the member switch does not need to maintain configuration and routing information of individual member switches of the remote fabric switch. It should be noted that a remote fabric switch is a fabric switch in which a member switch of a local fabric switch does not participate and whose fabric identifier is not associated with the member switch.
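

The sketch below, which reuses the same illustrative naming, shows why this keeps remote member switches opaque: a MAC address learned from a remote fabric switch is stored against that fabric's virtual switch identifier, never against an individual remote member switch. The table layout and helper name are assumptions.

```python
# MAC address table of a member switch: learned MAC -> switch identifier to
# forward toward (a physical member switch or a remote fabric's virtual switch).
mac_table: dict[str, str] = {}

def learn_mac(source_mac: str, ingress_switch_id: str) -> None:
    """Associate a learned MAC address with the ingress switch identifier.

    For traffic arriving from a remote fabric switch, the ingress identifier
    is already that fabric's virtual switch identifier, so no configuration
    or routing state for individual remote member switches is needed."""
    mac_table[source_mac] = ingress_switch_id

# A frame from an end device behind a remote fabric switch arrives with that
# fabric's virtual switch identifier (hypothetically "VS140") as ingress.
learn_mac("02:00:00:00:00:62", "VS140")
print(mac_table)  # {'02:00:00:00:00:62': 'VS140'}
```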


In a fabric switch, any number of switches coupled in an arbitrary topology may logically operate as a single switch. The fabric switch can be an Ethernet fabric switch or a virtual cluster switch (VCS), which can operate as a single Ethernet switch. Any member switch may join or leave the fabric switch in “plug-and-play” mode without any manual configuration. In some embodiments, a respective switch in the fabric switch is a Transparent Interconnection of Lots of Links (TRILL) routing bridge (RBridge). In some further embodiments, a respective switch in the fabric switch is an Internet Protocol (IP) routing-capable switch (e.g., an IP router). The TRILL protocol is described in Internet Engineering Task Force (IETF) Request for Comments (RFC) 6325, titled “Routing Bridges (RBridges): Base Protocol Specification,” available at http://datatracker.ietf.org/doc/rfc6325/, which is incorporated by reference herein.


It should be noted that a fabric switch is not the same as conventional switch stacking. In switch stacking, multiple switches are interconnected at a common location (often within the same rack), based on a particular topology, and manually configured in a particular way. These stacked switches typically share a common address, e.g., an IP address, so they can be addressed as a single switch externally. Furthermore, switch stacking requires a significant amount of manual configuration of the ports and inter-switch links. The need for manual configuration prohibits switch stacking from being a viable option in building a large-scale switching system. The topology restriction imposed by switch stacking also limits the number of switches that can be stacked. This is because it is very difficult, if not impossible, to design a stack topology that allows the overall switch bandwidth to scale adequately with the number of switch units.


In contrast, a fabric switch can include an arbitrary number of switches with individual addresses, can be based on an arbitrary topology, and does not require extensive manual configuration. The switches can reside in the same location, or be distributed over different locations. These features overcome the inherent limitations of switch stacking and make it possible to build a large “switch farm,” which can be treated as a single, logical switch. Due to the automatic configuration capabilities of the fabric switch, an individual physical switch can dynamically join or leave the fabric switch without disrupting services to the rest of the network.


Furthermore, the automatic and dynamic configurability of the fabric switch allows a network operator to build its switching system in a distributed and “pay-as-you-grow” fashion without sacrificing scalability. The fabric switch's ability to respond to changing network conditions makes it an ideal solution in a virtual computing environment, where network loads often change with time.


In this disclosure, the term “fabric switch” refers to a number of interconnected physical switches which form a single, scalable logical switch. These physical switches are referred to as member switches of the fabric switch. In a fabric switch, any number of switches can be connected in an arbitrary topology, and the entire group of switches functions together as one single, logical switch. This feature makes it possible to use many smaller, inexpensive switches to construct a large fabric switch, which can be viewed as a single logical switch externally. Although the present disclosure is presented using examples based on a fabric switch, embodiments of the present invention are not limited to a fabric switch. Embodiments of the present invention are relevant to any computing device that includes a plurality of devices operating as a single device.


The term “end device” can refer to any device external to a fabric switch. Examples of an end device include, but are not limited to, a host machine, a conventional layer-2 switch, a layer-3 router, or any other type of network device. Additionally, an end device can be coupled to other switches or hosts further away from a layer-2 or layer-3 network. An end device can also be an aggregation point for a number of network devices to enter the fabric switch. An end device can also host one or more virtual machines.


The term “switch” is used in a generic sense, and it can refer to any standalone or fabric switch operating in any network layer. “Switch” should not be interpreted as limiting embodiments of the present invention to layer-2 networks. Any device that can forward traffic to an external device or another switch can be referred to as a “switch.” Any physical or virtual device (e.g., a virtual machine/switch operating on a computing device) that can forward traffic to an end device can be referred to as a “switch.” Examples of a “switch” include, but are not limited to, a layer-2 switch, a layer-3 router, a TRILL RBridge, or a fabric switch comprising a plurality of similar or heterogeneous smaller physical and/or virtual switches.


The term “edge port” refers to a port on a fabric switch which exchanges data frames with a network device outside of the fabric switch (i.e., an edge port is not used for exchanging data frames with another member switch of a fabric switch). The term “inter-switch port” refers to a port which sends/receives data frames among member switches of a fabric switch. If a switch is not a member of the local fabric switch and is capable of forwarding based on the encapsulation header of the fabric encapsulation, the inter-switch port coupling this switch can be referred to as a “border inter-switch port.” The terms “interface” and “port” are used interchangeably.


The term “switch identifier” refers to a group of bits that can be used to identify a switch. Examples of a switch identifier include, but are not limited to, a media access control (MAC) address, an Internet Protocol (IP) address, and an RBridge identifier. Note that the TRILL standard uses “RBridge ID” (RBridge identifier) to denote a 48-bit intermediate-system-to-intermediate-system (IS-IS) System ID assigned to an RBridge, and “RBridge nickname” to denote a 16-bit value that serves as an abbreviation for the “RBridge ID.” In this disclosure, “switch identifier” is used as a generic term, is not limited to any bit format, and can refer to any format that can identify a switch. The term “RBridge identifier” is also used in a generic sense, is not limited to any bit format, and can refer to “RBridge ID,” “RBridge nickname,” or any other format that can identify an RBridge.


The term “packet” refers to a group of bits that can be transported together across a network. “Packet” should not be interpreted as limiting embodiments of the present invention to layer-3 networks. “Packet” can be replaced by other terminologies referring to a group of bits, such as “message,” “frame,” “cell,” or “datagram.” The terms “packet” and “frame” are used interchangeably.


Network Architecture



FIG. 1 illustrates exemplary transparent interconnections of fabric switches, in accordance with an embodiment of the present invention. As illustrated in FIG. 1, a network 100 includes fabric switches 102, 103, and 104. These fabric switches are interconnected via a network 101. In some embodiments, network 101 is a fabric switch as well (under such circumstances, network 101 is also referred to as fabric switch 101). Fabric switch 102 includes member switches 122, 124, and 126; fabric switch 103 includes member switches 132, 134, 136, and 138; and fabric switch 104 includes member switches 142, 144, and 146. Network 101 includes switches 112, 114, 116, and 118. It should be noted that if network 101 is a fabric switch, switches 112, 114, 116, and 118 operate as member switches of fabric switch 101. End device 162 is coupled to fabric switch 104 via switch 144 and end device 164 is coupled to fabric switch 102 via switch 122. A member switch, such as switches 144 or 122, which couples an end device via an edge port can be referred to as an edge switch.


In some embodiments, fabric switches 102, 103, and 104 internally operate as respective TRILL networks (e.g., forward data packets based on the TRILL protocol). Then network 101 can be a compatible TRILL network, or a fabric switch which internally operates as a TRILL network. A respective member switch of network 101 and fabric switches 102, 103, and 104 can then be a TRILL RBridge (e.g., has an RBridge identifier which identifies a member switch in the corresponding fabric switch). In some further embodiments, fabric switches 102, 103, and 104 internally operate as respective IP networks (e.g., forward data packets based on the IP protocol). Then network 101 can be a compatible IP network, or a fabric switch which internally operates as an IP network. A respective member switch of network 101 and fabric switches 102, 103, and 104 can then be an IP-capable switch (e.g., has an IP address which identifies a member switch in the corresponding fabric switch and/or a larger network). An IP-capable switch can calculate and maintain a local IP routing table (e.g., a routing information base or RIB), and is capable of forwarding packets based on IP addresses.
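

The two encapsulation options can be pictured as in the following sketch, where the ingress and egress identifiers are 16-bit RBridge nicknames in the TRILL case and IP addresses in the IP case; the header model and the example values are illustrative assumptions.

```python
from dataclasses import dataclass
from ipaddress import IPv4Address
from typing import Union

# A switch identifier: an RBridge nickname (TRILL fabric) or an IP address (IP fabric).
SwitchId = Union[int, IPv4Address]

@dataclass
class FabricHeader:
    ingress: SwitchId
    egress: SwitchId

# TRILL-style encapsulation: 16-bit RBridge nicknames identify member switches.
trill_hdr = FabricHeader(ingress=0x0122, egress=0x0144)

# IP-style encapsulation: member switches are identified by IP addresses instead.
ip_hdr = FabricHeader(ingress=IPv4Address("10.0.2.122"),
                      egress=IPv4Address("10.0.4.144"))

print(trill_hdr)
print(ip_hdr)
```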


Switches in a fabric switch use edge ports to communicate with end devices (e.g., non-member switches) and inter-switch ports to communicate with other member switches. Data communication via an edge port can be based on Ethernet, and data communication via an inter-switch port can be based on the IP and/or TRILL protocol. For example, switch 122 of fabric switch 102 is coupled to end device 164 via an edge port and to switches 124 and 126 via inter-switch ports and one or more links. Switch 122 can communicate with end device 164 based on Ethernet and with switches 124 and 126 based on IP or TRILL. It should be noted that control message exchange via inter-switch ports can be based on a different protocol (e.g., Internet Protocol (IP) or Fibre Channel (FC) protocol).


Furthermore, a switch in a fabric switch can be coupled with a switch in another fabric switch or a compatible network via a border inter-switch port. For example, switch 124 of fabric switch 102 is coupled with switch 112 of fabric switch 101 (or compatible network 101) via a border inter-switch port. Forwarding to and/or via a compatible network does not require decapsulation of a fabric encapsulation. For example, if the fabric encapsulation for fabric switch 102 is based on the TRILL protocol, switch 124 can forward a TRILL-encapsulated packet to and/or via network 101 (i.e., to switch 112) based on TRILL forwarding.
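

As a rough illustration of these port roles, the following sketch classifies the ports of a hypothetical border switch; the enum values, port names, and lookup table are invented for illustration.

```python
from enum import Enum, auto

class PortType(Enum):
    EDGE = auto()                 # faces an end device; carries plain Ethernet
    INTER_SWITCH = auto()         # faces another member of the local fabric switch
    BORDER_INTER_SWITCH = auto()  # faces a fabric-encapsulation-capable non-member

# Hypothetical port table of border switch 124 of fabric switch 102.
ports = {
    "eth1/1": PortType.EDGE,                 # toward a local end device
    "eth1/2": PortType.INTER_SWITCH,         # toward member switch 122
    "eth1/3": PortType.BORDER_INTER_SWITCH,  # toward switch 112 of network 101
}

def arrives_unencapsulated(port: str) -> bool:
    """Frames on edge ports are plain Ethernet; frames on inter-switch and
    border inter-switch ports carry the fabric encapsulation."""
    return ports[port] is PortType.EDGE

print(arrives_unencapsulated("eth1/3"))  # False: the fabric encapsulation is kept
```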


In some embodiments, fabric switch 102 is assigned a fabric switch identifier. A respective member switch of fabric switch 102 is associated with that fabric switch identifier. This allows a member switch to indicate that it is a member of fabric switch 102. In some embodiments, whenever a new member switch joins fabric switch 102, the fabric switch identifier is automatically associated with that new member switch. Similarly, fabric switches 103 and 104 (and fabric switch 101) are assigned corresponding fabric switch identifiers. Furthermore, a respective member switch of fabric switch 102 is assigned a switch identifier (e.g., an RBridge identifier, a Fibre Channel (FC) domain ID (identifier), or an IP address). This switch identifier identifies the member switch in fabric switch 102. Similarly, a respective member switch of fabric switches 103 and 104 (and fabric switch 101) is assigned a switch identifier.
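

A minimal sketch of this identifier assignment is given below, assuming a simple in-memory membership record; the data model and the automatic association step are illustrative, not a specific implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Fabric:
    fabric_id: str                               # fabric switch identifier shared by all members
    members: dict = field(default_factory=dict)  # switch identifier -> member record

    def join(self, switch_id: str) -> dict:
        """Admit a new member switch: it carries a switch identifier that
        identifies it within this fabric, and the fabric switch identifier
        is associated with it automatically."""
        record = {"switch_id": switch_id, "fabric_id": self.fabric_id}
        self.members[switch_id] = record
        return record

fabric_102 = Fabric(fabric_id="FABRIC-102")
print(fabric_102.join("RB126"))  # {'switch_id': 'RB126', 'fabric_id': 'FABRIC-102'}
```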


With existing technologies, switches 122, 124, and 126 of fabric switch 102 are associated with the same fabric identifier of fabric switch 102. When a new member switch joins fabric switch 102, the fabric identifier becomes associated with that new member switch. Once fabric switch 102 is formed, its forwarding information (e.g., the learned MAC addresses and corresponding virtual local area network (VLAN) tags) is shared among member switches 122, 124, and 126. As a result, a respective member switch of fabric switch 102 maintains a large number of learned MAC addresses and their associations with member switches. Hence, if the number of member switches in fabric switch 102 increases, the performance of fabric switch 102 may deteriorate. For example, switch 122 maintains configuration data and forwarding information of switches 124 and 126. This allows switch 122 to forward packets to MAC addresses associated with switches 124 and 126. As the number of member switches in fabric switch 102 becomes large, managing such information can deteriorate the performance of fabric switch 102.


On the other hand, instead of a large fabric switch, the switches in network 100 can form a plurality of interconnected fabric switches 101, 102, 103, and 104. As a result, a few member switches, which can be referred to as border switches, may participate in a plurality of fabric switches. For example, border switches 124 and 126 may participate in fabric switch 101 in addition to their local fabric switch 102. As a result, switches 124 and 126 maintain learned MAC addresses, forwarding information, and configuration information of both fabric switches 101 and 102. This leads to additional hardware and management constraints on switches 124 and 126.


Furthermore, to ensure proper traffic isolation, border switches 124 and 126 need to be aware of the VLAN configurations of both fabric switches 101 and 102. For example, to ensure VLAN continuity, when forwarding a packet to switch 112 in fabric switch 101, switch 124 checks whether the VLAN of the packet is configured in switch 112. As a result, in addition to the VLAN configurations of switches 122 and 126, switch 124 maintains the VLAN configurations of switches 112, 114, 116, and 118. Similarly, border switches 112 and 114 of fabric switch 101 need to be aware of the VLAN configurations of both fabric switches 101 and 102. This can also lead to additional hardware and management constraints on these border switches.


Moreover, if border switch 112 receives traffic from end device 164 via switch 124, switch 112 learns the MAC address of end device 164 and its association with switch 122. Switch 112 shares this information with switches 114, 116, and 118. As a result, switches 112, 114, 116, and 118 maintain the MAC address of end device 164 and its association with switch 122, as well as forwarding information for switch 122. To ensure packet forwarding between fabric switches 101 and 102, border switches 112, 114, 124, and 126 participate in respective instances of the routing protocols of fabric switches 101 and 102. For example, border switch 112 computes a route to switch 118 based on the routing protocol instance of fabric switch 101 and a route to switch 122 based on the routing protocol instance of fabric switch 102. As a result, interconnecting a plurality of fabric switches may not scale well.


To solve this problem, fabric switch 102 is represented as a virtual switch 120 (denoted with dotted lines) to other fabric switches and compatible networks of network 100. For example, fabric switch 102 appears as virtual switch 120 to network 101, and fabric switches 103 and 104. Switches 112 and 114 consider themselves to be coupled to virtual switch 120. In other words, interconnections between switches 112 and 114 with switches 124 and 126 are represented to switches 112 and 114 as interconnections between switches 112 and 114 with virtual switch 120. Similarly, fabric switch 103 is represented as a virtual switch 130 (denoted with dotted lines) to other fabric switches and compatible networks of network 100. Fabric switch 103 appears as virtual switch 130 to network 101, and fabric switches 102 and 104. Switches 116 and 118 consider themselves to be coupled to virtual switch 130. In the same way, fabric switch 104 is represented as a virtual switch 140 (denoted with dotted lines) to other fabric switches and compatible networks of network 100. Fabric switch 104 appears as virtual switch 140 to network 101, and fabric switches 102 and 103. Switch 116 considers itself to be coupled to virtual switch 140. If network 101 operates as a fabric switch, network 101 is represented as a virtual member switch 110 (denoted with dashed lines) to other fabric switches of network 100.


Switches in fabric switch 104 consider virtual switch 120 to be reachable via switch 142. Routing information in fabric switch 104 indicates that virtual switch 120 is reachable via switch 142. Routing, forwarding, and failure recovery of a fabric switch are specified in U.S. patent application Ser. No. 13/087,239, titled “Virtual Cluster Switching,” by inventors Suresh Vobbilisetty and Dilip Chatwani, filed 14 Apr. 2011, the disclosure of which is incorporated herein in its entirety. Similarly, switches in fabric switch 103 consider virtual switch 120 to be reachable via switches 132 and 136, and switches in fabric switch 101 consider virtual switch 120 to be reachable via switches 112 and 114, and compatible network 101. For the packets from fabric switch 102, switches 124 and 126 can translate between the switch identifier of virtual switch 120 and the switch identifier of a corresponding physical switch, and perform the route lookup. As a result, in the links between fabric switch 102 and compatible network 101, the ingress and/or egress switch identifiers of a packet can be virtual switch identifiers. This allows a large number of switches to form a large network 100, which is isolated into small, manageable fabric switches 102, 103, and 104 interconnected via compatible network 101.
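

One way to picture this routing state is the sketch below: inside a fabric switch, a remote fabric's virtual switch identifier appears in the routing information like any other switch identifier, mapped to the border switch(es) through which it is reachable. The table contents follow the topology of FIG. 1, but the identifier strings and the helper name are assumptions.

```python
# Hypothetical routing information of a member switch of fabric switch 104:
# destination switch identifier -> next-hop switch identifiers.
routes = {
    "RB144": {"RB144"},   # physical member switches of the local fabric switch
    "RB146": {"RB146"},
    "VS120": {"RB142"},   # virtual switch 120 (fabric switch 102) via border switch 142
    "VS130": {"RB142"},   # virtual switch 130 (fabric switch 103) via border switch 142
}

def next_hops(egress_switch_id: str) -> set:
    """Look up next hops for an egress identifier, virtual or physical."""
    return routes.get(egress_switch_id, set())

print(next_hops("VS120"))  # {'RB142'}
```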


During operation, end device 164 sends a packet to end device 162. This packet can be an Ethernet frame. Switch 122 receives the packet via an edge port, encapsulates the packet with a fabric encapsulation (e.g., TRILL or IP encapsulation), and forwards it. If switch 122 does not know the MAC address of end device 162, switch 122 assigns an “all-switch” switch identifier as the egress switch identifier of the encapsulation header. An “all-switch” switch identifier indicates that the fabric-encapsulated packet should be forwarded to a respective switch in a network. For example, if switch 132 receives a packet with an “all-switch” switch identifier as the egress identifier, switch 132 forwards the packet to switches 134, 136, and 138 of fabric switch 103.


When the encapsulated packet from switch 122 reaches border switch 124 (or 126), switch 124 modifies the encapsulation header by changing the ingress switch identifier of the encapsulation header from the switch identifier (e.g., an RBridge identifier or an IP address) of switch 122 to a virtual switch identifier (e.g., a virtual RBridge identifier or a virtual IP address) of virtual switch 120. Switch 124 forwards that packet to switch 112. In this way, switch 124 operates as an egress border switch, which forwards a fabric-encapsulated packet out of the local fabric switch (e.g., fabric switch 102) via a border inter-switch port. It should be noted that forwarding includes determining an egress (or output) port associated with the destination address and transmitting via the determined egress port.


Upon receiving the fabric-encapsulated packet, switch 112 determines that the egress switch identifier is an “all-switch” switch identifier and forwards the fabric-encapsulated packet to a respective switch in network 101. Upon receiving the packet, switches 116 and 118 forward the packet to switch 142 of fabric switch 104 and switch 132 of fabric switch 103, respectively. It appears to switches 116 and 118 that the packet is forwarded to virtual switches 140 and 130, respectively. In this way, compliant network 101 can forward the packet based on the encapsulation header among fabric switches coupled to network 101. Upon receiving the fabric-encapsulated packet, switch 142 determines that the egress switch identifier is an “all-switch” switch identifier and forwards the fabric-encapsulated packet to a respective switch in fabric switch 104. In this way, switch 142 operates as an ingress border switch, which forwards a fabric-encapsulated packet received via a border inter-switch port into the local fabric switch (e.g., fabric switch 104).


Switch 142 can also decapsulate the fabric-encapsulated packet to obtain the inner packet (i.e., the Ethernet packet from end device 164) and determine whether any local end device is the destination of the inner packet. Switch 142 can learn the MAC address of end device 164 and store the learned MAC address in association with the virtual switch identifier of virtual switch 120. Similarly, switch 144 receives the fabric-encapsulated packet, decapsulates the packet to obtain the inner packet, and determines whether any local end device is the destination of the inner packet. Switch 144 learns the MAC address of end device 164 and stores the learned MAC address in association with the virtual switch identifier of virtual switch 120. Switch 144 also determines that destination end device 162 is locally coupled (e.g., either based on a populated table, previous MAC address learning, or flooding), and forwards the inner packet to end device 162.


In some embodiments, switches in network 101 decapsulate the fabric-encapsulated packet to determine whether any local end device is the destination of the inner packet (i.e., the Ethernet packet from end device 164). If a switch in network 101 learns the MAC address of end device 164, that switch stores the learned MAC address in association with the ingress switch identifier, which is a virtual switch identifier, in the encapsulation header. For example, if network 101 is a TRILL network and switch 112 receives a TRILL-encapsulated packet from fabric switch 102, upon decapsulating the TRILL header, switch 112 can learn the MAC address of the inner packet and store the learned MAC address in association with the virtual switch identifier of virtual switch 120.


The routes (which can be configured or computed based on a routing protocol) in fabric switch 104 indicate that virtual switch 120 is reachable via switch 142. Hence, to send a packet to end device 164, a switch in fabric switch 104 forwards the packet to switch 142. For example, if end device 162 sends a packet to end device 164, switch 144 receives the packet, encapsulates the packet in a fabric encapsulation, and assigns the virtual switch identifier of virtual switch 120 as the egress switch identifier of the encapsulation header. When the encapsulated packet reaches border switch 142, switch 142 modifies the encapsulation header by changing the ingress switch identifier of the encapsulation header from the switch identifier of switch 144 to a virtual switch identifier of virtual switch 140. Switch 142 forwards that packet to switch 116. Switch 116 determines that virtual switch 120 is reachable via switches 112 and 114. Suppose that switch 116 forwards the packet to switch 112, which, in turn, forwards the packet to switch 124. Switch 124 changes the egress switch identifier of the encapsulation header from the virtual switch identifier of virtual switch 120 to the switch identifier of switch 122, and forwards the packet.


Hence, in network 100, border switches 124 and 126 of fabric switch 102 do not need to participate in the routing instances of fabric switches 103 and 104, or maintain forwarding information for individual switches of fabric switches 103 and 104. Since switches 124 and 126 are at the edge between network 101 and fabric switch 102, switches 124 and 126 determine how to forward packets received from network 101 within fabric switch 102. Furthermore, when switch 124 or 126 learns the MAC address of an end device coupled to a remote fabric (e.g., end device 162 coupled to fabric switch 104), switch 124 or 126 associates the learned MAC address with the virtual switch identifier of virtual switch 140, which represents fabric switch 104, rather than the switch identifier of switch 144.


In some embodiments, network 101 can be any network which allows forwarding of fabric-encapsulated packets based on encapsulation headers. For example, if the fabric encapsulation is based on the TRILL protocol, switches 112, 114, 116, and 118 can forward a packet based on the ingress and egress TRILL RBridge identifiers in the TRILL header of a TRILL-encapsulated packet. This allows fabric switches 102, 103, and 104 to interconnect via a compatible network 101 without requiring network 101 to be a fabric switch. As a result, network 101 can provide interconnection among fabric switches 102, 103, and 104 without providing connectivity within a fabric switch. Border switches (e.g., switches 124 and 126 of fabric switch 102) forward fabric-encapsulated packets to network 101 via corresponding border inter-switch ports.


Since border switches 124 and 126 translate between virtual and physical switch identifiers, in the links between fabric switch 102 and compatible network 101, the ingress and/or egress switch identifiers of the encapsulation headers of the packets can be virtual switch identifiers. Hence, compatible network 101 views only virtual switches 120, 130, and 140 coupled to it instead of fabric switches 102, 103, and 104, respectively. As a result, network 101 can forward traffic based only on the fabric encapsulation without having to learn MAC addresses from encapsulated packets. In this way, fabric switches 102, 103, and 104 are transparently interconnected via compatible network 101.


Data Communication



FIG. 2A illustrates an exemplary forwarding of a packet with an unknown destination between transparently interconnected fabric switches, in accordance with an embodiment of the present invention. During operation, end device 162 sends an Ethernet frame 202 to end device 164. Suppose that the destination address of Ethernet frame 202 (i.e., the MAC address of end device 164) is unknown to fabric switch 104. Edge switch 144 receives the packet via an edge port. Switch 144 learns the MAC address of end device 162 and adds the MAC address to its local MAC address table (which can also be referred to as a forwarding table) in association with the edge port (e.g., based on a port identifier). Switch 144 also generates a notification message comprising the learned MAC address and sends the notification message to switches 142 and 146. In turn, switches 142 and 146 learn the MAC address of end device 162 and add the MAC address to their respective local MAC address tables in association with switch identifier 204 (e.g., an RBridge identifier or an IP address) of switch 144. In some embodiments, switches 142 and 146 further associate the MAC address of end device 162 with the edge port of switch 144 (e.g., based on a port identifier).
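

This learning and notification step can be sketched as below, assuming one in-memory MAC address table per member switch; the notification is modeled as a direct table update, and all identifiers are illustrative.

```python
# One MAC address (forwarding) table per member switch of fabric switch 104:
# learned MAC -> local edge port (if learned locally) or switch identifier.
tables = {"RB142": {}, "RB144": {}, "RB146": {}}

def learn_on_edge(edge_switch: str, mac: str, edge_port: str) -> None:
    """Edge switch learns a MAC address on an edge port and notifies the
    other member switches, which associate the MAC with the edge switch."""
    tables[edge_switch][mac] = edge_port
    for peer in tables:
        if peer != edge_switch:
            tables[peer][mac] = edge_switch

learn_on_edge("RB144", "02:00:00:00:00:62", "eth1/5")
print(tables["RB142"])  # {'02:00:00:00:00:62': 'RB144'}
```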


Switch 144 encapsulates Ethernet frame 202 with a fabric encapsulation (e.g., TRILL or IP encapsulation) to create fabric-encapsulated packet 222 (operation 232). Since switch 144 does not know the destination (i.e., has not learned the destination MAC address), switch 144 assigns an “all-switch” switch identifier 206 as the egress switch identifier and switch identifier 204 of switch 144 as the ingress switch identifier of the encapsulation header. Switch 144 forwards packet 222 to a respective switch in fabric switch 104. It should be noted that forwarding includes determining an egress (or output) port associated with the destination address and transmitting via the determined egress port.


When packet 222 reaches border switch 142, switch 142 modifies the encapsulation header by changing the ingress switch identifier of the encapsulation header from switch identifier 204 of switch 144 to a virtual switch identifier 208 (e.g., a virtual RBridge identifier or a virtual IP address) of virtual switch 140. Switch 142 forwards packet 222 to switch 116 of network 101 (not shown in FIG. 2A). In this way, switch 142 operates as an egress border switch. Switch 116 forwards packet 222 to a respective switch of network 101. In some embodiments, upon receiving packet 222, a respective switch in network 101 decapsulates packet 222 to extract frame 202 and forwards packet 222 to the virtual switches it is coupled to, as described in conjunction with FIG. 1.


Since virtual switch 120 is reachable via both switches 112 and 114, switches in compatible network 101 can use equal-cost multipath (ECMP) forwarding to determine via which switch packet 222 should be forwarded. Suppose that switch 112 receives and forwards packet 222 to virtual switch 120. In turn, switch 124 of fabric switch 102 receives packet 222 forwarded by switch 112. In this way, border switch 142 of fabric switch 104 transparently forwards fabric-encapsulated packet 222 to border switch 124 of fabric switch 102 via compatible network 101.
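

The equal-cost choice mentioned here can be modeled as a hash over encapsulation-header fields so that packets of one flow keep using the same border switch; the hashed fields and the hash function below are assumptions, not a prescribed ECMP scheme.

```python
import hashlib

def ecmp_pick(candidates: list, ingress_id: str, egress_id: str) -> str:
    """Deterministically pick one of several equal-cost next hops by hashing
    encapsulation-header fields, so a given flow stays on one path."""
    digest = hashlib.sha256(f"{ingress_id}|{egress_id}".encode()).digest()
    return sorted(candidates)[digest[0] % len(candidates)]

# Virtual switch 120 is reachable via border switches 112 and 114 of network 101.
print(ecmp_pick(["RB112", "RB114"], ingress_id="VS140", egress_id="ALL-SWITCH"))
```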


Upon receiving packet 222, switch 124 determines that the egress switch identifier is an “all-switch” switch identifier and forwards packet 222 to a respective switch in fabric switch 102. In this way, switch 124 operates as an ingress border switch. Switch 124 can also decapsulate packet 222 to obtain the inner packet (i.e., Ethernet frame 202) and determine whether any local end device corresponds to the destination MAC address of Ethernet frame 202. Switch 124 also learns the MAC address of end device 162 and stores the learned MAC address in association with the virtual switch identifier 208 of virtual switch 140. Switch 122 receives packet 222, decapsulates packet 222 to obtain Ethernet frame 202 (operation 236), and determines whether any local end device corresponds to the destination MAC address of Ethernet frame 202. Switch 122 learns the MAC address of end device 162 and stores the learned MAC address in association with the virtual switch identifier 208 of virtual switch 140. Switch 122 also determines that destination end device 164 is locally coupled (e.g., either based on a populated table, previous MAC address learning, or flooding), and forwards Ethernet frame 202 to end device 164.



FIG. 2B illustrates an exemplary forwarding of a packet with a known destination between transparently interconnected fabric switches, in accordance with an embodiment of the present invention. During operation, end device 164 sends an Ethernet frame 212 to end device 162. Edge switch 122 receives the packet via an edge port and encapsulates the packet with a fabric encapsulation (e.g., TRILL or IP encapsulation) to create fabric-encapsulated packet 224 (operation 242). As described in conjunction with FIG. 2A, upon receiving fabric-encapsulated packet 222, switch 122 learns the MAC address of end device 162. Hence, the destination MAC address of Ethernet frame 212 is known to switch 122.


Since switch 122 has associated the MAC address of end device 162 with virtual switch identifier 208 of virtual switch 140, switch 122 assigns virtual switch identifier 208 as the egress switch identifier and a switch identifier 214 (e.g., an RBridge identifier or an IP address) of switch 122 as the ingress switch identifier of the encapsulation header. Switch 122 determines that virtual switch 140 is reachable via switch 124 and forwards packet 224 to switch 124. A route to a respective physical or virtual switch can be configured by a user (e.g., a network administrator) or computed based on a routing protocol (e.g., Intermediate System to Intermediate System (IS-IS)). In some embodiments, the configured or computed routes in fabric switch 102 indicate that virtual switch 140 is reachable via switches 124 and 126.


When packet 224 reaches border switch 124, switch 124 modifies the encapsulation header by changing the ingress switch identifier of the encapsulation header from switch identifier 214 of switch 122 to a virtual switch identifier 216 (e.g., a virtual RBridge identifier or a virtual IP address) of virtual switch 120. Switch 124 forwards packet 224 to switch 112 of network 101 (not shown in FIG. 2B). In this way, switch 124 operates as an egress border switch. Switch 112 determines that virtual switch 140 is reachable via switch 116 and forwards packet 224 to switch 116. Switch 112 forwards the packet in network 101 based on the encapsulation header of packet 224 without decapsulating the fabric encapsulation or learning the MAC address from Ethernet frame 212. Switch 116 receives and forwards packet 224 to virtual switch 140. In turn, border switch 142 of fabric switch 104 receives packet 224 forwarded by switch 116. In this way, border switch 124 of fabric switch 102 transparently forwards fabric-encapsulated packet 224 to border switch 142 of fabric switch 104 via compatible network 101.


Upon receiving the fabric-encapsulated packet, switch 142 determines that the egress switch identifier is virtual switch identifier 208 of virtual switch 140. Switch 142 decapsulates packet 224 to obtain the inner packet (i.e., Ethernet frame 212) and identifies the switch with which the destination MAC address of Ethernet frame 212 (i.e., the MAC address of end device 162) is associated. Switch 142 has associated the MAC address of end device 162 with switch identifier 204 of switch 144, as described in conjunction with FIG. 2A. Switch 142 modifies the encapsulation header by changing the egress switch identifier of the encapsulation header from virtual switch identifier 208 of virtual switch 140 to switch identifier 204 of switch 144 (operation 246). In this way, switch 142 operates as an ingress border switch. In some embodiments, switch 142 re-encapsulates Ethernet frame 212 in an encapsulation header, and assigns switch identifier 204 as the egress switch identifier and virtual switch identifier 216 as the ingress switch identifier of the encapsulation header.


Switch 142 also learns the MAC address of end device 164 and stores the learned MAC address in association with the virtual switch identifier 216 of virtual switch 120. Switch 142 forwards packet 224 to switch 144. Switch 144 receives packet 224, decapsulates packet 224 to obtain Ethernet frame 212 (operation 248), and determines whether any local end device corresponds to the destination MAC address of Ethernet frame 212. Switch 144 also learns the MAC address of end device 164 and stores the learned MAC address in association with the virtual switch identifier 216 of virtual switch 120. Switch 144 determines that destination end device 162 is locally coupled (e.g., either based on a populated table, previous MAC address learning, or flooding), and forwards Ethernet frame 212 to end device 162.


Forwarding of a Packet with Unknown Destination


In the example in FIG. 2A, edge switch 144 receives a packet (i.e., Ethernet frame 202) with an unknown destination from a local end device 162. Switch 144 encapsulates this packet in a fabric encapsulation and forwards the fabric-encapsulated packet in fabric switch 104. Egress border switch 142 receives the fabric-encapsulated packet and forwards via a border inter-switch port to switch 116. This fabric-encapsulated packet is forwarded in network 101 and reaches ingress border switch 124 of fabric switch 102, which, in turn, forwards the fabric-encapsulated packet in fabric switch 102.



FIG. 3A presents a flowchart illustrating the process of an edge switch forwarding a packet with an unknown destination received from a local end device, in accordance with an embodiment of the present invention. During operation, the switch receives a packet with an unknown destination from a local device via an edge port (operation 302). The switch encapsulates the packet with a fabric encapsulation and assigns an “all-switch” switch identifier as the egress switch identifier of the encapsulation header (operation 304). For example, if the fabric encapsulation is based on the TRILL protocol, the “all-switch” switch identifier can be a multicast RBridge identifier. The switch sets the local switch identifier as the ingress switch identifier of the encapsulation header (operation 306) and sends the fabric-encapsulated packet based on the fabric “all-switch” forwarding policy (operation 308). Examples of a fabric “all-switch” forwarding policy include, but are not limited to, forwarding via a fabric multicast tree, forwarding via a multicast tree rooted at an egress switch, unicast forwarding to a respective member of the fabric switch, and broadcast forwarding in the fabric switch.
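

A compact sketch of this edge-switch flow follows, using the same illustrative identifiers as the earlier examples; the “all-switch” constant and the flooding helper stand in for whichever all-switch forwarding policy the fabric switch uses.

```python
from dataclasses import dataclass

ALL_SWITCH = "ALL-SWITCH"   # placeholder for the "all-switch" egress identifier
LOCAL_SWITCH_ID = "RB144"   # hypothetical identifier of the ingress edge switch

@dataclass
class FabricPacket:
    ingress: str
    egress: str
    inner_frame: bytes

def flood_in_fabric(pkt: FabricPacket) -> None:
    # Stand-in for the fabric "all-switch" forwarding policy (e.g., a fabric
    # multicast tree or unicast copies to every member switch).
    print(f"flooding: ingress={pkt.ingress} egress={pkt.egress}")

def forward_unknown_unicast(frame: bytes) -> FabricPacket:
    """Encapsulate a frame whose destination MAC is unknown and flood it."""
    pkt = FabricPacket(ingress=LOCAL_SWITCH_ID, egress=ALL_SWITCH, inner_frame=frame)
    flood_in_fabric(pkt)
    return pkt

forward_unknown_unicast(b"<ethernet frame with unknown destination>")
```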



FIG. 3B presents a flowchart illustrating the process of an egress border switch forwarding a packet with an unknown destination, in accordance with an embodiment of the present invention. During operation, the switch receives a fabric-encapsulated packet with an “all-switch” switch identifier as the egress switch identifier of the encapsulation header via an inter-switch port (operation 332). The switch then identifies the ingress switch identifier in the encapsulation header in (a copy of) the fabric-encapsulated packet (operation 334) and modifies the encapsulation header by replacing the identified switch identifier with a virtual switch identifier of a local virtual switch (operation 336). This local virtual switch represents the local fabric switch in which the switch is a member switch. The switch then identifies a local border inter-switch port associated with (e.g., mapped to) the egress switch identifier of the encapsulation header as an egress port for the packet (operation 338) and forwards the packet via the identified port (operation 340).


In some embodiments, the switch also decapsulates (a copy of) the fabric-encapsulated packet to extract the inner packet (operation 342). This allows the switch to determine whether any local end device is the destination of the inner packet. The switch then checks whether the destination MAC address of the inner packet has been locally learned (e.g., the switch has learned the destination MAC address from a local edge port) (operation 344). If the destination MAC address has been locally learned, the switch identifies an egress edge port associated with the destination MAC address of the inner packet and forwards the inner packet via the identified port (operation 346). Otherwise, the switch forwards the inner packet via a respective local edge port (operation 348).
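

The egress-border handling of FIG. 3B can be condensed into the sketch below, reusing the illustrative identifiers from the earlier examples; the port map, the local MAC table, and the print statements that stand in for transmission are all assumptions.

```python
LOCAL_VIRTUAL_ID = "VS140"                    # virtual switch of the local fabric switch
BORDER_PORT_FOR = {"ALL-SWITCH": "border-1"}  # egress identifier -> border inter-switch port
local_edge_macs = {"02:00:00:00:00:62": "eth1/5"}  # MACs learned on local edge ports

def egress_border_flood(ingress: str, egress: str, inner_dst_mac: str) -> None:
    """Egress border switch handling of a flooded ("all-switch") packet."""
    # Replace the physical ingress identifier with the local virtual switch identifier.
    ingress = LOCAL_VIRTUAL_ID
    # Forward the packet out of the border inter-switch port mapped to the egress identifier.
    print(f"send via {BORDER_PORT_FOR[egress]}: ingress={ingress} egress={egress}")
    # Decapsulate a copy and deliver locally if the destination MAC is known,
    # otherwise flood the inner packet on every local edge port.
    port = local_edge_macs.get(inner_dst_mac)
    print(f"deliver on edge port {port}" if port else "flood on all local edge ports")

egress_border_flood("RB144", "ALL-SWITCH", "02:00:00:00:00:64")
```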



FIG. 3C presents a flowchart illustrating the process of an ingress border switch forwarding a packet with an unknown destination, in accordance with an embodiment of the present invention. During operation, the switch receives a fabric-encapsulated packet with an “all-switch” switch identifier as the egress switch identifier of the encapsulation header via a border inter-switch port (operation 352). The switch then identifies the “all-switch” switch identifier as the egress switch identifier in the encapsulation header of the fabric-encapsulated packet (operation 354) and sends the fabric-encapsulated packet based on the fabric “all-switch” forwarding policy (operation 356).


In some embodiments, the switch also decapsulates (a copy of) the fabric-encapsulated packet to extract the inner packet (operation 358). This allows the switch to determine whether any local end device is the destination of the inner packet. The switch then checks whether the destination MAC address of the inner packet has been locally learned (e.g., the switch has learned the destination MAC address from a local edge port) (operation 360). If the destination MAC address has been locally learned, the switch identifies an egress edge port corresponding to the destination MAC address of the inner packet and forwards the inner packet via the identified port (operation 362). Otherwise, the switch forwards the inner packet via a respective local edge port (operation 364).


Forwarding of a Packet with Known Destination


In the example in FIG. 2B, edge switch 122 receives a packet (i.e., Ethernet frame 212) with a known destination from a local end device 164. Switch 122 encapsulates this packet in a fabric encapsulation and forwards the fabric-encapsulated packet in fabric switch 102. Egress border switch 124 receives the fabric-encapsulated packet and forwards via a border inter-switch port to switch 112. This fabric-encapsulated packet is forwarded in network 101 and reaches ingress border switch 142 of fabric switch 104, which, in turn, forwards the fabric-encapsulated packet in fabric switch 104.



FIG. 4A presents a flowchart illustrating the process of an edge switch forwarding a packet with a known destination received from a local end device, in accordance with an embodiment of the present invention. During operation, the switch receives a packet (e.g., an Ethernet frame) from a local device via an edge port (operation 402). The switch obtains the switch identifier mapped to the destination MAC address of the packet from a local MAC table (operation 404). The switch encapsulates the packet with a fabric encapsulation (operation 406). It should be noted that this switch identifier can be a virtual switch identifier if the destination is behind a border inter-switch port.


The switch sets the local switch identifier as the ingress switch identifier and the identified switch identifier as the egress switch identifier of the encapsulation header (operation 408). The switch identifies the next-hop switch identifier(s) mapped to the identified switch identifier (e.g., from a local forwarding table) and selects a switch identifier from the identified next-hop switch identifier(s) (operation 410). The switch identifies an egress port corresponding to the selected next-hop switch identifier (e.g., from a local forwarding table) and forwards the encapsulated packet via the identified port (operation 412).
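

A sketch of this known-destination path is shown below with invented state for an edge switch; the MAC table entry pointing at a virtual switch identifier is what keeps the remote fabric switch opaque to the edge switch, and the first entry of the next-hop list stands in for whatever next-hop selection (e.g., ECMP) is in use.

```python
LOCAL_SWITCH_ID = "RB122"                       # hypothetical edge switch of fabric switch 102
mac_table = {"02:00:00:00:00:62": "VS140"}      # destination MAC -> (virtual) egress switch
routes = {"VS140": ["RB124", "RB126"]}          # egress switch -> candidate next-hop switches
port_to = {"RB124": "isl-1", "RB126": "isl-2"}  # next hop -> local inter-switch port

def edge_forward_known(dst_mac: str) -> None:
    """Encapsulate toward the switch the destination MAC was learned against
    (here a virtual switch) and forward via a next hop toward it."""
    egress = mac_table[dst_mac]
    next_hop = routes[egress][0]  # a real switch would apply its next-hop selection here
    print(f"encapsulate ingress={LOCAL_SWITCH_ID} egress={egress}; send via {port_to[next_hop]}")

edge_forward_known("02:00:00:00:00:62")
```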



FIG. 4B presents a flowchart illustrating the process of an egress border switch forwarding a packet with a known destination, in accordance with an embodiment of the present invention. During operation, the switch receives a fabric-encapsulated packet via an inter-switch port (operation 432). The switch then identifies the egress switch identifier in the encapsulation header of the fabric-encapsulated packet (operation 434) and checks whether the identified switch identifier is an external switch identifier (e.g., not associated with a switch in the local fabric switch) (operation 436). It should be mentioned that a respective member switch in a fabric switch stores the switch identifier of a respective member switch of the local fabric switch. If the identified switch identifier is an external switch identifier, the switch modifies the encapsulation header by changing the ingress switch identifier of the encapsulation header to a virtual switch identifier of a local virtual switch (operation 438). The switch then identifies a local border inter-switch port associated with the egress switch identifier of the encapsulation header as an egress port for the packet (operation 440) and forwards the packet via the identified port (operation 442).



FIG. 4C presents a flowchart illustrating the process of an ingress border switch forwarding a packet with a known destination, in accordance with an embodiment of the present invention. During operation, the switch receives a fabric-encapsulated packet via a border inter-switch port (operation 452). The switch then identifies the egress switch identifier in the encapsulation header of the fabric-encapsulated packet (operation 454) and checks whether the egress switch identifier is a local virtual switch identifier (operation 456). A local virtual switch identifier is associated with the virtual switch representing the local fabric switch (i.e., the fabric switch in which the switch is a member).


If the egress switch identifier is a local virtual switch identifier, the switch modifies the encapsulation header by replacing the identified switch identifier with a switch identifier associated with the destination MAC address of the inner packet of the fabric-encapsulated packet (operation 460). In some embodiments, the switch decapsulates the fabric-encapsulated packet and re-encapsulates the inner packet to perform operation 460. The switch identifies an egress port associated with the egress switch identifier (operation 462) and forwards the encapsulated packet via the identified port (operation 464).
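
A matching sketch of the FIG. 4C ingress-border step, again reusing the FabricPacket structure from the FIG. 4A sketch; the mac_table and port_for dictionaries and the assumption that the destination MAC occupies the first six bytes of the inner Ethernet frame are illustrative.

```python
def ingress_border_forward(pkt: "FabricPacket", local_virtual_id: str,
                           mac_table: dict, port_for: dict):
    # Operations 454-456: is the packet addressed to the virtual switch that
    # externally represents this fabric switch?
    if pkt.encap.egress_switch_id == local_virtual_id:
        dst_mac = pkt.inner_frame[0:6].hex(":")  # destination MAC of the inner frame
        # Operation 460: retarget the encapsulation at the member switch that
        # hosts the destination MAC address.
        pkt.encap.egress_switch_id = mac_table[dst_mac]
    # Operations 462-464: forward toward the (possibly rewritten) egress switch.
    return pkt, port_for[pkt.encap.egress_switch_id]
```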


Exemplary Switch



FIG. 5 illustrates an exemplary switch with transparent fabric switch interconnection support, in accordance with an embodiment of the present invention. In this example, a switch 500 includes a number of communication ports 502, a packet processor 510, a border module 520, and a storage device 550. Packet processor 510 extracts and processes header information from the received frames.


In some embodiments, switch 500 maintains a membership in a fabric switch, as described in conjunction with FIG. 1; in such embodiments, switch 500 also includes a fabric switch module 560. Fabric switch module 560 maintains a configuration database in storage device 550 that stores the configuration state of every switch within the fabric switch. Fabric switch module 560 also maintains the state of the fabric switch, which is used when other switches join the fabric switch. In some embodiments, switch 500 can be configured to operate in conjunction with a remote switch as an Ethernet switch.
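
A minimal sketch of such a configuration database, assuming it can be modeled as a mapping from member switch identifier to configuration state; the field names and the record_joining_switch helper are hypothetical.

```python
# Hypothetical per-fabric configuration database kept by fabric switch module 560.
fabric_config_db = {
    "switch-1": {"role": "edge", "edge_ports": 48},
    "switch-2": {"role": "border", "edge_ports": 24},
}


def record_joining_switch(switch_id: str, config: dict) -> list:
    # A joining switch's configuration is recorded so that every member holds a
    # consistent view of the fabric switch's state.
    fabric_config_db[switch_id] = config
    return sorted(fabric_config_db)  # current membership, by switch identifier
```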


Communication ports 502 can include inter-switch communication channels for communication within the fabric switch. This inter-switch communication channel can be implemented via a regular communication port and based on any open or proprietary format. Communication ports 502 can also include one or more border inter-switch communication ports for communication via compatible networks. Communication ports 502 can include one or more TRILL ports capable of receiving frames encapsulated in a TRILL header. Communication ports 502 can also include one or more IP ports capable of receiving IP packets; an IP port can be configured with an IP address. Packet processor 510 can process TRILL-encapsulated frames and/or IP packets.


During operation, border module 520 determines that an egress switch identifier in the encapsulation header of a packet is associated with a switch outside of the fabric switch. In response to the determination, border module 520 changes the ingress switch identifier in the encapsulation header to a virtual switch identifier associated with a virtual switch representing the fabric switch. On the other hand, if border module 520 determines that an egress switch identifier in an encapsulation header of a packet is the virtual switch identifier, border module 520 changes the egress switch identifier in the encapsulation header to a switch identifier which identifies a member switch in the fabric switch.


In some embodiments, switch 500 includes a forwarding module 530 which determines that an egress switch identifier in the encapsulation header of a packet is a switch identifier of the switch. The ingress switch identifier in the encapsulation header can be a virtual switch identifier associated with a remote fabric switch. Forwarding module 530 can also determine an external switch as a next-hop switch in a compatible network for a fabric-encapsulated packet. Switch 500 can further include a learning module 540 which learns a MAC address from the inner packet of such a fabric-encapsulated packet and stores the learned MAC address, in association with the virtual switch identifier of the remote fabric switch, in storage device 550.
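
A brief sketch of the learning behavior described above, reusing the FabricPacket structure from the FIG. 4A sketch; the assumption that the source MAC occupies bytes 6 through 11 of the inner Ethernet frame and the mac_table name are illustrative.

```python
def learn_remote_mac(pkt: "FabricPacket", local_switch_id: str, mac_table: dict):
    # Learning applies when this switch is the egress for the fabric-encapsulated packet.
    if pkt.encap.egress_switch_id == local_switch_id:
        src_mac = pkt.inner_frame[6:12].hex(":")  # source MAC of the inner frame
        # Associate the learned MAC with the ingress identifier, which can be a remote
        # fabric switch's virtual switch identifier; later traffic to this MAC is then
        # sent toward that fabric as a whole rather than toward an individual member.
        mac_table[src_mac] = pkt.encap.ingress_switch_id
```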


Note that the above-mentioned modules can be implemented in hardware as well as in software. In one embodiment, these modules can be embodied in computer-executable instructions stored in a memory which is coupled to one or more processors in switch 500. When executed, these instructions cause the processor(s) to perform the aforementioned functions.


In summary, embodiments of the present invention provide a switch and a method for transparently interconnecting fabric switches. In one embodiment, the switch includes a fabric switch module and a border module. The fabric switch module maintains a membership in a first fabric switch. A fabric switch includes a plurality of switches and operates as a single switch. The border module determines that the egress switch identifier in a first encapsulation header of a first packet is associated with a switch outside of the fabric switch. The first packet is forwarded in the first fabric switch based on the first encapsulation header. In response to the determination, the border module changes the ingress switch identifier in the first encapsulation header of the first packet to a first virtual switch identifier associated with a first virtual switch. This first virtual switch externally represents the first fabric switch.


The methods and processes described herein can be embodied as code and/or data, which can be stored in a computer-readable non-transitory storage medium. When a computer system reads and executes the code and/or data stored on the computer-readable non-transitory storage medium, the computer system performs the methods and processes embodied as data structures and code and stored within the medium.


The methods and processes described herein can be executed by and/or included in hardware modules or apparatus. These modules or apparatus may include, but are not limited to, an application-specific integrated circuit (ASIC) chip, a field-programmable gate array (FPGA), a dedicated or shared processor that executes a particular software module or a piece of code at a particular time, and/or other programmable-logic devices now known or later developed. When the hardware modules or apparatus are activated, they perform the methods and processes included within them.


The foregoing descriptions of embodiments of the present invention have been presented only for purposes of illustration and description. They are not intended to be exhaustive or to limit this disclosure. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. The scope of the present invention is defined by the appended claims.

Claims
  • 1. A switch, comprising: control circuitry configured to: maintain a membership in a first network of interconnected switches, wherein the first network of interconnected switches is identified by a fabric identifier; and determine that a virtual switch representing a second network of interconnected switches is a next-hop switch reachable via a local port; border circuitry configured to: determine that an egress switch identifier in a first encapsulation header of a first packet is assigned to the virtual switch, wherein the first packet is forwardable in the first network of interconnected switches based on the first encapsulation header; and update the first packet by replacing an ingress switch identifier in the first encapsulation header with a first virtual switch identifier representing the first network of interconnected switches, wherein the egress switch identifier is a second virtual switch identifier assigned to the virtual switch, and wherein the first and second virtual switch identifiers are distinct from a switch identifier identifying a physical switch in a network of interconnected switches; and forwarding circuitry configured to determine the local port as an egress port for the updated first packet based on the egress switch identifier in the first encapsulation header.
  • 2. The switch of claim 1, wherein the second virtual switch identifier assigned to the virtual switch is a switch identifier in a routed network.
  • 3. The switch of claim 2, wherein routing information of a second switch in the first network of interconnected switches indicates that the virtual switch is reachable via the switch.
  • 4. The switch of claim 2, wherein the border circuitry is further configured to: determine that an egress switch identifier in a second encapsulation header of a second packet is the first virtual switch identifier; and in response to the determination, replace the egress switch identifier in the second encapsulation header of the second packet with a switch identifier which identifies a member switch in the first network of interconnected switches.
  • 5. The switch of claim 4, wherein an ingress switch identifier in the second encapsulation header of the second packet is the second virtual switch identifier.
  • 6. The switch of claim 2, wherein the forwarding circuitry is further configured to determine that an egress switch identifier in a third encapsulation header of a third packet is a switch identifier of the switch, wherein an ingress switch identifier in the third encapsulation header of the third packet is the second virtual switch identifier; and wherein the switch further comprises learning circuitry configured to: learn a media access control (MAC) address from an inner packet of the third packet; and store the learned MAC address in association with the second virtual switch identifier in a storage device.
  • 7. The switch of claim 1, wherein the forwarding circuitry is further configured to determine the local port as an egress port by looking up the next-hop switch for the updated first packet based on the egress switch identifier in the first encapsulation header.
  • 8. The switch of claim 1, wherein the first encapsulation header is one or more of: a Transparent Interconnection of Lots of Links (TRILL) header, wherein the ingress and egress switch identifiers of the first encapsulation header are TRILL routing bridge (RBridge) identifiers; and an Internet Protocol (IP) header, wherein the ingress and egress switch identifiers of the first encapsulation header are IP addresses.
  • 9. A method, comprising: maintaining a membership of a switch in a first network of interconnected switches, wherein the first network of interconnected switches is identified by a fabric identifier; determining that a virtual switch representing a second network of interconnected switches is a next-hop switch reachable via a local port of the switch; determining that an egress switch identifier in a first encapsulation header of a first packet is assigned to the virtual switch, wherein the first packet is forwardable in the first network of interconnected switches based on the first encapsulation header; updating the first packet by replacing an ingress switch identifier in the first encapsulation header with a first virtual switch identifier representing the first network of interconnected switches, wherein the egress switch identifier is a second virtual switch identifier assigned to the virtual switch, and wherein the first and second virtual switch identifiers are distinct from a switch identifier identifying a physical switch in a network of interconnected switches; and determining the local port as an egress port for the updated first packet based on the egress switch identifier in the first encapsulation header.
  • 10. The method of claim 9, wherein the second virtual switch identifier assigned to the virtual switch is a switch identifier in a routed network.
  • 11. The method of claim 10, wherein routing information of a second switch in the first network of interconnected switches indicates that the virtual switch is reachable via the switch.
  • 12. The method of claim 10, further comprising: determining that an egress switch identifier in a second encapsulation header of a second packet is the first virtual switch identifier; and in response to the determination, replacing the egress switch identifier in the second encapsulation header of the second packet with a switch identifier which identifies a member switch in the first network of interconnected switches.
  • 13. The method of claim 12, wherein an ingress switch identifier in the second encapsulation header of the second packet is the second virtual switch identifier.
  • 14. The method of claim 10, further comprising: determining that an egress switch identifier in a third encapsulation header of a third packet is a switch identifier of the switch, wherein an ingress switch identifier in the third encapsulation header of the third packet is the second virtual switch identifier; learning a media access control (MAC) address from an inner packet of the third packet; and storing the learned MAC address in association with the second virtual switch identifier in a storage device.
  • 15. The method of claim 9, wherein determining the local port as an egress port includes looking up the next-hop switch for the updated first packet based on the egress switch identifier in the first encapsulation header.
  • 16. The method of claim 9, wherein the first encapsulation header is one or more of: a Transparent Interconnection of Lots of Links (TRILL) header, wherein the ingress and egress switch identifiers of the first encapsulation header are TRILL routing bridge (RBridge) identifiers; and an Internet Protocol (IP) header, wherein the ingress and egress switch identifiers of the first encapsulation header are IP addresses.
  • 17. A computer system, comprising: a processor; a storage device coupled to the processor and storing instructions that when executed by the processor cause the processor to perform a method, the method comprising: maintaining a membership of a switch in a first network of interconnected switches, wherein the first network of interconnected switches is identified by a fabric identifier; determining that a virtual switch representing a second network of interconnected switches is a next-hop switch reachable via a local port of the switch; determining that an egress switch identifier in a first encapsulation header of a first packet is assigned to the virtual switch, wherein the first packet is forwardable in the first network of interconnected switches based on the first encapsulation header; and updating the first packet by replacing an ingress switch identifier in the first encapsulation header with a first virtual switch identifier representing the first network of interconnected switches, wherein the egress switch identifier is a second virtual switch identifier assigned to the virtual switch, and wherein the first and second virtual switch identifiers are distinct from a switch identifier identifying a physical switch in a network of interconnected switches; and determining the local port as an egress port for the updated first packet based on the egress switch identifier in the first encapsulation header.
  • 18. The computer system of claim 17, wherein the second virtual switch identifier assigned to the virtual switch is a switch identifier in a routed network.
  • 19. The computer system of claim 18, wherein routing information of a second switch in the first network of interconnected switches indicates that the virtual switch is reachable via the switch.
  • 20. The computer system of claim 18, wherein the method further comprises: determining that an egress switch identifier in a second encapsulation header of a second packet is the first virtual switch identifier; and in response to the determination, replacing the egress switch identifier in the second encapsulation header of the second packet with a switch identifier which identifies a member switch in the first network of interconnected switches.
  • 21. The computer system of claim 20, wherein an ingress switch identifier in the second encapsulation header of the second packet is the second virtual switch identifier.
  • 22. The computer system of claim 18, wherein the method further comprises: determining that an egress switch identifier in a third encapsulation header of a third packet is a switch identifier of the switch, wherein an ingress switch identifier in the third encapsulation header of the third packet is the second virtual switch identifier; learning a media access control (MAC) address from an inner packet of the third packet; and storing the learned MAC address in association with the second virtual switch identifier in a storage device.
  • 23. The computer system of claim 17, wherein the method further comprises determining the local port as an egress port by looking up the next-hop switch for the updated first packet based on the egress switch identifier in the first encapsulation header.
  • 24. The computer system of claim 17, wherein the first encapsulation header is one or more of: a Transparent Interconnection of Lots of Links (TRILL) header, wherein the ingress and egress switch identifiers of the first encapsulation header are TRILL routing bridge (RBridge) identifiers; and an Internet Protocol (IP) header, wherein the ingress and egress switch identifiers of the first encapsulation header are IP addresses.
  • 25. A non-transitory computer-readable storage medium storing instructions which when executed by a computer cause the computer to perform a method, the method comprising: maintaining a membership of a switch in a first network of interconnected switches, wherein the first network of interconnected switches is identified by a fabric identifier; determining that a virtual switch representing a second network of interconnected switches is a next-hop switch reachable via a local port of the switch; determining that an egress switch identifier in a first encapsulation header of a first packet is assigned to the virtual switch, wherein the first packet is forwardable in the first network of interconnected switches based on the first encapsulation header; updating the first packet by replacing an ingress switch identifier in the first encapsulation header with a first virtual switch identifier representing the first network of interconnected switches, wherein the egress switch identifier is a second virtual switch identifier assigned to the virtual switch, and wherein the first and second virtual switch identifiers are distinct from a switch identifier identifying a physical switch in a network of interconnected switches; and determining the local port as an egress port for the updated first packet based on the egress switch identifier in the first encapsulation header.
RELATED APPLICATION

This application claims the benefit of U.S. Provisional Application No. 61/874,919, titled “Transparent Inter Ethernet Fabric Switch Routing,” by inventors Venkata R. K. Addanki, Mythilikanth Raman, and Shunjia Yu, filed 6 Sep. 2013, the disclosure of which is incorporated by reference herein. The present disclosure is related to U.S. patent application Ser. No. 13/087,239, titled “Virtual Cluster Switching,” by inventors Suresh Vobbilisetty and Dilip Chatwani, filed 14 Apr. 2011; and U.S. patent application Ser. No. 12/725,249, titled “Redundant Host Connection in a Routed Network,” by inventors Somesh Gupta, Anoop Ghanwani, Phanidhar Koganti, and Shunjia Yu, filed 16 Mar. 2010, the disclosures of which are incorporated by reference herein.

Related Publications (1)
Number Date Country
20150071122 A1 Mar 2015 US
Provisional Applications (1)
Number Date Country
61874919 Sep 2013 US