The present disclosure is related to U.S. patent application Ser. No. 12/725,249, entitled “REDUNDANT HOST CONNECTION IN A ROUTED NETWORK,” by inventors Somesh Gupta, Anoop Ghanwani, Phanidhar Koganti, and Shunjia Yu, filed 16 Mar. 2010; and
U.S. patent application Ser. No. 13/087,239, entitled “VIRTUAL CLUSTER SWITCHING,” by inventors Suresh Vobbilisetty and Dilip Chatwani, filed 14 Apr. 2011;
the disclosures of which are incorporated by reference herein.
1. Field
The present disclosure relates to network design. More specifically, the present disclosure relates to a method for constructing a scalable switching system that facilitates automatic configuration.
2. Related Art
The relentless growth of the Internet has brought with it an insatiable demand for bandwidth. As a result, equipment vendors race to build larger, faster, and more versatile switches to move traffic. However, the size of a switch cannot grow infinitely. It is limited by physical space, power consumption, and design complexity, to name a few factors. More importantly, because an overly large system often does not provide economy of scale due to its complexity, simply increasing the size and throughput of a switch may prove economically unviable due to the increased per-port cost.
One way to increase the throughput of a switch system is to use switch stacking. In switch stacking, multiple smaller-scale, identical switches are interconnected in a special pattern to form a larger logical switch. However, switch stacking requires careful configuration of the ports and inter-switch links. The amount of required manual configuration becomes prohibitively complex and tedious when the stack reaches a certain size, which precludes switch stacking from being a practical option in building a large-scale switching system. Furthermore, a system based on stacked switches often has topology limitations which restrict the scalability of the system due to fabric bandwidth considerations.
One embodiment of the present invention provides a switch system. The switch includes a port configured to couple to a second switch and a control mechanism. During operation, the control mechanism receives a set of configuration information from the second switch. Based on the received configuration information, the control mechanism invites the second switch to join a virtual cluster switch.
In a variation on this embodiment, the virtual cluster switch comprises one or more physical switches which are allowed to be coupled in an arbitrary topology. In addition, the virtual cluster switch appears externally as a single switch.
In a variation on this embodiment, the received configuration information comprises an indication of whether the second switch is part of a virtual cluster switch.
In a further variation, the received configuration information further comprises an identifier for the virtual cluster switch.
In a variation on this embodiment, the received configuration information comprises an identifier for the second switch.
In a variation on this embodiment, the control mechanism maintains a global configuration database which stores configuration information for a number of member switches in the virtual cluster switch.
In a further variation, the received configuration information comprises a unique identifier associated with an entry in the global configuration database which corresponds to the second switch.
In a further variation, the control mechanism reserves a slot in the global configuration database based on the unique identifier.
One embodiment of the present invention provides a virtual cluster switch. The virtual cluster switch includes a plurality of switches which are allowed to be coupled in an arbitrary topology. The virtual cluster switch also includes a control mechanism residing on a respective switch and configured to allow a second switch to join the virtual cluster switch without requiring manual configuration. Furthermore, the virtual cluster switch appears externally as a single switch.
In a variation on this embodiment, the control mechanism exchanges configuration information with the second switch.
In a variation on this embodiment, a respective switch in the switching system receives an automatically configured identifier associated with a logical switch formed on the respective switch.
In a further variation, the logical switch is a logical Fibre Channel (FC) switch. In addition, the identifier is an FC switch domain ID.
In a further variation, packets are transported between switches based on the Transparent Interconnection of Lots of Links (TRILL) protocol. The respective switch is assigned a TRILL RBridge identifier that corresponds to the FC switch domain ID.
In a variation on this embodiment, a respective switch maintains a copy of configuration information of all the switches in the switching system.
In a variation on this embodiment, the switching system includes a name service which maintains records of MAC addresses learned by a respective switch.
The following description is presented to enable any person skilled in the art to make and use the invention, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims.
In embodiments of the present invention, the problem of building a versatile, cost-effective, and scalable switching system is solved by running a control plane with automatic configuration capabilities (such as the Fibre Channel control plane) over a conventional transport protocol, thereby allowing a number of switches to form a switch cluster that can be represented as a single, scalable logical switch without requiring burdensome manual configuration. As a result, one can form a large-scale logical switch (referred to as a “virtual cluster switch” or VCS herein) using a number of smaller physical switches. The automatic configuration capability provided by the control plane running on each physical switch allows any number of switches to be connected in an arbitrary topology without requiring tedious manual configuration of the ports and links. This feature makes it possible to use many smaller, inexpensive switches to construct a large cluster switch, which can be viewed as a single logical switch externally.
It should be noted that a virtual cluster switch is not the same as conventional switch stacking. In switch stacking, multiple switches are interconnected at a common location (often within the same rack), based on a particular topology, and manually configured in a particular way. These stacked switches typically share a common address, e.g., IP address, so they can be addressed as a single switch externally. Furthermore, switch stacking requires a significant amount of manual configuration of the ports and inter-switch links. The need for manual configuration prohibits switch stacking from being a viable option in building a large-scale switching system. The topology restriction imposed by switch stacking also limits the number of switches that can be stacked. This is because it is very difficult, if not impossible, to design a stack topology that allows the overall switch bandwidth to scale adequately with the number of switch units.
In contrast, a VCS can include an arbitrary number of centralized or distributed switches with individual addresses, can be based on an arbitrary topology, and does not require extensive manual configuration. The switches can reside in the same location, or be distributed over different locations. These features overcome the inherent limitations of switch stacking and make it possible to build a large “switch farm” which can be treated as a single, logical switch. Due to the automatic configuration capabilities of the VCS, an individual physical switch can dynamically join or leave the VCS without disrupting services to the rest of the network.
Furthermore, the automatic and dynamic configurability of VCS allows a network operator to build its switching system in a distributed and “pay-as-you-grow” fashion without sacrificing scalability. The VCS's ability to respond to changing network conditions makes it an ideal solution in a virtual computing environment, where network loads often change with time.
Although this disclosure is presented using examples based on the Transparent Interconnection of Lots of Links (TRILL) as the transport protocol and the Fibre Channel (FC) fabric protocol as the control-plane protocol, embodiments of the present invention are not limited to TRILL networks, or networks defined in a particular Open System Interconnection Reference Model (OSI reference model) layer. For example, a VCS can also be implemented with switches running multi-protocol label switching (MPLS) protocols for the transport. In addition, the terms “RBridge” and “switch” are used interchangeably in this disclosure. The use of the term “RBridge” does not limit embodiments of the present invention to TRILL networks only. The TRILL protocol is described in IETF draft “RBridges: Base Protocol Specification,” available at http://tools.ietf.org/html/draft-ietf-trill-rbridge-protocol, which is incorporated by reference herein.
The terms “virtual cluster switch,” “virtual cluster switching,” and “VCS” refer to a group of interconnected physical switches operating as a single logical switch. The control plane for these physical switches provides the ability to automatically configure a given physical switch, so that when it joins the VCS, little or no manual configuration is required.
The term “RBridge” refers to routing bridges, which are bridges implementing the TRILL protocol as described in IETF draft “RBridges: Base Protocol Specification.” Embodiments of the present invention are not limited to the application among RBridges. Other types of switches, routers, and forwarders can also be used.
The terms “frame” or “packet” refer to a group of bits that can be transported together across a network. “Frame” should not be interpreted as limiting embodiments of the present invention to layer-2 networks. “Packet” should not be interpreted as limiting embodiments of the present invention to layer-3 networks. “Frame” or “packet” can be replaced by other terminologies referring to a group of bits, such as “cell” or “datagram.”
A physical switch may dedicate a number of ports for external use (i.e., to be coupled to end hosts or other switches external to the VCS) and other ports for inter-switch connection. Viewed externally, VCS 100 appears to be a single switch, and any port on any of the physical switches is considered a port on the VCS. For example, port groups 110 and 112 are both VCS external ports and can be treated equally as if they were ports on a common physical switch, although switches 105 and 107 may reside in two different locations.
The physical switches can reside at a common location, such as a data center or central office, or be distributed in different locations. Hence, it is possible to construct a large-scale centralized switching system using many smaller, inexpensive switches housed in one or more chassis at the same location. It is also possible to have the physical switches placed at different locations, thus creating a logical switch that can be accessed from multiple locations. The topology used to interconnect the physical switches can also be versatile. VCS 100 is based on a mesh topology. In further embodiments, a VCS can be based on a ring, fat tree, or other types of topologies.
In one embodiment, the protocol architecture of a VCS is based on elements from the standard IEEE 802.1Q Ethernet bridge, which is emulated over a transport based on the Fibre Channel Framing and Signaling-2 (FC-FS-2) standard. The resulting switch is capable of transparently switching frames from an ingress Ethernet port from one of the edge switches to an egress Ethernet port on a different edge switch through the VCS.
Because of its automatic configuration capability, a VCS can be dynamically expanded as the network demand increases. In addition, one can build a large-scale switch using many smaller physical switches without the burden of manual configuration. For example, it is possible to build a high-throughput fully non-blocking switch using a number of smaller switches. This ability to use small switches to build a large non-blocking switch significantly reduces the cost associated with switch complexity.
The forwarding of the Ethernet frame between ingress switch 202 and egress switch 204 is performed via inter-switch ports 208 and 210. The frame transported between the two inter-switch ports is encapsulated in an outer MAC header and a TRILL header, in accordance with the TRILL standard. The protocol stack associated with a given inter-switch port includes the following (from bottom up): MAC layer, TRILL layer, FC-FS-2 layer, FC E-Port layer, and FC link services (FC-LS) layer. The FC-LS layer is responsible for maintaining the connectivity information of a physical switch's neighbor, and populating an FC routing information base (RIB) 222. This operation is similar to what is done in an FC switch fabric. The FC-LS protocol is also responsible for handling joining and departure of a physical switch in VCS 200. The operation of the FC-LS layer is specified in the FC-LS standard, which is available at http://www.t11.org/ftp/t11/member/fc/ls/06-393v5.pdf, the disclosure of which is incorporated herein in its entirety.
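The encapsulation order described above can be illustrated with a minimal sketch; the field names and widths below are simplified assumptions for illustration, not the exact TRILL wire format, which is specified in the IETF draft cited above.

```python
from dataclasses import dataclass

@dataclass
class TrillHeader:
    # Simplified TRILL header for illustration only.
    ingress_nickname: int   # nickname of the ingress RBridge
    egress_nickname: int    # nickname of the egress RBridge
    hop_count: int = 63

@dataclass
class EncapsulatedFrame:
    # Outer MAC header plus TRILL header wrapped around the inner frame.
    outer_dst_mac: bytes    # next-hop RBridge's MAC address
    outer_src_mac: bytes    # transmitting RBridge's MAC address
    trill: TrillHeader
    inner_frame: bytes      # the original Ethernet frame, untouched

def encapsulate(ingress: int, egress: int, next_hop_mac: bytes,
                own_mac: bytes, frame: bytes) -> EncapsulatedFrame:
    """Wrap an ingress Ethernet frame for transport across the VCS."""
    return EncapsulatedFrame(next_hop_mac, own_mac,
                             TrillHeader(ingress, egress), frame)
```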
During operation, when FDB 214 returns the egress switch 204 corresponding to the destination MAC address of the ingress Ethernet frame, the destination egress switch's identifier is passed to a path selector 218. Path selector 218 performs a fabric shortest-path first (FSPF)-based route lookup in conjunction with RIB 222, and identifies the next-hop switch within VCS 200. In other words, the routing is performed by the FC portion of the protocol stack, similar to what is done in an FC switch fabric.
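This division of labor, with the Ethernet FDB resolving the egress switch and the FC routing logic resolving the next hop, can be sketched as follows. A Dijkstra-style shortest-path computation stands in for FSPF, and the RIB representation is an assumption made for illustration:

```python
import heapq

def fspf_next_hop(rib: dict, source: int, egress: int):
    """Return the first-hop switch domain on a shortest path from
    `source` to `egress`, or None if the egress is local or unreachable.
    `rib` maps a switch domain ID to {neighbor_domain_id: link_cost}."""
    dist = {source: 0}
    heap = [(0, source, None)]          # (cost, node, first hop toward node)
    while heap:
        cost, node, first_hop = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue                    # stale queue entry
        if node == egress:
            return first_hop
        for neighbor, link_cost in rib.get(node, {}).items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                # The first hop is inherited, except for source's neighbors.
                hop = neighbor if first_hop is None else first_hop
                heapq.heappush(heap, (new_cost, neighbor, hop))
    return None
```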
Also included in each physical switch are an address manager 216 and a fabric controller 220. Address manager 216 is responsible for configuring the address of a physical switch when the switch first joins the VCS. For example, when switch 202 first joins VCS 200, address manager 216 can negotiate a new FC switch domain ID, which is subsequently used to identify the switch within VCS 200. Fabric controller 220 is responsible for managing and configuring the logical FC switch fabric formed on the control plane of VCS 200.
One way to understand the protocol architecture of VCS is to view the VCS as an FC switch fabric with an Ethernet/TRILL transport. Each physical switch, from an external point of view, appears to be a TRILL RBridge. However, the switch's control plane implements the FC switch fabric software. In other words, embodiments of the present invention facilitate the construction of an “Ethernet switch fabric” running on FC control software. This unique combination provides the VCS with automatic configuration capability and allows it to provide the ubiquitous Ethernet services in a very scalable fashion.
For example, RBridge 412 is coupled with hosts 420 and 422 via 10GE ports. RBridge 414 is coupled to a host 426 via a 10GE port. These RBridges have TRILL-based inter-switch ports for connection with other TRILL RBridges in VCS 400. Similarly, RBridge 416 is coupled to host 428 and an external Ethernet switch 430, which is coupled to an external network that includes a host 424. In addition, network equipment can also be coupled directly to any of the physical switches in VCS 400. As illustrated here, TRILL RBridge 408 is coupled to a data storage 417, and TRILL RBridge 410 is coupled to a data storage 418.
Although the physical switches within VCS 400 are labeled as “TRILL RBridges,” they are different from the conventional TRILL RBridge in the sense that they are controlled by the FC switch fabric control plane. In other words, the assignment of switch addresses, link discovery and maintenance, topology convergence, routing, and forwarding can be handled by the corresponding FC protocols. Particularly, each TRILL RBridge's switch ID or nickname is mapped from the corresponding FC switch domain ID, which can be automatically assigned when a switch joins VCS 400 (which is logically similar to an FC switch fabric).
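Such a one-to-one mapping can be as simple as the following sketch; the offset is hypothetical, since the disclosure only requires that the mapping be unambiguous:

```python
NICKNAME_BASE = 0x1000  # hypothetical offset into the TRILL nickname space

def domain_to_nickname(domain_id: int) -> int:
    """Derive a TRILL RBridge nickname from an FC switch domain ID."""
    return NICKNAME_BASE + domain_id

def nickname_to_domain(nickname: int) -> int:
    """Invert the mapping when a TRILL-encapsulated frame arrives."""
    return nickname - NICKNAME_BASE
```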
Note that TRILL is only used as a transport between the switches within VCS 400. This is because TRILL can readily accommodate native Ethernet frames. Also, the TRILL standards provide a ready-to-use forwarding mechanism that can be used in any routed network with an arbitrary topology (although the actual routing in VCS is done by the FC switch fabric protocols). Embodiments of the present invention should not be limited to using only TRILL as the transport. Other protocols (such as multi-protocol label switching (MPLS) or Internet Protocol (IP)), either public or proprietary, can also be used for the transport.
In one embodiment, a VCS is created by instantiating a logical FC switch in the control plane of each switch. After the logical FC switch is created, a virtual generic port (denoted as G_Port) is created for each Ethernet port on the RBridge. A G_Port assumes the normal G_Port behavior from the FC switch perspective. However, in this case, since the physical links are based on Ethernet, the specific transition from a G_Port to either an FC F_Port or E_Port is determined by the underlying link and physical layer protocols. For example, if the physical Ethernet port is connected to an external device which lacks VCS capabilities, the corresponding G_Port will be turned into an F_Port. On the other hand, if the physical Ethernet port is connected to a switch with VCS capabilities and it is confirmed that the switch on the other side is part of a VCS, then the G_Port will be turned into an E_Port.
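The G_Port transition rule can be captured in a small decision function; the two flags below are assumed to be produced by the neighbor-discovery handshake described later:

```python
from enum import Enum

class PortMode(Enum):
    G_PORT = "generic"          # initial state of every virtual port
    F_PORT = "fabric-to-device"
    E_PORT = "inter-switch"

def resolve_g_port(neighbor_is_vcs_capable: bool,
                   neighbor_confirmed_in_vcs: bool) -> PortMode:
    """Decide what a G_Port becomes once the underlying link comes up."""
    if neighbor_is_vcs_capable and neighbor_confirmed_in_vcs:
        return PortMode.E_PORT  # link leads into the VCS fabric
    return PortMode.F_PORT      # link leads to an external device
```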
Similarly, RBridge 416 contains a virtual, logical FC switch 512. Corresponding to the physical Ethernet ports coupled to host 428 and external switch 430, logical FC switch 512 has a logical F_Port coupled to host 428, and a logical FL_Port coupled to switch 430. In addition, a logical N_Port 510 is created for host 428, and a logical NL_Port 508 is created for switch 430. Note that the logical FL_Port is created because that port is coupled to a switch (switch 430), instead of a regular host, and therefore logical FC switch 512 assumes an arbitrated loop topology leading to switch 430. Logical NL_Port 508 is created based on the same reasoning to represent a corresponding NL_Port on switch 430. On the VCS side, logical FC switch 512 has two logical E_Ports, which are to be coupled with other logical FC switches in the logical FC switch fabric in the VCS.
In the example illustrated in the figure, physical edge ports 522 and 524 are mapped to logical F_Ports 532 and 534, respectively. In addition, physical fabric ports 526 and 528 are mapped to logical E_Ports 536 and 538, respectively. Initially, when logical FC switch 521 is created (for example, during the boot-up sequence), logical FC switch 521 only has four G_Ports, which correspond to the four physical ports. These G_Ports are subsequently mapped to F_Ports or E_Ports, depending on the devices coupled to the physical ports.
Neighbor discovery is the first step in VCS formation between two VCS-capable switches. It is assumed that the verification of VCS capability can be carried out by a handshake process between two neighbor switches when the link is first brought up.
In general, a VCS presents itself as one unified switch composed of multiple member switches. Hence, the creation and configuration of VCS is of critical importance. In one embodiment, the VCS configuration is based on a distributed database, which is replicated and distributed over all switches. In other words, each VCS member switch maintains a copy of the VCS configuration database, and any change to the database is propagated to all the member switches.
In one embodiment, a VCS configuration database includes a global configuration table (GT) of the VCS and a list of switch description tables (STs), each of which describes a VCS member switch. In its simplest form, a member switch can have a VCS configuration database that includes a global table and one switch description table, e.g., [<GT><ST>]. A VCS with multiple switches will have a configuration database that has a single global table and multiple switch description tables, e.g., [<GT><ST0><ST1> . . . <STn−1>]. The number n corresponds to the number of member switches in the VCS. In one embodiment, the GT can include at least the following information: the VCS ID, the number of nodes in the VCS, a list of VLANs supported by the VCS, a list of all the switches (e.g., a list of FC switch domain IDs for all active switches) in the VCS, and the FC switch domain ID of the principal switch (as in a logical FC switch fabric). A switch description table can include at least the following information: the IN_VCS flag, an indication of whether the switch is a principal switch in the logical FC switch fabric, the FC switch domain ID for the switch, the FC world-wide name (WWN) for the corresponding logical FC switch, the mapped ID of the switch, and optionally the IP address of the switch.
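A minimal sketch of the [<GT><ST0> . . . <STn−1>] layout, using Python dataclasses as a stand-in for the actual record format, might look like this:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GlobalTable:
    # The <GT> portion: one per VCS.
    vcs_id: int
    num_nodes: int
    vlans: List[int]                # VLANs supported by the VCS
    active_domain_ids: List[int]    # FC switch domain IDs of active members
    principal_domain_id: int        # principal switch, as in an FC fabric

@dataclass
class SwitchDescriptionTable:
    # One <ST> per member switch.
    in_vcs: bool
    is_principal: bool
    domain_id: int                  # FC switch domain ID
    wwn: str                        # WWN of the corresponding logical FC switch
    mapped_id: int
    ip_address: Optional[str] = None

@dataclass
class VcsConfigDatabase:
    # [<GT><ST0><ST1> . . . <STn-1>], replicated on every member switch.
    gt: GlobalTable
    sts: List[SwitchDescriptionTable] = field(default_factory=list)
```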
In addition, each switch's global configuration database is associated with a transaction ID. The transaction ID specifies the latest transaction (e.g., an update or change) made to the global configuration database. The transaction IDs of the global configuration databases in two switches can be compared to determine which database has the most current information (i.e., the database with the more recent transaction ID is more up-to-date). In one embodiment, the transaction ID is the switch's serial number plus a sequential transaction number. This scheme can unambiguously resolve which switch has the latest configuration.
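For illustration, the transaction-ID comparison might be modeled as follows; comparing the sequential number first and breaking ties with the serial number is an assumption about the ordering rule, which the description above leaves open:

```python
from typing import NamedTuple

class TransactionId(NamedTuple):
    serial_number: str    # the switch's serial number
    sequence: int         # per-switch sequential transaction number

def more_current(a: TransactionId, b: TransactionId) -> TransactionId:
    """Pick the more recent of two transaction IDs."""
    if a.sequence != b.sequence:
        return a if a.sequence > b.sequence else b
    return a if a.serial_number > b.serial_number else b  # deterministic tie-break
```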
In one embodiment, each switch also has a VCS-mapped ID (denoted as “mappedID”), which is a switch index within the VCS. This mapped ID is unique and persistent within the VCS. That is, when a switch joins the VCS for the first time, the VCS assigns a mapped ID to the switch. This mapped ID persists with the switch, even if the switch leaves the VCS. When the switch joins the VCS again at a later time, the same mapped ID is used by the VCS to retrieve previous configuration information for the switch. This feature can reduce the amount of configuration overhead in VCS. Also, the persistent mapped ID allows the VCS to “recognize” a previously configured member switch when it re-joins the VCS, since a dynamically assigned FC fabric domain ID would change each time the member switch joins and is configured by the VCS.
The “IN_VCS” value in default switch configuration table 604 indicates whether the member switch is part of a VCS. A switch is considered to be “in a VCS” when it is assigned one of the FC switch domains by the FC switch fabric with two or more switch domains. If a switch is part of an FC switch fabric that has only one switch domain, i.e., its own switch domain, then the switch is considered to be “not in a VCS.” The “SWITCH_MAC” value indicates the MAC address of the switch. Also included in default switch configuration table 604 are interface details for the switch. These details can include a number of parameters for individual edge ports on the switch. Such parameters can include, for example, quality-of-service (QoS) related parameters, VLAN configuration information, and access-control configuration information.
When a switch is first connected to a VCS, the logical FC switch fabric formation process running on a neighboring switch which is part of the VCS allocates a new FC switch domain ID to the joining switch. In one embodiment, only the switches directly connected to the new switch participate in the VCS join operation.
Note that in the case where the global configuration database of a joining switch is current and in sync with the global configuration database of the VCS based on a comparison of the transaction IDs of the two databases (e.g., when a member switch is temporarily disconnected from the VCS and reconnected shortly afterward), a trivial merge is performed. That is, the joining switch can be connected to the VCS, and no change or update to the global VCS configuration database is required.
Sometimes, a network administrator might change a port on a VCS member switch from an edge port to a fabric port, i.e., use a port that was previously used to couple to edge devices to couple to another VCS member switch. In this case, in one embodiment, the prior configuration information of the edge port (e.g., QoS parameters, VLAN configuration, access-control information, etc.) is not deleted. Instead, the prior configuration information is stored as a “shadow” configuration. This “shadow” configuration can be restored as the default configuration for the port if the port is later changed back to an edge port. In addition, this shadow configuration can be part of the global VCS configuration database, and can be accessed and edited by an administrator from any VCS member switch using, for example, a command line interface (CLI).
The distributed global configuration database can allow a VCS member switch to be remotely managed from any other member switch. For example, a configuration command for a given member switch can be issued from a host connected to any member switch in the VCS. Such a configuration command might include information on VLAN configuration, QoS configuration, and/or access-control configuration. In one embodiment, the change to a switch's configuration is tentatively transmitted to the switch. After the switch confirms and validates the change, a commit-change command is transmitted to all the member switches in the VCS, so the global configuration database can be updated throughout the VCS. In a further embodiment, the change is tentatively transmitted to all the member switches in the VCS, and the commit-change command is only sent out after all the switches confirm and validate the tentative change.
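The tentative-transmit/commit-change flow is essentially a two-phase commit, which can be sketched as follows; the prepare() and commit() callables stand in for the real inter-switch messaging, whose format is not specified here:

```python
def propagate_change(members: list, change: dict) -> bool:
    """Two-phase distribution of a configuration change. Each member
    is assumed to expose prepare() and commit() methods standing in
    for the real inter-switch messaging."""
    if not all(member.prepare(change) for member in members):
        return False              # a member failed validation: change discarded
    for member in members:
        member.commit(change)     # commit-change propagated across the VCS
    return True
```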
When a switch joins the VCS via a link, both neighbors on each end of the link present to the other switch a VCS four-tuple of <Prior VCS_ID, SWITCH_MAC, mappedID, IN_VCS> from a prior incarnation, if any. Otherwise, the switch presents the default tuple to its counterpart. If the VCS_ID value was not set by a prior join operation, a VCS_ID value of −1 is used. In addition, if a switch's IN_VCS flag is set to 0, the switch sends out its interface configuration to the neighboring switch.
After the above PRE-INVITE operation, a driver switch for the join process is selected. By default, if one switch's IN_VCS value is 1 and the other switch's IN_VCS value is 0, the switch with IN_VCS=1 is selected as the driver switch. If both switches have their IN_VCS values set to 1, then nothing happens; that is, the PRE-INVITE operation does not lead to an INVITE operation. If both switches have their IN_VCS values set to 0, then one of the switches is elected to be the driver switch (for example, the switch with the lower FC switch domain ID value). The driver switch's IN_VCS value is then set to 1, and it drives the join process.
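The driver-selection rule can be captured in a short decision function; representing each switch's tuple as a dictionary is purely illustrative:

```python
from typing import Optional

def select_driver(sw_a: dict, sw_b: dict) -> Optional[dict]:
    """Apply the driver-selection rules to two switches, each
    represented as {'in_vcs': 0 or 1, 'domain_id': int}."""
    if sw_a["in_vcs"] == 1 and sw_b["in_vcs"] == 1:
        return None                       # both in a VCS: no INVITE follows
    if sw_a["in_vcs"] != sw_b["in_vcs"]:
        return sw_a if sw_a["in_vcs"] == 1 else sw_b
    # Both IN_VCS values are 0: elect, e.g., the lower FC switch domain ID.
    driver = sw_a if sw_a["domain_id"] < sw_b["domain_id"] else sw_b
    driver["in_vcs"] = 1                  # the driver now drives the join
    return driver
```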
After switch 702 is selected as the driver switch, switch 702 then attempts to reserve a slot (i.e., a switch description table) in the VCS configuration database corresponding to the mappedID value in switch 704's PRE-INVITE information. Next, switch 702 searches the VCS configuration database for switch 704's MAC address in any mappedID slot. If such a slot is found, switch 702 copies all information from the identified slot into the reserved slot. Otherwise, switch 702 copies the information received during the PRE-INVITE from switch 704 into the VCS configuration database. The updated VCS configuration database is then propagated to all the switches in the VCS as a prepare operation in the database (note that the update is not committed to the database yet).
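The slot reservation and seeding performed by the driver switch can be sketched as follows, with the global configuration database reduced to a plain dictionary keyed by mappedID (an illustrative stand-in, not the actual database format):

```python
def reserve_and_seed_slot(db: dict, mapped_id: int, mac: str,
                          pre_invite_info: dict) -> dict:
    """Reserve the slot for the joining switch's mappedID, then seed it
    from any prior slot carrying the same MAC address, falling back to
    the PRE-INVITE contents for a first-time join."""
    slot = db.setdefault(mapped_id, {})            # reserve the slot
    prior = next((s for mid, s in db.items()
                  if mid != mapped_id and s.get("switch_mac") == mac), None)
    slot.update(prior if prior is not None else pre_invite_info)
    slot["switch_mac"] = mac
    # The updated database is then propagated to all members as a
    # 'prepare' operation; nothing is committed yet.
    return slot
```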
Subsequently, the prepare operation may or may not result in configuration conflicts, which may be flagged as warnings or fatal errors. Such conflicts can include inconsistencies between the joining switch's local configuration or policy setting and the VCS configuration. For example, a conflict arises when the joining switch is manually configured to allow packets with a particular VLAN value to pass through, whereas the VCS does not allow this VLAN value to enter the switch fabric from this particular RBridge (for instance, when this VLAN value is reserved for other purposes). A conflict can also arise when the joining switch's access-control policy is inconsistent with the VCS's access-control policy. In one embodiment, the prepare operation is handled locally and/or remotely in concert with other VCS member switches. If there is an un-resolvable conflict, switch 702 sends out a PRE-INVITE-FAILED message to switch 704. Otherwise, switch 702 generates an INVITE message with the VCS's merged view of the switch (i.e., the updated VCS configuration database).
Upon receiving the INVITE message, switch 704 either accepts or rejects the INVITE. The INVITE can be rejected if the configuration in the INVITE is in conflict with what switch 704 can accept. If the INVITE is acceptable, switch 704 sends back an INVITE-ACCEPT message in response. The INVITE-ACCEPT message then triggers a final database commit throughout all member switches in the VCS. In other words, the updated VCS configuration database is committed, replicated, and distributed to all the switches in the VCS.
If more than one switch in a VCS has connectivity to the new joining switch, all these neighboring member switches may send a PRE-INVITE to the new joining switch. The joining switch can respond to only one PRE-INVITE (e.g., from a randomly selected neighboring member switch) to complete the join process. Various use cases of the join process are described below. In the following description, a “joining switch” refers to a switch attempting to join a VCS. A “neighboring VCS member switch” or “neighboring member switch” refers to a VCS member switch to which the joining switch is connected.
VCS Pre-Provisioned to Accept a Switch.
A VCS can be pre-configured (e.g., in the global configuration database) with the MAC address of a joining switch and, optionally, a pre-allocated mapped ID for the joining switch. The joining switch may be allowed to carry any value in the VCS_ID field of its existing configuration. The neighboring VCS member switch can assign an FC switch domain ID and the proper VCS ID to the joining switch in the INVITE message. In one embodiment, the joining switch may be pre-provisioned to join an existing VCS (e.g., with the parameters in the default switch configuration table, such as mappedID, VCS_ID, and IN_VCS, populated with values corresponding to the VCS). If the pre-provisioned parameters do not guarantee a slot with the same mappedID in the global configuration database when the switch joins the VCS, the switch can revert to the default joining procedure described below.
Default Switch Joins a VCS.
A default switch is one that has no record of any previous joining of a VCS. A switch can become a default switch if it is forced into a factory default state. A joining default switch can present its initial configuration information (for example, its interface configuration details) to a neighboring VCS member switch. In one embodiment, a slot in the VCS configuration database is selected based on a monotonically incrementing number, which is used as the mapped ID for the joining switch. The FC switch domain ID allocated to the joining switch and the joining switch's MAC address are then recorded in this slot. The neighboring VCS member switch then initiates a prepare transaction, which propagates to all VCS member switches and requires an explicit validation of the joining switch's configuration information from each VCS member switch. If the prepare transaction fails, a PRE-INVITE-FAILED message is sent to the joining switch and the joining process is aborted.
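The mapped-ID selection for a default switch might look like the following sketch; using max-plus-one as the monotonically incrementing index is an assumption, and any monotonic counter would do:

```python
def allocate_mapped_id(db: dict) -> int:
    """Pick the mapped ID for a default (never-joined) switch.
    `db` maps mappedID -> slot, as in the other sketches here."""
    return max(db, default=-1) + 1    # next value of a monotonic index
```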
The neighboring VCS member switch then tentatively updates the reserved slot in the global configuration database with the allocated FC switch domain ID and the joining switch's MAC address (operation 726). Next, the neighboring VCS member switch transmits the joining switch's tentative configuration to all member switches in the VCS (operation 728), and determines whether the joining switch's configuration information is confirmed and validated by all VCS member switches (operation 730). If the joining switch's configuration is confirmed, the neighboring member switch then commits the changes to the global configuration database and completes the join process (operation 732). Otherwise, the join process is aborted and the tentative changes to the global configuration database are discarded (operation 734).
Switch Re-Joins a Previously Joined VCS.
If for some reason a switch is joining a VCS to which the switch previously belonged (for example, due to a link failure), the FC switch domain ID that is re-allocated to the joining switch will most likely be the same. When such a switch joins the VCS, the neighboring VCS member switch first checks whether the joining switch's VCS_ID is the same as the existing VCS_ID on the member switch. If the two VCS_ID values are the same, the neighboring member switch attempts to find a slot in the global configuration database with the same mappedID value which was received from the joining switch during the tuple-exchange process. If such a slot in the global database is available, the slot is reserved for the joining switch. In addition, the global configuration database is searched for a match to the joining switch's MAC address. If a match is found in another slot, the configuration information from that slot is copied to the reserved slot. Subsequently, the join process continues as described above.
Subsequently, the neighboring member switch determines whether the global configuration database contains a slot with the same MAC address as the joining switch (operation 748). If there is such a slot, which means that the global configuration database has previously stored the same joining switch's configuration information, that information is copied from the identified slot to the reserved slot (operation 750). Otherwise, the neighboring member switch proceeds to complete the join process.
Switch Joins Another VCS.
This use case occurs when a switch is disconnected from one VCS and then connected to a different VCS without being reset to the default state. This scenario can also occur when a switch is connected to a VCS while it is participating in another VCS. In such cases, there will be a VCS_ID mismatch in the join process. In addition, the IN_VCS field in the joining switch's configuration table might or might not be set. If the IN_VCS field is not set, which means that the joining switch is not currently participating in a VCS, the join process can assign the switch a new VCS_ID corresponding to the VCS the switch is joining. In one embodiment, if the IN_VCS field is set in the joining switch's configuration, which means that the joining switch is currently participating in a different VCS, the join process is disallowed. Optionally, the joining switch can complete the joining process after being set to the default state.
Initial Joining of Two Switches Which Are Both Not in a VCS.
When two switches are connected together and neither of them is in a VCS, an election process can be used to let one of them be the driving switch in the VCS formation process. In one embodiment, the switch with the lower FC switch domain ID would have its IN_VCS field set to “1” and drive the join process.
Joining of Two VCSs.
In one embodiment, two VCSs are allowed to merge together. Similar to the FC switch fabric formation process, the logical FC switches in both VCSs would select a new principal FC switch. This newly selected principal FC switch then re-assigns FC switch domain IDs to all the member switches. After the FC switch domain IDs are assigned, a “fabric up” message which is broadcast to all the member switches starts the VCS join process.
During the join process, the principal FC switch's IN_VCS field is set to “1,” whereas all other member switches' IN_VCS fields are set to “0.” Subsequently, each member switch can join the VCS (which initially contains only the principal FC switch) using the “switch joins another VCS” procedure described above.
Removal of a Switch from VCS.
When a switch is removed from a VCS, its neighboring member switch typically receives a “domain-unreachable” notification at its logical FC switch. Upon receiving this notification, the neighboring member switch disables the removed switch's entry in the global VCS configuration database and propagates this change to all other member switches. Optionally, the neighboring member switch does not clear the slot previously used by the removed switch in the global configuration database. This way, if the departure of the switch is only temporary, the same slot in the configuration database can still be used when the switch re-joins the VCS.
If the VCS is temporarily disjoint due to a link failure, the logical FC infrastructure in the member switches can detect the disconnection of the switch(es) and issue a number of “domain-unreachable” notifications. When the disjoint switch is reconnected to the VCS, a comparison between the switch's configuration information and the corresponding slot information in the global VCS configuration database allows the switch to be added to the VCS using the same slot (i.e., the slot with the same mappedID) in the global configuration database.
General Operation.
If the system determines that it is already part of a VCS (i.e., its IN_VCS=1) (operation 764), the system then further determines whether there is an existing slot in the global configuration database with the same mappedID as the joining switch (operation 774). If such a slot exists, the system then sends the INVITE to the joining switch (operation 775) and determines whether there is any un-resolved conflict between the configuration information stored in this slot and the information provided by the joining switch (operation 780). If so, the system revokes the INVITE (operation 782). Otherwise, the system updates the global configuration database with the joining switch's configuration information and propagates the update to all other member switches (operation 784).
If there is no slot in the global configuration database with the same mappedID as the joining switch (operation 774), the system allocates an interim slot in the global configuration database (operation 776), and sends an INVITE to the joining switch (operation 778). After receiving an INVITE acceptance from the joining switch (operation 779), the system then updates the global configuration database (operation 784) and completes the join process.
In one embodiment, each VCS switch unit performs source MAC address learning, similar to what an Ethernet bridge does. Each {MAC address, VLAN} tuple learned on a physical port on a VCS switch unit is registered into the local Fibre Channel Name Server (FC-NS) via a logical Nx_Port interface corresponding to that physical port. This registration binds the learned address to the specific interface identified by the Nx_Port. Each FC-NS instance on each VCS switch unit coordinates and distributes all locally learned {MAC address, VLAN} tuples with every other FC-NS instance in the fabric. This feature allows the dissemination of locally learned {MAC address, VLAN} information to every switch in the VCS. In one embodiment, the learned MAC addresses are aged locally by individual switches.
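A minimal stand-in for the per-switch FC-NS behavior described above might look like this; the class and method names are hypothetical, and the distribution "wire format" is reduced to a dictionary:

```python
class FcNameServer:
    """Per-switch FC-NS stand-in: binds each learned {MAC, VLAN}
    tuple to the logical Nx_Port on which the address was seen."""

    def __init__(self) -> None:
        self.records: dict = {}                  # (mac, vlan) -> nx_port

    def register(self, mac: str, vlan: int, nx_port: str) -> dict:
        """Register a locally learned tuple and return the record that
        would be distributed to every other FC-NS instance."""
        self.records[(mac, vlan)] = nx_port
        return {"mac": mac, "vlan": vlan, "nx_port": nx_port}

    def merge_remote(self, record: dict) -> None:
        """Install a tuple learned and announced by another member."""
        self.records[(record["mac"], record["vlan"])] = record["nx_port"]
```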
If the FC-NS returns a valid result, the switch forwards the frame to the identified N_Port or NL_Port (operation 808). Otherwise, the switch floods the frame on the TRILL multicast tree as well as on all the N_Ports and NL_Ports that participate in that VLAN (operation 810). This flood/broadcast operation is similar to the broadcast process in a conventional TRILL RBridge, wherein all the physical switches in the VCS will receive and process this frame, and learn the source address corresponding to the ingress RBridge. In addition, each receiving switch floods the frame to its local ports that participate in the frame's VLAN (operation 812). Note that the above operations are based on the presumption that there is a one-to-one mapping between a switch's TRILL identifier (or nickname) and its FC switch domain ID. There is also a one-to-one mapping between a physical Ethernet port on a switch and the corresponding logical FC port.
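The lookup-then-flood decision can be sketched as follows; the FC-NS records are the (MAC, VLAN)-to-Nx_Port dictionary from the sketch above, and flooding on the TRILL multicast tree is reduced to a returned flag:

```python
def forward_or_flood(fcns_records: dict, dst_mac: str, vlan: int,
                     local_vlan_ports: dict):
    """Return (output_ports, flooded) for a frame. `fcns_records` maps
    (mac, vlan) -> Nx_Port; `local_vlan_ports` maps a VLAN to the local
    N_Ports and NL_Ports that participate in it."""
    port = fcns_records.get((dst_mac, vlan))
    if port is not None:
        return [port], False          # valid FC-NS result: unicast forward
    # Miss: flood on the TRILL multicast tree (not modeled here) and on
    # every local port participating in the frame's VLAN.
    return list(local_vlan_ports.get(vlan, [])), True
```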
Upon receiving frame 935, switch 938 determines that it is the destination RBridge based on frame 935's TRILL header. Correspondingly, switch 938 strips frame 935 of its outer Ethernet header and TRILL header, and inspects the destination MAC address of its inner Ethernet header. Switch 938 then performs a MAC address lookup and determines the correct output port leading to host 940. Subsequently, the original Ethernet frame 933 is transmitted to host 940.
As described above, the logical FC switches within the physical VCS member switches may send control frames to one another (for example, to update the VCS global configuration database or to notify other switches of the learned MAC addresses). In one embodiment, such control frames can be FC control frames encapsulated in a TRILL header and an outer Ethernet header. For example, if the logical FC switch in switch 944 is in communication with the logical FC switch in switch 938, switch 944 can send a TRILL-encapsulated FC control frame 942 toward switch 938 via switch 946. Switch 946 can forward frame 942 just like a regular data frame, since switch 946 is not concerned with the payload in frame 942.
During operation, packet processor 1002 extracts the source and destination MAC addresses of incoming frames, and attaches proper Ethernet or TRILL headers to outgoing frames. Virtual FC switch management module 1004 maintains the state of logical FC switch 1005, which is used to join other VCS switches using the FC switch fabric protocols. Virtual FC switch management module 1004 also performs the switch join and merge functions described above. VCS configuration database 1006 maintains the configuration state of every switch within the VCS. TRILL header generation module 1008 is responsible for generating proper TRILL headers for frames that are to be transmitted to other VCS member switches.
The methods and processes described herein can be embodied as code and/or data, which can be stored in a computer-readable non-transitory storage medium. When a computer system reads and executes the code and/or data stored on the computer-readable non-transitory storage medium, the computer system performs the methods and processes embodied as data structures and code and stored within the medium.
The methods and processes described herein can be executed by and/or included in hardware modules or apparatus. These modules or apparatus may include, but are not limited to, an application-specific integrated circuit (ASIC) chip, a field-programmable gate array (FPGA), a dedicated or shared processor that executes a particular software module or a piece of code at a particular time, and/or other programmable-logic devices now known or later developed. When the hardware modules or apparatus are activated, they perform the methods and processes included within them.
The foregoing descriptions of embodiments of the present invention have been presented only for purposes of illustration and description. They are not intended to be exhaustive or to limit this disclosure. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. The scope of the present invention is defined by the appended claims.
This application claims the benefit of U.S. Provisional Application No. 61/345,953, entitled “Fabric Formation for Virtual Cluster Switching,” by inventors Shiv Haris and Phanidhar Koganti, filed 18 May 2010, and U.S. Provisional Application No. 61/380,807, entitled “Fabric Formation for Virtual Cluster Switching,” by inventors Shiv Haris and Phanidhar Koganti, filed 8 Sep. 2010, the disclosures of which are incorporated by reference herein.
Number | Name | Date | Kind |
---|---|---|---|
5390173 | Spinney | Feb 1995 | A |
5802278 | Isfeld | Sep 1998 | A |
6041042 | Bussiere | Mar 2000 | A |
6085238 | Yuasa et al. | Jul 2000 | A |
6104696 | Kadambi | Aug 2000 | A |
6185241 | Sun | Feb 2001 | B1 |
6542266 | Phillips | Apr 2003 | B1 |
6873602 | Ambe | Mar 2005 | B1 |
6975581 | Medina | Dec 2005 | B1 |
7016352 | Chow | Mar 2006 | B1 |
7173934 | Lapuh | Feb 2007 | B2 |
7206288 | Cometto | Apr 2007 | B2 |
7310664 | Merchant | Dec 2007 | B1 |
7313637 | Tanaka | Dec 2007 | B2 |
7330897 | Baldwin | Feb 2008 | B2 |
7380025 | Riggins | May 2008 | B1 |
7430164 | Bare | Sep 2008 | B2 |
7453888 | Zabihi | Nov 2008 | B2 |
7480258 | Shuen | Jan 2009 | B1 |
7558195 | Kuo | Jul 2009 | B1 |
7558273 | Grosser, Jr. | Jul 2009 | B1 |
7571447 | Ally | Aug 2009 | B2 |
7688960 | Aubuchon | Mar 2010 | B1 |
7690040 | Frattura | Mar 2010 | B2 |
7729296 | Choudhary | Jun 2010 | B1 |
7787480 | Mehta | Aug 2010 | B1 |
7792920 | Istvan | Sep 2010 | B2 |
7808992 | Homchaudhuri | Oct 2010 | B2 |
7836332 | Hara | Nov 2010 | B2 |
7843907 | Abou-Emara | Nov 2010 | B1 |
7860097 | Lovett | Dec 2010 | B1 |
7898959 | Arad | Mar 2011 | B1 |
7924837 | Shabtay | Apr 2011 | B1 |
7949638 | Goodson | May 2011 | B1 |
7957386 | Aggarwal | Jun 2011 | B1 |
8027354 | Portolani | Sep 2011 | B1 |
8054832 | Shukla | Nov 2011 | B1 |
8078704 | Lee | Dec 2011 | B2 |
8116307 | Thesayi | Feb 2012 | B1 |
8125928 | Mehta | Feb 2012 | B2 |
8134922 | Elangovan | Mar 2012 | B2 |
8170038 | Belanger | May 2012 | B2 |
8194674 | Pagel | Jun 2012 | B1 |
8195774 | Lambeth | Jun 2012 | B2 |
8213313 | Doiron | Jul 2012 | B1 |
8213336 | Smith | Jul 2012 | B2 |
8230069 | Korupolu | Jul 2012 | B2 |
8239960 | Frattura | Aug 2012 | B2 |
8249069 | Raman | Aug 2012 | B2 |
8270401 | Barnes | Sep 2012 | B1 |
8295291 | Ramanathan | Oct 2012 | B1 |
8301686 | Appajodu | Oct 2012 | B1 |
8392496 | Linden | Mar 2013 | B2 |
8462774 | Page | Jun 2013 | B2 |
8520595 | Yadav | Aug 2013 | B2 |
8599850 | Jha | Dec 2013 | B2 |
20020021701 | Lavian | Feb 2002 | A1 |
20020091795 | Yip | Jul 2002 | A1 |
20030041085 | Sato | Feb 2003 | A1 |
20030123393 | Feuerstraeter | Jul 2003 | A1 |
20030174706 | Shankar | Sep 2003 | A1 |
20030189905 | Lee | Oct 2003 | A1 |
20040010600 | Baldwin | Jan 2004 | A1 |
20040117508 | Shimizu | Jun 2004 | A1 |
20040120326 | Yoon | Jun 2004 | A1 |
20040165595 | Holmgren | Aug 2004 | A1 |
20040213232 | Regan | Oct 2004 | A1 |
20050007951 | Lapuh | Jan 2005 | A1 |
20050044199 | Shiga | Feb 2005 | A1 |
20050094568 | Judd | May 2005 | A1 |
20050094630 | Valdevit | May 2005 | A1 |
20050169188 | Cometto | Aug 2005 | A1 |
20050213561 | Yao | Sep 2005 | A1 |
20050265356 | Kawarai | Dec 2005 | A1 |
20050278565 | Frattura | Dec 2005 | A1 |
20060018302 | Ivaldi | Jan 2006 | A1 |
20060059163 | Frattura | Mar 2006 | A1 |
20060062187 | Rune | Mar 2006 | A1 |
20060072550 | Davis | Apr 2006 | A1 |
20060083254 | Ge | Apr 2006 | A1 |
20060168109 | Warmenhoven et al. | Jul 2006 | A1 |
20060184937 | Abels | Aug 2006 | A1 |
20060242311 | Mai | Oct 2006 | A1 |
20060251067 | DeSanti | Nov 2006 | A1 |
20060256767 | Suzuki | Nov 2006 | A1 |
20060265515 | Shiga | Nov 2006 | A1 |
20060285499 | Tzeng | Dec 2006 | A1 |
20070097968 | Du | May 2007 | A1 |
20070116422 | Reynolds et al. | May 2007 | A1 |
20070177597 | Ju | Aug 2007 | A1 |
20070274234 | Kubota | Nov 2007 | A1 |
20070289017 | Copeland, III | Dec 2007 | A1 |
20080052487 | Akahane | Feb 2008 | A1 |
20080065760 | Damm | Mar 2008 | A1 |
20080101386 | Gray | May 2008 | A1 |
20080133760 | Berkvens et al. | Jun 2008 | A1 |
20080159277 | Vobbilisetty | Jul 2008 | A1 |
20080172492 | Raghunath | Jul 2008 | A1 |
20080181196 | Regan | Jul 2008 | A1 |
20080205377 | Chao | Aug 2008 | A1 |
20080219172 | Mohan | Sep 2008 | A1 |
20080240129 | Elmeleegy | Oct 2008 | A1 |
20080285555 | Ogasahara | Nov 2008 | A1 |
20090044270 | Shelly | Feb 2009 | A1 |
20090067422 | Poppe | Mar 2009 | A1 |
20090079560 | Fries | Mar 2009 | A1 |
20090083445 | Ganga | Mar 2009 | A1 |
20090092042 | Yuhara | Apr 2009 | A1 |
20090092043 | Lapuh et al. | Apr 2009 | A1 |
20090106405 | Mazarick | Apr 2009 | A1 |
20090116381 | Kanda | May 2009 | A1 |
20090138752 | Graham | May 2009 | A1 |
20090199177 | Edwards | Aug 2009 | A1 |
20090204965 | Tanaka | Aug 2009 | A1 |
20090222879 | Kostal | Sep 2009 | A1 |
20090245137 | Hares | Oct 2009 | A1 |
20090245242 | Carlson | Oct 2009 | A1 |
20090260083 | Szeto | Oct 2009 | A1 |
20090323708 | Ihle | Dec 2009 | A1 |
20090327392 | Tripathi | Dec 2009 | A1 |
20090327462 | Adams | Dec 2009 | A1 |
20100061269 | Banerjee | Mar 2010 | A1 |
20100074175 | Banks | Mar 2010 | A1 |
20100097941 | Carlson | Apr 2010 | A1 |
20100103813 | Allan | Apr 2010 | A1 |
20100103939 | Carlson | Apr 2010 | A1 |
20100131636 | Suri | May 2010 | A1 |
20100165877 | Shukla | Jul 2010 | A1 |
20100165995 | Mehta | Jul 2010 | A1 |
20100169467 | Shukla | Jul 2010 | A1 |
20100226381 | Mehta | Sep 2010 | A1 |
20100246388 | Gupta | Sep 2010 | A1 |
20100257263 | Casado | Oct 2010 | A1 |
20100271960 | Krygowski | Oct 2010 | A1 |
20100281106 | Ashwood-Smith | Nov 2010 | A1 |
20100287262 | Elzur | Nov 2010 | A1 |
20100287548 | Zhou | Nov 2010 | A1 |
20100290473 | Enduri | Nov 2010 | A1 |
20100303071 | Kotalwar | Dec 2010 | A1 |
20100303075 | Tripathi | Dec 2010 | A1 |
20100309820 | Rajagopalan | Dec 2010 | A1 |
20110019678 | Mehta | Jan 2011 | A1 |
20110035498 | Shah | Feb 2011 | A1 |
20110044339 | Kotalwar | Feb 2011 | A1 |
20110072208 | Gulati | Mar 2011 | A1 |
20110085560 | Chawla | Apr 2011 | A1 |
20110085563 | Kotha | Apr 2011 | A1 |
20110134802 | Rajagopalan | Jun 2011 | A1 |
20110134925 | Safrai et al. | Jun 2011 | A1 |
20110142053 | Van Der Merwe | Jun 2011 | A1 |
20110142062 | Wang | Jun 2011 | A1 |
20110161695 | Okita | Jun 2011 | A1 |
20110194403 | Sajassi | Aug 2011 | A1 |
20110228780 | Ashwood-Smith | Sep 2011 | A1 |
20110231574 | Saunderson | Sep 2011 | A1 |
20110235523 | Jha | Sep 2011 | A1 |
20110243133 | Villait | Oct 2011 | A9 |
20110243136 | Raman et al. | Oct 2011 | A1 |
20110246669 | Kanada | Oct 2011 | A1 |
20110255538 | Srinivasan | Oct 2011 | A1 |
20110255540 | Mizrahi | Oct 2011 | A1 |
20110261828 | Smith | Oct 2011 | A1 |
20110268120 | Vobbilisetty | Nov 2011 | A1 |
20110286457 | Ee | Nov 2011 | A1 |
20110296052 | Guo | Dec 2011 | A1 |
20110299532 | Yu | Dec 2011 | A1 |
20120011240 | Hara | Jan 2012 | A1 |
20120014261 | Salam | Jan 2012 | A1 |
20120014387 | Dunbar | Jan 2012 | A1 |
20120027017 | Rai | Feb 2012 | A1 |
20120033663 | Guichard | Feb 2012 | A1 |
20120099602 | Nagapudi | Apr 2012 | A1 |
20120106339 | Mishra | May 2012 | A1 |
20120131097 | Baykal | May 2012 | A1 |
20120131289 | Taguchi | May 2012 | A1 |
20120177039 | Berman | Jul 2012 | A1 |
20120243539 | Keesara | Sep 2012 | A1 |
20120294192 | Masood | Nov 2012 | A1 |
20120320800 | Kamble | Dec 2012 | A1 |
20130034015 | Jaiswal | Feb 2013 | A1 |
20130067466 | Combs | Mar 2013 | A1 |
20130259037 | Natarajan | Oct 2013 | A1 |
20140105034 | Sun | Apr 2014 | A1 |
Number | Date | Country |
---|---|---|
102801599 | Nov 2012 | CN |
1916807 | Apr 2008 | EP |
2001167 | Dec 2008 | EP |
2010111142 | Sep 2010 | WO |
Entry |
---|
“Switched Virtual Internetworking moved beyond bridges and routers”, 8178 Data Communications Sep. 23, 1994, No. 12, New York. |
S. Knight et al., “Virtual Router Redundancy Protocol”, Network Working Group, XP-002135272, Apr. 1998. |
Eastlake 3rd., Donald et al., “RBridges: TRILL Header Options”, Draft-ietf-trill-rbridge-options-00.txt, Dec. 24, 2009. |
J. Touch, et al., “Transparent Interconnection of Lots of Links (TRILL): Problem and Applicability Statement”, May 2009. |
Perlman, Radia et al., “RBridge VLAN Mapping”, Draft-ietf-trill-rbridge-vlan-mapping-01.txt, Dec. 4, 2009. |
Brocade Fabric OS (FOS) 6.2 Virtual Fabrics Feature Frequently Asked Questions, (2009). |
Perlman, Radia “Challenges and Opportunities in the Design of TRILL: a Routed layer 2 Technology”, XP-002649647, 2009. |
Nadas, S. et al., “Virtual Router Redundancy Protocol (VRRP) Version 3 for IPv4 and IPv6”, Mar. 2010. |
Perlman, Radia et al., “RBridges: Base Protocol Specification”, draft-ietf-trill-rbridge-protocol-16.txt, Mar. 3, 2010. |
Christensen, M. et al., “Considerations for Internet Group Management Protocol (IGMP) and Multicast Listener Discovery (MLD) Snooping Switches”, May 2006. |
Lapuh, Roger et al., “Split Multi-link Trunking (SMLT)”, Oct. 2002. |
Lapuh, Roger et al., “Split Multi-link Trunking (SMLT) draft-lapuh-network-smlt-08”, 2008. |
Office Action for U.S. Appl. No. 13/533,843, filed Jun. 26, 2012, dated Oct. 21, 2013. |
Office Action for U.S. Appl. No. 13/312,903, filed Dec. 6, 2011, dated Nov. 12, 2013. |
Office Action for U.S. Appl. No. 13/092,873, filed Apr. 22, 2011, dated Nov. 29, 2013. |
Office Action for U.S. Appl. No. 13/184,526, filed Jul. 16, 2011, dated Dec. 2, 2013. |
Office Action for U.S. Appl. No. 13/042,259, filed Mar. 7, 2011, from Jaroenchonwanit, Bunjob, dated Jan. 16, 2014. |
Office Action for U.S. Appl. No. 13/092,580, filed Apr. 22, 2011, from Kavleski, Ryan C., dated Jan. 10, 2014. |
Brocade Unveils “The Effortless Network”, http://newsroom.brocade.com/press-releases/brocade-unveils-the-effortless-network--nasdaq-brcd-0859535, 2012. |
Foundry FastIron Configuration Guide, Software Release FSX 042.00b, Software Release FWS 04.3.00, Software Release FGS 05.0.00a, Sep. 26, 2008. |
FastIron and TurboIron 24X Configuration Guide Supporting FSX 05.1.00 for FESX, FWSX, and FSX; FGS 04.3.03 for FGS, FLS and FWS; FGS 05.0.02 for FGS-STK and FLS-STK, FCX 06.0.00 for FCX; and TIX 04.1.00 for TI24X, Feb. 16, 2010. |
FastIron Configuration Guide Supporting Ironware Software Release 07.0.00, Dec. 18, 2009. |
“The Effortless Network: HyperEdge Technology for the Campus LAN”, 2012. |
Narten, T. et al. “Problem Statement: Overlays for Network Virtualization”, draft-narten-nvo3-overlay-problem-statement-01, Oct. 31, 2011. |
Knight, Paul et al., “Layer 2 and 3 Virtual Private Networks: Taxonomy, Technology, and Standardization Efforts”, IEEE Communications Magazine, Jun. 2004. |
“An Introduction to Brocade VCS Fabric Technology”, BROCADE white paper, http://community.brocade.com/docs/DOC-2954, Dec. 3, 2012. |
Kreeger, L. et al., “Network Virtualization Overlay Control Protocol Requirements”, Draft-kreeger-nvo3-overlay-cp-00, Jan. 30, 2012. |
Knight, Paul et al., “Network based IP VPN Architecture using Virtual Routers”, May 2003. |
Louati, Wajdi et al., “Network-based virtual personal overlay networks using programmable virtual routers”, IEEE Communications Magazine, Jul. 2005. |
U.S. Appl. No. 13/044,326 Office Action dated Oct. 2, 2013. |
Office Action for U.S. Appl. No. 13/092,887, dated Jan. 6, 2014. |
U.S. Appl. No. 12/312,903 Office Action dated Jun. 13, 2013. |
U.S. Appl. No. 13/365,808 Office Action dated Jul. 18, 2013. |
U.S. Appl. No. 13/365,993 Office Action dated Jul. 23, 2013. |
U.S. Appl. No. 13/092,873 Office Action dated Jun. 19, 2013. |
U.S. Appl. No. 13/184,526 Office Action dated May 22, 2013. |
U.S. Appl. No. 13/184,526 Office Action dated Jan. 28, 2013. |
U.S. Appl. No. 13/050,102 Office Action dated May 16, 2013. |
U.S. Appl. No. 13/050,102 Office Action dated Oct. 26, 2012. |
U.S. Appl. No. 13/044,301 Office Action dated Feb. 22, 2013. |
U.S. Appl. No. 13/044,301 Office Action dated Jun. 11, 2013. |
U.S. Appl. No. 13/030,688 Office Action dated Apr. 25, 2013. |
U.S. Appl. No. 13/030,806 Office Action dated Dec. 3, 2012. |
U.S. Appl. No. 13/030,806 Office Action dated Jun. 11, 2013. |
U.S. Appl. No. 13/098,360 Office Action dated May 31, 2013. |
U.S. Appl. No. 13/092,864 Office Action dated Sep. 19, 2012. |
U.S. Appl. No. 12/950,968 Office Action dated Jun. 7, 2012. |
U.S. Appl. No. 12/950,968 Office Action dated Jan. 4, 2013. |
U.S. Appl. No. 13/092,877 Office Action dated Mar. 4, 2013. |
U.S. Appl. No. 12/950,974 Office Action dated Dec. 20, 2012. |
U.S. Appl. No. 12/950,974 Office Action dated May 24, 2012. |
U.S. Appl. No. 13/092,752 Office Action dated Feb. 5, 2013. |
U.S. Appl. No. 13/092,752 Office Action dated Jul. 18, 2013. |
U.S. Appl. No. 13/092,701 Office Action dated Jan. 28, 2013. |
U.S. Appl. No. 13/092,701 Office Action dated Jul. 3, 2013. |
U.S. Appl. No. 13/092,460 Office Action dated Jun. 21, 2013. |
U.S. Appl. No. 13/042,259 Office Action dated Mar. 18, 2013. |
U.S. Appl. No. 13/042,259 Office Action dated Jul. 31, 2013. |
U.S. Appl. No. 13/092,580 Office Action dated Jun. 10, 2013. |
U.S. Appl. No. 13/092,724 Office Action dated Jul. 16, 2013. |
U.S. Appl. No. 13/092,724 Office Action dated Feb. 5, 2013. |
U.S. Appl. No. 13/098,490 Office Action dated Dec. 21, 2012. |
U.S. Appl. No. 13/098,490 Office Action dated Jul. 9, 2013. |
U.S. Appl. No. 13/087,239 Office Action dated May 22, 2013. |
U.S. Appl. No. 13/087,239 Office Action dated Dec. 15, 2012. |
U.S. Appl. No. 12/725,249 Office Action dated Apr. 26, 2013. |
U.S. Appl. No. 12/725,249 Office Action dated Sep. 12, 2012. |
Foundry FastIron Configuration Guide, Software Release FSX 04.2.00b, Software Release FWS 04.3.00, Software Release FGS 05.0.00a, Sep. 26, 2008. |
Zhai F. Hu et al. “RBridge: Pseudo-Nickname; draft-hu-trill-pseudonode-nickname-02.txt”, May 15, 2012. |
Huang, Nen-Fu et al., “An Effective Spanning Tree Algorithm for a Bridged LAN”, Mar. 16, 1992. |
Office Action dated Jun. 6, 2014, U.S. Appl. No. 13/669,357, filed Nov. 5, 2012. |
Office Action dated Feb. 20, 2014, U.S. Appl. No. 13/598,204, filed Aug. 29, 2012. |
Office Action dated May 14, 2014, U.S. Appl. No. 13/533,843, filed Jun. 26, 2012. |
Office Action dated May 9, 2014, U.S. Appl. No. 13/484,072, filed May 30, 2012. |
Office Action dated Feb. 28, 2014, U.S. Appl. No. 13/351,513, filed Jan. 17, 2012. |
Office Action dated Jun. 18, 2014, U.S. Appl. No. 13/440,861, filed Apr. 5, 2012. |
Office Action dated Mar. 6, 2014, U.S. Appl. No. 13/425,238, filed Mar. 20, 2012. |
Office Action dated Apr. 22, 2014, U.S. Appl. No. 13/030,806, filed Feb. 18, 2011. |
Office Action dated Jun. 20, 2014, U.S. Appl. No. 13/092,877, filed Apr. 22, 2011. |
Office Action dated Apr. 9, 2014, U.S. Appl. No. 13/092,752, filed Apr. 22, 2011. |
Office Action dated Mar. 26, 2014, U.S. Appl. No. 13/092,701, filed Apr. 22, 2011. |
Office Action dated Mar. 14, 2014, U.S. Appl. No. 13/092,460, filed Apr. 22, 2011. |
Office Action dated Apr. 9, 2014, U.S. Appl. No. 13/092,724, filed Apr. 22, 2011. |