Fabric switching

Information

  • Patent Grant
  • Patent Number
    10,673,703
  • Date Filed
    Monday, May 2, 2011
  • Date Issued
    Tuesday, June 2, 2020
Abstract
One embodiment of the present invention provides a switch system. The switch includes one or more ports on the switch configured to transmit packets encapsulated based on a first protocol. The switch further includes a control mechanism. During operation, the control mechanism forms a logical switch based on a second protocol, receives an automatically assigned identifier for the logical switch without requiring manual configuration of the identifier, and joins an Ethernet fabric.
Description
BACKGROUND

Field


The present disclosure relates to network design. More specifically, the present disclosure relates to a method for constructing a scalable switching system that facilitates automatic configuration.


Related Art


The relentless growth of the Internet has brought with it an insatiable demand for bandwidth. As a result, equipment vendors race to build larger, faster, and more versatile switches to move traffic. However, the size of a switch cannot grow infinitely. It is limited by physical space, power consumption, and design complexity, to name a few factors. More importantly, because an overly large system often does not provide economy of scale due to its complexity, simply increasing the size and throughput of a switch may prove economically unviable due to the increased per-port cost.


One way to increase the throughput of a switch system is to use switch stacking. In switch stacking, multiple smaller-scale, identical switches are interconnected in a special pattern to form a larger logical switch. However, switch stacking requires careful configuration of the ports and inter-switch links. The amount of required manual configuration becomes prohibitively complex and tedious when the stack reaches a certain size, which precludes switch stacking from being a practical option in building a large-scale switching system. Furthermore, a system based on stacked switches often has topology limitations which restrict the scalability of the system due to fabric bandwidth considerations.


SUMMARY

One embodiment of the present invention provides a switch system. The switch includes one or more ports on the switch configured to transmit packets encapsulated based on a first protocol. The switch further includes a control mechanism. During operation, the control mechanism forms a logical switch based on a second protocol, receives an automatically assigned identifier for the logical switch without requiring manual configuration of the identifier, and joins an Ethernet fabric.


In a variation on this embodiment, the Ethernet fabric comprises one or more physical switches which are allowed to be coupled in an arbitrary topology. Furthermore, the Ethernet fabric appears to be one single switch.


In a further variation, the first protocol is a Transparent Interconnection of Lots of Links (TRILL) protocol, and the packets are encapsulated in TRILL headers.


In a variation on this embodiment, the logical switch formed by the control mechanism is a logical Fibre Channel (FC) switch.


In a further variation, the identifier assigned to the logical switch is an FC switch domain ID.


In a variation on this embodiment, the control mechanism is further configured to maintain a copy of configuration information for the Ethernet fabric.


In a further variation on this embodiment, the configuration information for the Ethernet fabric comprises a number of logical switch identifiers assigned to the physical switches in the Ethernet fabric.


In a variation on this embodiment, the switch includes a media access control (MAC) learning mechanism which is configured to learn a source MAC address and a corresponding VLAN identifier of an ingress packet associated with a port and communicate a learned MAC address, a corresponding VLAN identifier, and the corresponding port information to a name service.


One embodiment of the present invention provides a switching system that includes a plurality of switches configured to transport packets using a first protocol. Each switch includes a control mechanism. The plurality of switches are allowed to be coupled in an arbitrary topology. Furthermore, the control mechanism automatically configures the respective switch within the switching system based on a second protocol without requiring manual configuration, and the switching system appears externally as a single switch.


In a variation on this embodiment, a respective switch in the switching system receives an automatically configured identifier associated with a logical switch formed on the respective switch.


In a further variation, the logical switch is a logical FC switch. In addition, the identifier is an FC switch domain ID.


In a further variation, the packets are transported between switches based on a TRILL protocol. The respective switch is assigned a TRILL RBridge identifier that corresponds to the FC switch domain ID.


In a variation on this embodiment, a respective switch maintains a copy of configuration information of all the switches in the switching system.


In a variation on this embodiment, the switching system includes a name service which maintains records of MAC addresses and VLAN information learned by a respective switch.





BRIEF DESCRIPTION OF THE FIGURES


FIG. 1A illustrates an exemplary Ethernet fabric switching system, in accordance with an embodiment of the present invention.



FIG. 1B illustrates an exemplary Ethernet fabric system where the member switches are configured in a CLOS network, in accordance with an embodiment of the present invention.



FIG. 2 illustrates the protocol stack within an Ethernet fabric switch, in accordance with an embodiment of the present invention.



FIG. 3 illustrates an exemplary configuration of an Ethernet fabric switch, in accordance with an embodiment of the present invention.



FIG. 4 illustrates an exemplary configuration of how an Ethernet fabric switch can be connected to different edge networks, in accordance with an embodiment of the present invention.



FIG. 5A illustrates how a logical Fibre Channel switch fabric is formed in an Ethernet fabric switch in conjunction with the example in FIG. 4, in accordance with an embodiment of the present invention.



FIG. 5B illustrates an example of how a logical FC switch can be created within a physical Ethernet switch, in accordance with one embodiment of the present invention.



FIG. 6 illustrates an exemplary Ethernet fabric configuration database, in accordance with an embodiment of the present invention.



FIG. 7 illustrates an exemplary process of a switch joining an Ethernet fabric, in accordance with an embodiment of the present invention.



FIG. 8 presents a flowchart illustrating the process of looking up an ingress frame's destination MAC address and forwarding the frame in an Ethernet fabric switch, in accordance with one embodiment of the present invention.



FIG. 9 illustrates how data frames and control frames are transported through an Ethernet fabric, in accordance with one embodiment of the present invention.



FIG. 10 illustrates an exemplary switch that facilitates formation of an Ethernet fabric, in accordance with an embodiment of the present invention.





DETAILED DESCRIPTION

The following description is presented to enable any person skilled in the art to make and use the invention, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims.


Overview


In embodiments of the present invention, the problem of building a versatile, cost-effective, and scalable switching system is solved by running a control plane with automatic configuration capabilities (such as the Fibre Channel control plane) over a conventional transport protocol, thereby allowing a number of switches to be inter-connected to form a single, scalable logical switch without requiring burdensome manual configuration. As a result, one can form a large-scale logical switch (referred to as a “switch fabric” or “Ethernet fabric” herein) using a number of smaller physical switches. The automatic configuration capability provided by the control plane running on each physical switch allows any number of switches to be connected in an arbitrary topology without requiring tedious manual configuration of the ports and links. This feature makes it possible to use many smaller, inexpensive switches to construct a large switch fabric or cluster, which can be viewed as a single logical switch externally.


It should be noted that an Ethernet fabric is not the same as conventional switch stacking. In switch stacking, multiple switches are interconnected at a common location (often within the same rack), based on a particular topology, and manually configured in a particular way. These stacked switches typically share a common address, e.g., IP address, so they can be addressed as a single switch externally. Furthermore, switch stacking requires a significant amount of manual configuration of the ports and inter-switch links. The need for manual configuration prohibits switch stacking from being a viable option in building a large-scale switching system. The topology restriction imposed by switch stacking also limits the number of switches that can be stacked. This is because it is very difficult, if not impossible, to design a stack topology that allows the overall switch bandwidth to scale adequately with the number of switch units.


In contrast, an Ethernet fabric can include an arbitrary number of switches with individual addresses, can be based on an arbitrary topology, and does not require extensive manual configuration. The switches can reside in the same location, or be distributed over different locations. These features overcome the inherent limitations of switch stacking and make it possible to build a large “switch farm” which can be treated as a single, logical switch. Due to the automatic configuration capabilities of the Ethernet fabric, an individual physical switch can dynamically join or leave the fabric without disrupting services to the rest of the network.


Furthermore, the automatic and dynamic configurability of Ethernet fabric allows a network operator to build its switching system in a distributed and “pay-as-you-grow” fashion without sacrificing scalability. The Ethernet fabric's ability to respond to changing network conditions makes it an ideal solution in a virtual computing environment, where network loads often change with time.


Although this disclosure is presented using examples based on the Transparent Interconnection of Lots of Links (TRILL) as the transport protocol and the Fibre Channel (FC) fabric protocol as the control-plane protocol, embodiments of the present invention are not limited to TRILL networks, or networks defined in a particular Open System Interconnection Reference Model (OSI reference model) layer. For example, an Ethernet fabric can also be implemented with switches running multi-protocol label switching (MPLS) protocols for the transport. In addition, the terms “RBridge” and “switch” are used interchangeably in this disclosure. The use of the term “RBridge” does not limit embodiments of the present invention to TRILL networks only. The TRILL protocol is described in IETF draft “RBridges: Base Protocol Specification,” available at http://tools.ietf.org/html/draft-ietf-trill-rbridge-protocol, which is incorporated by reference herein.


The terms “switch fabric,” “Ethernet fabric,” “Ethernet fabric switch,” “switch cluster,” “virtual cluster switch,” “virtual cluster switching,” and “VCS” refer to a group of interconnected physical switches operating as a single logical switch. The control plane for these physical switches provides the ability to automatically configure a given physical switch, so that when it joins the Ethernet fabric, little or no manual configuration is required. “Ethernet fabric” or “VCS” is not limited to a specific product family from a particular vendor. In addition, “Ethernet fabric” or “VCS” is not the only term that can be used to name the switching system described herein. Other terms, such as “Ethernet fabric switch,” “fabric switch,” “cluster switch,” “Ethernet mesh switch,” and “mesh switch” can also be used to describe the same switching system. Hence, in some embodiments, these terms and “Ethernet fabric” can be used interchangeably.


The term “RBridge” refers to routing bridges, which are bridges implementing the TRILL protocol as described in IETF draft “RBridges: Base Protocol Specification.” Embodiments of the present invention are not limited to the application among RBridges. Other types of switches, routers, and forwarders can also be used.


The terms “frame” or “packet” refer to a group of bits that can be transported together across a network. “Frame” should not be interpreted as limiting embodiments of the present invention to layer-2 networks. “Packet” should not be interpreted as limiting embodiments of the present invention to layer-3 networks. “Frame” or “packet” can be replaced by other terminologies referring to a group of bits, such as “cell” or “datagram.”


Ethernet Fabric Architecture



FIG. 1A illustrates an exemplary Ethernet fabric system, in accordance with an embodiment of the present invention. In this example, an Ethernet fabric 100 includes physical switches 101, 102, 103, 104, 105, 106, and 107. A given physical switch runs an Ethernet-based transport protocol on its ports (e.g., TRILL on its inter-switch ports, and Ethernet transport on its external ports), while its control plane runs an FC switch fabric protocol stack. The TRILL protocol facilitates transport of Ethernet frames within and across Ethernet fabric 100 in a routed fashion (since TRILL provides routing functions to Ethernet frames). The FC switch fabric protocol stack facilitates the automatic configuration of individual physical switches, in a way similar to how a conventional FC switch fabric is formed and automatically configured. In one embodiment, Ethernet fabric 100 can appear externally as an ultra-high-capacity Ethernet switch. More details on FC network architecture, protocols, naming/address conventions, and various standards are available in the documentation available from the NCITS/ANSI T11 committee (www.t11.org) and publicly available literature, such as “Designing Storage Area Networks,” by Tom Clark, 2nd Ed., Addison Wesley, 2003, the disclosures of which are incorporated by reference in their entirety herein.


A physical switch may dedicate a number of ports for external use (i.e., to be coupled to end hosts or other switches external to the Ethernet fabric) and other ports for inter-switch connection. Viewed externally, Ethernet fabric 100 appears to be one switch to a device from the outside, and any port from any of the physical switches is considered one port on the Ethernet fabric. For example, port groups 110 and 112 are both Ethernet fabric external ports and can be treated equally as if they were ports on a common physical switch, although switches 105 and 107 may reside in two different locations.


The physical switches can reside at a common location, such as a data center or central office, or be distributed in different locations. Hence, it is possible to construct a large-scale centralized switching system using many smaller, inexpensive switches housed in one or more chassis at the same location. It is also possible to have the physical switches placed at different locations, thus creating a logical switch that can be accessed from multiple locations. The topology used to interconnect the physical switches can also be versatile. Ethernet fabric 100 is based on a mesh topology. In further embodiments, an Ethernet fabric switch can be based on a ring, tree, or other types of topologies.


In one embodiment, the protocol architecture of an Ethernet fabric switch is based on elements from the standard IEEE 802.1Q Ethernet bridge, which is emulated over a transport based on the Fibre Channel Framing and Signaling-2 (FC-FS-2) standard. The resulting switch is capable of transparently switching frames from an ingress Ethernet port from one of the edge switches to an egress Ethernet port on a different edge switch through the Ethernet fabric.


Because of its automatic configuration capability, an Ethernet fabric switch can be dynamically expanded as the network demand increases. In addition, one can build a large-scale switch using many smaller physical switches without the burden of manual configuration. For example, it is possible to build a high-throughput fully non-blocking switch using a number of smaller switches. This ability to use small switches to build a large non-blocking switch significantly reduces the cost associated with switch complexity. FIG. 1B presents an exemplary Ethernet fabric with its member switches connected in a CLOS network, in accordance with one embodiment of the present invention. In this example, an Ethernet fabric 120 forms a fully non-blocking 8×8 switch, using eight 4×4 switches and four 2×2 switches connected in a three-stage CLOS network. A large-scale switch with a higher port count can be built in a similar way.



FIG. 2 illustrates the protocol stack within an Ethernet fabric switch, in accordance with an embodiment of the present invention. In this example, two physical switches 202 and 204 are illustrated within an Ethernet fabric 200. Switch 202 includes an ingress Ethernet port 206 and an inter-switch port 208. Switch 204 includes an egress Ethernet port 212 and an inter-switch port 210. Ingress Ethernet port 206 receives Ethernet frames from an external device. The Ethernet header is processed by a medium access control (MAC) layer protocol. On top of the MAC layer is a MAC client layer, which hands off the information extracted from the frame's Ethernet header to a forwarding database (FDB) 214. Typically, in a conventional IEEE 802.1Q Ethernet switch, FDB 214 is maintained locally in a switch, which would perform a lookup based on the destination MAC address and the VLAN indicated in the Ethernet frame. The lookup result would provide the corresponding output port. However, since Ethernet fabric 200 is not one single physical switch, FDB 214 would return the egress switch's identifier (i.e., switch 204's identifier). In one embodiment, FDB 214 is a data structure replicated and distributed among all the physical switches. That is, every physical switch maintains its own copy of FDB 214. When a given physical switch learns the source MAC address and VLAN of an Ethernet frame (similar to what a conventional IEEE 802.1Q Ethernet switch does) as being reachable via the ingress port, the learned MAC and VLAN information, together with the ingress Ethernet port and switch information, is propagated to all the physical switches so every physical switch's copy of FDB 214 can remain synchronized. This prevents forwarding based on stale or incorrect information when there are changes to the connectivity of end stations or edge networks to the Ethernet fabric.
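To make the replication mechanics concrete, the following is a minimal Python sketch of such a distributed FDB. The class and method names (DistributedFDB, sync_entry, the peer handles) are illustrative assumptions, not structures defined in this disclosure.

```python
# A minimal sketch of the replicated FDB described above; class and
# method names are illustrative, not structures defined in this disclosure.

class DistributedFDB:
    """Maps a (MAC address, VLAN) tuple to the (switch, port) where it was learned."""

    def __init__(self, local_switch_id, fabric_peers):
        self.local_switch_id = local_switch_id
        self.fabric_peers = fabric_peers   # other members' FDB instances (assumed handles)
        self.table = {}                    # {(mac, vlan): (switch_id, port)}

    def learn(self, mac, vlan, ingress_port):
        """Learn a source MAC/VLAN locally, then propagate it to every member."""
        entry = (self.local_switch_id, ingress_port)
        if self.table.get((mac, vlan)) != entry:
            self.table[(mac, vlan)] = entry
            for peer in self.fabric_peers:   # keep every switch's copy synchronized
                peer.sync_entry(mac, vlan, entry)

    def sync_entry(self, mac, vlan, entry):
        """Apply an entry propagated from another member switch."""
        self.table[(mac, vlan)] = entry

    def lookup(self, mac, vlan):
        """Return the egress switch identifier, or None if unknown (flood)."""
        entry = self.table.get((mac, vlan))
        return entry[0] if entry else None
```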


The forwarding of the Ethernet frame between ingress switch 202 and egress switch 204 is performed via inter-switch ports 208 and 210. The frame transported between the two inter-switch ports is encapsulated in an outer MAC header and a TRILL header, in accordance with the TRILL standard. The protocol stack associated with a given inter-switch port includes the following (from bottom up): MAC layer, TRILL layer, FC-FS-2 layer, FC E-Port layer, and FC link services (FC-LS) layer. The FC-LS layer is responsible for maintaining the connectivity information of a physical switch's neighbor, and populating an FC routing information base (RIB) 222. This operation is similar to what is done in an FC switch fabric. The FC-LS protocol is also responsible for handling joining and departure of a physical switch in Ethernet fabric 200. The operation of the FC-LS layer is specified in the FC-LS standard, which is available at http://www.t11.org/ftp/t11/member/fc/ls/06-393v5.pdf, the disclosure of which is incorporated herein in its entirety.


During operation, when FDB 214 returns the egress switch 204 corresponding to the destination MAC address of the ingress Ethernet frame, the destination egress switch's identifier is passed to a path selector 218. Path selector 218 performs a fabric shortest-path first (FSPF)-based route lookup in conjunction with RIB 222, and identifies the next-hop switch within Ethernet fabric 200. In other words, the routing is performed by the FC portion of the protocol stack, similar to what is done in an FC switch fabric.
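FSPF is, at its core, a link-state shortest-path computation over the fabric topology. The following is a minimal sketch assuming the RIB is available as a cost map of the form {switch: {neighbor: link_cost}}; the function name and the RIB representation are assumptions for illustration, not taken from the FC standards.

```python
import heapq

# A minimal sketch of an FSPF-style next-hop computation: a shortest-path
# search over the topology held in the RIB. The RIB representation
# ({switch: {neighbor: link_cost}}) and the function name are assumptions.

def fspf_next_hop(rib, source, egress_switch):
    """Return the neighbor of `source` on a least-cost path to `egress_switch`."""
    # Heap entries: (accumulated cost, current switch, first hop taken from source).
    heap = [(0, source, None)]
    visited = set()
    while heap:
        cost, switch, first_hop = heapq.heappop(heap)
        if switch in visited:
            continue
        visited.add(switch)
        if switch == egress_switch:
            return first_hop
        for neighbor, link_cost in rib.get(switch, {}).items():
            hop = neighbor if first_hop is None else first_hop
            heapq.heappush(heap, (cost + link_cost, neighbor, hop))
    return None  # egress switch unreachable
```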


Also included in each physical switch are an address manager 216 and a fabric controller 220. Address manager 216 is responsible for configuring the address of a physical switch when the switch first joins the Ethernet fabric. For example, when switch 202 first joins Ethernet fabric 200, address manager 216 can negotiate a new FC switch domain ID, which is subsequently used to identify the switch within Ethernet fabric 200. Fabric controller 220 is responsible for managing and configuring the logical FC switch fabric formed on the control plane of Ethernet fabric 200.


One way to understand the protocol architecture of Ethernet fabric is to view the Ethernet fabric as an FC switch fabric with an Ethernet/TRILL transport. Each physical switch, from an external point of view, appears to be a TRILL RBridge. However, the switch's control plane implements the FC switch fabric software. In other words, embodiments of the present invention facilitate the construction of an “Ethernet switch fabric” running on FC control software. This unique combination provides the Ethernet fabric with automatic configuration capability and allows it to provide the ubiquitous Ethernet services in a very scalable fashion.



FIG. 3 illustrates an exemplary configuration of an Ethernet fabric switch, in accordance with an embodiment of the present invention. In this example, an Ethernet fabric 300 includes four physical switches 302, 304, 306, and 308. Ethernet fabric 300 constitutes an access layer which is coupled to two aggregation switches 310 and 312. Note that the physical switches within Ethernet fabric 300 are connected in a ring topology. Aggregation switch 310 or 312 can connect to any of the physical switches within Ethernet fabric 300. For example, aggregation switch 310 is coupled to physical switches 302 and 308. These two links are viewed as a trunked link to Ethernet fabric 300, since the corresponding ports on switches 302 and 308 are considered to be from the same logical switch, Ethernet fabric 300. Note that such a topology would not be possible without the Ethernet fabric, because trunking across two physical switches requires their FDBs to remain synchronized, which the Ethernet fabric facilitates.



FIG. 4 illustrates an exemplary configuration of how an Ethernet fabric switch can be connected to different edge networks, in accordance with an embodiment of the present invention. In this example, an Ethernet fabric 400 includes a number of TRILL RBridges 402, 404, 406, 408, and 410, which are controlled by the FC switch-fabric control plane. Also included in Ethernet fabric 400 are RBridges 412, 414, and 416. Each RBridge has a number of edge ports which can be connected to external edge networks.


For example, RBridge 412 is coupled with hosts 420 and 422 via 10GE ports. RBridge 414 is coupled to a host 426 via a 10GE port. These RBridges have TRILL-based inter-switch ports for connection with other TRILL RBridges in Ethernet fabric 400. Similarly, RBridge 416 is coupled to host 428 and an external Ethernet switch 430, which is coupled to an external network that includes a host 424. In addition, network equipment can also be coupled directly to any of the physical switches in Ethernet fabric 400. As illustrated here, TRILL RBridge 408 is coupled to a data storage 417, and TRILL RBridge 410 is coupled to a data storage 418.


Although the physical switches within Ethernet fabric 400 are labeled as “TRILL RBridges,” they are different from the conventional TRILL RBridge in the sense that they are controlled by the FC switch fabric control plane. In other words, the assignment of switch addresses, link discovery and maintenance, topology convergence, routing, and forwarding can be handled by the corresponding FC protocols. Particularly, each TRILL RBridge's switch ID or nickname is mapped from the corresponding FC switch domain ID, which can be automatically assigned when a switch joins Ethernet fabric 400 (which is logically similar to an FC switch fabric).


Note that TRILL is only used as a transport between the switches within Ethernet fabric 400. This is because TRILL can readily accommodate native Ethernet frames. Also, the TRILL standards provide a ready-to-use forwarding mechanism that can be used in any routed network with arbitrary topology (although the actual routing in the Ethernet fabric is done by the FC switch fabric protocols). Embodiments of the present invention should not be limited to using only TRILL as the transport. Other protocols (such as multi-protocol label switching (MPLS) or Internet Protocol (IP)), either public or proprietary, can also be used for the transport.


Ethernet Fabric Formation


In one embodiment, an Ethernet fabric is created by instantiating a logical FC switch in the control plane of each switch. After the logical FC switch is created, a virtual generic port (denoted as G_Port) is created for each Ethernet port on the RBridge. A G_Port assumes the normal G_Port behavior from the FC switch perspective. However, in this case, since the physical links are based on Ethernet, the specific transition from a G_Port to either an F_Port or an E_Port is determined by the underlying link and physical layer protocols. For example, if the physical Ethernet port is connected to an external device which lacks Ethernet fabric capabilities, the corresponding G_Port will be turned into an F_Port. On the other hand, if the physical Ethernet port is connected to a switch with Ethernet fabric capabilities and it is confirmed that the switch on the other side is part of an Ethernet fabric, then the G_Port will be turned into an E_Port.
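The port-type transition described above reduces to a small decision rule. The sketch below is a minimal illustration; the two boolean inputs are assumed to come from the link-up handshake discussed later in this disclosure.

```python
# A minimal sketch of the G_Port transition rule described above. The two
# boolean inputs are assumed to come from the link-up handshake between
# neighbor switches.

def resolve_port_type(neighbor_is_fabric_capable, neighbor_in_fabric):
    """Decide what a virtual G_Port becomes once the physical link is up."""
    if neighbor_is_fabric_capable and neighbor_in_fabric:
        return "E_Port"   # inter-switch link to another Ethernet fabric member
    return "F_Port"       # edge link to an external device
```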



FIG. 5A illustrates how a logical Fibre Channel switch fabric is formed in an Ethernet fabric switch in conjunction with the example in FIG. 4, in accordance with an embodiment of the present invention. RBridge 412 contains a virtual, logical FC switch 502. Corresponding to the physical Ethernet ports coupled to hosts 420 and 422, logical FC switch 502 has two logical F_Ports, which are logically coupled to hosts 420 and 422. In addition, two logical N_Ports, 506 and 504, are created for hosts 420 and 422, respectively. On the fabric side, logical FC switch 502 has three logical E_Ports, which are to be coupled with other logical FC switches in the logical FC switch fabric in the Ethernet fabric.


Similarly, RBridge 416 contains a virtual, logical FC switch 512. Corresponding to the physical Ethernet ports coupled to host 428 and external switch 430, logical FC switch 512 has a logical F_Port coupled to host 428, and a logical FL_Port coupled to switch 430. In addition, a logical N_Port 510 is created for host 428, and a logical NL_Port 508 is created for switch 430. Note that the logical FL_Port is created because that port is coupled to a switch (switch 430), instead of a regular host, and therefore logical FC switch 512 assumes an arbitrated loop topology leading to switch 430. Logical NL_Port 508 is created based on the same reasoning to represent a corresponding NL_Port on switch 430. On the fabric side, logical FC switch 512 has two logical E_Ports, which are to be coupled with other logical FC switches in the logical FC switch fabric in the Ethernet fabric.



FIG. 5B illustrates an example of how a logical FC switch can be created within a physical Ethernet switch, in accordance with one embodiment of the present invention. The term “fabric port” refers to a port used to couple multiple switches in an Ethernet fabric. The clustering protocols control the forwarding between fabric ports. The term “edge port” refers to a port that is not currently coupled to another switch unit in the Ethernet fabric. Standard IEEE 802.1Q and layer-3 protocols control forwarding on edge ports.


In the example illustrated in FIG. 5B, a logical FC switch 521 is created within a physical switch (RBridge) 520. Logical FC switch 521 participates in the FC switch fabric protocol via logical inter-switch links (ISLs) to other switch units and has an FC switch domain ID assigned to it just as a physical FC switch does. In other words, the domain allocation, principal switch selection, and conflict resolution work just as they would on a physical FC ISL.


The physical edge ports 522 and 524 are mapped to logical F_Ports 532 and 534, respectively. In addition, physical fabric ports 526 and 528 are mapped to logical E_Ports 536 and 538, respectively. Initially, when logical FC switch 521 is created (for example, during the boot-up sequence), logical FC switch 521 only has four G_Ports which correspond to the four physical ports. These G_Ports are subsequently mapped to F_Ports or E_Ports, depending on the devices coupled to the physical ports.


Neighbor discovery is the first step in Ethernet fabric formation between two Ethernet fabric-capable switches. It is assumed that the verification of Ethernet fabric capability can be carried out by a handshake process between two neighbor switches when the link is first brought up.


In general, an Ethernet fabric presents itself as one unified switch composed of multiple member switches. Hence, the creation and configuration of Ethernet fabric is of critical importance. The Ethernet fabric configuration is based on a distributed database, which is replicated and distributed over all switches.


In one embodiment, an Ethernet fabric configuration database includes a global configuration table (GT) of the Ethernet fabric and a list of switch description tables (STs), each of which describes an Ethernet fabric member switch. In its simplest form, a member switch can have an Ethernet fabric configuration database that includes a global table and one switch description table, e.g., [<GT><ST>]. An Ethernet fabric with multiple switches will have a configuration database that has a single global table and multiple switch description tables, e.g., [<GT><ST0><ST1> . . . <STn−1>]. The number n corresponds to the number of member switches in the Ethernet fabric. In one embodiment, the GT can include at least the following information: the Ethernet fabric ID, number of nodes in the Ethernet fabric, a list of VLANs supported by the Ethernet fabric, a list of all the switches (e.g., list of FC switch domain IDs for all active switches) in the Ethernet fabric, and the FC switch domain ID of the principal switch (as in a logical FC switch fabric). A switch description table can include at least the following information: the IN_VCS flag, an indication of whether the switch is a principal switch in the logical FC switch fabric, the FC switch domain ID for the switch, the FC world-wide name (WWN) for the corresponding logical FC switch, the mapped ID of the switch, and optionally the IP address of the switch.
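For illustration, the [<GT><ST0> . . . <STn−1>] layout can be modeled with simple record types. In the following Python sketch, only the field names follow the text above; the concrete types are assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# A minimal sketch of the configuration-database layout described above;
# field names follow the text, while the concrete types are assumptions.

@dataclass
class SwitchDescriptionTable:
    in_vcs: bool                       # the IN_VCS flag
    is_principal: bool                 # principal switch in the logical FC switch fabric?
    fc_domain_id: int                  # FC switch domain ID for the switch
    fc_wwn: str                        # WWN of the corresponding logical FC switch
    mapped_id: int                     # persistent switch index within the Ethernet fabric
    ip_address: Optional[str] = None   # optional

@dataclass
class GlobalTable:
    fabric_id: int                     # the Ethernet fabric ID (VCS_ID)
    node_count: int                    # number of nodes in the Ethernet fabric
    vlans: List[int] = field(default_factory=list)
    active_domain_ids: List[int] = field(default_factory=list)
    principal_domain_id: int = 0       # FC switch domain ID of the principal switch

@dataclass
class FabricConfigDatabase:
    gt: GlobalTable
    sts: List[SwitchDescriptionTable] = field(default_factory=list)
```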


In addition, each switch's global configuration database is associated with a transaction ID. The transaction ID specifies the latest transaction (e.g., update or change) incurred to the global configuration database. The transaction IDs of the global configuration databases in two switches can be compared to determine which database has the most current information (i.e., the database with the more current transaction ID is more up-to-date). In one embodiment, the transaction ID is the switch's serial number plus a sequential transaction number. This configuration can unambiguously resolve which switch has the latest configuration.
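As a minimal sketch, the comparison can be expressed as follows. The tie-break on the serial number is an assumption about how the serial-number component makes the resolution unambiguous when two databases carry the same sequence number.

```python
# A minimal sketch of the transaction-ID comparison described above; the
# serial-number tie-break is an assumed interpretation.

def newer_database(txn_a, txn_b):
    """Each argument is a (switch_serial_number, sequence_number) pair.
    Returns 'a' or 'b' for whichever database is more up to date."""
    serial_a, seq_a = txn_a
    serial_b, seq_b = txn_b
    if seq_a != seq_b:
        return "a" if seq_a > seq_b else "b"
    return "a" if serial_a > serial_b else "b"   # assumed tie-break
```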


As illustrated in FIG. 6, an Ethernet fabric member switch typically maintains two configuration tables that describe its instance: an Ethernet fabric configuration database 600, and a default switch configuration table 604. Ethernet fabric configuration database 600 describes the Ethernet fabric configuration when the switch is part of an Ethernet fabric. Default switch configuration table 604 describes the switch's default configuration. Ethernet fabric configuration database 600 includes a GT 602, which includes an Ethernet fabric identifier (denoted as VCS_ID) and a VLAN list within the Ethernet fabric. Also included in Ethernet fabric configuration database 600 are a number of STs, such as ST0, ST1, and STn. Each ST includes the corresponding member switch's MAC address and FC switch domain ID, as well as the switch's interface details.


In one embodiment, each switch also has an Ethernet fabric-mapped ID (denoted as “mappedID”), which is a switch index within the Ethernet fabric. This mapped ID is unique and persistent within the Ethernet fabric. That is, when a switch joins the Ethernet fabric for the first time, the Ethernet fabric assigns a mapped ID to the switch. This mapped ID persists with the switch, even if the switch leaves the Ethernet fabric. When the switch joins the Ethernet fabric again at a later time, the same mapped ID is used by the Ethernet fabric to retrieve previous configuration information for the switch. This feature can reduce the amount of configuration overhead in an Ethernet fabric. Also, the persistent mapped ID allows the Ethernet fabric to “recognize” a previously configured member switch when it re-joins the Ethernet fabric, since a dynamically assigned FC fabric domain ID would change each time the member switch joins and is configured by the Ethernet fabric.
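A minimal sketch of this persistence follows. Keying a switch by its MAC address is an illustrative choice; the disclosure does not mandate a particular key.

```python
# A minimal sketch of persistent mapped-ID assignment; keying by the
# switch MAC address is an illustrative assumption.

class MappedIdAllocator:
    def __init__(self):
        self._by_switch = {}   # switch MAC -> mapped ID
        self._next_id = 0

    def assign(self, switch_mac):
        """Return the existing mapped ID for a returning switch, or allocate one."""
        if switch_mac not in self._by_switch:
            self._by_switch[switch_mac] = self._next_id
            self._next_id += 1
        return self._by_switch[switch_mac]   # persists across leave and re-join
```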


Default switch configuration table 604 has an entry for the mappedID that points to the corresponding ST in Ethernet fabric configuration database 600. Note that only Ethernet fabric configuration database 600 is replicated and distributed to all switches in the Ethernet fabric. Default switch configuration table 604 is local to a particular member switch.


The “IN_VCS” value in default switch configuration table 604 indicates whether the member switch is part of an Ethernet fabric. A switch is considered to be “in an Ethernet fabric” when it is assigned an FC switch domain by an FC switch fabric that has two or more switch domains. If a switch is part of an FC switch fabric that has only one switch domain, i.e., its own switch domain, then the switch is considered to be “not in an Ethernet fabric.”


When a switch is first connected to an Ethernet fabric, the logical FC switch fabric formation process allocates a new switch domain ID to the joining switch. In one embodiment, only the switches directly connected to the new switch participate in the Ethernet fabric join operation.


Note that in the case where the global configuration database of a joining switch is current and in sync with the global configuration database of the Ethernet fabric based on a comparison of the transaction IDs of the two databases (e.g., when a member switch is temporarily disconnected from the Ethernet fabric and re-connected shortly afterward), a trivial merge is performed. That is, the joining switch can be connected to the Ethernet fabric, and no change or update to the global Ethernet fabric configuration database is required.



FIG. 7 illustrates an exemplary process of a switch joining an Ethernet fabric, in accordance with an embodiment of the present invention. In this example, it is assumed that a switch 702 is within an existing Ethernet fabric, and a switch 704 is joining the Ethernet fabric. During operation, both switches 702 and 704 trigger an FC State Change Notification (SCN) process. Subsequently, both switches 702 and 704 perform a PRE-INVITE operation. The pre-invite operation involves the following process.


When a switch joins the Ethernet fabric via a link, both neighbors on each end of the link present to the other switch an Ethernet fabric four-tuple of <Prior VCS_ID, SWITCH_MAC, mappedID, IN_VCS> from a prior incarnation, if any. Otherwise, the switch presents to the counterpart a default tuple. If the VCS_ID value was not set from a prior join operation, a VCS_ID value of −1 is used. In addition, if a switch's IN_VCS flag is set to 0, it sends out its interface configuration to the neighboring switch. In the example in FIG. 7, both switches 702 and 704 send the above information to the other switch.


After the above PRE-INVITE operation, a driver switch for the join process is selected. By default, if one switch's IN_VCS value is 1 and the other switch's IN_VCS value is 0, the switch with IN_VCS=1 is selected as the driver switch. If both switches have their IN_VCS values as 1, then nothing happens, i.e., the PRE-INVITE operation would not lead to an INVITE operation. If both switches have their IN_VCS values as 0, then one of the switches is elected to be the driving switch (for example, the switch with a lower FC switch domain ID value). The driving switch's IN_VCS value is then set to 1, and it drives the join process.
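The selection rule above can be summarized in a few lines. The JoinInfo record below is an illustrative container for the relevant fields of the PRE-INVITE tuple.

```python
from dataclasses import dataclass

# A minimal sketch of driver-switch selection; JoinInfo is an illustrative
# container, not a structure defined in this disclosure.

@dataclass
class JoinInfo:
    in_vcs: int          # the IN_VCS flag presented during PRE-INVITE
    fc_domain_id: int    # the switch's FC switch domain ID

def select_driver(a: JoinInfo, b: JoinInfo):
    """Return 'a' or 'b' for the driver switch, or None if no INVITE follows."""
    if a.in_vcs == 1 and b.in_vcs == 1:
        return None                       # both already in a fabric: nothing happens
    if a.in_vcs != b.in_vcs:
        return "a" if a.in_vcs == 1 else "b"
    # Both 0: elect, e.g., the switch with the lower FC switch domain ID.
    return "a" if a.fc_domain_id < b.fc_domain_id else "b"
```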


After switch 702 is selected as the driver switch, switch 702 then attempts to reserve a slot in the Ethernet fabric configuration database corresponding to the mappedID value in switch 704's PRE-INVITE information. Next, switch 702 searches the Ethernet fabric configuration database for switch 704's MAC address in any mappedID slot. If such a slot is found, switch 702 copies all information from the identified slot into the reserved slot. Otherwise, switch 702 copies the information received during the PRE-INVITE from switch 704 into the Ethernet fabric configuration database. The updated Ethernet fabric configuration database is then propagated to all the switches in the Ethernet fabric as a prepare operation in the database (note that the update is not committed to the database yet).
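A minimal sketch of this reserve-and-copy step, assuming the ST slots are visible as a {mappedID: switch-info} map (an illustrative simplification of the configuration database described earlier):

```python
# A minimal sketch of the reserve-and-copy step performed by the driver
# switch; the slot map is an illustrative simplification.

def prepare_join(slots, joiner_mapped_id, joiner_mac, pre_invite_info):
    """Stage the joining switch's slot; the result is propagated as a prepare."""
    # Reuse a prior slot if this MAC address was ever a member; otherwise
    # take the information received during PRE-INVITE.
    prior = next((info for info in slots.values()
                  if info and info.get("mac") == joiner_mac), None)
    slots[joiner_mapped_id] = dict(prior) if prior else dict(pre_invite_info)
    return slots   # not yet committed; the commit follows INVITE-ACCEPT
```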


Subsequently, the prepare operation may or may not result in configuration conflicts, which may be flagged as warnings or fatal errors. Such conflicts can include inconsistencies between the joining switch's local configuration or policy setting and the Ethernet fabric configuration. For example, a conflict arises when the joining switch is manually configured to allow packets with a particular VLAN value to pass through, whereas the Ethernet fabric does not allow this VLAN value to enter the switch fabric from this particular RBridge (for example, when this VLAN value is reserved for other purposes). In one embodiment, the prepare operation is handled locally and/or remotely in concert with other Ethernet fabric member switches. If there is an unresolvable conflict, switch 702 sends out a PRE-INVITE-FAILED message to switch 704. Otherwise, switch 702 generates an INVITE message with the Ethernet fabric's merged view of the switch (i.e., the updated Ethernet fabric configuration database).


Upon receiving the INVITE message, switch 704 either accepts or rejects the INVITE. The INVITE can be rejected if the configuration in the INVITE is in conflict with what switch 704 can accept. If the INVITE is acceptable, switch 704 sends back an INVITE-ACCEPT message in response. The INVITE-ACCEPT message then triggers a final database commit throughout all member switches in the Ethernet fabric. In other words, the updated Ethernet fabric configuration database is committed, replicated, and distributed to all the switches in the Ethernet fabric.


Layer-2 Services in Ethernet Fabric


In one embodiment, each Ethernet fabric switch unit performs source MAC address learning, similar to what an Ethernet bridge does. Each {MAC address, VLAN} tuple learned on a physical port on an Ethernet fabric switch unit is registered into the local Fibre Channel Name Server (FC-NS) via a logical Nx_Port interface corresponding to that physical port. This registration binds the address learned to the specific interface identified by the Nx_Port. Each FC-NS instance on each Ethernet fabric switch unit coordinates and distributes all locally learned {MAC address, VLAN} tuples with every other FC-NS instance in the fabric. This feature allows the dissemination of locally learned {MAC address, VLAN} information to every switch in the Ethernet fabric. In one embodiment, the learned MAC addresses are aged locally by individual switches.



FIG. 8 presents a flowchart illustrating the process of looking up an ingress frame's destination MAC address and forwarding the frame in an Ethernet fabric, in accordance with one embodiment of the present invention. During operation, an Ethernet fabric switch receives an Ethernet frame at one of its Ethernet ports (operation 802). The switch then extracts the frame's destination MAC address and queries the local FC Name Server (operation 804). Next, the switch determines whether the FC-NS returns an N_Port or an NL_Port identifier that corresponds to an egress Ethernet port (operation 806).


If the FC-NS returns a valid result, the switch forwards the frame to the identified N_Port or NL_Port (operation 808). Otherwise, the switch floods the frame on the TRILL multicast tree as well as on all the N_Ports and NL_Ports that participate in that VLAN (operation 810). This flood/broadcast operation is similar to the broadcast process in a conventional TRILL RBridge, wherein all the physical switches in the Ethernet fabric will receive and process this frame, and learn the source address corresponding to the ingress RBridge. In addition, each receiving switch floods the frame to its local ports that participate in the frame's VLAN (operation 812). Note that the above operations are based on the presumption that there is a one-to-one mapping between a switch's TRILL identifier (or nickname) and its FC switch domain ID. There is also a one-to-one mapping between a physical Ethernet port on a switch and the corresponding logical FC port.
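The lookup-or-flood decision of FIG. 8 can be sketched as follows. The name-service, multicast-tree, and port objects here are hypothetical stand-ins for the corresponding fabric components.

```python
# A minimal sketch of the FIG. 8 decision; fc_ns, trill_multicast_tree,
# and the port objects are hypothetical stand-ins.

def forward_ingress_frame(frame, fc_ns, trill_multicast_tree, local_ports):
    """Forward a frame to its egress port, or flood when the lookup misses."""
    egress = fc_ns.lookup(frame.dst_mac, frame.vlan)   # N_Port/NL_Port or None
    if egress is not None:
        egress.transmit(frame)                         # operation 808
        return
    trill_multicast_tree.flood(frame)                  # operation 810: reach all switches
    for port in local_ports:
        if frame.vlan in port.vlans:                   # only ports in the frame's VLAN
            port.transmit(frame)
```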


End-To-End Frame Delivery and Exemplary Ethernet Fabric Member Switch



FIG. 9 illustrates how data frames and control frames are transported in an Ethernet fabric, in accordance with an embodiment of the present invention. In this example, an Ethernet fabric 930 includes member switches 934, 936, 938, 944, 946, and 948. An end host 932 is communicating with an end host 940. Switch 934 is the ingress Ethernet fabric member switch corresponding to host 932, and switch 938 is the egress Ethernet fabric member switch corresponding to host 940. During operation, host 932 sends an Ethernet frame 933 to host 940. Ethernet frame 933 is first encountered by ingress switch 934. Upon receiving frame 933, switch 934 first extracts frame 933's destination MAC address. Switch 934 then performs a MAC address lookup using the Ethernet name service, which provides the egress switch identifier (i.e., the RBridge identifier of egress switch 938). Based on the egress switch identifier, the logical FC switch in switch 934 performs a routing table lookup to determine the next-hop switch, which is switch 936, and the corresponding output port for forwarding frame 933. The egress switch identifier is then used to generate a TRILL header (which specifies the destination switch's RBridge identifier), and the next-hop switch information is used to generate an outer Ethernet header. Subsequently, switch 934 encapsulates frame 933 with the proper TRILL header and outer Ethernet header, and sends the encapsulated frame 935 to switch 936. Based on the destination RBridge identifier in the TRILL header of frame 935, switch 936 performs a routing table lookup and determines the next hop. Based on the next-hop information, switch 936 updates frame 935's outer Ethernet header and forwards frame 935 to egress switch 938.
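A minimal sketch of the ingress encapsulation step, with the TRILL and outer Ethernet headers simplified to named fields (real TRILL headers carry additional state, such as a hop count and 16-bit nicknames):

```python
# A minimal sketch of the ingress encapsulation step; header layouts are
# simplified to named fields for illustration.

def encapsulate_for_fabric(frame, ingress_rbridge, egress_rbridge,
                           local_port_mac, next_hop_mac):
    """Wrap an Ethernet frame in a TRILL header plus an outer Ethernet header."""
    trill_header = {
        "ingress_rbridge": ingress_rbridge,   # this switch's RBridge identifier
        "egress_rbridge": egress_rbridge,     # from the MAC address lookup
    }
    outer_ethernet = {
        "src_mac": local_port_mac,            # rewritten at each hop inside the fabric
        "dst_mac": next_hop_mac,              # from the FC routing table lookup
    }
    return {"outer": outer_ethernet, "trill": trill_header, "inner": frame}
```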


Upon receiving frame 935, switch 938 determines that it is the destination RBridge based on frame 935's TRILL header. Correspondingly, switch 938 strips frame 935 of its outer Ethernet header and TRILL header, and inspects the destination MAC address of its inner Ethernet header. Switch 938 then performs a MAC address lookup and determines the correct output port leading to host 940. Subsequently, the original Ethernet frame 933 is transmitted to host 940.


As described above, the logical FC switches within the physical Ethernet fabric member switches may send control frames to one another (for example, to update the Ethernet fabric global configuration database or to notify other switches of the learned MAC addresses). In one embodiment, such control frames can be FC control frames encapsulated in a TRILL header and an outer Ethernet header. For example, if the logical FC switch in switch 944 is in communication with the logical FC switch in switch 938, switch 944 can send a TRILL-encapsulated FC control frame 942 to switch 946. Switch 946 can forward frame 942 just like a regular data frame, since switch 946 is not concerned with the payload in frame 942.



FIG. 10 illustrates an exemplary Ethernet fabric member switch, in accordance with one embodiment of the present invention. In this example, the Ethernet fabric member switch is a TRILL RBridge 1000 running special Ethernet fabric software. RBridge 1000 includes a number of Ethernet communication ports 1001, which can transmit and receive Ethernet frames and/or TRILL encapsulated frames. Also included in RBridge 1000 are a packet processor 1002, a virtual FC switch management module 1004, a logical FC switch 1005, an Ethernet fabric configuration database 1006, and a TRILL header generation module 1008.


During operation, packet processor 1002 extracts the source and destination MAC addresses of incoming frames, and attaches proper Ethernet or TRILL headers to outgoing frames. Virtual FC switch management module 1004 maintains the state of logical FC switch 1005, which is used to join other Ethernet fabric switches using the FC switch fabric protocols. Ethernet fabric configuration database 1006 maintains the configuration state of every switch within the Ethernet fabric. TRILL header generation module 1008 is responsible for generating proper TRILL headers for frames that are to be transmitted to other Ethernet fabric member switches.


The methods and processes described herein can be embodied as code and/or data, which can be stored in a computer-readable non-transitory storage medium. When a computer system reads and executes the code and/or data stored on the computer-readable non-transitory storage medium, the computer system performs the methods and processes embodied as data structures and code and stored within the medium.


The methods and processes described herein can be executed by and/or included in hardware modules or apparatus. These modules or apparatus may include, but are not limited to, an application-specific integrated circuit (ASIC) chip, a field-programmable gate array (FPGA), a dedicated or shared processor that executes a particular software module or a piece of code at a particular time, and/or other programmable-logic devices now known or later developed. When the hardware modules or apparatus are activated, they perform the methods and processes included within them.


The foregoing descriptions of embodiments of the present invention have been presented only for purposes of illustration and description. They are not intended to be exhaustive or to limit this disclosure. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. The scope of the present invention is defined by the appended claims.

Claims
  • 1. A switch, comprising: one or more ports; control circuitry configured to: maintain a membership in a network of interconnected switches, wherein the network of interconnected switches is identified by a fabric identifier, and wherein the fabric identifier is distinct from a first switch identifier identifying the switch in the network of interconnected switches; and determine that the switch has joined the network of interconnected switches based on the fabric identifier; and forwarding circuitry configured to encapsulate a packet with an encapsulation header forwardable in an Internet Protocol (IP) network in accordance with a tunneling protocol, wherein a source and a destination addresses of the encapsulation header correspond to the first switch identifier and a second switch identifier of a second switch in the network of interconnected switches, respectively, and wherein the second switch shares the fabric identifier with the switch, wherein the control circuitry is further configured to maintain configuration information for a respective switch of the network of interconnected switches in a data structure in a storage device within the switch, and configured to reassign the second switch identifier to the second switch in response to the second switch leaving and rejoining the network of interconnected switches.
  • 2. The switch of claim 1, wherein the forwarding circuitry is further configured to further encapsulate the encapsulated packet with an outer Ethernet header.
  • 3. The switch of claim 1, wherein the forwarding circuitry is further configured to determine a next-hop switch corresponding to the second switch identifier for the encapsulated packet.
  • 4. The switch of claim 1, wherein the configuration information for the second switch comprises the second switch identifier and a switch index identifying the configuration information for the second switch in the data structure.
  • 5. The switch of claim 1, wherein the control circuitry is further configured to construct a message for the second switch in response to learning a media access control (MAC) address from a port of the one or more ports, wherein a payload of the message comprises the MAC address and a virtual local area network (VLAN) identifier associated with the MAC address.
  • 6. The switch of claim 1, wherein the forwarding circuitry is further configured to query a name service data structure based on the packet's destination media access control (MAC) address and a virtual local area network (VLAN) identifier, wherein the name service is configured to maintain a mapping between a respective MAC address learned at the network of interconnected switches and a corresponding VLAN identifier.
  • 7. The switch of claim 1, wherein the control circuitry is further configured to determine a route between the switch and the second switch based on a routing protocol.
  • 8. A switching system, comprising: a plurality of interconnected switches; control circuitry residing on a respective switch of the plurality of interconnected switches; wherein the control circuitry is configured to: maintain a membership in the switching system, wherein the switching system is identified by a fabric identifier, and wherein the fabric identifier is distinct from a first switch identifier identifying a first switch in the switching system; and determine that the switch has joined the switching system based on the fabric identifier; and wherein forwarding circuitry residing on the first switch of the switching system is configured to encapsulate a packet with an encapsulation header forwardable in an Internet Protocol (IP) network in accordance with a tunneling protocol, wherein a source and a destination addresses of the encapsulation header correspond to the first switch identifier and a second switch identifier of a second switch in the switching system, respectively, and wherein the second switch shares the fabric identifier with the first switch, wherein the control circuitry is further configured to maintain configuration information for a respective switch of the switching system of interconnected switches in a data structure in a storage device within the switch, and configured to reassign the second switch identifier to the second switch in response to the second switch leaving and rejoining the network of interconnected switches.
  • 9. The switching system of claim 8, wherein forwarding circuitry residing on the first switch is configured to determine a next-hop switch based on the second switch identifier for the encapsulated packet.
  • 10. The switching system of claim 8, wherein a respective switch of the switching system maintains configuration information of all the switches in the switching system in a data structure in a local storage device, wherein the configuration information for the second switch comprises the second switch identifier and a switch index identifying the configuration information for the second switch in the data structure.
  • 11. The switching system of claim 8, wherein the forwarding circuitry residing on the first switch is further configured to query a name service data structure based on the packet's destination media access control (MAC) address and a virtual local area network (VLAN) identifier, wherein the name service is further configured to maintain a mapping between a respective MAC address learned at the switching system and a corresponding VLAN identifier.
  • 12. A method, comprising: maintaining, by a switch, a membership in a network of interconnected switches, wherein the network of interconnected switches is identified by a fabric identifier, and wherein the fabric identifier is distinct from a first switch identifier identifying the switch in the network of interconnected switches; determining that the switch has joined the network of interconnected switches based on the fabric identifier; encapsulating a packet with an encapsulation header forwardable in an Internet Protocol (IP) network in accordance with a tunneling protocol, wherein a source and a destination addresses of the encapsulation header correspond to the first switch identifier and a second switch identifier of a second switch in the network of interconnected switches, respectively, and wherein the second switch shares the fabric identifier with the switch; maintaining configuration information for a respective switch of the network of interconnected switches in a data structure in a storage device within the switch; and reassigning the second switch identifier to the second switch in response to the second switch leaving and rejoining the network of interconnected switches.
  • 13. The method of claim 12, further comprising determining a next-hop switch corresponding to the second switch identifier for the encapsulated packet.
  • 14. The method of claim 12, further comprising maintaining configuration information for a respective switch of the network of interconnected switches in a data structure in a storage device of the switch, wherein the configuration information for the second switch comprises the second switch identifier and a switch index identifying the configuration information for the second switch in the data structure.
  • 15. The method of claim 12, further comprising determining a route between the switch and the second switch based on a routing protocol.
  • 16. A computing system, comprising: a processor; a storage device coupled to the processor and storing instructions that when executed by a computer cause the computer to perform a method, the method comprising: maintaining a membership in a network of interconnected switches, wherein the network of interconnected switches is identified by a fabric identifier, and wherein the fabric identifier is distinct from a first switch identifier identifying the computing system in the network of interconnected switches; determining that the computing system has joined the network of interconnected switches based on the fabric identifier; encapsulating a packet with an encapsulation header forwardable in an Internet Protocol (IP) network in accordance with a tunneling protocol, wherein a source and a destination addresses of the encapsulation header correspond to the first switch identifier and a second switch identifier of a second computing system in the network of interconnected switches, respectively, and wherein the second computing system shares the fabric identifier with the computing system; maintaining configuration information for a respective switch of the network of interconnected switches in a data structure in a storage device within the switch; and reassigning the second switch identifier to the second switch in response to the second switch leaving and rejoining the network of interconnected switches.
RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 61/330,678, entitled “Virtual Cluster Switching,” by inventors Suresh Vobbilisetty and Dilip Chatwani, filed 3 May 2010, U.S. Provisional Application No. 61/334,945, entitled “Virtual Cluster Switching,” by inventors Suresh Vobbilisetty and Dilip Chatwani, filed 14 May 2010, and U.S. Provisional Application No. 61/380,819, entitled “Virtual Cluster Switching,” by inventors Suresh Vobbilisetty and Dilip Chatwani, filed 8 Sep. 2010, the disclosures of which are incorporated by reference herein. The present disclosure is related to U.S. patent application Ser. No. 12/725,249, entitled “REDUNDANT HOST CONNECTION IN A ROUTED NETWORK,” by inventors Somesh Gupta, Anoop Ghanwani, Phanidhar Koganti, and Shunjia Yu, filed 16 Mar. 2010, the disclosure of which is incorporated by reference herein.

Related Publications (1)
Number Date Country
20110268120 A1 Nov 2011 US
Provisional Applications (3)
Number Date Country
61/330,678 May 2010 US
61/334,945 May 2010 US
61/380,819 Sep 2010 US