Link aggregation in software-defined networks

Information

  • Patent Grant
  • Patent Number
    9,729,387
  • Date Filed
    Wednesday, February 18, 2015
  • Date Issued
    Tuesday, August 8, 2017
Abstract
One embodiment of the present invention provides a switch capable of processing software-defined data flows. The switch includes an identifier management module and a flow definition management module. During operation, the identifier management module allocates a logical identifier to a link aggregation port group which includes a plurality of ports associated with different links. The flow definition management module processes a flow definition corresponding to the logical identifier, applies the flow definition to ports in the link aggregation port group, and updates lookup information for the link aggregation port group based on the flow definition.
Description
BACKGROUND

Field


The present disclosure relates to network management. More specifically, the present disclosure relates to a method and system for facilitating link aggregation in a software-defined network.


Related Art


The exponential growth of the Internet has made it a popular delivery medium for heterogeneous data flows. Such heterogeneity has caused an increasing demand for bandwidth. As a result, equipment vendors race to build larger and faster switches with versatile capabilities, such as defining data flows using software, to move more traffic efficiently. However, the complexity of a switch cannot grow infinitely. It is limited by physical space, power consumption, and design complexity, to name a few factors. Furthermore, switches with higher and more versatile capabilities are usually more complex and expensive.


Software-defined flow is a new paradigm in data communication networks. Any network supporting software-defined flows can be referred to as a software-defined network. An example of a software-defined network can be an OpenFlow network, wherein a network administrator can configure how a switch behaves based on data flows that can be defined across different layers of network protocols. A software-defined network separates the intelligence needed for controlling individual network devices (e.g., routers and switches) and offloads the control mechanism to a remote controller device (often a stand-alone server or end device). Therefore, a software-defined network provides complete control and flexibility in managing data flow in the network.


While support for software-defined flows brings many desirable features to networks, some issues remain unsolved in facilitating flow definitions for a link aggregation across one or more switches that support software-defined flows.


SUMMARY

One embodiment of the present invention provides a switch capable of processing software-defined data flows. The switch includes an identifier management module and a flow definition management module. During operation, the identifier management module allocates a logical identifier to a link aggregation port group which includes a plurality of ports associated with different links. The flow definition management module processes a flow definition corresponding to the logical identifier, applies the flow definition to ports in the link aggregation port group, and updates lookup information for the link aggregation port group based on the flow definition.


In a variation on this embodiment, the flow definition management module incorporates in the lookup information a policy regarding traffic distribution across ports in the link aggregation port group.


In a variation on this embodiment, the switch also includes a high-availability module which detects the inability of a port in the link aggregation port group to forward traffic and updates the lookup information to associate the flow definition with one or more active ports in the link aggregation port group.


In a variation on this embodiment, the switch is an OpenFlow-capable switch.


One embodiment of the present invention provides a switch in a software-defined network. The switch includes an identifier management module, an election module, and a flow definition management module. During operation, the identifier management module allocates a logical identifier to a link aggregation port group which includes a plurality of ports associated with different links. The election module elects a master switch in conjunction with a remote switch. The switch and the remote switch participate in a multi-switch link aggregation and have the same logical identifier allocated to the multi-switch link aggregation port group. The flow definition management module processes a flow definition corresponding to the logical identifier.


In a variation on this embodiment, the flow definition management module applies the flow definition to the ports in the multi-switch link aggregation and updates lookup information for the multi-switch link aggregation port group based on the flow definition.


In a variation on this embodiment, the flow definition management module communicates with a network controller. The switch also includes a synchronization module which sends the flow definition to the remote switch.


In a variation on this embodiment, the switch includes a synchronization module which receives the flow definition from the remote switch in response to the remote switch being elected as the master switch.


In a further variation, the switch includes a high-availability module which detects a failure associated with the remote switch. After the detection, the flow definition management module communicates with a network controller.


In a variation on this embodiment, the switch is an OpenFlow-capable switch.





BRIEF DESCRIPTION OF THE FIGURES


FIG. 1A illustrates an exemplary link aggregation in a heterogeneous software-defined network, in accordance with an embodiment of the present invention.



FIG. 1B illustrates exemplary fault-resilient multi-chassis link aggregations in a heterogeneous software-defined network, in accordance with an embodiment of the present invention.



FIG. 2A illustrates an exemplary heterogeneous software-defined network with multi-chassis link aggregation, in accordance with an embodiment of the present invention.



FIG. 2B illustrates an exemplary heterogeneous software-defined network with multi-chassis link aggregations between software-definable switches, in accordance with an embodiment of the present invention.



FIG. 3A presents a flowchart illustrating the initialization process of a master software-definable switch of a multi-chassis link aggregation, in accordance with an embodiment of the present invention.



FIG. 3B presents a flowchart illustrating the initialization process of a slave software-definable switch of a multi-chassis link aggregation, in accordance with an embodiment of the present invention.



FIG. 4A presents a flowchart illustrating the process of a master software-definable switch of a multi-chassis link aggregation sharing new/updated flow definitions with a respective slave software-definable switch of the link aggregation, in accordance with an embodiment of the present invention.



FIG. 4B presents a flowchart illustrating the process of a slave software-definable switch of a multi-chassis link aggregation updating lookup information with received flow definitions from the master switch of the link aggregation, in accordance with an embodiment of the present invention.



FIG. 5 presents a flowchart illustrating the traffic forwarding process of a software-definable switch in a multi-chassis link aggregation, in accordance with an embodiment of the present invention.



FIG. 6A illustrates exemplary failures associated with a multi-chassis link aggregation in a heterogeneous software-defined network, in accordance with an embodiment of the present invention.



FIG. 6B illustrates an exemplary failure associated with a multi-chassis link aggregation between software-definable switches in a software-defined network, in accordance with an embodiment of the present invention.



FIG. 7A presents a flowchart illustrating the process of a slave software-definable switch of a multi-chassis link aggregation handling a failure, in accordance with an embodiment of the present invention.



FIG. 7B presents a flowchart illustrating the process of a software-definable switch handling a failure associated with a link aggregation, in accordance with an embodiment of the present invention.



FIG. 8 illustrates an exemplary switch in a software-defined network, in accordance with an embodiment of the present invention.





In the figures, like reference numerals refer to the same figure elements.


DETAILED DESCRIPTION

The following description is presented to enable any person skilled in the art to make and use the invention, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims.


Overview


In embodiments of the present invention, the problem of facilitating single- and multi-chassis link aggregations for switches that support software-defined flows is solved by: (1) providing a logical identifier associated with a respective physical port or link aggregation to a controller for flow definition; and (2) synchronizing flow definitions between the switches, thereby allowing the switches to associate with a controller as a single switch.


It is often desirable to aggregate multiple links between switches or end devices in a network into a logical link aggregation (can also be referred to as a trunk) in a software-defined network. Such a link aggregation includes several links between one or more switches or end devices to create a single logical link and support increased bandwidth. The link aggregation can also provide high availability. If one of the links in the link aggregation fails, the switch associated with the link aggregation can automatically redistribute traffic across the active links in the link aggregation. Ideally, a controller, which is a standalone device providing the forwarding intelligence (i.e., the control plane) to a software-defined network, should provide flow definitions (such as those defined using OpenFlow) to the link aggregation. However, with the existing technologies, a flow definition is defined based on individual physical ports, regardless of whether it is configured for a link aggregation. Hence, a controller can generate erroneous and conflicting flow definitions associated with the ports in the link aggregation.


A second problem faced by the existing software-defined network architecture is providing high availability to the switches capable of processing software-defined flows. Because flow definitions are specific to a switch and its ports, with the existing technologies, a controller does not automatically provide high availability (e.g., switch redundancy). Consequently, a failure of a switch in a software-defined network can disrupt, and often disconnect, the network.


The solutions described herein to the above problems are two-fold. First, in a software-defined network, a switch capable of processing software-defined flows allocates a logical identifier to a respective port group of the switch. The port group includes an individual physical port of the switch or a group of ports in a link aggregation associated with the switch. The switch maintains a mapping between a respective logical identifier and the ports in the corresponding port group. The switch provides these logical identifiers to the controller in the software-defined network. The controller considers these logical identifiers to be the physical port identifiers. As a result, the controller provides flow definitions comprising the logical identifiers as input and/or output ports. Upon receiving a flow definition, the switch converts the flow definition based on the mapping and makes the flow definition applicable to the ports in the corresponding port group. In some embodiments, the switch further incorporates any local policy regarding the traffic distribution across the ports in the link aggregation in addition to the flow definition.
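
For illustration only (not part of the patented embodiments), the following Python sketch shows one way such a logical-identifier mapping and flow-definition conversion could be organized; the class, field, and identifier names are hypothetical assumptions.

```python
# Sketch: a switch-local table mapping logical identifiers to port groups, and
# conversion of a controller-supplied flow definition (hypothetical structures).

class PortGroupTable:
    def __init__(self):
        self._groups = {}          # logical identifier -> list of physical ports

    def allocate(self, logical_id, physical_ports):
        """Allocate one logical identifier to a port group (a LAG or an individual port)."""
        self._groups[logical_id] = list(physical_ports)

    def logical_ids(self):
        """The identifiers advertised to the controller in place of physical port IDs."""
        return list(self._groups)

    def convert(self, flow_definition):
        """Rewrite the logical in/out ports into the ports of the corresponding groups."""
        converted = dict(flow_definition)
        converted["in_ports"] = self._groups[flow_definition["in_port"]]
        converted["out_ports"] = self._groups[flow_definition["out_port"]]
        return converted


table = PortGroupTable()
table.allocate(122, [112, 114])   # link aggregation port group: two physical ports
table.allocate(124, [116])        # port group with an individual port

flow = {"in_port": 122, "out_port": 124, "rule": {"vlan_id": 10}}
print(table.logical_ids())        # [122, 124] -- what the controller sees
print(table.convert(flow))        # applicable to ports 112 and 114 -> port 116
```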


Second, in a software-defined network requiring high availability, a multi-chassis link aggregation (can also be referred to as a multi-chassis trunk) can be established across a plurality of switches for one or more end devices or switches. In a multi-chassis link aggregation, at least one link couples a respective switch associated with the link aggregation. The switches associated with the link aggregation elect one of the switches as a master switch while the others remain slave switches. Among these switches, only the master switch establishes a connection with the controller and receives the flow definitions, which comprises the logical identifiers as input and/or output ports. A respective slave switch receives the flow definitions from the master switch. As a result, the flow definitions are replicated in the master switch as well as the slave switches, without the slave switches establishing a connection with the controller. Because all switches associated with the multi-chassis link aggregation have the same flow definitions, whenever the master switch fails, one of the slave switches can readily take over as the master switch.


In this disclosure, the term “software-defined network” refers to a network that facilitates control over a respective data flow by specifying the action associated with the flow in a flow definition. A controller, which can be a server, coupled to the software-defined network provides a respective switch in the software-defined network with the flow definitions. A flow definition can include a priority value, a rule that specifies a flow, and an action (e.g., a forwarding port or “drop”) for the flow. The rule of a flow definition can specify, for example, any value combination in the ten-tuple of {in-port, virtual local area network (VLAN) identifier, media access control (MAC) source and destination addresses, Ethertype, Internet protocol (IP) source and destination addresses, IP Protocol, Transmission Control Protocol (TCP) source and destination ports}. Other packet header fields can also be included in the flow rule. Depending on its specificity, a flow rule can correspond to one or more flows in the network. Upon matching a respective packet to a rule, the switch in the software-defined network takes the action included in the corresponding flow definition. An example of a software-defined network includes, but is not limited to, OpenFlow, as described in Open Networking Foundation (ONF) specification “OpenFlow Switch Specification,” available at http://www.openflow.org/documents/openflow-spec-v1.1.0.pdf, which is incorporated by reference herein.
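
As an illustration of how a rule over the ten-tuple might be matched, the following Python sketch treats unspecified fields as wildcards and lets the highest-priority matching definition supply the action. This is an assumption-laden sketch of the concept, not the OpenFlow implementation; field names and values are illustrative.

```python
# Sketch: wildcard matching of a packet's header fields against flow rules.

TEN_TUPLE = ("in_port", "vlan_id", "mac_src", "mac_dst", "ethertype",
             "ip_src", "ip_dst", "ip_proto", "tcp_src", "tcp_dst")

def matches(rule, packet):
    """True if every field specified in the rule equals the packet's field."""
    return all(packet.get(f) == v for f, v in rule.items() if f in TEN_TUPLE)

def lookup(flow_definitions, packet):
    """Return the action of the highest-priority matching flow definition."""
    candidates = [fd for fd in flow_definitions if matches(fd["rule"], packet)]
    if not candidates:
        return None
    return max(candidates, key=lambda fd: fd["priority"])["action"]

flows = [
    {"priority": 10, "rule": {"vlan_id": 10}, "action": "drop"},
    {"priority": 20, "rule": {"vlan_id": 10, "ip_proto": 6}, "action": ("forward", 124)},
]
print(lookup(flows, {"vlan_id": 10, "ip_proto": 6, "tcp_dst": 80}))   # ('forward', 124)
```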


In this disclosure, a switch in a software-defined network and capable of processing software-defined flows is referred to as a “software-definable” switch. Such a software-definable switch can include both ports that process software-defined flows and ports reserved for conventional packet forwarding (e.g., layer-2/Ethernet switching, or IP routing), which are referred to as “regular ports” in this disclosure. A flow definition typically includes one or more software-definable in-ports to which the definition is applicable. A flow definition can also be generic, without specifying an in-port; any flow arriving via any port can potentially match such a generic flow definition.


In some embodiments, the software-defined network is a fabric switch and a respective switch in the software-defined network is a member switch of the fabric switch. The fabric switch can be an Ethernet fabric switch. In an Ethernet fabric switch, any number of switches coupled in an arbitrary topology may logically operate as a single switch. Any new switch may join or leave the fabric switch in “plug-and-play” mode without any manual configuration. A fabric switch appears as a single logical switch to the end device.


Although the present disclosure is presented using examples based on OpenFlow, embodiments of the present invention are not limited to networks defined using OpenFlow or a particular Open System Interconnection Reference Model (OSI reference model) layer. In this disclosure, the term “software-defined network” is used in a generic sense, and can refer to any network which facilitates switching of data flows based on software-defined rules. The term “flow definition” is also used in a generic sense, and can refer to any rule which identifies a data frame belonging to a specific flow and/or dictates how a switch should process the frame.


The term “end device” can refer to a host, a conventional layer-2 switch, or any other type of network device. Additionally, an end device can be coupled to other switches or hosts further away from a network. An end device can also be an aggregation point for a number of network devices to enter the network.


The term “message” refers to a group of bits that can be transported together across a network. “Message” should not be interpreted as limiting embodiments of the present invention to any specific networking layer. “Message” can be replaced by other terminologies referring to a group of bits, such as “frame,” “packet,” “cell,” or “datagram.” The term “frame” is used in a generic sense and should not be interpreted as limiting embodiments of the present invention to layer-2 networks. “Frame” can be replaced by other terminologies referring to a group of bits, such as “packet,” “cell,” or “datagram.”


The term “switch” is used in a generic sense, and it can refer to any standalone or fabric switch operating in any network layer. “Switch” should not be interpreted as limiting embodiments of the present invention to layer-2 networks. Any device that can forward traffic to an end device can be referred to as a “switch.” Examples of a “switch” include, but are not limited to, a layer-2 switch, a layer-3 router, a Transparent Interconnection of Lots of Links (TRILL) Routing Bridge (RBridge), an FC router, or an FC switch.


The term “Ethernet fabric switch” refers to a number of interconnected physical switches which form a single, scalable logical switch. In a fabric switch, any number of switches can be connected in an arbitrary topology, and the entire group of switches functions together as one single, logical switch. This feature makes it possible to use many smaller, inexpensive switches to construct a large fabric switch, which can be viewed as a single logical switch externally.


Network Architecture



FIG. 1A illustrates an exemplary link aggregation in a heterogeneous software-defined network, in accordance with an embodiment of the present invention. A heterogeneous software-defined network 100 includes regular switches 102 and 103. Also included is software-definable switch 101, which is capable of processing software-defined flows. Controller 130 is logically coupled to switch 101 in network 100. The logical connection between controller 130 and switch 101 can include one or more physical links. Switches 102 and 103 are coupled to switch 101 via link aggregation 110 and a physical link, respectively.


During operation, switch 101 allocates logical identifier 122 to port group 142 comprising physical ports 112 and 114 in link aggregation 110, and logical identifier 124 to port group 144 comprising physical port 116. Switch 101 also maintains a mapping between logical identifier 122 and ports 112 and 114 in corresponding port group 142, and logical identifier 124 and port 116 in corresponding port group 144. For configuring flow definitions, controller 130 sends a query to switch 101 for the port identifiers of switch 101. Controller 130 can send the query based on a preconfigured instruction (e.g., a daemon running on controller 130) or an instruction from a network administrator (e.g., instruction received via an input device).


Upon receiving the query, switch 101 provides logical identifiers 122 and 124 to controller 130. With the existing technologies, switch 101 sends identifiers of ports 112, 114, and 116. Consequently, controller 130 cannot generate a flow definition for link aggregation 110. However, when controller 130 receives logical identifier 122, controller 130 perceives that switch 102 is coupled to switch 101 via a single port. As a result, controller 130 provides flow definitions comprising logical identifier 122 as an input and/or output port. Similarly, controller 130 also provides flow definitions comprising logical identifier 124 as an input and/or output port. Upon receiving the flow definitions, switch 101 converts the flow definitions from logical identifiers 122 and 124 based on the mapping and makes the flow definitions applicable to the ports in corresponding port groups 142 and 144, respectively. In some embodiments, switch 101 further incorporates any local policy regarding the traffic distribution across ports 112 and 114 in addition to the flow definitions comprising logical identifier 122. Switch 101 then uses a data structure (e.g., a linked list) to store the flow definitions based on ports 112, 114, and 116. Switch 101 also incorporates the flow definitions in lookup information in hardware (e.g., in a CAM).


In the example in FIG. 1A, to allow switch 102 to forward a data flow to switch 103, controller 130 provides switch 101 with a corresponding flow definition. The flow definition specifies logical identifier 122 as an input port and logical identifier 124 as an output port. The flow definition also includes a rule which represents the data flow from switch 102 to switch 103. Upon receiving the flow definition, switch 101 converts logical identifier 122 to port 112 (or, depending on the forwarding policy, to port 114) and logical identifier 124 to port 116 in the flow definition. In this way, switch 101 converts the flow definition comprising logical identifiers 122 and 124, and makes the flow definition applicable to ports 112 and 116. Switch 101 stores the converted flow definition in the local lookup information. When switch 101 receives a data frame via port 112, switch 101 matches the data frame with the lookup information and identifies port 116 to be the output port. Switch 101 then transmits the data frame to port 116.
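
A minimal sketch of this conversion for the example above, assuming a hash-based local distribution policy across ports 112 and 114 (the policy, keys, and data structures are illustrative, not prescribed by the disclosure):

```python
# Sketch: expanding the FIG. 1A flow definition to physical ports and applying
# a simple hash-based distribution policy across the LAG members.

GROUPS = {122: [112, 114], 124: [116]}      # logical identifier -> port group

def install(flow, lookup_table):
    """Expand a logical-port flow definition into entries keyed by physical in-ports."""
    for in_port in GROUPS[flow["in_port"]]:
        lookup_table[(in_port, flow["rule"]["vlan_id"])] = GROUPS[flow["out_port"]]

def output_port(logical_out, frame_hash):
    """A simple local distribution policy: hash the frame across the group members."""
    ports = GROUPS[logical_out]
    return ports[frame_hash % len(ports)]

lookup = {}
install({"in_port": 122, "out_port": 124, "rule": {"vlan_id": 10}}, lookup)
print(lookup)   # frames on port 112 or 114 that match VLAN 10 leave via port 116

# In the reverse direction, traffic sent toward logical port 122 is spread
# across physical ports 112 and 114 by the distribution policy.
print(output_port(122, frame_hash=3))   # -> 114
```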



FIG. 1B illustrates exemplary fault-resilient multi-chassis link aggregations in a heterogeneous software-defined network, in accordance with an embodiment of the present invention. A heterogeneous software-defined network 150 includes regular switches 153 and 154. Also included are software-definable switches 151 and 152. Switches 153 and 154 are coupled to switches 151 and 152 via link aggregations 192 and 194, respectively. During operation, switches 151 and 152 negotiate with each other via inter-switch link 190 and elect switch 151 as a master switch for link aggregations 192 and 194. In some embodiments, a respective link aggregation can have a respective master switch. Switch 152 operates as a slave switch in conjunction with master switch 151.


Switch 151 allocates logical identifiers 172 and 174 to port groups 192 and 194, respectively. Port groups 192 and 194 include ports 162-1 and 164-1, respectively, which are associated with link aggregations 192 and 194, respectively. Switch 151 creates a mapping between logical identifier 172 and corresponding port 162-1 in port group 192, and logical identifier 174 and corresponding port 164-1 in port group 194. In some embodiments, switch 151 shares the mapping with switch 152 via link 190. Upon receiving the mapping, switch 152 identifies ports 164-2 and 162-2 as parts of port groups 192 and 194, respectively. Switch 152 allocates logical identifiers 172 and 174 to port 164-2 in port group 192 and port 162-2 in port group 194, respectively, and creates a local mapping between logical identifier 172 and port 164-2 in corresponding port group 192, and logical identifier 174 and port 162-2 in corresponding port group 194. Because a logical identifier can represent a respective port of a multi-chassis link aggregation, the link aggregation can have different physical ports (i.e., ports with different identifiers) on different switches. For example, link aggregation 192 includes port 162-1 in switch 151 and port 164-2 in switch 152.


For configuring flow definitions, controller 180 sends a query to switch 151 for the port identifiers of switch 151. Upon receiving the query, switch 151 provides logical identifiers 172 and 174 to controller 180. Controller 180 then creates flow definitions for logical identifiers 172 and 174, which can be based on an instruction from a network administrator, and sends the flow definitions to switch 151. Switch 151 sends the flow definitions to switch 152 via link 190, locally converts the flow definitions based on the local mapping, and makes the flow definitions applicable to ports 162-1 and 164-1. Upon receiving the flow definitions from switch 151, switch 152 locally converts the flow definitions based on the local mapping and makes the flow definitions applicable to ports 164-2 and 162-2. As a result, the same flow definitions are replicated in switches 151 and 152, without switch 152 establishing a connection to controller 180. In this way, switches 151 and 152 provide link and node-level high availability to switches 153 and 154 without any modification to controller 180.


The forwarding policy of switches 151 and 152 determines which of switches 151 and 152 forwards traffic. If an active-active forwarding policy is adopted, both switches 151 and 152 forward traffic matched by the flow definitions. If an active-standby forwarding policy is adopted, switch 151 forwards the traffic matched by the flow definitions while switch 152 drops the traffic. Even when switch 152 remains standby, because both switches 151 and 152 have the flow definitions, slave switch 152 can readily take over as the master switch whenever switch 151 fails.
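
A small sketch of this policy choice, using a hypothetical helper function only to make the two policies concrete:

```python
# Sketch: which partner switch forwards a frame matched by a flow definition.

def should_forward(policy, is_master):
    """Decide whether this partner switch forwards matched traffic."""
    if policy == "active-active":
        return True                   # both partner switches forward matched traffic
    if policy == "active-standby":
        return is_master              # only the master forwards; the slave drops
    raise ValueError("unknown forwarding policy: " + policy)

print(should_forward("active-active", is_master=False))    # True  -- switch 152 forwards
print(should_forward("active-standby", is_master=False))   # False -- switch 152 drops
```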



FIG. 2A illustrates an exemplary software-defined network with a virtual-switch-based multi-chassis link aggregation, in accordance with an embodiment of the present invention. A heterogeneous software-defined network 200 includes regular switches 201, 202, and 203. Also included are software-definable switches 204 and 205, which are capable of processing software-defined flows. End device 232 and switch 206 both are dual-homed and coupled to switches 204 and 205. The goal is to allow a dual-homed device to use both physical links to multiple software-definable switches as a multi-chassis link aggregation, with the same address. Examples of such an address include, but are not limited to, a MAC address, an IP address, or an RBridge identifier.


In embodiments of the present invention, as illustrated in FIG. 2A, switches 204 and 205 are configured to operate in a special “trunked” mode for end device 232 and switch 206. End device 232 and switch 206 view switches 204 and 205 as a common virtual switch 210, with a corresponding virtual address. End device 232 and switch 206 are considered to be logically coupled to virtual switch 210 via logical links represented by dotted lines. Virtual switch 210 is considered to be logically coupled to both switches 204 and 205, optionally with zero-cost links (also represented by dotted lines). While forwarding data frames from end device 232 and switch 206, switches 204 and 205 mark the data frames with virtual switch 210's address as their source address. As a result, other switches in network 200 can learn that end device 232 and switch 206 are both reachable via virtual switch 210.


In the following description, switches which participate in link aggregation are referred to as “partner switches.” Since the two partner switches function as a single logical switch, the MAC address reachability learned by a respective switch is shared with the other partner switch. For example, during normal operation, end device 232 may choose to send its outgoing data frames only via the link to switch 205. As a result, only switch 205 would learn end device 232's MAC address. This information is then shared by switch 205 with switch 204 via inter-switch link 250. In some embodiments, switches 204 and 205 are TRILL RBridges and virtual switch 210 is a virtual RBridge associated with a virtual RBridge identifier. Under such a scenario, RBridges 204 and 205 can advertise their respective connectivity (optionally via zero-cost links) to virtual RBridge 210. Hence, multi-pathing can be achieved when other RBridges choose to send data frames to virtual RBridge 210 (which is marked as the egress RBridge in the frames) via RBridges 204 and 205.


During operation, switches 204 and 205 negotiate with each other and elect switch 204 as a master switch. Switch 205 operates as a slave switch in conjunction with master switch 204. Switches 204 and 205 use inter-chassis link 250 between them for sharing information. Switches 204 and 205 allocate logical identifiers to the port groups associated with link aggregations and create local mappings between the logical identifiers and the ports in the corresponding port groups, as described in conjunction with FIGS. 1A and 1B. Between switches 204 and 205, only master switch 204 establishes a logical connection with controller 220 and receives flow definitions based on the logical identifiers. Switch 204 sends the flow definitions to switch 205. As a result, the flow definitions are replicated in switches 204 and 205, without any modification to controller 220.


To send data frames to end device 232 or switch 206, switches 201, 202, and 203 send data frames toward virtual switch 210. Switches 204 and 205 receive the data frames, recognize that the data frames are to be forwarded to virtual switch 210, and compare the data frames with the flow definitions in the lookup information. Depending on the forwarding policy, as described in conjunction with FIG. 1B, either switch 204 or both switches 204 and 205 forward the data frames to end device 232 or switch 206.


The ports capable of receiving software-defined flows (which can be referred to as software-definable ports) should have identical configurations in both switches 204 and 205. For example, if master switch 204 has 10 port groups for sending and receiving software-defined flows, slave switch 205 should also have 10 port groups with identical logical identifiers and connectivity associated with the software-definable ports in the port groups. Because switch 204 is coupled to switches 201, 202, and 203 via software-definable ports, switch 205 is also coupled to switches 201, 202, and 203 with identical corresponding logical identifiers. However, the rest of the ports can be different. In some embodiments, switches 204 and 205 can have different hardware or software configurations. For example, switch 205 is coupled to end device 234 via a non-software-definable port while switch 204 is not.


In some embodiments, software-definable switches 204 and 205 can be coupled to other software-definable switches. FIG. 2B illustrates an exemplary heterogeneous software-defined network with multi-chassis link aggregations between software-definable switches, in accordance with an embodiment of the present invention. In the example in FIG. 2B, switches 201, 202, and 203 are software-definable switches as well. Switch 201 receives flow definitions from controller 220, and switches 202 and 203 receive flow definitions from another controller 222. Because flow definitions are specific to a switch and its logical identifiers, even though switches 202 and 203, and switches 204 and 205 have different controllers, these switches can still participate in a link aggregation. To ensure uninterrupted communication with switches 204 and 205, switches 201, 202, and 203 are coupled to switches 204 and 205 via multi-chassis link aggregations 272, 274, and 276, respectively.


Because a respective port in a link aggregation is associated with a logical identifier, a switch can apply a flow definition associated with the logical identifier to all ports in the link aggregation, as described in conjunction with FIG. 1A. For example, switch 201 applies a flow definition associated with the logical identifier of the port group associated with link aggregation 272 to all ports in the port group. Consequently, if switch 204 becomes unavailable due to a link or node failure, switch 201 can still forward the data frames belonging to a software-defined flow to switch 205 via the active links in link aggregation 272. Similarly, switches 202 and 203 can still forward to switch 205 via the active links in link aggregations 274 and 276, respectively. Note that link aggregations 272, 274, and 276 are distinguishable from the perspectives of switches 201, 202, and 203, and switches 204 and 205. For example, from switch 201's perspective, link aggregation 272 provides link level high-availability and ensures frame forwarding via at least one port when another port cannot forward data frames. On the other hand, from switch 204's perspective, link aggregation 272 provides both link and node level high-availability. Even when switch 204 fails, switch 205 is available for forwarding data frames to switch 201.


Initialization


In the example in FIG. 1B, switches 151 and 152 initialize their respective operations to operate as a master and slave switch, respectively. FIG. 3A presents a flowchart illustrating the initialization process of a master software-definable switch of a multi-chassis link aggregation, in accordance with an embodiment of the present invention. The switch first identifies partner switch(es) (operation 302) and establishes inter-switch link(s) with the partner switch(es) (operation 304). The switch elects the local switch as the master switch in conjunction with the partner switch(es) (operation 306). The switch identifies the single- and multi-chassis link aggregations associated with the switch (operation 308) and allocates logical identifiers to port groups (operation 310). The switch can execute operations 302, 304, 306, 308, and 310 based on a preconfigured instruction (e.g., a daemon running on the switch) or an instruction from a network administrator (e.g., instruction received via an interface).


The switch allocates only one logical identifier to a port group associated with a link aggregation, thereby associating the plurality of ports of the port group with the logical identifier. The switch also allocates a logical identifier to a respective port group comprising an individual port not in a link aggregation. The switch creates a logical identifier mapping between the logical identifiers and the ports in their corresponding port groups (operation 312). The switch establishes a connection with the controller using a data path identifier (operation 314). The data path identifier identifies the switch to the controller. The switch shares its data path identifier with other partner switch(es) (operation 316). In some embodiments, the data path identifier is preconfigured in a respective partner switch.


The switch receives a query message from the controller for local port information (operation 318). In response, the switch sends one or more messages with the logical identifiers as port identifiers to the controller (operation 320). Because a link aggregation is associated with a single logical identifier, the controller considers the ports in the link aggregation to be a single port. The controller provides flow definitions comprising the logical identifiers as input and output ports. The switch receives one or more messages with the flow definitions (operation 322) and sends the received flow definitions to partner switch(es) (operation 324) via one or more messages.


The switch then converts the flow definitions based on the mapping and makes the definitions applicable to the physical ports in the port groups corresponding to the logical identifiers (operation 326). The switch updates the lookup information with the converted flow definitions based on the physical ports in software (e.g., a linked list representing the flow definitions) and hardware (e.g., a CAM) (operation 328). In some embodiments, the switch further incorporates any local policy regarding the traffic distribution across the ports in the link aggregation in addition to the flow definition. The switch then sends periodic “keep alive” messages to the partner switch(es) to notify them that the master switch is operational (operation 330).
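
The following Python sketch simulates the sequence of FIG. 3A with plain dictionaries; the controller, the inter-switch messaging, and the identifier values are stubbed assumptions rather than the actual implementation.

```python
# Sketch: the master-switch initialization of FIG. 3A, reduced to data handling.

def initialize_master(link_aggregations, controller_flows, partner_inbox):
    """link_aggregations: {group name: [physical ports]};
    controller_flows: flow definitions already expressed in logical identifiers;
    partner_inbox: a stand-in for messages sent over the inter-switch link."""
    # Operations 308-312: allocate one logical identifier per port group.
    mapping = {1000 + i: ports
               for i, ports in enumerate(link_aggregations.values())}
    # Operations 318-320: only the logical identifiers are reported to the controller.
    advertised = list(mapping)
    # Operations 322-328: receive, replicate, convert, and install flow definitions.
    lookup = {}
    for flow in controller_flows:
        partner_inbox.append(flow)                        # share with slaves (324)
        for in_port in mapping[flow["in_port"]]:          # convert (326)
            lookup[in_port] = (flow["rule"], mapping[flow["out_port"]])
    return advertised, lookup

inbox = []
advertised, lookup = initialize_master(
    {"lag": [162, 163], "single": [164]},
    [{"in_port": 1000, "out_port": 1001, "rule": {"vlan_id": 20}}],
    inbox)
print(advertised)   # [1000, 1001] -- reported to the controller as port identifiers
print(lookup)       # per-physical-port entries pointing at the output group [164]
print(inbox)        # the same flow definition, queued for the slave switch
```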



FIG. 3B presents a flowchart illustrating the initialization process of a slave software-definable switch of a multi-chassis link aggregation, in accordance with an embodiment of the present invention. The switch first identifies partner switch(es) (operation 352) and establishes inter-switch link(s) with the partner switch(es) (operation 354). The switch elects the local switch as a slave switch in conjunction with the partner switch(es) (operation 356). In some embodiments, a respective link aggregation can have a respective master switch. The switch identifies the single- and multi-chassis link aggregations associated with the switch (operation 358). The switch can execute operations 352, 354, 356, and 358 based on a preconfigured instruction or an instruction from a network administrator. The switch receives a logical identifier mapping from the master switch via an inter-switch link (operation 360) and identifies the local port groups (individual and in link aggregations) corresponding to port groups in the logical identifier mapping of the master switch (operation 362).


The switch creates a local logical identifier mapping using the same logical identifiers in the logical identifier mapping of the master switch for the ports in the corresponding port groups (operation 364). For example, if the master switch has 10 port groups, the switch should also have 10 port groups with identical logical identifiers and connectivity associated with the software-definable ports in the port groups. The switch then receives from the master switch the data path identifier which the master switch has used to establish a connection with the controller (operation 366). The switch stores the data path identifier and uses the identifier to establish a connection with the controller if the master switch fails.


The switch receives from the master switch one or more messages with the flow definitions comprising the logical identifiers in the logical identifier mapping (operation 368). The switch converts the flow definitions based on the local logical identifier mapping and makes the definitions applicable to the physical ports in the port groups corresponding to the logical identifiers (operation 370). The switch then updates the lookup information with the converted flow definitions based on the physical ports in software (e.g., a linked list representing the flow definitions) and hardware (e.g., a CAM) (operation 372). In some embodiments, the switch further incorporates any local policy regarding the traffic distribution across the ports in the link aggregation in addition to the flow definition. Afterward, the switch continues to expect periodic “keep alive” messages from the master switch to be notified about the operational state of the master switch (operation 374).


Operations


In the example in FIG. 1B, switch 151 has an active communication with controller 180 and receives flow definitions from controller 180. Switch 152 receives these flow definitions from switch 151. To ensure that the flow definitions are always replicated at switch 152, whenever switch 151 receives a new or updated (e.g., modified or deleted) flow definition from controller 180, switch 151 sends the flow definition to switch 152. Upon receiving the flow definition from switch 151, switch 152 updates the flow definitions in local lookup information.



FIG. 4A presents a flowchart illustrating the process of a master software-definable switch of a multi-chassis link aggregation sharing new/updated flow definitions with a respective slave software-definable switch of the link aggregation, in accordance with an embodiment of the present invention. Upon receiving new or updated flow definition(s) from the controller (operation 402), the switch identifies the partner switches (operation 404). The switch constructs one or more messages for a respective partner switch comprising the new or updated flow definition(s) (operation 406) and sends the message(s) to the partner switch(es) (operation 408). If the switch receives multiple flow definitions from the controller, the switch can include all flow definitions in a single message or send individual messages for a respective received flow definition. Any message from the master switch can be a layer-2 frame, a layer-3 packet, a TRILL packet, a Fibre Channel frame, or have any other messaging format. The switch can also encapsulate the message based on a security scheme implemented in partner switches.



FIG. 4B presents a flowchart illustrating the process of a slave software-definable switch of a multi-chassis link aggregation updating lookup information with received flow definitions from the master switch of the link aggregation, in accordance with an embodiment of the present invention. The switch receives from the master switch one or more messages comprising the new or updated flow definitions (operation 452). These flow definitions include the logical identifiers associated with the switch. The switch extracts the flow definitions from the message(s) (operation 454). The extraction process can include decapsulating security encapsulation and one or more of layer-2, layer-3, layer-4, TRILL, and Fibre Channel frame encapsulation. The switch converts the flow definitions based on the local logical identifier mapping and makes the definitions applicable to the physical ports corresponding to the logical identifiers (operation 456). The switch then updates the lookup information with the converted flow definitions based on the physical ports in software (e.g., a linked list representing the flow definitions) and hardware (e.g., a CAM) (operation 458).
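
A sketch of this synchronization path, assuming a JSON-encoded message format purely for illustration (the disclosure permits layer-2, layer-3, TRILL, Fibre Channel, or other formats), with the slave applying its own local mapping:

```python
# Sketch: master packs flow definitions (FIG. 4A); slave extracts, converts with
# its local mapping, and installs them (FIG. 4B). Names and format are illustrative.

import json

def build_sync_message(flow_definitions):
    """Master side: pack new/updated flow definitions into one message."""
    return json.dumps({"type": "flow-sync", "flows": flow_definitions})

def apply_sync_message(message, local_mapping, lookup):
    """Slave side: extract, convert with the local mapping, and install."""
    for flow in json.loads(message)["flows"]:
        for in_port in local_mapping[flow["in_port"]]:
            lookup[in_port] = (flow["rule"], local_mapping[flow["out_port"]])

# Switch 152's physical ports differ from switch 151's, but the logical
# identifiers (172, 174) are the same on both switches.
lookup = {}
msg = build_sync_message([{"in_port": 172, "out_port": 174, "rule": {"vlan_id": 30}}])
apply_sync_message(msg, {172: ["164-2"], 174: ["162-2"]}, lookup)
print(lookup)   # {'164-2': ({'vlan_id': 30}, ['162-2'])}
```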



FIG. 5 presents a flowchart illustrating the traffic forwarding process of a software-definable switch in a multi-chassis link aggregation, in accordance with an embodiment of the present invention. Upon receiving a data frame (operation 502), the switch checks whether the data frame belongs to a software-defined flow (operation 504). The switch does so by determining whether the data frame matches at least one of the flow definitions in the local lookup information (e.g., in a CAM). If the data frame belongs to a software-defined flow, the switch identifies the software-definable output port specified in the flow definition corresponding to the software-defined flow (operation 512) and transmits the data frame to the identified software-definable port (operation 514).


If the data frame does not belong to a software-defined flow, the switch checks whether the switch supports non-software-defined flows (operation 506). If the switch does not support non-software-defined flows, the switch drops the data frame (operation 532). If the switch supports non-software-defined flows, the switch checks whether the data frame is destined to the local switch or a virtual switch associated with the switch (operation 508), as described in conjunction with FIG. 2A. If the data frame is not destined to the local switch or a virtual switch, the switch forwards the data frame to the next-hop switch (operation 532). If the data frame is destined to the local switch or a virtual switch, the switch identifies an output port for the data frame's destination address (operation 522). For example, if the data frame is a TRILL packet, the switch can identify the output port based on the egress RBridge identifier of the TRILL packet. The switch then forwards the data frame to the output port (operation 524).
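
A condensed sketch of this decision sequence, with the hardware match and the virtual-switch check reduced to dictionary lookups; structures, keys, and addresses are illustrative assumptions.

```python
# Sketch: the forwarding decision of FIG. 5 for one received data frame.

def forward(frame, flow_lookup, supports_regular, local_addresses, fib):
    """Return the forwarding decision for one received data frame."""
    out = flow_lookup.get((frame["in_port"], frame.get("vlan_id")))
    if out is not None:
        return ("transmit", out)                 # software-defined flow (512-514)
    if not supports_regular:
        return ("drop", None)                    # software-definable ports only
    if frame["dst"] not in local_addresses:
        return ("forward-next-hop", None)        # not destined to this (virtual) switch
    return ("transmit", fib[frame["dst"]])       # regular lookup by destination (522-524)

decision = forward({"in_port": 112, "vlan_id": 10, "dst": "switch-103"},
                   {(112, 10): 116}, True, {"switch-101"}, {})
print(decision)     # ('transmit', 116)
```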


Failure Handling



FIG. 6A illustrates exemplary failures associated with a multi-chassis link aggregation in a heterogeneous software-defined network, in accordance with an embodiment of the present invention. A heterogeneous software-defined network 600 includes regular switch 606 and software-definable switches 602 and 604. End device 612 is dual-homed and coupled to switches 602 and 604, which are configured to operate in a special “trunked” mode for end device 612. End device 612 views switches 602 and 604 as a common virtual switch 610, with a corresponding virtual address. End device 612 is considered to be logically coupled to virtual switch 610 via logical links represented by dotted lines. Virtual switch 610 is considered to be logically coupled to both switches 602 and 604, optionally with zero-cost links (also represented by dotted lines).


During operation, switches 602 and 604 negotiate with each other and elect switch 602 as a master switch. Switch 604 operates as a slave switch. Switches 602 and 604 allocate logical identifiers to the ports in the link aggregation that couples end device 612. Switch 602 establishes a logical connection 622 with controller 620 using a data path identifier and receives flow definitions based on the logical identifiers. Switch 602 sends the data path identifier and the flow definitions to switch 604 via one or more messages. As a result, the flow definitions are replicated in switches 602 and 604, without switch 604 establishing a connection with controller 620.


Suppose that failure 632 fails switch 602. Switch 606 and end device 612 still consider virtual switch 610 to be operational and continue to forward traffic to switch 604. Because switch 604 has the flow definitions, switch 604 can readily process the data frames belonging to the software-defined flows specified by the flow definitions. Furthermore, upon detecting failure 632, switch 604 establishes a logical connection 624 to controller 620 using the same data path identifier used to establish connection 622. Controller 620 considers connection 624 to be from the same switch (i.e., switch 602). As a result, instead of sending flow definitions, controller 620 simply verifies with switch 604 whether the flow definitions are available. In response, switch 604 notifies controller 620 about the availability of the flow definitions. Controller 620 sends subsequent new or updated flow definitions to switch 604 via connection 624.


Suppose that failure 634 fails logical connection 622. Consequently, switch 602 can no longer receive flow definitions from controller 620. Upon detecting failure 634, switch 602 sends a “take over” message instructing switch 604 to assume the role of the master switch. Switch 604, in response, establishes logical connection 624 and starts operating as the master switch while switch 602 starts operating as a slave switch. Suppose that failure 636 fails the inter-switch link between switches 602 and 604. Switch 602 then can no longer send new or updated flow definitions to switch 604. Upon detecting failure 636, switch 604 establishes a new logical connection 624 using its own data path identifier and starts operating as an independent software-definable switch.
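
The following sketch summarizes the slave's reaction to failures 632, 634, and 636, with the controller connection stubbed out; the event names and data path identifiers are hypothetical.

```python
# Sketch: switch 604's reaction to the failures of FIG. 6A.

def handle_failure(event, master_data_path_id, own_data_path_id, connect):
    """React to a detected failure from the slave switch's point of view."""
    if event == "master-node-failure":           # failure 632: keep-alive timeout
        return connect(master_data_path_id)      # reuse the master's identity
    if event == "take-over-request":             # failure 634: master lost its controller
        return connect(master_data_path_id)      # assume the master role
    if event == "inter-switch-link-failure":     # failure 636: no more synchronization
        return connect(own_data_path_id)         # operate as an independent switch
    return None

# The controller sees a reused data path identifier as the same switch, so it
# only verifies that the flow definitions are present instead of resending them.
print(handle_failure("take-over-request", "dpid-602", "dpid-604",
                     lambda dpid: "connected as " + dpid))
```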



FIG. 6B illustrates an exemplary failure associated with a multi-chassis link aggregation between software-definable switches in a software-defined network, in accordance with an embodiment of the present invention. In this example, switch 606 is also a software-definable switch coupled to switches 602 and 604 via multi-chassis link aggregation 652. Suppose that failure 638 fails switch 602. Because switch 606 applies a flow definition associated with the logical identifier of link aggregation 652 to all ports in link aggregation 652, switch 606 can still forward the data frames belonging to a software-defined flow to switch 604 via the active links in link aggregation 652. Consequently, if switch 606 is a software-definable switch, virtual switch 610 is not necessary for switch 606 to forward data frames to end device 612 via switch 604 in the event of failure 638. Note that link aggregation 652 is distinguishable from the perspectives of switch 606, and switches 602 and 604. From switch 606's perspective, link aggregation 652 provides link level high-availability and ensures frame forwarding to switch 604 when the port coupled to switch 602 cannot forward data frames. On the other hand, from switch 604's perspective, link aggregation 652 provides both link and node level high-availability. Even when switch 602 fails, switch 604 is available for forwarding data frames to switch 606 and end device 612.



FIG. 7A presents a flowchart illustrating the process of a slave software-definable switch of a multi-chassis link aggregation handling a failure, in accordance with an embodiment of the present invention. The switch first checks whether it has received any “take over” message from the master switch (operation 702). This take over message can be received if the master switch has incurred failure 634, as described in conjunction with FIG. 6A. If not, the switch expects a periodic “keep alive” message from the master switch within a given time period (operation 704). The switch checks whether it has received the message before a timeout period associated with the message (operation 706). If the switch receives the message within the timeout period, the switch continues to check whether it has received any “take over” message from the master switch (operation 702).


If the switch does not receive the “keep alive” message within the timeout period, the switch considers the master switch to be inactive. The master switch being inactive corresponds to failure 632 in FIG. 6A. The switch, in conjunction with other partner switch(es), elects a master switch (operation 708) and checks whether the local switch has been elected as the master switch (operation 710). Note that if the multi-chassis link aggregation is configured with only one slave switch, the switch does not require executing operations 708 and 710. If the switch is not elected as the master switch, the switch continues to operate as the slave switch (operation 712).


If the switch receives a “take over” message from the master switch (operation 702) or has been elected to operate as a master switch (operation 710), the switch sends a connection request to the controller using the data path identifier of the master switch (operation 714) and establishes a logical connection with the controller (operation 716). The switch receives a flow definition verification message from the controller (operation 718), as described in conjunction with FIG. 6A. In response, the switch sends a message verifying the flow definitions (operation 720). The switch then starts operating as the master switch for the multi-chassis link aggregation (operation 722).



FIG. 7B presents a flowchart illustrating the process of a software-definable switch handling a failure associated with a link aggregation, in accordance with an embodiment of the present invention. In the example in FIG. 6B, this process corresponds to switch 606 handling failure 638. Upon detecting a failure associated with the link aggregation (operation 752), the switch identifies the physical port associated with the failure (operation 754). The switch then identifies the active physical ports associated with the link aggregation (operation 756) and updates the local lookup information, replacing the port associated with the failure with the identified active port (operation 758). In the example in FIG. 6B, upon detecting failure 638, switch 606 updates the local lookup information, replacing the port coupling switch 602 with the active port in link aggregation 652 coupling switch 604.
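
A minimal sketch of this repair step, assuming port numbers chosen only for illustration:

```python
# Sketch: the FIG. 7B repair step -- lookup entries that point at the failed LAG
# member are rewritten to an active member.

def repair_lookup(lookup, lag_ports, failed_port):
    """Rewrite lookup entries that use the failed port to an active LAG member."""
    active = [p for p in lag_ports if p != failed_port]
    if not active:
        return lookup                     # no active member remains; nothing to repair
    return {rule: (active[0] if port == failed_port else port)
            for rule, port in lookup.items()}

# Switch 606's view of link aggregation 652: one (hypothetical) port toward each
# partner switch, here numbered 71 (toward switch 602) and 72 (toward switch 604).
lookup = {("vlan", 10): 71, ("vlan", 20): 72}
print(repair_lookup(lookup, [71, 72], failed_port=71))   # both entries now use port 72
```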


Exemplary Switch



FIG. 8 illustrates an exemplary switch in a software-defined network, in accordance with an embodiment of the present invention. In this example, a switch 800 includes a number of communication ports 802, a flow definition management module 830, an identifier management module 820, a packet processor 810, and a storage 850. Packet processor 810 further includes a CAM 811, which stores lookup information. One or more of communication ports 802 are software-definable ports. These software-definable ports can be OpenFlow enabled. During operation, identifier management module 820 allocates a logical identifier to a respective port group of one or more software-definable ports. A port group can represent a plurality of software-definable ports associated with a link aggregation. Flow definition management module 830 maintains a mapping between a respective logical identifier and a corresponding port group. In some embodiments, this mapping is stored in storage 850.


Switch 800 provides the logical identifiers as port identifiers of the software-definable ports of the communication ports 802 to a controller in the software-defined network. In response, the controller sends switch 800 a message comprising one or more flow definitions based on the logical identifiers. Flow definition management module 830 operating in conjunction with packet processor 810 receives the message from the controller via one of the communication ports 802. Flow definition management module 830 converts a respective flow definition to make the flow definition applicable to the physical ports in a port group based on the mapping and updates the lookup information with the converted flow definition.


In some embodiments, switch 800 also includes an election module 832, which elects a master switch in conjunction with a remote switch. Switch 800 and the remote switch participate in a multi-chassis link aggregation and have the same logical identifier for the port group associated with the multi-chassis link aggregation. If switch 800 is elected as the master switch, flow definition management module 830 establishes a logical connection with the controller using a data path identifier. Switch 800 also includes a synchronization module 834 which, operating in conjunction with packet processor 810, constructs for the remote switch message(s) including the flow definitions received from the controller. If switch 800 is not elected as the master switch, flow definition management module 830 precludes switch 800 from establishing a logical connection with the controller. Under such a scenario, switch 800 receives message(s) comprising the flow definitions from the remote switch instead of the controller.
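
As a structural outline only (not the actual hardware/software partitioning of switch 800), the following sketch loosely maps the modules of FIG. 8 onto fields and methods of one class; all names and values are hypothetical.

```python
# Sketch: composing the roles of the FIG. 8 modules into a single illustrative class.

from dataclasses import dataclass, field

@dataclass
class Switch800Sketch:
    identifier_mapping: dict = field(default_factory=dict)   # identifier management module
    flow_definitions: list = field(default_factory=list)     # flow definition management
    cam: dict = field(default_factory=dict)                   # lookup information (CAM 811)
    is_master: bool = False                                    # set by the election module

    def allocate(self, logical_id, ports):
        self.identifier_mapping[logical_id] = list(ports)

    def install(self, flow):
        self.flow_definitions.append(flow)
        for in_port in self.identifier_mapping[flow["in_port"]]:
            self.cam[in_port] = flow["rule"]

sw = Switch800Sketch(is_master=True)
sw.allocate(901, [1, 2])      # a port group for a multi-chassis link aggregation
sw.install({"in_port": 901, "rule": {"vlan_id": 40}})
print(sw.cam)                 # {1: {'vlan_id': 40}, 2: {'vlan_id': 40}}
```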


In some embodiments, the switch also includes a high-availability module 840. If high-availability module 840 detects that a port in a port group cannot forward traffic (e.g., due to a link failure or a downstream node failure), high-availability module 840 updates the lookup information to make a flow definition associated with the port group applicable to the active ports in the port group. On the other hand, if switch 800 is not the master switch and if high-availability module 840 detects a failure associated with the remote switch (e.g., a node failure or a failure to the logical link to the controller), flow definition management module 830 establishes a logical connection with the controller using the same data path identifier used by the remote switch. Switch 800 then starts operating as the master switch.


In some embodiments, switch 800 may maintain a membership in a fabric switch. Switch 800 maintains a configuration database in storage 850 that maintains the configuration state of a respective switch within the fabric switch. Switch 800 maintains the state of the fabric switch, which is used to join other switches. Under such a scenario, communication ports 802 can include inter-switch communication channels for communication within a fabric switch. This inter-switch communication channel can be implemented via a regular communication port and based on any open or proprietary format.


Note that the above-mentioned modules can be implemented in hardware as well as in software. In one embodiment, these modules can be embodied in computer-executable instructions stored in a memory which is coupled to one or more processors in switch 800. When executed, these instructions cause the processor(s) to perform the aforementioned functions.


In summary, embodiments of the present invention provide a switch and a method for providing link aggregation in a software-defined network. In one embodiment, the switch includes an identifier management module and a flow definition management module. During operation, the identifier management module allocates a logical identifier to a link aggregation port group, which includes a plurality of ports associated with different links. The flow definition management module processes a flow definition corresponding to the logical identifier, applies the flow definition to the ports in the link aggregation port group, and updates the lookup information for the link aggregation port group based on the flow definition.


The methods and processes described herein can be embodied as code and/or data, which can be stored in a computer-readable non-transitory storage medium. When a computer system reads and executes the code and/or data stored on the computer-readable non-transitory storage medium, the computer system performs the methods and processes embodied as data structures and code and stored within the medium.


The methods and processes described herein can be executed by and/or included in hardware modules or apparatus. These modules or apparatus may include, but are not limited to, an application-specific integrated circuit (ASIC) chip, a field-programmable gate array (FPGA), a dedicated or shared processor that executes a particular software module or a piece of code at a particular time, and/or other programmable-logic devices now known or later developed. When the hardware modules or apparatus are activated, they perform the methods and processes included within them.


The foregoing descriptions of embodiments of the present invention have been presented only for purposes of illustration and description. They are not intended to be exhaustive or to limit this disclosure. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. The scope of the present invention is defined by the appended claims.

Claims
  • 1. A switch, comprising: identifier management circuitry configured to create a mapping between a logical identifier identifying a link aggregation port group and a respective port participating in the port group, wherein the port group includes a plurality of ports associated with different links; and flow definition management circuitry configured to: identify a first flow definition comprising a rule and the logical identifier, wherein the rule indicates how a flow is processed based on the logical identifier; identify one or more ports of the switch corresponding to the logical identifier based on the mapping; convert the first flow definition to a second flow definition applicable to the identified one or more ports of the switch; and apply the second flow definition to traffic associated with the identified one or more ports.
  • 2. The switch of claim 1, wherein the flow definition management circuitry is further configured to incorporate, in the second flow definition, a local policy regarding traffic distribution across the identified local ports.
  • 3. The switch of claim 1, further comprising high-availability circuitry configured to: detect a failure associated with one of the identified local ports; identify active ports in the identified one or more ports; and update lookup information to associate the second flow definition with the identified active ports.
  • 4. The switch of claim 1, wherein the logical identifier appears as a port identifier in the first flow definition.
  • 5. A switch, comprising: identifier management circuitry configured to create a mapping between a logical identifier identifying a multi-switch link aggregation port group and a respective port participating in the port group, wherein the port group includes a plurality of ports of the switch and a remote switch; election circuitry configured to elect a master switch between the switch and the remote switch, wherein the switch and the remote switch participate in the multi-switch link aggregation port group, and wherein the master switch is responsible for obtaining flow definitions for the port group; wherein the logical identifier is the same in the switch and the remote switch; and flow definition management circuitry configured to identify a first flow definition comprising a rule and the logical identifier, wherein the first flow definition is received based on a data path identifier, which identifies the switch to a controller, and wherein the rule indicates how a flow is processed based on the logical identifier.
  • 6. The switch of claim 5, wherein the flow definition management circuitry is further configured to: identify one or more ports of the switch corresponding to the logical identifier based on the mapping; convert the first flow definition to a second flow definition applicable to the identified one or more ports; and apply the second flow definition to traffic associated with the identified one or more ports.
  • 7. The switch of claim 5, further comprising synchronization circuitry configured to, in response to the switch being the master switch, construct, for the remote switch, a message comprising the data path identifier.
  • 8. The switch of claim 5, further comprising synchronization circuitry configured to, in response to the remote switch being the master switch, identify the first flow definition in a message from the remote switch.
  • 9. The switch of claim 8, further comprising high-availability circuitry configured to detect a failure associated with the remote switch; and wherein the flow definition management circuitry is further configured to, in response to the detection of the failure, construct a connection request destined to the network controller based on the data path identifier.
  • 10. The switch of claim 8, wherein the flow definition management circuitry is further configured to, in response to identifying a take-over message from the remote switch, construct a connection request destined to the network controller based on the data path identifier, wherein the take-over message instructs the switch to operate as the master switch.
  • 11. A computer-executable method, comprising: creating at a switch a mapping between a logical identifier identifying a link aggregation port group and a respective port participating in the port group, wherein the port group includes a plurality of ports associated with different links; identifying a first flow definition comprising a rule and the logical identifier, wherein the rule indicates how a flow is processed based on the logical identifier; identifying one or more ports of the switch corresponding to the logical identifier based on the mapping; converting the first flow definition to a second flow definition applicable to the identified one or more ports of the switch; and applying the converted flow definition to traffic associated with the identified one or more ports.
  • 12. The method of claim 11, further comprising incorporating, in the second flow definition, a local policy regarding traffic distribution across the identified one or more ports.
  • 13. The method of claim 11, further comprising: detecting a failure associated with one of the identified local ports; identifying active ports in the identified one or more ports; and updating lookup information to associate the second flow definition with the identified active ports.
  • 14. The method of claim 11, wherein the logical identifier appears as a port identifier in the first flow definition.
  • 15. A computer-executable method, comprising: creating at a first switch a mapping between a logical identifier identifying a multi-switch link aggregation port group and a respective port participating in the port group, wherein the port group includes a plurality of ports of the first switch and a second switch; electing a master switch between the first switch and the second switch, wherein the first switch and the second switch participate in the multi-switch link aggregation port group, and wherein the master switch is responsible for obtaining flow definitions for the port group; wherein the logical identifier is the same in the first switch and the second switch; and identifying a first flow definition comprising a rule and the logical identifier, wherein the first flow definition is received based on a data path identifier, which identifies the first switch to a controller, and wherein the rule indicates how a flow is processed based on the logical identifier.
  • 16. The method of claim 15, further comprising: identifying one or more ports of the first switch corresponding to the logical identifier based on the mapping; converting the first flow definition to a second flow definition applicable to the identified one or more ports; and applying the second flow definition to traffic associated with the identified one or more ports.
  • 17. The method of claim 15, further comprising: in response to the first switch being the master switch, constructing, for the second switch, a message comprising the data path identifier.
  • 18. The method of claim 15, further comprising, in response to the second switch being the master switch, identifying the first flow definition in a message from the second switch.
  • 19. The method of claim 18, further comprising: detecting a failure associated with the second switch; and in response to the detection of the failure, constructing a connection request destined to the network controller based on the data path identifier.
  • 20. The method of claim 15, further comprising, in response to identifying a take-over message from the second switch, constructing a connection request destined to the network controller based on the data path identifier, wherein the take-over message instructs the first switch to operate as the master switch.
RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No. 13/742,207, titled “Link Aggregation in Software-Defined Networks”, by inventors Vivek Agarwal, Arvindsrinivasan Lakshminarasimhan, and Kashyap Tavarekere Ananthapadmanabha, filed 15 Jan. 2013, which claims the benefit of U.S. Provisional Application No. 61/591,227, titled “Building Redundancy into OpenFlow Enabled Network using Multi-Chassis Trunking,” by inventors Vivek Agarwal, Arvindsrinivasan Lakshminarasimhan, and Kashyap Tavarekere Ananthapadmanabha, filed 26 Jan. 2012; and U.S. Provisional Application No. 61/658,330, titled “High Availability and Facilitating Link Aggregation for OpenFlow,” by inventors Vivek Agarwal, Arvindsrinivasan Lakshminarasimhan, and Kashyap Tavarekere Ananthapadmanabha, filed 11 Jun. 2012, the disclosures of which are incorporated by reference herein. The present disclosure is related to U.S. patent application Ser. No. 12/725,249, titled “Redundant Host Connection in a Routed Network,” by inventors Somesh Gupta, Anoop Ghanwani, Phanidhar Koganti, and Shunjia Yu, filed 16 Mar. 2010; and U.S. patent application Ser. No. 13/669,313, titled “System and Method for Flow Management in Software-Defined Networks,” by inventors Kashyap Tavarekere Ananthapadmanabha, Vivek Agarwal, and Eswara S. P. Chinthalapati, filed 5 Nov. 2012, the disclosures of which are incorporated by reference herein.

Related Publications (1)
Number Date Country
20150172098 A1 Jun 2015 US
Provisional Applications (2)
Number Date Country
61591227 Jan 2012 US
61658330 Jun 2012 US
Continuations (1)
Number Date Country
Parent 13742207 Jan 2013 US
Child 14625536 US