Field
The present disclosure relates to communication networks. More specifically, the present disclosure relates to multicast distribution in a fabric switch.
Related Art
The exponential growth of the Internet has made it a popular delivery medium for a variety of applications running on physical and virtual devices. Such applications have brought with them an increasing demand for bandwidth. As a result, equipment vendors race to build larger and faster switches with versatile capabilities, such as distributed multicast traffic management, to move more traffic efficiently. However, the size of a switch cannot grow infinitely. It is limited by physical space, power consumption, and design complexity, to name a few factors. Furthermore, switches with higher capability are usually more complex and expensive. More importantly, because an overly large and complex system often does not provide economy of scale, simply increasing the size and capability of a switch may prove economically unviable due to the increased per-port cost.
A flexible way to improve the scalability of a switch system is to build a fabric switch. A fabric switch is a collection of individual member switches. These member switches form a single, logical switch that can have an arbitrary number of ports and an arbitrary topology. As demands grow, customers can adopt a “pay as you grow” approach to scale up the capacity of the fabric switch.
Meanwhile, layer-2 (e.g., Ethernet) switching technologies continue to evolve. More routing-like functionalities, which have traditionally been the characteristics of layer-3 (e.g., Internet Protocol or IP) networks, are migrating into layer-2. Notably, the recent development of the Transparent Interconnection of Lots of Links (TRILL) protocol allows Ethernet switches to function more like routing devices. TRILL overcomes the inherent inefficiency of the conventional spanning tree protocol, which forces layer-2 switches to be coupled in a logical spanning-tree topology to avoid looping. TRILL allows routing bridges (RBridges) to be coupled in an arbitrary topology without the risk of looping by implementing routing functions in switches and including a hop count in the TRILL header.
While a fabric switch brings many desirable features to a network, some issues remain unsolved in facilitating efficient multicast traffic distribution for a large number of virtual servers.
One embodiment of the present invention provides a switch. The switch includes an inter-switch multicast module and an edge multicast module. The inter-switch multicast module identifies for a first replication of a multicast packet an egress inter-switch port in a multicast tree rooted at the switch. The multicast tree is identified by an identifier of the switch. The edge multicast module identifies an egress edge port for a second replication of the multicast packet based on a multicast group identifier. The multicast group identifier is local within the switch.
In a variation on this embodiment, the inter-switch multicast module identifies the inter-switch port based on a bit value corresponding to the inter-switch port. The bit value is in an inter-switch bitmap associated with the multicast tree.
In a further variation, the inter-switch bitmap is included in an entry in a multicast switch identifier table. The entry in the multicast switch identifier table corresponds to the identifier of the switch.
In a further variation, the switch also includes a selection module which selects the multicast switch identifier table from a plurality of multicast switch identifier table instances based on a multicast group of the multicast packet. A respective multicast switch identifier table instance is associated with a respective multicast group.
In a variation on this embodiment, the edge multicast module identifies the edge port based on a bit value corresponding to the edge port. The bit value is in an edge bitmap associated with the multicast group identifier.
In a further variation, the edge bitmap is included in an entry in a multicast group identifier table. The entry in the multicast group identifier table corresponds to the multicast group identifier.
In a variation on this embodiment, the multicast group identifier is mapped to a virtual local area network (VLAN) identifier of the multicast packet in a mapping table.
In a variation on this embodiment, the switch also includes a fabric switch management module which maintains a membership in a fabric switch. The fabric switch accommodates a plurality of switches and operates as a single switch.
In a further variation, the first replication of the multicast packet is encapsulated in a fabric encapsulation of the fabric switch. The inter-switch multicast module also identifies for a third replication of the multicast packet an egress inter-switch port in a second multicast tree rooted at a second switch. This second multicast tree is identified by an identifier of the second switch.
In a further variation, the edge multicast module also determines whether the multicast group identifier is associated with the multicast packet based on a VLAN identifier of the multicast packet.
In the figures, like reference numerals refer to the same figure elements.
The following description is presented to enable any person skilled in the art to make and use the invention, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims.
Overview
In embodiments of the present invention, the problem of efficient multicast traffic distribution in a fabric switch is solved by facilitating a multicast distribution tree, which is referred to as an ingress switch multicast tree, at a respective member switch of the fabric switch. Upon receiving a multicast packet, the switch forwards the packet via its own ingress switch multicast tree.
With existing technologies, a fabric switch has a finite number of multicast distribution trees, and the member switches use these trees to forward traffic belonging to all multicast groups. As a result, a single tree can carry multicast traffic belonging to a large number of multicast groups. This can congest the links in that tree, leading to inefficient forwarding and greater delay. The problem is further aggravated when the number of trees is small. For example, a fabric switch typically has only one multicast distribution tree. In that case, all member switches forward all multicast traffic via the same tree, congesting its links.
To solve this problem, a respective member switch computes its own ingress switch multicast tree and forwards multicast traffic via that tree. As a result, the multicast traffic load of the fabric switch becomes distributed among the ingress switch multicast trees of the corresponding member switches. Since different ingress switch multicast trees comprise different links of the fabric switch, the multicast traffic load becomes distributed across the links of the fabric switch instead of a few links of a finite number of trees.
In some embodiments, multicast packet replication in a member switch is performed in two stages. In the first stage, the member switch replicates a multicast packet to its edge ports based on a multicast group identifier (MGID) representing the edge multicast replication of the switch. In some embodiments, this MGID is local to the switch and operates as a local multicast replication identifier for the switch. It should be noted that this multicast group identifier is distinct from the multicast group address of a multicast packet, which is not local to a switch and is specific to a multicast group. In the second stage, the switch replicates the packet to inter-switch (IS) ports for other member switches based on the egress switch identifier of the packet.
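These two stages can be summarized in a short sketch. The following Python fragment is a minimal illustration, assuming hypothetical `mapping_table`, `mgid_table`, and `msid_table` structures, bit-string bitmaps, and port objects with a `send` method; it is a sketch of the technique, not an implementation of any particular switch.

```python
# Minimal sketch of two-stage multicast replication in a member switch.
# All structure and attribute names here are hypothetical illustrations.

def ports_from_bitmap(bitmap, ports):
    """A bit string such as "11000" selects ports: bit i maps to the i-th port."""
    return [port for bit, port in zip(bitmap, ports) if bit == "1"]

def replicate_multicast(switch, packet):
    # Stage 1: edge replication, keyed by the switch-local MGID.
    mgid = switch.mapping_table[packet.vlan]          # VLAN -> local MGID
    edge_bitmap = switch.mgid_table[mgid]             # MGID -> edge port bitmap
    for port in ports_from_bitmap(edge_bitmap, switch.edge_ports):
        port.send(packet)                             # inner packet, no encapsulation

    # Stage 2: inter-switch replication, keyed by the egress switch identifier.
    # At the ingress switch this is the switch's own identifier, which selects
    # the ingress switch multicast tree rooted at this switch.
    is_bitmap = switch.msid_table[switch.switch_id]   # switch ID -> IS port bitmap
    fabric_packet = switch.encapsulate(packet)
    for port in ports_from_bitmap(is_bitmap, switch.is_ports):
        port.send(fabric_packet)
```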
In a fabric switch, any number of switches coupled in an arbitrary topology may logically operate as a single switch. The fabric switch can be an Ethernet fabric switch or a virtual cluster switch (VCS), which can operate as a single Ethernet switch. Any member switch may join or leave the fabric switch in “plug-and-play” mode without any manual configuration. In some embodiments, a respective switch in the fabric switch is a Transparent Interconnection of Lots of Links (TRILL) routing bridge (RBridge). In some further embodiments, a respective switch in the fabric switch is an Internet Protocol (IP) routing-capable switch (e.g., an IP router).
It should be noted that a fabric switch is not the same as conventional switch stacking. In switch stacking, multiple switches are interconnected at a common location (often within the same rack), based on a particular topology, and manually configured in a particular way. These stacked switches typically share a common address, e.g., an IP address, so they can be addressed as a single switch externally. Furthermore, switch stacking requires a significant amount of manual configuration of the ports and inter-switch links. The need for manual configuration prohibits switch stacking from being a viable option in building a large-scale switching system. The topology restriction imposed by switch stacking also limits the number of switches that can be stacked. This is because it is very difficult, if not impossible, to design a stack topology that allows the overall switch bandwidth to scale adequately with the number of switch units.
In contrast, a fabric switch can include an arbitrary number of switches with individual addresses, can be based on an arbitrary topology, and does not require extensive manual configuration. The switches can reside in the same location, or be distributed over different locations. These features overcome the inherent limitations of switch stacking and make it possible to build a large “switch farm,” which can be treated as a single, logical switch. Due to the automatic configuration capabilities of the fabric switch, an individual physical switch can dynamically join or leave the fabric switch without disrupting services to the rest of the network.
Furthermore, the automatic and dynamic configurability of the fabric switch allows a network operator to build its switching system in a distributed and “pay-as-you-grow” fashion without sacrificing scalability. The fabric switch's ability to respond to changing network conditions makes it an ideal solution in a virtual computing environment, where network loads often change with time.
In this disclosure, the term “fabric switch” refers to a number of interconnected physical switches which form a single, scalable logical switch. These physical switches are referred to as member switches of the fabric switch. In a fabric switch, any number of switches can be connected in an arbitrary topology, and the entire group of switches functions together as one single, logical switch. This feature makes it possible to use many smaller, inexpensive switches to construct a large fabric switch, which can be viewed as a single logical switch externally. Although the present disclosure is presented using examples based on a fabric switch, embodiments of the present invention are not limited to a fabric switch. Embodiments of the present invention are relevant to any computing device that includes a plurality of devices operating as a single device.
The term “multicast” is used in a generic sense, and can refer to any traffic forwarding toward a plurality of recipients. Any traffic forwarding that creates and forwards more than one copy of the same packet in a fabric switch can be referred to as “multicast.” Examples of “multicast” traffic include, but are not limited to, broadcast, unknown unicast, and multicast traffic.
The term “end device” can refer to any device external to a fabric switch. Examples of an end device include, but are not limited to, a host machine, a conventional layer-2 switch, a layer-3 router, or any other type of network device. Additionally, an end device can be coupled to other switches or hosts further away from a layer-2 or layer-3 network. An end device can also be an aggregation point for a number of network devices to enter the fabric switch.
The term “switch” is used in a generic sense, and it can refer to any standalone or fabric switch operating in any network layer. “Switch” should not be interpreted as limiting embodiments of the present invention to layer-2 networks. Any device that can forward traffic to an external device or another switch can be referred to as a “switch.” Any physical or virtual device (e.g., a virtual machine/switch operating on a computing device) that can forward traffic to an end device can be referred to as a “switch.” Examples of a “switch” include, but are not limited to, a layer-2 switch, a layer-3 router, a TRILL RBridge, or a fabric switch comprising a plurality of similar or heterogeneous smaller physical and/or virtual switches.
The term “edge port” refers to a port on a fabric switch which exchanges data frames with a network device outside of the fabric switch (i.e., an edge port is not used for exchanging data frames with another member switch of a fabric switch). The term “inter-switch port” refers to a port which sends/receives data frames among member switches of a fabric switch. The terms “interface” and “port” are used interchangeably.
The term “switch identifier” refers to a group of bits that can be used to identify a switch. Examples of a switch identifier include, but are not limited to, a media access control (MAC) address, an Internet Protocol (IP) address, and an RBridge identifier. Note that the TRILL standard uses “RBridge ID” (RBridge identifier) to denote a 48-bit intermediate-system-to-intermediate-system (IS-IS) System ID assigned to an RBridge, and “RBridge nickname” to denote a 16-bit value that serves as an abbreviation for the “RBridge ID.” In this disclosure, “switch identifier” is used as a generic term, is not limited to any bit format, and can refer to any format that can identify a switch. The term “RBridge identifier” is also used in a generic sense, is not limited to any bit format, and can refer to “RBridge ID,” “RBridge nickname,” or any other format that can identify an RBridge.
The term “packet” refers to a group of bits that can be transported together across a network. “Packet” should not be interpreted as limiting embodiments of the present invention to layer-3 networks. “Packet” can be replaced by other terminologies referring to a group of bits, such as “message,” “frame,” “cell,” or “datagram.”
Network Architecture
Switches in fabric switch 100 use edge ports to communicate with end devices (e.g., non-member switches) and inter-switch ports to communicate with other member switches. For example, switch 105 is coupled to end device 114 via an edge port, and to switches 101, 102, and 104 via inter-switch ports and one or more links. Data communication via an edge port can be based on Ethernet, and data communication via an inter-switch port can be based on the IP and/or TRILL protocol. It should be noted that control message exchange via inter-switch ports can be based on a different protocol (e.g., the Internet Protocol (IP) or Fibre Channel (FC) protocol). Supporting multiple multicast trees in a TRILL network is specified in U.S. patent application Ser. No. 13/030,688, titled “Supporting multiple multicast trees in TRILL networks,” by inventors Shunjia Yu, Nagarajan Venkatesan, Anoop Ghanwani, Phanidhar Koganti, Mythilikanth Raman, Rajiv Krishnamurthy, and Dilip Chatwani, the disclosure of which is incorporated by reference herein in its entirety.
During operation, switch 103 receives a multicast packet from end device 112. Switch 103 is then the ingress switch of fabric switch 100 for that multicast packet. With existing technologies, fabric switch 100 has a finite number of multicast distribution trees. Suppose that one of these trees is rooted at switch 101. Upon receiving the multicast packet, switch 103 forwards the packet to switch 101, which in turn, forwards that packet to switches 102, 104, and 105 via the tree. Similarly, upon receiving a multicast packet from end device 114, switch 105 forwards the packet to switch 101, which in turn, forwards that packet to switches 102, 103, and 104 via the tree. Using the same tree to forward multicast traffic from different ingress switches can congest the links in the multicast tree, leading to inefficient forwarding and greater delay.
To solve this problem, a respective member switch of fabric switch 100 computes its own ingress switch multicast tree and forwards multicast traffic via that tree. For example, upon receiving a multicast packet, switch 103 forwards the packet via its ingress switch multicast tree. Similarly, upon receiving a multicast packet, switch 105 forwards the packet via its ingress switch multicast tree. As a result, these multicast packets become distributed in fabric switch 100 among the ingress switch multicast trees rooted at switches 103 and 105. Since different ingress switch multicast trees comprise different links of fabric switch 100, the multicast traffic load becomes distributed across the links of fabric switch 100 instead of a few links of a finite number of trees.
Similarly, upon receiving a multicast packet from end device 114, switch 105 forwards the packet via ingress switch multicast tree 135. Ingress switch multicast tree 135 includes links 124, 125, 126, and 123. Switch 105 replicates the multicast packet and forwards the replicated packets via links 124, 125, and 126. Upon receiving the replicated packet, switch 104 further replicates the packet and forwards the replicated packet via link 123. If end device 116 is a receiver of the multicast group of the packet, switch 104 also replicates the packet and forwards it via the edge port that couples end device 116.
Packet Headers
In some embodiments, switch 103 encapsulates packet 202 in a fabric encapsulation 212 to generate fabric-encapsulated packet 204. Examples of fabric encapsulation 212 include, but are not limited to, TRILL encapsulation and IP encapsulation. Fabric encapsulation 212 includes the identifier of switch 103 as both the ingress and the egress identifier. Examples of a switch identifier include, but are not limited to, an RBridge identifier, an IP version 4 address, and an IP version 6 address. Examples of fabric-encapsulated packet 204 include, but are not limited to, a TRILL frame and an IP packet. Fabric encapsulation 212 can also include an outer layer-2 header comprising an all-MAC address 220, which indicates that this packet is destined to all recipients in fabric switch 100. The outer layer-2 header also includes the MAC address of switch 103 as the source MAC address.
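For illustration, the encapsulation fields described above can be modeled as a simple record. The sketch below uses hypothetical field names and values; actual TRILL and IP encapsulations define their own formats.

```python
from dataclasses import dataclass

@dataclass
class FabricEncapsulation:
    """Hypothetical model of the fabric encapsulation fields described above."""
    ingress_switch_id: str   # identifier of the ingress switch (e.g., an RBridge ID)
    egress_switch_id: str    # same value here: it identifies the multicast tree root
    outer_dst_mac: str       # all-MAC address: destined to all recipients in the fabric
    outer_src_mac: str       # MAC address of the switch currently forwarding the packet
    inner_packet: bytes      # the original (inner) multicast packet

# At ingress switch 103, both switch identifiers carry switch 103's identifier,
# so every receiving switch can select the tree rooted at switch 103.
encapsulated = FabricEncapsulation(
    ingress_switch_id="switch-103",
    egress_switch_id="switch-103",
    outer_dst_mac="ALL-FABRIC-MAC",        # placeholder for the all-MAC address
    outer_src_mac="mac-of-switch-103",
    inner_packet=b"<packet 202>",
)
```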
Switch 103 forwards fabric-encapsulated packet 204 via ingress switch multicast tree 133. Switches 101, 102, and 104 receive fabric-encapsulated packet 204, identify all-MAC address 220, and determine that this packet is a multicast packet. Switches 101, 102, and 104 also identify the identifier of switch 103 as the egress switch identifier (which is also the ingress switch identifier), and recognize that the packet should be forwarded via ingress switch multicast tree 133 of switch 103. Switches 101 and 102 identify themselves as leaf nodes of ingress switch multicast tree 133. Suppose that switch 101 is coupled to end device 222, which is a recipient of packet 202. Switch 101 then removes fabric encapsulation 212, replicates inner packet 202, and forwards packet 202 to end device 222 via the corresponding edge port.
On the other hand, switch 104 detects that it is coupled to another downstream switch of ingress switch multicast tree 133. Switch 104 then replicates fabric-encapsulated packet 204 to generate fabric-encapsulated packet 206. However, because switch 104 is forwarding the packet, switch 104 changes the source MAC address of the outer layer-2 header to the MAC address of switch 104 to generate fabric encapsulation 214, and forwards fabric-encapsulated packet 206 to switch 105. Suppose that end device 116 is a recipient of packet 202. Switch 104 then also removes fabric encapsulation 212, replicates inner packet 202, and forwards packet 202 to end device 116 via the corresponding edge port.
Switch 105 receives fabric-encapsulated packet 206, identifies all-MAC address 220, and determines that this packet is a multicast packet. Switch 105 also identifies the identifier of switch 103 as the egress switch identifier (which is also the ingress switch identifier), and recognizes that the packet should be forwarded via ingress switch multicast tree 133 of switch 103. Switch 105 identifies itself as a leaf node of ingress switch multicast tree 133. Suppose that end device 114 is a recipient of packet 202. Switch 105 then removes fabric encapsulation 214, replicates inner packet 202, and forwards packet 202 to end device 114 via the corresponding edge port.
Multicast Replication
A switch in fabric switch 100 maintains a mapping table 302, which maps the VLAN identifier of a packet to a multicast group identifier (MGID).
Mapping table 302 can also map one or more other fields of a packet to an MGID, such as source and/or destination IP addresses, source and/or destination MAC addresses, source and/or destination ports, and service and/or client VLAN identifiers. A respective entry in mapping table 302 can include the mapping or can be indexed based on VLAN identifiers. Mapping table 302 includes mappings for MGIDs 312-1, 312-2, . . . , 312-n. The switch uses the VLAN identifier of the packet to obtain the corresponding MGID from mapping table 302.
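Such a mapping can be sketched as a table keyed either on the VLAN identifier alone or on a tuple of packet fields. The entries, field choices, and the more-specific-first precedence below are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical mapping-table sketch: an MGID can be derived from the VLAN
# identifier alone, or from a tuple of packet fields.
mapping_table = {
    ("vlan", 10): 1,                                # VLAN 10 -> MGID 1
    ("vlan", 20): 2,                                # VLAN 20 -> MGID 2
    ("flow", (10, "10.1.1.5", "239.0.0.1")): 3,     # (VLAN, src IP, dst IP) -> MGID 3
}

def lookup_mgid(packet):
    # Assumed precedence: try the more specific flow-based entry first,
    # then fall back to the VLAN-based entry.
    flow_key = ("flow", (packet.vlan, packet.src_ip, packet.dst_ip))
    if flow_key in mapping_table:
        return mapping_table[flow_key]
    return mapping_table[("vlan", packet.vlan)]
```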
The switch uses this MGID to obtain an edge port bitmap from MGID table 304. The edge port bitmap represents the edge ports to which the packet should be replicated. An edge port is represented by a bit in the edge port bitmap, and a set (or unset) bit can indicate that the packet should be replicated and forwarded via the corresponding edge port. For example, a bitmap of “11000” can indicate that a packet should be replicated to the first two edge ports of the switch. It should be noted that the length of the edge port bitmap (i.e., the number of bits in the bitmap) can be equal to or greater than the number of edge ports of the switch. MGID table 304 includes edge port bitmaps 314-1, 314-2, . . . , 314-m. It should be noted that m and n can be different. A respective entry in MGID table 304 can include a mapping between an edge port bitmap and an MGID, or can be indexed based on MGIDs. Upon obtaining an edge port bitmap, the switch replicates and forwards the packet via the edge ports indicated by the bitmap.
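The bitmap interpretation in the “11000” example can be checked in a few lines (assuming, as in the sketches above, that a set bit selects the corresponding port):

```python
edge_ports = ["e1", "e2", "e3", "e4", "e5"]    # hypothetical edge port names
edge_bitmap = "11000"                          # bitmap from the example above

selected = [port for bit, port in zip(edge_bitmap, edge_ports) if bit == "1"]
print(selected)                                # ['e1', 'e2'] -- the first two edge ports
```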
The switch uses its local switch identifier (e.g., an RBridge identifier or an IP address) to obtain an IS port bitmap from a multicast switch identifier (MSID) table 306. The IS port bitmap represents the IS ports to which the packet should be replicated. An IS port is represented by a bit in the IS port bitmap, and a set (or unset) bit can indicate that the packet should be replicated and forwarded via the corresponding IS port. For example, a bitmap of “11000” can indicate that a packet should be replicated to the first two IS ports of the switch. It should be noted that the length of the IS port bitmap can be equal to or greater than the number of switches in a relevant network (e.g., in a fabric switch). MSID table 306 includes IS port bitmaps 316-1, 316-2, . . . , 316-k. It should be noted that each of m, n, and k can be different.
A respective entry in MSID table 306 can include a mapping between an IS port bitmap and a switch identifier, or can be indexed based on the egress switch identifiers. Because the ingress and egress switch identifiers of a fabric encapsulation identify the root node of an ingress switch multicast tree, such indexing leads to the IS port bitmap corresponding to that ingress switch multicast tree. Upon obtaining an IS port bitmap, the switch encapsulates the packet in fabric encapsulation, replicates the fabric-encapsulated packet, and forwards the fabric-encapsulated packets via the IS ports indicated by the bitmap. It should be noted that if a switch has no IS port to which the fabric-encapsulated packet should be replicated, a respective bit in the corresponding IS port bitmap can be unset (or set).
When a switch receives a fabric-encapsulated multicast packet via an IS port, the switch performs a similar two-stage replication. The switch obtains the MGID associated with the VLAN identifier of the inner packet, and uses that MGID to obtain an edge port bitmap from its MGID table 304.
In some embodiments, an edge port bitmap with all bits unset (or set) indicates that the packet should not be replicated to local edge ports. The switch removes the fabric encapsulation and forwards the inner packet via the edge ports indicated by the edge port bitmap. The switch also obtains an IS port bitmap from its MSID table 306 based on the egress switch identifier in the fabric encapsulation. Based on the IS port bitmap, the switch determines the IS ports to which the fabric-encapsulated packet should be replicated.
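The received-packet handling just described can be summarized in a sketch (under the same hypothetical naming as the earlier fragments; the outer source MAC rewrite follows the example of switch 104 above):

```python
def handle_fabric_packet(switch, fabric_packet):
    inner = fabric_packet.inner_packet

    # Edge stage: MGID lookup on the inner packet's VLAN, then decapsulate and
    # forward via the edge ports the bitmap selects (possibly none).
    mgid = switch.mapping_table[inner.vlan]
    edge_bitmap = switch.mgid_table[mgid]
    for bit, port in zip(edge_bitmap, switch.edge_ports):
        if bit == "1":
            port.send(inner)

    # IS stage: the egress switch identifier names the tree root, so it indexes
    # directly into the MSID table for that ingress switch multicast tree.
    is_bitmap = switch.msid_table[fabric_packet.egress_switch_id]
    for bit, port in zip(is_bitmap, switch.is_ports):
        if bit == "1":
            # The forwarding switch rewrites the outer source MAC to its own
            # MAC address before sending the replicated copy downstream.
            port.send(switch.rewrite_outer_source_mac(fabric_packet))
```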
In some embodiments, a switch maintains a plurality of MSID table instances 308-1, 308-2, . . . , 308-i. A respective MSID table instance can be associated with a respective multicast group, allowing different multicast groups to use different ingress switch multicast trees.
In some embodiments, a switch selects an MSID table instance from MSID tables 308-1, 308-2, . . . , 308-i based on a layer-2 or layer-3 forwarding decision.
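The per-group instance selection can be sketched as one more level of indirection, keyed by the multicast group (all names and values below are hypothetical):

```python
# Hypothetical per-group MSID table instances (e.g., 308-1, 308-2, ...): each
# multicast group maps to its own instance, which in turn maps egress switch
# identifiers (tree roots) to IS port bitmaps.
msid_instances = {
    "group-512": {"switch-101": "0100", "switch-103": "0011"},
    "group-514": {"switch-101": "1000", "switch-105": "0110"},
}

def select_is_bitmap(group, egress_switch_id):
    instance = msid_instances[group]     # pick the MSID table instance for this group
    return instance[egress_switch_id]    # then index by the tree root's identifier
```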
Multicast Forwarding
Upon receiving a multicast packet via an edge port, a switch identifies the VLAN identifier of the packet and obtains the corresponding MGID from its mapping table, as described above.
The switch obtains an edge port bitmap from an MGID table based on the obtained MGID (operation 408). The switch can obtain the edge port bitmap from an entry in the MGID table comprising a mapping between the MGID and the edge port bitmap, or by using the MGID as an index of the MGID table. The switch identifies the edge ports corresponding to the obtained edge port bitmap (operation 410), as described above, and replicates and forwards the packet via those edge ports (operation 412). In some embodiments, the switch also selects an MSID table instance based on the multicast group of the packet (operation 414).
If the switch has selected an MSID table instance (operation 414) and/or has replicated the packet via the edge ports (operation 412), the switch obtains an IS port bitmap from an MSID table based on the local switch identifier (operation 416). The switch can obtain the IS port bitmap from an entry in the MSID table comprising a mapping between the switch identifier and the IS port bitmap, or by using the switch identifier as an index of the MSID table. Examples of the switch identifier include, but are not limited to, a TRILL RBridge identifier, a MAC address, and an IP address. The switch identifies the IS ports corresponding to the obtained IS port bitmap (operation 418), as described above. The switch then encapsulates the packet in a fabric encapsulation, and replicates and forwards the fabric-encapsulated packets via the identified IS ports.
For a fabric-encapsulated multicast packet received via an IS port, the switch obtains an edge port bitmap from an MGID table based on the obtained MGID (operation 460). The switch can obtain the edge port bitmap from an entry in the MGID table comprising a mapping between the MGID and the edge port bitmap, or by using the MGID as an index of the MGID table. The switch identifies the edge ports corresponding to the obtained edge port bitmap (operation 462), as described above, removes the fabric encapsulation, and replicates and forwards the inner packet via those edge ports (operation 466). In some embodiments, the switch also selects an MSID table instance based on the multicast group of the packet (operation 468).
If the switch has selected an MSID table instance (operation 468) and/or has replicated the packet via the edge ports (operation 466), the switch obtains an IS port bitmap from an MSID table based on the egress switch identifier in the fabric encapsulation (operation 470), as described above. The switch then identifies the IS ports corresponding to the obtained IS port bitmap, and replicates and forwards the fabric-encapsulated packet via those IS ports.
Presence-Based Multicast Trees
Suppose that multicast group 512 does not have presence in switch 103. As a result, switch 103 is not included in ingress switch multicast tree 502. Consequently, multicast group 512 does not need hardware resources on switch 103, which does not include the MSID table instance corresponding to ingress switch multicast tree 502. Similarly, suppose that multicast group 514 does not have presence in switch 102. As a result, switch 102 is not included in ingress switch multicast tree 504. Consequently, multicast group 514 does not need hardware resources on switch 102, which does not include the MSID table instance corresponding to ingress switch multicast tree 504. In this way, a switch uses its hardware resources only for the multicast groups which are present in that switch. This allows efficient scaling of multicast groups in fabric switch 100.
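This presence-based allocation can be sketched by installing an MSID table instance on a switch only for the groups present on that switch (the names, the presence map, and the `compute_instance` callback below are hypothetical):

```python
# Hypothetical presence map: the member switches on which each group is present.
presence = {
    "group-512": {"switch-101", "switch-102", "switch-104"},   # not on switch-103
    "group-514": {"switch-101", "switch-103", "switch-105"},   # not on switch-102
}

def install_msid_instances(switch_id, compute_instance):
    # Install an instance only for groups present on this switch, so the switch
    # spends hardware resources only on trees it actually participates in.
    return {
        group: compute_instance(group, switch_id)
        for group, members in presence.items()
        if switch_id in members
    }
```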
Suppose that switch 102 becomes unavailable (e.g., due to a link or node failure, or a reboot event). Under such a scenario, the unavailability has no impact on the traffic of multicast group 514. However, the unavailability of switch 102 hinders the forwarding of multicast group 512's traffic to end device 520. When switch 102 becomes available again, switch 102 can resume receiving traffic of multicast group 512 and forwarding that traffic to end device 520.
Exemplary Switch
In this example, switch 600 includes a number of communication ports 602, a packet processor 610, an inter-switch multicast module, and an edge multicast module, which operate as described above.
In some embodiments, switch 600 may maintain a membership in a fabric switch, as described above.
Communication ports 602 can include inter-switch communication channels for communication within a fabric switch. This inter-switch communication channel can be implemented via a regular communication port and based on any open or proprietary format. Communication ports 602 can include one or more TRILL ports capable of receiving frames encapsulated in a TRILL header. Communication ports 602 can also include one or more IP ports capable of receiving IP packets. An IP port is capable of receiving an IP packet and can be configured with an IP address. Packet processor 610 can process TRILL-encapsulated frames and/or IP packets.
Note that the above-mentioned modules can be implemented in hardware as well as in software. In one embodiment, these modules can be embodied in computer-executable instructions stored in a memory, which is coupled to one or more processors in switch 600. When executed, these instructions cause the processor(s) to perform the aforementioned functions.
In summary, embodiments of the present invention provide a switch and a method for facilitating ingress switch multicast trees in a fabric switch. In one embodiment, the switch includes an inter-switch multicast module and an edge multicast module. The inter-switch multicast module identifies for a first replication of a multicast packet an egress inter-switch port in a multicast tree rooted at the switch. The multicast tree is identified by an identifier of the switch. The edge multicast module identifies an egress edge port for a second replication of the multicast packet based on a multicast group identifier. The multicast group identifier is local within the switch.
The methods and processes described herein can be embodied as code and/or data, which can be stored in a computer-readable non-transitory storage medium. When a computer system reads and executes the code and/or data stored on the computer-readable non-transitory storage medium, the computer system performs the methods and processes embodied as data structures and code and stored within the medium.
The methods and processes described herein can be executed by and/or included in hardware modules or apparatus. These modules or apparatus may include, but are not limited to, an application-specific integrated circuit (ASIC) chip, a field-programmable gate array (FPGA), a dedicated or shared processor that executes a particular software module or a piece of code at a particular time, and/or other programmable-logic devices now known or later developed. When the hardware modules or apparatus are activated, they perform the methods and processes included within them.
The foregoing descriptions of embodiments of the present invention have been presented only for purposes of illustration and description. They are not intended to be exhaustive or to limit this disclosure. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. The scope of the present invention is defined by the appended claims.
This application claims the benefit of U.S. Provisional Application No. 61/833,385, titled “Virtual Cluster TRILL Source RBridge Multicast Distribution,” by inventors Venkata R. K. Addanki, Shunjia Yu, and Mythilikanth Raman, filed 10 Jun. 2013, the disclosure of which is incorporated by reference herein. The present disclosure is related to U.S. patent application Ser. No. 13/087,239, titled “Virtual Cluster Switching,” by inventors Suresh Vobbilisetty and Dilip Chatwani, filed 14 Apr. 2011, the disclosure of which is incorporated by reference herein.
Number | Name | Date | Kind |
---|---|---|---|
829529 | Keathley | Aug 1906 | A |
5390173 | Spinney | Feb 1995 | A |
5802278 | Isfeld | Sep 1998 | A |
5878232 | Marimuthu | Mar 1999 | A |
5959968 | Chin | Sep 1999 | A |
5973278 | Wehrill, III | Oct 1999 | A |
5983278 | Chong | Nov 1999 | A |
6041042 | Bussiere | Mar 2000 | A |
6085238 | Yuasa | Jul 2000 | A |
6104696 | Kadambi | Aug 2000 | A |
6185214 | Schwartz | Feb 2001 | B1 |
6185241 | Sun | Feb 2001 | B1 |
6331983 | Haggerty | Dec 2001 | B1 |
6438106 | Pillar | Aug 2002 | B1 |
6498781 | Bass | Dec 2002 | B1 |
6542266 | Phillips | Apr 2003 | B1 |
6633761 | Signhal | Oct 2003 | B1 |
6771610 | Seaman | Aug 2004 | B1 |
6873602 | Ambe | Mar 2005 | B1 |
6937576 | DiBenedetto | Aug 2005 | B1 |
6956824 | Mark | Oct 2005 | B2 |
6957269 | Williams | Oct 2005 | B2 |
6975581 | Medina | Dec 2005 | B1 |
6975864 | Signhal | Dec 2005 | B2 |
7016352 | Chow | Mar 2006 | B1 |
7061877 | Gummalla | Jun 2006 | B1 |
7173934 | Lapuh | Feb 2007 | B2 |
7197308 | Singhal | Mar 2007 | B2 |
7206288 | Cometto | Apr 2007 | B2 |
7310664 | Merchant | Dec 2007 | B1 |
7313637 | Tanaka | Dec 2007 | B2 |
7315545 | Chowdhury et al. | Jan 2008 | B1 |
7316031 | Griffith | Jan 2008 | B2 |
7330897 | Baldwin | Feb 2008 | B2 |
7380025 | Riggins | May 2008 | B1 |
7397794 | Lacroute | Jul 2008 | B1 |
7430164 | Bare | Sep 2008 | B2 |
7453888 | Zabihi | Nov 2008 | B2 |
7477894 | Sinha | Jan 2009 | B1 |
7480258 | Shuen | Jan 2009 | B1 |
7508757 | Ge | Mar 2009 | B2 |
7558195 | Kuo | Jul 2009 | B1 |
7558273 | Grosser, Jr. | Jul 2009 | B1 |
7571447 | Ally | Aug 2009 | B2 |
7599901 | Mital | Oct 2009 | B2 |
7688736 | Walsh | Mar 2010 | B1 |
7688960 | Aubuchon | Mar 2010 | B1 |
7690040 | Frattura | Mar 2010 | B2 |
7706255 | Kondrat et al. | Apr 2010 | B1 |
7716370 | Devarapalli | May 2010 | B1 |
7720076 | Dobbins | May 2010 | B2 |
7729296 | Choudhary | Jun 2010 | B1 |
7787480 | Mehta | Aug 2010 | B1 |
7792920 | Istvan | Sep 2010 | B2 |
7796593 | Ghosh | Sep 2010 | B1 |
7808992 | Homchaudhuri | Oct 2010 | B2 |
7836332 | Hara | Nov 2010 | B2 |
7843906 | Chidambaram et al. | Nov 2010 | B1 |
7843907 | Abou-Emara | Nov 2010 | B1 |
7860097 | Lovett | Dec 2010 | B1 |
7898959 | Arad | Mar 2011 | B1 |
7912091 | Krishnan | Mar 2011 | B1 |
7924837 | Shabtay | Apr 2011 | B1 |
7937756 | Kay | May 2011 | B2 |
7945941 | Sinha | May 2011 | B2 |
7949638 | Goodson | May 2011 | B1 |
7957386 | Aggarwal | Jun 2011 | B1 |
8018938 | Fromm | Sep 2011 | B1 |
8027354 | Portolani | Sep 2011 | B1 |
8054832 | Shukla | Nov 2011 | B1 |
8068442 | Kompella | Nov 2011 | B1 |
8078704 | Lee | Dec 2011 | B2 |
8102781 | Smith | Jan 2012 | B2 |
8102791 | Tang | Jan 2012 | B2 |
8116307 | Thesayi | Feb 2012 | B1 |
8125928 | Mehta | Feb 2012 | B2 |
8134922 | Elangovan | Mar 2012 | B2 |
8155150 | Chung | Apr 2012 | B1 |
8160063 | Maltz | Apr 2012 | B2 |
8160080 | Arad | Apr 2012 | B1 |
8170038 | Belanger | May 2012 | B2 |
8175107 | Yalagandula | May 2012 | B1 |
8194674 | Pagel | Jun 2012 | B1 |
8195774 | Lambeth | Jun 2012 | B2 |
8204061 | Sane | Jun 2012 | B1 |
8213313 | Doiron | Jul 2012 | B1 |
8213336 | Smith | Jul 2012 | B2 |
8230069 | Korupolu | Jul 2012 | B2 |
8239960 | Frattura | Aug 2012 | B2 |
8249069 | Raman | Aug 2012 | B2 |
8270401 | Barnes | Sep 2012 | B1 |
8295291 | Ramanathan | Oct 2012 | B1 |
8295921 | Wang | Oct 2012 | B2 |
8301686 | Appajodu | Oct 2012 | B1 |
8339994 | Gnanasekaran | Dec 2012 | B2 |
8351352 | Eastlake, III | Jan 2013 | B1 |
8369335 | Jha | Feb 2013 | B2 |
8369347 | Xiong | Feb 2013 | B2 |
8392496 | Linden | Mar 2013 | B2 |
8462774 | Page | Jun 2013 | B2 |
8467375 | Blair | Jun 2013 | B2 |
8520595 | Yadav | Aug 2013 | B2 |
8599850 | Jha | Dec 2013 | B2 |
8599864 | Chung | Dec 2013 | B2 |
8615008 | Natarajan | Dec 2013 | B2 |
8706905 | McGlaughlin | Apr 2014 | B1 |
8724456 | Hong | May 2014 | B1 |
8806031 | Kondur | Aug 2014 | B1 |
8826385 | Congdon | Sep 2014 | B2 |
8918631 | Kumar | Dec 2014 | B1 |
8937865 | Kumar | Jan 2015 | B1 |
20010005527 | Vaeth | Jun 2001 | A1 |
20010055274 | Hegge | Dec 2001 | A1 |
20020019904 | Katz | Feb 2002 | A1 |
20020021701 | Lavian | Feb 2002 | A1 |
20020039350 | Wang | Apr 2002 | A1 |
20020054593 | Morohashi | May 2002 | A1 |
20020091795 | Yip | Jul 2002 | A1 |
20030041085 | Sato | Feb 2003 | A1 |
20030123393 | Feuerstraeter | Jul 2003 | A1 |
20030147385 | Montalvo | Aug 2003 | A1 |
20030174706 | Shankar | Sep 2003 | A1 |
20030189905 | Lee | Oct 2003 | A1 |
20030208616 | Laing | Nov 2003 | A1 |
20030216143 | Roese | Nov 2003 | A1 |
20040001433 | Gram | Jan 2004 | A1 |
20040003094 | See | Jan 2004 | A1 |
20040010600 | Baldwin | Jan 2004 | A1 |
20040049699 | Griffith | Mar 2004 | A1 |
20040057430 | Paavolainen | Mar 2004 | A1 |
20040081171 | Finn | Apr 2004 | A1 |
20040117508 | Shimizu | Jun 2004 | A1 |
20040120326 | Yoon | Jun 2004 | A1 |
20040156313 | Hofmeister et al. | Aug 2004 | A1 |
20040165595 | Holmgren | Aug 2004 | A1 |
20040165596 | Garcia | Aug 2004 | A1 |
20040205234 | Barrack | Oct 2004 | A1 |
20040213232 | Regan | Oct 2004 | A1 |
20050007951 | Lapuh | Jan 2005 | A1 |
20050044199 | Shiga | Feb 2005 | A1 |
20050074001 | Mattes | Apr 2005 | A1 |
20050094568 | Judd | May 2005 | A1 |
20050094630 | Valdevit | May 2005 | A1 |
20050122979 | Gross | Jun 2005 | A1 |
20050157645 | Rabie et al. | Jul 2005 | A1 |
20050157751 | Rabie | Jul 2005 | A1 |
20050169188 | Cometto | Aug 2005 | A1 |
20050195813 | Ambe | Sep 2005 | A1 |
20050207423 | Herbst | Sep 2005 | A1 |
20050213561 | Yao | Sep 2005 | A1 |
20050220096 | Friskney | Oct 2005 | A1 |
20050265356 | Kawarai | Dec 2005 | A1 |
20050278565 | Frattura | Dec 2005 | A1 |
20060007869 | Hirota | Jan 2006 | A1 |
20060018302 | Ivaldi | Jan 2006 | A1 |
20060023707 | Makishima et al. | Feb 2006 | A1 |
20060029055 | Perera | Feb 2006 | A1 |
20060034292 | Wakayama | Feb 2006 | A1 |
20060036765 | Weyman | Feb 2006 | A1 |
20060059163 | Frattura | Mar 2006 | A1 |
20060062187 | Rune | Mar 2006 | A1 |
20060072550 | Davis | Apr 2006 | A1 |
20060083254 | Ge | Apr 2006 | A1 |
20060093254 | Mozdy | May 2006 | A1 |
20060098589 | Kreeger | May 2006 | A1 |
20060140130 | Kalkunte | Jun 2006 | A1 |
20060168109 | Warmenhoven | Jul 2006 | A1 |
20060184937 | Abels | Aug 2006 | A1 |
20060221960 | Borgione | Oct 2006 | A1 |
20060227776 | Chandrasekaran | Oct 2006 | A1 |
20060235995 | Bhatia | Oct 2006 | A1 |
20060242311 | Mai | Oct 2006 | A1 |
20060245439 | Sajassi | Nov 2006 | A1 |
20060251067 | DeSanti | Nov 2006 | A1 |
20060256767 | Suzuki | Nov 2006 | A1 |
20060265515 | Shiga | Nov 2006 | A1 |
20060285499 | Tzeng | Dec 2006 | A1 |
20060291388 | Amdahl | Dec 2006 | A1 |
20060291480 | Cho | Dec 2006 | A1 |
20070036178 | Hares | Feb 2007 | A1 |
20070053294 | Ho | Mar 2007 | A1 |
20070083625 | Chamdani | Apr 2007 | A1 |
20070086362 | Kato | Apr 2007 | A1 |
20070094464 | Sharma | Apr 2007 | A1 |
20070097968 | Du | May 2007 | A1 |
20070098006 | Parry | May 2007 | A1 |
20070116224 | Burke | May 2007 | A1 |
20070116422 | Reynolds | May 2007 | A1 |
20070156659 | Lim | Jul 2007 | A1 |
20070177525 | Wijnands | Aug 2007 | A1 |
20070177597 | Ju | Aug 2007 | A1 |
20070183313 | Narayanan | Aug 2007 | A1 |
20070211712 | Fitch | Sep 2007 | A1 |
20070258449 | Bennett | Nov 2007 | A1 |
20070274234 | Kubota | Nov 2007 | A1 |
20070289017 | Copeland, III | Dec 2007 | A1 |
20080052487 | Akahane | Feb 2008 | A1 |
20080056135 | Lee | Mar 2008 | A1 |
20080065760 | Damm | Mar 2008 | A1 |
20080080517 | Roy | Apr 2008 | A1 |
20080095160 | Yadav | Apr 2008 | A1 |
20080101386 | Gray | May 2008 | A1 |
20080112400 | Dunbar et al. | May 2008 | A1 |
20080133760 | Berkvens | Jun 2008 | A1 |
20080159277 | Vobbilisetty | Jul 2008 | A1 |
20080172492 | Raghunath | Jul 2008 | A1 |
20080181196 | Regan | Jul 2008 | A1 |
20080181243 | Vobbilisetty | Jul 2008 | A1 |
20080186981 | Seto | Aug 2008 | A1 |
20080205377 | Chao | Aug 2008 | A1 |
20080219172 | Mohan | Sep 2008 | A1 |
20080225852 | Raszuk | Sep 2008 | A1 |
20080225853 | Melman | Sep 2008 | A1 |
20080228897 | Ko | Sep 2008 | A1 |
20080240129 | Elmeleegy | Oct 2008 | A1 |
20080267179 | LaVigne | Oct 2008 | A1 |
20080285458 | Lysne | Nov 2008 | A1 |
20080285555 | Ogasahara | Nov 2008 | A1 |
20080298248 | Roeck | Dec 2008 | A1 |
20080304498 | Jorgensen | Dec 2008 | A1 |
20080310342 | Kruys | Dec 2008 | A1 |
20090022069 | Khan | Jan 2009 | A1 |
20090037607 | Farinacci | Feb 2009 | A1 |
20090042270 | Dolly | Feb 2009 | A1 |
20090044270 | Shelly | Feb 2009 | A1 |
20090067422 | Poppe | Mar 2009 | A1 |
20090067442 | Killian | Mar 2009 | A1 |
20090079560 | Fries | Mar 2009 | A1 |
20090080345 | Gray | Mar 2009 | A1 |
20090083445 | Ganga | Mar 2009 | A1 |
20090092042 | Yuhara | Apr 2009 | A1 |
20090092043 | Lapuh | Apr 2009 | A1 |
20090106405 | Mazarick | Apr 2009 | A1 |
20090116381 | Kanda | May 2009 | A1 |
20090129384 | Regan | May 2009 | A1 |
20090138577 | Casado | May 2009 | A1 |
20090138752 | Graham | May 2009 | A1 |
20090161584 | Guan | Jun 2009 | A1 |
20090161670 | Shepherd | Jun 2009 | A1 |
20090168647 | Holness | Jul 2009 | A1 |
20090199177 | Edwards | Aug 2009 | A1 |
20090204965 | Tanaka | Aug 2009 | A1 |
20090213783 | Moreton | Aug 2009 | A1 |
20090222879 | Kostal | Sep 2009 | A1 |
20090232031 | Vasseur | Sep 2009 | A1 |
20090245137 | Hares | Oct 2009 | A1 |
20090245242 | Carlson | Oct 2009 | A1 |
20090246137 | Hadida | Oct 2009 | A1 |
20090252049 | Ludwig | Oct 2009 | A1 |
20090252061 | Small | Oct 2009 | A1 |
20090260083 | Szeto | Oct 2009 | A1 |
20090279558 | Davis | Nov 2009 | A1 |
20090292858 | Lambeth | Nov 2009 | A1 |
20090316721 | Kanda | Dec 2009 | A1 |
20090323698 | LeFaucheur | Dec 2009 | A1 |
20090323708 | Ihle | Dec 2009 | A1 |
20090327392 | Tripathi | Dec 2009 | A1 |
20090327462 | Adams | Dec 2009 | A1 |
20100027420 | Smith | Feb 2010 | A1 |
20100046471 | Hattori | Feb 2010 | A1 |
20100054260 | Pandey | Mar 2010 | A1 |
20100061269 | Banerjee | Mar 2010 | A1 |
20100074175 | Banks | Mar 2010 | A1 |
20100097941 | Carlson | Apr 2010 | A1 |
20100103813 | Allan | Apr 2010 | A1 |
20100103939 | Carlson | Apr 2010 | A1 |
20100131636 | Suri | May 2010 | A1 |
20100158024 | Sajassi | Jun 2010 | A1 |
20100165877 | Shukla | Jul 2010 | A1 |
20100165995 | Mehta | Jul 2010 | A1 |
20100168467 | Johnston | Jul 2010 | A1 |
20100169467 | Shukla | Jul 2010 | A1 |
20100169948 | Budko | Jul 2010 | A1 |
20100182920 | Matsuoka | Jul 2010 | A1 |
20100195489 | Zhou | Aug 2010 | A1 |
20100215042 | Sato | Aug 2010 | A1 |
20100215049 | Raza | Aug 2010 | A1 |
20100220724 | Rabie | Sep 2010 | A1 |
20100226368 | Mack-Crane | Sep 2010 | A1 |
20100226381 | Mehta | Sep 2010 | A1 |
20100246388 | Gupta | Sep 2010 | A1 |
20100257263 | Casado | Oct 2010 | A1 |
20100265849 | Harel | Oct 2010 | A1 |
20100271960 | Krygowski | Oct 2010 | A1 |
20100272107 | Papp | Oct 2010 | A1 |
20100281106 | Ashwood-Smith | Nov 2010 | A1 |
20100284414 | Agarwal | Nov 2010 | A1 |
20100284418 | Gray | Nov 2010 | A1 |
20100287262 | Elzur | Nov 2010 | A1 |
20100287548 | Zhou | Nov 2010 | A1 |
20100290464 | Assarpour | Nov 2010 | A1 |
20100290473 | Enduri | Nov 2010 | A1 |
20100299527 | Arunan | Nov 2010 | A1 |
20100303071 | Kotalwar | Dec 2010 | A1 |
20100303075 | Tripathi | Dec 2010 | A1 |
20100303083 | Belanger | Dec 2010 | A1 |
20100309820 | Rajagopalan | Dec 2010 | A1 |
20100309912 | Mehta | Dec 2010 | A1 |
20100329110 | Rose | Dec 2010 | A1 |
20110019678 | Mehta | Jan 2011 | A1 |
20110032945 | Mullooly | Feb 2011 | A1 |
20110035489 | McDaniel | Feb 2011 | A1 |
20110035498 | Shah | Feb 2011 | A1 |
20110044339 | Kotalwar | Feb 2011 | A1 |
20110044352 | Chaitou | Feb 2011 | A1 |
20110064086 | Xiong | Mar 2011 | A1 |
20110064089 | Hidaka | Mar 2011 | A1 |
20110072208 | Gulati | Mar 2011 | A1 |
20110085560 | Chawla | Apr 2011 | A1 |
20110085563 | Kotha | Apr 2011 | A1 |
20110110266 | Li | May 2011 | A1 |
20110134802 | Rajagopalan | Jun 2011 | A1 |
20110134803 | Dalvi | Jun 2011 | A1 |
20110134925 | Safrai | Jun 2011 | A1 |
20110142053 | Van Der Merwe | Jun 2011 | A1 |
20110142062 | Wang | Jun 2011 | A1 |
20110161494 | McDysan | Jun 2011 | A1 |
20110161695 | Okita | Jun 2011 | A1 |
20110176412 | Stine | Jul 2011 | A1 |
20110188373 | Saito | Aug 2011 | A1 |
20110194403 | Sajassi | Aug 2011 | A1 |
20110194563 | Shen | Aug 2011 | A1 |
20110228780 | Ashwood-Smith | Sep 2011 | A1 |
20110231570 | Altekar | Sep 2011 | A1 |
20110231574 | Saunderson | Sep 2011 | A1 |
20110235523 | Jha | Sep 2011 | A1 |
20110243133 | Villait | Oct 2011 | A9 |
20110243136 | Raman | Oct 2011 | A1 |
20110246669 | Kanada | Oct 2011 | A1 |
20110255538 | Srinivasan | Oct 2011 | A1 |
20110255540 | Mizrahi | Oct 2011 | A1 |
20110261828 | Smith | Oct 2011 | A1 |
20110268120 | Vobbilisetty | Nov 2011 | A1 |
20110268125 | Vobbilisetty | Nov 2011 | A1 |
20110273988 | Tourrilhes | Nov 2011 | A1 |
20110274114 | Dhar | Nov 2011 | A1 |
20110280572 | Vobbilisetty | Nov 2011 | A1 |
20110286457 | Ee | Nov 2011 | A1 |
20110296052 | Guo | Dec 2011 | A1 |
20110299391 | Vobbilisetty | Dec 2011 | A1 |
20110299413 | Chatwani | Dec 2011 | A1 |
20110299414 | Yu | Dec 2011 | A1 |
20110299527 | Yu | Dec 2011 | A1 |
20110299528 | Yu | Dec 2011 | A1 |
20110299531 | Yu | Dec 2011 | A1 |
20110299532 | Yu | Dec 2011 | A1 |
20110299533 | Yu | Dec 2011 | A1 |
20110299534 | Koganti | Dec 2011 | A1 |
20110299535 | Vobbilisetty | Dec 2011 | A1 |
20110299536 | Cheng | Dec 2011 | A1 |
20110317559 | Kern | Dec 2011 | A1 |
20110317703 | Dunbar et al. | Dec 2011 | A1 |
20120011240 | Hara | Jan 2012 | A1 |
20120014261 | Salam | Jan 2012 | A1 |
20120014387 | Dunbar | Jan 2012 | A1 |
20120020220 | Sugita | Jan 2012 | A1 |
20120027017 | Rai | Feb 2012 | A1 |
20120033663 | Guichard | Feb 2012 | A1 |
20120033665 | Da Silva | Feb 2012 | A1 |
20120033668 | Humphries | Feb 2012 | A1 |
20120033669 | Mohandas | Feb 2012 | A1 |
20120033672 | Page | Feb 2012 | A1 |
20120063363 | Li | Mar 2012 | A1 |
20120075991 | Sugita | Mar 2012 | A1 |
20120099567 | Hart | Apr 2012 | A1 |
20120099602 | Nagapudi | Apr 2012 | A1 |
20120106339 | Mishra | May 2012 | A1 |
20120117438 | Shaffer | May 2012 | A1 |
20120131097 | Baykal | May 2012 | A1 |
20120131289 | Taguchi | May 2012 | A1 |
20120134266 | Roitshtein | May 2012 | A1 |
20120147740 | Nakash | Jun 2012 | A1 |
20120158997 | Hsu | Jun 2012 | A1 |
20120163164 | Terry | Jun 2012 | A1 |
20120177039 | Berman | Jul 2012 | A1 |
20120210416 | Mihelich | Aug 2012 | A1 |
20120243539 | Keesara | Sep 2012 | A1 |
20120275297 | Subramanian | Nov 2012 | A1 |
20120275347 | Banerjee | Nov 2012 | A1 |
20120278804 | Narayanasamy | Nov 2012 | A1 |
20120287785 | Kamble | Nov 2012 | A1 |
20120294192 | Masood | Nov 2012 | A1 |
20120294194 | Balasubramanian | Nov 2012 | A1 |
20120320800 | Kamble | Dec 2012 | A1 |
20120320926 | Kamath et al. | Dec 2012 | A1 |
20120327766 | Tsai et al. | Dec 2012 | A1 |
20120327937 | Melman et al. | Dec 2012 | A1 |
20130003535 | Sarwar | Jan 2013 | A1 |
20130003737 | Sinicrope | Jan 2013 | A1 |
20130003738 | Koganti | Jan 2013 | A1 |
20130028072 | Addanki | Jan 2013 | A1 |
20130034015 | Jaiswal | Feb 2013 | A1 |
20130034021 | Jaiswal | Feb 2013 | A1 |
20130067466 | Combs | Mar 2013 | A1 |
20130070762 | Adams | Mar 2013 | A1 |
20130114595 | Mack-Crane et al. | May 2013 | A1 |
20130124707 | Ananthapadmanabha | May 2013 | A1 |
20130127848 | Joshi | May 2013 | A1 |
20130136123 | Ge | May 2013 | A1 |
20130148546 | Eisenhauer | Jun 2013 | A1 |
20130194914 | Agarwal | Aug 2013 | A1 |
20130219473 | Schaefer | Aug 2013 | A1 |
20130250951 | Koganti | Sep 2013 | A1 |
20130259037 | Natarajan | Oct 2013 | A1 |
20130272135 | Leong | Oct 2013 | A1 |
20130294451 | Li | Nov 2013 | A1 |
20130301642 | Radhakrishnan | Nov 2013 | A1 |
20130346583 | Low | Dec 2013 | A1 |
20140013324 | Zhang | Jan 2014 | A1 |
20140025736 | Wang | Jan 2014 | A1 |
20140044126 | Sabhanatarajan | Feb 2014 | A1 |
20140056298 | Vobbilisetty | Feb 2014 | A1 |
20140105034 | Sun | Apr 2014 | A1 |
20150010007 | Matsuhira | Jan 2015 | A1 |
20150030031 | Zhou | Jan 2015 | A1 |
20150143369 | Zheng | May 2015 | A1 |
Number | Date | Country |
---|---|---|
102801599 | Nov 2012 | CN |
102801599 | Nov 2012 | CN |
0579567 | May 1993 | EP |
0993156 | Dec 2000 | EP |
1398920 | Mar 2004 | EP |
2001167 | Aug 2007 | EP |
2001167 | Aug 2007 | EP |
1916807 | Oct 2007 | EP |
1916807 | Apr 2008 | EP |
2008056838 | May 2008 | WO |
2009042919 | Apr 2009 | WO |
2010111142 | Sep 2010 | WO |
2010111142 | Sep 2010 | WO |
2014031781 | Feb 2014 | WO |
Entry |
---|
Rosen, E. et al., “BGP/MPLS VPNs”, Mar. 1999. |
Office Action for U.S. Appl. No. 14/577,785, filed Dec. 19, 2014, dated Apr. 13, 2015. |
Office Action for U.S. Appl. No. 13/786,328, filed Mar. 5, 2013, dated Mar. 13, 2015. |
Office Action for U.S. Appl. No. 13/425,238, filed Mar. 20, 2012, dated Mar. 12, 2015. |
Abawajy J. “An Approach to Support a Single Service Provider Address Image for Wide Area Networks Environment” Centre for Parallel and Distributed Computing, School of Computer Science Carleton University, Ottawa, Ontario, K1S 5B6, Canada. |
Office Action dated Feb. 11, 2016, U.S. Appl. No. 14/488,173, filed Sep. 16, 2014. |
Office Action dated Feb. 24, 2016, U.S. Appl. No. 13/971,397, filed Aug. 20, 2013. |
Office Action dated Feb. 24, 2016, U.S. Appl. No. 12/705,508, filed Feb. 12, 2010. |
Zhai F. Hu et al. ‘RBridge: Pseudo-Nickname; draft-hu-trill-pseudonode-nickname-02.txt’, May 15, 2012. |
Mahalingam “VXLAN: A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks” Oct. 17, 2013 pp. 1-22, Sections 1, 4 and 4.1. |
Office action dated Apr. 30, 2015, U.S. Appl. No. 13/351,513, filed Jan. 17, 2012. |
Office Action dated Apr. 1, 2015, U.S. Appl. No. 13/656,438, filed Oct. 19, 2012. |
Office Action dated May 21, 2015, U.S. Appl. No. 13/288,822, filed Nov. 3, 2011. |
Siamak Azodolmolky et al. “Cloud computing networking: Challenges and opportunities for innovations”, IEEE Communications Magazine, vol. 51, No. 7, Jul. 1, 2013. |
Office Action dated Apr. 1, 2015 U.S. Appl. No. 13/656,438, filed Oct. 19, 2012. |
Office action dated Jun. 8, 2015, U.S. Appl. No. 14/178,042, filed Feb. 11, 2014. |
Office Action Dated Jun. 10, 2015, U.S. Appl. No. 13/890,150, filed May 8, 2013. |
Office Action dated Jun. 18, 215, U.S. Appl. No. 13/098,490, filed May 2, 2011. |
Office Action dated Jun. 16, 2015, U.S. Appl. No. 13/048,817, filed Mar. 15, 2011. |
Huang, Nen-Fu et al. “An Effective Spanning Tree Algorithm for a Bridged LAN”, Mar. 16, 1992. |
Zhai, H. et al., “RBridge: Pseudo-Nickname draft-hu-trill-pseudonode-nickname-02.”, May 15, 2012. |
Narten, T. et al. “Problem Statement: Overlays for Network Virtualization draft-narten-nvo3-overlay-problem-statement-01”, Oct. 31, 2011. |
Knight, Paul et al. “Layer 2 and 3 Virtual Private Networks: Taxonomy, Technology, and Standardization Efforts”, 2004. |
An Introduction to Brocade VCS Fabric Technology, Dec. 3, 2012. |
Kreeger, L. et al. “Network Virtualization Overlay Control Protocol Requirements draft-kreeger-nvo3-overlay-cp-00”, Aug. 2, 2012. |
Knight, Paul et al., “Network based IP VPN Architecture using Virtual Routers”, May 2003. |
Louati, Wajdi et al., “Network-Based Virtual Personal Overlay Networks Using Programmable Virtual Routers”, 2005. |
Brocade Unveils “The Effortless Network”, 2009. |
The Effortless Network: HyperEdge Technology for the Campus LAN, 2012. |
Foundary FastIron Configuration Guide, Software Release FSX 04.2.00b, Software Release FWS 04.3.00, Software Release FGS 05.0.00a, 2008. |
FastIron and TurbuIron 24x Configuration Guide, 2010. |
FastIron Configuration Guide, Supporting IronWare Software Release 07.0.00, 2009. |
Christensen, M. et al., Considerations for Internet Group Management Protocol (IGMP) and Multicast Listener Discovery (MLD) Snooping Switches, 2006. |
Perlman, Radia et al. “RBridges: Base Protocol Specification”, <draft-ietf-trill-rbridge-protocol-16.txt>, 2010. |
Brocade Fabric OS (FOS) 6.2 Virtual Fabrics Feature Frequently Asked Questions, 2009. |
Eastlake III, Donald et al., “RBridges: TRILL Header Options”, 2009. |
Perlman, Radia “Challenges and Opportunities in the Design of TRILL: a Routed layer 2 Technology”, 2009. |
Perlman, Radia et al., “RBridge VLAN Mapping”, <draft-ietf-trill-rbridge-vlan-mapping-01.txt>, 2009. |
Knight, S. et al., “Virtual Router Redundancy Protocol”, 1998. |
“Switched Virtual Internetworking moves beyond bridges and routers”, 8178 Data Communications Sep. 23, 1994, No. 12. |
Touch, J. et al., “Transparent Interconnection of Lots of Links (TRILL): Problem and Applicability Statement”, 2009. |
Lapuh, Roger et al., “Split Multi-link Trunking (SMLT)”, 2002. |
Lapuh, Roger et al., “Split Multi-Link Trunking (SMLT) draft-Lapuh-network-smlt-08”, 2009. |
Nadas, S. et al., “Virtual Router Redundancy Protocol (VRRP) Version 3 for IPv4 and IPv6”, 2010. |
Office action dated Sep. 12, 2012, U.S. Appl. No. 12/725,249, filed Mar. 16, 2010. |
Office action dated Apr. 26, 2012, U.S. Appl. No. 12/725,249, filed Mar. 16, 2010. |
Office action dated Dec. 5, 2012, U.S. Appl. No. 13/087,239, filed Apr. 14, 2011. |
Office action dated May 22, 2013, U.S. Appl. No. 13/087,239, filed Apr. 14, 2011. |
Office action dated Dec. 21, 2012, U.S. Appl. No. 13/098,490, filed May 2, 2011. |
Office action dated Jul. 9, 2013, U.S. Appl. No. 13/098,490, filed May 2, 2011. |
Office action dated Mar. 27, 2014, U.S. Appl. No. 13/098,490, filed May 2, 2011. |
Office action dated Feb. 5, 2013, U.S. Appl. No. 13/092,724, filed Apr. 22, 2011. |
Office action dated Jul. 16, 2013, U.S. Appl. No. 13/092,724, filed Jul. 16, 2013. |
Office action dated Apr. 9, 2014, U.S. Appl. No. 13/092,724, filed Apr. 22, 2011. |
Office action dated Jun. 10, 2013, U.S. Appl. No. 13/092,580, filed Apr. 22, 2011. |
Office action dated Jan. 10, 2014, U.S. Appl. No. 13/092,580, filed Apr. 22, 2011. |
Office action dated Mar. 18, 2013, U.S. Appl. No. 13/042,259, filed Mar. 7, 2011. |
Office action dated Jan. 16, 2014, U.S. Appl. No. 13/042,259, filed Mar. 7, 2011. |
Office action dated Jul. 31, 2013, U.S. Appl. No. 13/042,259, filed Mar. 7, 2011. |
Office action dated Jun. 21, 2013, U.S. Appl. No. 13/092,460, filed Apr. 22, 2011. |
Office action dated Mar. 14, 2014, U.S. Appl. No. 13/092,460, filed Apr. 22, 2011. |
Office action dated Jan. 28, 2013, U.S. Appl. No. 13/092,701, filed Apr. 22, 2011. |
Office action dated Jul. 3, 2013, U.S. Appl. No. 13/092,701, filed Apr. 22, 2011. |
Office action dated Mar. 26, 2014, U.S. Appl. No. 13/092,701, filed Apr. 22, 2011. |
Office action dated Feb. 5, 2013, U.S. Appl. No. 13/092,752, filed Apr. 22, 2011. |
Office action dated Jul. 18, 2013, U.S. Appl. No. 13/092,752, filed Apr. 22, 2011. |
Office action dated Apr. 9, 2014, U.S. Appl. No. 13/092,752, filed Apr. 22, 2011. |
Office action dated Dec. 20, 2012, U.S. Appl. No. 12/950,974, filed Nov. 19, 2010. |
Office action dated May 24, 2012, U.S. Appl. No. 12/950,974, filed Nov. 19, 2010. |
Office action dated Mar. 4, 2013, U.S. Appl. No. 13/092,877, filed Apr. 22, 2011. |
Office action dated Sep. 5, 2013, U.S. Appl. No. 13/092,877, filed Apr. 22, 2011. |
Office action dated Jan. 6, 2014, U.S. Appl. No. 13/092,877, filed Apr. 22, 2011. |
Office action dated Jun. 20, 2014, U.S. Appl. No. 13/092,877, filed Apr. 22, 2011. |
Office action dated Jun. 7, 2012, U.S. Appl. No. 12/950,968, filed Nov. 19, 2010. |
Office action dated Jan. 4, 2013, U.S. Appl. No. 12/950,968, filed Nov. 19, 2010. |
Office action dated Sep. 19, 2012, U.S. Appl. No. 13/092,864, filed Apr. 22, 2011. |
Office action dated May 31, 2013, U.S. Appl. No. 13/098,360, filed Apr. 29, 2011. |
Office action dated Oct. 2, 2013, U.S. Appl. No. 13/044,326, filed Mar. 9, 2011. |
Office action dated Dec. 3, 2012, U.S. Appl. No. 13/030,806, filed Feb. 18, 2011. |
Office action dated Jun. 11, 2013, U.S. Appl. No. 13/030,806, filed Feb. 18, 2011. |
Office action dated Apr. 22, 2014, U.S. Appl. No. 13/030,806, filed Feb. 18, 2011. |
Office action dated Apr. 25, 2013, U.S. Appl. No. 13/030,688, filed Feb. 18, 2011. |
Office action dated Jun. 11, 2013, U.S. Appl. No. 13/044,301, filed Mar. 9, 2011. |
Office action dated Feb. 22, 2013, U.S. Appl. No. 13/044,301, filed Mar. 9, 2011. |
Office action dated Oct. 26, 2012, U.S. Appl. No. 13/050,102, filed Mar. 17, 2011. |
Office action dated May 16, 2013, U.S. Appl. No. 13/050,102, filed Mar. 17, 2011. |
Office action dated Jan. 28, 2013, U.S. Appl. No. 13/184,526, filed Jul. 16, 2011.
Office action dated May 22, 2013, U.S. Appl. No. 13/184,526, filed Jul. 16, 2011.
Office action dated Dec. 2, 2013, U.S. Appl. No. 13/184,526, filed Jul. 16, 2011.
Office action dated Jun. 19, 2013, U.S. Appl. No. 13/092,873, filed Apr. 22, 2011.
Office action dated Nov. 29, 2013, U.S. Appl. No. 13/092,873, filed Apr. 22, 2011.
Office action dated Jul. 23, 2013, U.S. Appl. No. 13/365,993, filed Feb. 3, 2012.
Office action dated Jul. 18, 2013, U.S. Appl. No. 13/365,808, filed Feb. 3, 2012.
Office action dated Mar. 6, 2014, U.S. Appl. No. 13/425,238, filed Mar. 20, 2012.
Office action dated Jun. 13, 2013, U.S. Appl. No. 13/312,903, filed Dec. 6, 2011.
Office action dated Nov. 12, 2013, U.S. Appl. No. 13/312,903, filed Dec. 6, 2011.
Office action dated Jun. 18, 2014, U.S. Appl. No. 13/440,861, filed Apr. 5, 2012.
Office action dated Feb. 28, 2014, U.S. Appl. No. 13/351,513, filed Jan. 17, 2012.
Office action dated May 9, 2014, U.S. Appl. No. 13/484,072, filed May 30, 2012.
Office action dated Oct. 21, 2013, U.S. Appl. No. 13/533,843, filed Jun. 26, 2012.
Office action dated May 14, 2014, U.S. Appl. No. 13/533,843, filed Jun. 26, 2012.
Office action dated Feb. 20, 2014, U.S. Appl. No. 13/598,204, filed Aug. 29, 2012.
Office action dated Jun. 6, 2014, U.S. Appl. No. 13/669,357, filed Nov. 5, 2012.
Office action dated Jul. 7, 2014, U.S. Appl. No. 13/044,326, filed Mar. 9, 2011.
Eastlake, D. et al., ‘RBridges: TRILL Header Options’, Dec. 24, 2009, pp. 1-17, TRILL Working Group.
Perlman, Radia et al., ‘RBridge VLAN Mapping’, TRILL Working Group, Dec. 4, 2009, pp. 1-12.
Touch, J. et al., ‘Transparent Interconnection of Lots of Links (TRILL): Problem and Applicability Statement’, May 2009, Network Working Group, pp. 1-17.
Perlman, Radia et al., ‘RBridges: Base Protocol Specification’, IETF Draft, Jun. 26, 2009.
‘Switched Virtual Networks: Internetworking Moves Beyond Bridges and Routers’, Data Communications, McGraw Hill, New York, US, vol. 23, No. 12, Sep. 1, 1994, pp. 66-70, 72, 74, XP000462385, ISSN: 0363-6399.
Office action dated Aug. 29, 2014, U.S. Appl. No. 13/042,259, filed Mar. 7, 2011.
Office action dated Aug. 21, 2014, U.S. Appl. No. 13/184,526, filed Jul. 16, 2011.
Brocade, ‘Brocade Fabric OS (FOS) 6.2 Virtual Fabrics Feature Frequently Asked Questions’, pp. 1-6, 2009, Brocade Communications Systems, Inc.
Brocade, ‘FastIron and TurboIron 24x Configuration Guide’, Feb. 16, 2010.
Brocade, ‘The Effortless Network: Hyperedge Technology for the Campus LAN’, 2012.
Christensen, M. et al., ‘Considerations for Internet Group Management Protocol (IGMP) and Multicast Listener Discovery (MLD) Snooping Switches’, May 2006.
FastIron Configuration Guide, Supporting IronWare Software Release 07.0.00, Dec. 18, 2009.
Foundry FastIron Configuration Guide, Software Release FSX 04.2.00b, Software Release FWS 04.3.00, Software Release FGS 05.0.00a, Sep. 2008.
Huang, Nen-Fu et al., ‘An Effective Spanning Tree Algorithm for a Bridged LAN’, Mar. 16, 1992.
Knight, ‘Network Based IP VPN Architecture using Virtual Routers’, May 2003.
Knight, P. et al., ‘Layer 2 and 3 Virtual Private Networks: Taxonomy, Technology, and Standardization Efforts’, IEEE Communications Magazine, IEEE Service Center, Piscataway, US, vol. 42, No. 6, Jun. 1, 2004, pp. 124-131, XP001198207, ISSN: 0163-6804, DOI: 10.1109/MCOM.2004.1304248.
Knight, S. et al., ‘Virtual Router Redundancy Protocol’, Internet Citation, Apr. 1, 1998, XP002135272, retrieved from the Internet: ftp://ftp.isi.edu/in-notes/rfc2338.txt [retrieved on Apr. 10, 2000].
Lapuh, Roger et al., ‘Split Multi-Link Trunking (SMLT)’, Network Working Group, Oct. 2012.
Lapuh, Roger et al., ‘Split Multi-link Trunking (SMLT) draft-lapuh-network-smlt-08’, Jan. 2009.
Louati, Wajdi et al., ‘Network-based virtual personal overlay networks using programmable virtual routers’, IEEE Communications Magazine, Jul. 2005.
Narten, T. et al., ‘Problem Statement: Overlays for Network Virtualization draft-narten-nvo3-overlay-problem-statement-01’, Oct. 31, 2011.
Nadas, S. et al., ‘Virtual Router Redundancy Protocol (VRRP) Version 3 for IPv4 and IPv6’, Internet Engineering Task Force, Mar. 2010.
Office action dated Aug. 4, 2014, U.S. Appl. No. 13/050,102, filed Mar. 17, 2011.
Perlman, Radia et al., ‘RBridges: Base Protocol Specification; Draft-ietf-trill-rbridge-protocol-16.txt’, Mar. 3, 2010, pp. 1-117.
‘An Introduction to Brocade VCS Fabric Technology’, Brocade white paper, http://community.brocade.com/docs/DOC-2954, Dec. 3, 2012.
Brocade, ‘Brocade Unveils ‘The Effortless Network’’, http://newsroom.brocade.com/press-releases/brocade-unveils-the-effortless-network-nasdaq-brcd-0859535, 2012.
Kreeger, L. et al., ‘Network Virtualization Overlay Control Protocol Requirements draft-kreeger-nvo3-overlay-cp-00’, Jan. 30, 2012.
Lapuh, Roger et al., ‘Split Multi-link Trunking (SMLT)’, draft-lapuh-network-smlt-08, Jul. 2008.
Office Action for U.S. Appl. No. 13/030,688, filed Feb. 18, 2011, dated Jul. 17, 2014.
Office Action for U.S. Appl. No. 13/092,873, filed Apr. 22, 2011, dated Jul. 25, 2014.
Office Action for U.S. Appl. No. 13/312,903, filed Dec. 6, 2011, dated Aug. 7, 2014.
Office Action for U.S. Appl. No. 13/351,513, filed Jan. 17, 2012, dated Jul. 24, 2014.
Office Action for U.S. Appl. No. 13/556,061, filed Jul. 23, 2012, dated Jun. 6, 2014.
Office Action for U.S. Appl. No. 13/742,207, filed Jan. 15, 2013, dated Jul. 24, 2014.
Perlman, R. et al., ‘Challenges and opportunities in the design of TRILL: a routed layer 2 technology’, 2009 IEEE GLOBECOM Workshops, Honolulu, HI, USA, Piscataway, NJ, USA, Nov. 30, 2009, pp. 1-6, XP002649647, DOI: 10.1109/GLOBECOM.2009.5360776, ISBN: 1-4244-5626-0 [retrieved on Jul. 19, 2011].
Office action dated Aug. 14, 2014, U.S. Appl. No. 13/092,460, filed Apr. 22, 2011.
Office Action dated Dec. 19, 2014, U.S. Appl. No. 13/044,326, filed Mar. 9, 2011.
Office Action for U.S. Appl. No. 13/092,873, filed Apr. 22, 2011, dated Nov. 7, 2014.
Office Action for U.S. Appl. No. 13/092,877, filed Apr. 22, 2011, dated Nov. 10, 2014.
Office Action for U.S. Appl. No. 13/157,942, filed Jun. 10, 2011.
McKeown, Nick et al., ‘OpenFlow: Enabling Innovation in Campus Networks’, Mar. 14, 2008, www.openflow.org/documents/openflow-wp-latest.pdf.
Office Action for U.S. Appl. No. 13/044,301, filed Mar. 9, 2011.
Office Action for U.S. Appl. No. 13/184,526, filed Jul. 16, 2011, dated Jan. 5, 2015.
Office Action for U.S. Appl. No. 13/598,204, filed Aug. 29, 2012, dated Jan. 5, 2015.
Office Action for U.S. Appl. No. 13/851,026, filed Mar. 26, 2013, dated Jan. 30, 2015.
Office Action for U.S. Appl. No. 13/092,460, filed Apr. 22, 2011, dated Mar. 13, 2015.
Office Action for U.S. Appl. No. 13/425,238, filed Mar. 20, 2012, dated Mar. 12, 2015.
Office Action for U.S. Appl. No. 13/092,752, filed Apr. 22, 2011, dated Feb. 27, 2015.
Office Action for U.S. Appl. No. 13/042,259, filed Mar. 7, 2011, dated Feb. 23, 2015.
Office Action for U.S. Appl. No. 13/669,357, filed Nov. 5, 2012, dated Jan. 30, 2015.
Office Action for U.S. Appl. No. 13/044,301, filed Mar. 9, 2011, dated Jan. 29, 2015.
Office Action for U.S. Appl. No. 13/050,102, filed Mar. 17, 2011, dated Jan. 26, 2015.
Office action dated Oct. 2, 2014, U.S. Appl. No. 13/092,752, filed Apr. 22, 2011.
Kompella, K., Ed. et al., ‘Virtual Private LAN Service (VPLS) Using BGP for Auto-Discovery and Signaling’, Jan. 2007.
Office Action dated Jul. 31, 2015, U.S. Appl. No. 13/598,204, filed Aug. 29, 2012.
Office Action dated Jul. 31, 2015, U.S. Appl. No. 14/473,941, filed Aug. 29, 2014.
Office Action dated Jul. 31, 2015, U.S. Appl. No. 14/488,173, filed Sep. 16, 2014.
Office Action dated Aug. 21, 2015, U.S. Appl. No. 13/776,217, filed Feb. 25, 2013.
Office Action dated Aug. 19, 2015, U.S. Appl. No. 14/156,374, filed Jan. 15, 2014.
Office Action dated Sep. 2, 2015, U.S. Appl. No. 14/151,693, filed Jan. 9, 2014.
Office Action dated Sep. 17, 2015, U.S. Appl. No. 14/577,785, filed Dec. 19, 2014.
Office Action dated Sep. 22, 2015, U.S. Appl. No. 13/656,438, filed Oct. 19, 2012.
Office Action dated Nov. 5, 2015, U.S. Appl. No. 14/178,042, filed Feb. 11, 2014.
Office Action dated Oct. 19, 2015, U.S. Appl. No. 14/215,996, filed Mar. 17, 2014.
Office Action dated Sep. 18, 2015, U.S. Appl. No. 13/345,566, filed Jan. 6, 2012.
OpenFlow Configuration and Management Protocol 1.0 (OF-Config 1.0), Dec. 23, 2011.
Office action dated Feb. 2, 2016, U.S. Appl. No. 13/092,460, filed Apr. 22, 2011.
Office Action dated Feb. 2, 2016, U.S. Appl. No. 14/154,106, filed Jan. 13, 2014.
Office Action dated Feb. 3, 2016, U.S. Appl. No. 13/098,490, filed May 2, 2011.
Office Action dated Feb. 4, 2016, U.S. Appl. No. 13/557,105, filed Jul. 24, 2012.
Related Publications:

Number | Date | Country
---|---|---
20140362854 A1 | Dec. 2014 | US
Provisional Applications:

Number | Date | Country
---|---|---
61/833,385 | Jun. 2013 | US