I. Stackable Devices and Stacking Systems
As known in the art, a “stackable device” is a network device (typically an L2/L3 switch) that can operate independently as a standalone device or in concert with one or more other stackable devices in a “stack” or “stacking system.”
Most stacking systems in use today support linear or ring topologies, like the ring shown in
II. Broadcast/Multicast Packet Switching in Stacking Systems
Generally speaking, the data packets that are switched/forwarded by a stacking system can be classified into three types based on their respective destinations: (1) unicast, (2) broadcast, and (3) multicast. A unicast packet is directed to a single destination. Thus, when a unicast packet is received at an ingress data port of a stacking system, the unicast packet need only be switched through the stacking ports needed to deliver the packet to a single egress data port (of a single stackable device) in the system.
On the other hand, broadcast and multicast packets are directed to multiple destinations; in particular, a broadcast packet is directed to all nodes in the packet's VLAN, while a multicast packet is directed to certain selected nodes (comprising a multicast group) in the packet's VLAN. Thus, when a broadcast or multicast packet is received at an ingress data port of a stacking system, the broadcast/multicast packet must generally reach, or be capable of reaching, every stackable device in the system that has egress data ports in (i.e., that are members of) the packet's VLAN.
This gives rise to two potential problems. First, if an incoming broadcast/multicast packet is simply flooded throughout a stacking system (i.e., replicated to each stacking port) so that it can reach every stackable device in the system, the flooded packets may endlessly loop through the system's topology (assuming the topology is a ring or a mesh with looping paths). Fortunately, it is possible to avoid packet looping by implementing a feature known as “egress source ID filtering.” With this feature, each ingress packet is tagged with a source ID that identifies the stackable device on which the packet was received. In addition, a set of single-source spanning trees originating from each stackable device is calculated. The single-source spanning trees are then used to filter packets at the system's stacking ports in a manner that ensures a packet with a particular source ID is only switched along the paths of its corresponding tree. This effectively eliminates packet looping, while allowing each stackable device to be reachable from every other device in the system.
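By way of illustration only, the following Python sketch shows one way the source ID tagging and per-source spanning tree filtering described above could be modeled; the device names, port names, and data structures are assumptions made for illustration and are not part of the described embodiments.

```python
# Illustrative sketch of egress source ID filtering (assumed names/structures).
# For each source device, the set of (device, stacking port) pairs that lie on
# the single-source spanning tree rooted at that source device.
ALLOWED_TREE_PORTS = {
    "D1": {("D2", "stack/1"), ("D3", "stack/2")},
    "D2": {("D1", "stack/1"), ("D3", "stack/3")},
}

def tag_on_ingress(packet: dict, ingress_device: str) -> dict:
    # Each ingress packet is tagged with the ID of the device on which it
    # was received.
    packet["source_id"] = ingress_device
    return packet

def may_forward(packet: dict, device: str, stacking_port: str) -> bool:
    # A packet is only switched along the paths of the spanning tree that
    # corresponds to its source ID, which prevents looping.
    return (device, stacking_port) in ALLOWED_TREE_PORTS.get(packet["source_id"], set())
```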
The second problem is that, even with egress source ID filtering in place, a broadcast/multicast packet may still be replicated to stackable devices in the system that do not need to receive the packet (i.e., do not have any data ports in the packet's VLAN). To better understand this, note that a data packet is generally received at an ingress data port of a stacking system, forwarded through the system's stacking ports, and then output via one or more egress data ports. In order for the packet to be allowed through the data and stacking ports in this forwarding path, each data/stacking port must be associated with (i.e., considered “in”) the packet's VLAN (via a “VLAN association”). For example, if the packet reaches a stackable device in the system via an input port (either data or stacking) that is not in the packet's VLAN, the packet will be dropped. Similarly, if a stackable device attempts to send out the packet via an output port (either data or stacking) that is not in the packet's VLAN, the transmission will be blocked.
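As a hedged illustration of the drop/block behavior just described, the short sketch below models a per-port VLAN association check; the PORT_VLANS mapping and port names are assumptions for illustration (in a real switch this check is typically enforced in hardware).

```python
# Illustrative sketch of VLAN-association enforcement at a port.
PORT_VLANS = {
    ("D2", "data/1"): {10, 20},        # data port in VLANs 10 and 20
    ("D2", "stack/1"): {10, 20, 30},   # stacking port in VLANs 10, 20, and 30
}

def port_accepts_vlan(device: str, port: str, vlan: int) -> bool:
    # On ingress, a packet is dropped if the input port is not in its VLAN;
    # on egress, transmission is blocked under the same condition.
    return vlan in PORT_VLANS.get((device, port), set())
```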
However, with current stacking implementations, it is difficult to determine the appropriate VLAN associations for every stacking port in a complicated topology. For instance, a stackable device that has no data ports in a particular VLAN may still need to bridge that VLAN via one or more of its stacking ports for a stackable device that is several hops away. Thus, the common practice is to associate every possible VLAN to every stacking port in the system. This will cause an incoming broadcast/multicast packet to be replicated to every stacking port regardless of the packet's VLAN (as long as it is not blocked by egress source ID filtering), and thus result in transmission of the broadcast/multicast packet to every stackable device in the system, even if certain devices do not need it.
The foregoing practice wastes stacking port bandwidth, which can be particularly problematic in large stacking systems, or advanced stacking systems that have stacking ports/links of differing bandwidths. For example, in advanced stacking system 140 of
Techniques for reducing broadcast and multicast traffic in a stacking system are provided. In one embodiment, a master device in the stacking system can automatically determine a minimal set of VLAN associations for stacking links in the stacking system. The minimal set of VLAN associations can avoid unnecessary transmission of broadcast or multicast packets through the system's topology.
The following detailed description and accompanying drawings provide a better understanding of the nature and advantages of particular embodiments.
In the following description, for purposes of explanation, numerous examples and details are set forth in order to provide an understanding of various embodiments. It will be evident, however, to one skilled in the art that certain embodiments can be practiced without some of these details, or can be practiced with modifications or equivalents thereof.
The present disclosure describes techniques for reducing broadcast and multicast traffic within a stacking system. At a high level, a master device of the stacking system can automatically determine a minimal set of VLAN associations for the stacking links in the system, where the minimal set of VLAN associations minimize or eliminate “unnecessary” transmission of broadcast/multicast packets through the system's topology (i.e., the transmission of broadcast/multicast packets to stackable devices that do not have any data ports in the packets' VLANs). In one embodiment, the determination of the minimal set of VLAN associations can be based on a complete set of single-source spanning trees that are calculated in view of the topology. The master device can then cause VLANs to be assigned to stacking ports in the stacking system in accordance with the minimal set of VLAN associations.
With these techniques, the amount of broadcast and multicast traffic flowing through the system can be substantially reduced in comparison to existing practices/implementations (which typically involve associating all VLANs to all stacking ports). This, in turn, can avoid link saturation in large stacking systems, or advanced stacking systems that mix high bandwidth and low bandwidth stacking ports/links. Further, the algorithm for determining the minimal set of VLAN associations is not limited to certain types of topologies, and instead can apply to any general, mesh-like topology. The details of this algorithm are described in the sections that follow.
In the example of
The particular filter lists shown in
It should be noted that trees 300-340 of
As discussed in the Background section, one problem with switching broadcast/multicast traffic in a conventional stacking system is that, even with egress source ID filtering in place, there may be a significant number of broadcast/multicast packets that are forwarded to stackable devices in the system that do not require them (i.e., stackable devices that do not have any data ports in the packets' VLANs). This is due to the common practice of associating every possible VLAN with every stacking port (for simplicity of configuration, and to ensure that each stackable device receives packets for VLANs of which the device has member data ports).
For example, with respect to
To address the foregoing and other similar issues, in various embodiments master device D1 can execute a novel algorithm that determines a minimal set of VLAN associations for the stacking links of system 200. As described previously, the minimal set of VLAN associations can define VLAN associations that prevent unnecessary broadcast/multicast packets from being passed through the stacking ports (either in or out), thereby reducing the total amount of broadcast/multicast traffic in the system. Significantly, the algorithm can work with any mesh-like topology (e.g., linear, ring, star, tree, partial mesh, full mesh, etc.), and thus is not limited to simple linear or ring topologies.
In one embodiment, the algorithm can take as input a complete set of single-source spanning trees for a stacking system's topology (e.g., trees 300-340 of
With these rules, the algorithm can selectively associate VLANs to stacking ports in a manner that guarantees broadcast/multicast packets are propagated to downstream devices that need the packets (i.e., share common VLANs with the ingress device), while preventing broadcast/multicast packets from being propagated to downstream devices that do not need the packets (i.e., do not share any common VLANs with the ingress device).
At block 402, master device D1 can prepare a “device VLAN bitmask” for every stackable device in stacking system 200. Each device VLAN bitmask is a string of bits that represents the VLANs of which the device's data ports are members (each bit corresponds to a VLAN number). Generally speaking, there may be up to 4096 VLANs defined. Accordingly, the bitmask can comprise up to 4096 bits (512 bytes or 128 words). A bit set to 1 indicates that the stackable device has at least one data port in the corresponding VLAN. For example, if bit 123 is set to 1, the device has one or more data ports in VLAN 123. A bit set to 0 indicates that the stackable device does not have any data ports in the corresponding VLAN.
At block 404, master device D1 can prepare a "link VLAN bitmask" for every stacking link in stacking system 200. Each link VLAN bitmask is a string of bits that represents the calculated VLAN associations for the stacking ports that make up the corresponding stacking link. Like the device VLAN bitmasks, the link VLAN bitmasks can comprise up to 4096 bits (one bit per VLAN number). At this point in the algorithm, each link VLAN bitmask is initialized to zero.
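Purely for illustration, the preparation steps of blocks 402 and 404 can be sketched in Python using arbitrary-length integers as bitmasks; the example device-to-VLAN mapping and link names below are assumptions, not configuration from the described system.

```python
# Illustrative sketch of blocks 402 and 404 (assumed example inputs).
NUM_VLANS = 4096  # up to 4096 VLANs, so up to 4096 bits per bitmask

def device_vlan_bitmask(member_vlans: set) -> int:
    # Bit v is set to 1 if the device has at least one data port in VLAN v.
    mask = 0
    for v in member_vlans:
        mask |= 1 << v
    return mask

# Block 402: one device VLAN bitmask per stackable device (example VLANs assumed).
device_bitmask = {
    "D1": device_vlan_bitmask({10, 20}),
    "D2": device_vlan_bitmask({20}),
    "D3": device_vlan_bitmask({10, 30}),
}

# Block 404: one link VLAN bitmask per stacking link, initialized to zero.
link_bitmask = {link: 0 for link in ("L1", "L2", "L3")}
```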
Once the device VLAN bitmasks and link VLAN bitmasks are created, master device D1 can select a single-source spanning tree T in the set of computed single-source spanning trees (block 406). Master device D1 can then select a particular non-root device D in tree T (block 408), and create a “common bitmask” that is the result of performing a logical AND on the device VLAN bitmask for D and the device VLAN bitmask for the root device R of tree T (block 410). The common bitmask represents the VLANs that non-root device D and root device R have in common.
If the common bitmask created at block 410 is non-zero (i.e., contains any “1” bits) (block 412), master device D1 can walk up tree T from non-root device D to root device R (block 414). As part of this process, master device D1 can update the link VLAN bitmask for every stacking link L along the traversed path by performing a logical OR on the link VLAN bitmask for L and the common bitmask. This effectively adds the VLANs identified in the common bitmask to the link VLAN bitmask. On the other hand, if the common bitmask is determined to be zero at block 412, master device D1 can skip the processing of block 414.
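The per-device processing of blocks 408-414 can be sketched as follows; this is an illustrative sketch in which each tree is assumed to be represented by a parent map (each non-root device mapped to its parent device and the stacking link connecting them), a representation chosen for illustration rather than one described in this disclosure.

```python
# Illustrative sketch of blocks 408-414 for one non-root device D in tree T.
def process_non_root_device(d, root, parent, device_bitmask, link_bitmask):
    # parent maps each non-root device to (its parent device, the stacking
    # link connecting them) within tree T.
    common = device_bitmask[d] & device_bitmask[root]   # block 410: logical AND
    if common == 0:                                     # block 412: no shared VLANs
        return
    node = d
    while node != root:                                 # block 414: walk up tree T
        parent_node, link = parent[node]
        link_bitmask[link] |= common                    # logical OR into the link bitmask
        node = parent_node
```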
At block 416, master device D1 can check whether all of the non-root devices in tree T have been processed. If not, master device D1 can return to block 408 in order to process the unprocessed devices.
If all of the non-root devices have been processed, master device D1 can further check whether all of the single-source spanning trees have been processed (block 418). If not, master device D1 can return to block 406 in order to process the unprocessed trees.
Finally, if all of the single-source spanning trees have been processed, master device D1 can conclude that the algorithm is complete and the minimal set of VLAN associations has been calculated (in the form of the link VLAN bitmasks). In response, master device D1 can transmit the calculated VLAN associations to the non-master devices (D2-D5) of system 200 (block 420). Each device can subsequently configure and enforce the VLAN associations at the stacking ports of the device.
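Tying blocks 406-420 together, the outer loops can be sketched as below; this sketch reuses the process_non_root_device function from the earlier sketch, assumes "trees" maps each root device to the parent map of its single-source spanning tree, and uses distribute_vlan_associations as a hypothetical placeholder (not an API described in this disclosure) for pushing the result to the non-master devices.

```python
# Illustrative sketch of the outer loops (blocks 406-420).
def compute_minimal_vlan_associations(trees, device_bitmask, stacking_links):
    link_bitmask = {link: 0 for link in stacking_links}   # block 404
    for root, parent in trees.items():                    # blocks 406/418: each tree T
        for d in parent:                                  # blocks 408/416: each non-root device D
            process_non_root_device(d, root, parent, device_bitmask, link_bitmask)
    return link_bitmask                                   # the minimal set of VLAN associations

def distribute_vlan_associations(link_bitmask):
    # Block 420 (placeholder): push the per-link VLAN sets to the non-master
    # devices so each can configure its own stacking ports accordingly.
    for link, mask in link_bitmask.items():
        vlans = [v for v in range(4096) if (mask >> v) & 1]
        print(link, "->", vlans)
```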
The algorithm shown in
Although not shown in the
Depending on the environment, VLAN changes (i.e., changes to the VLANs of which a given stackable device's data ports are members) may occur more frequently. If such VLAN changes occur very often (e.g., more than 10 times a second), in certain embodiments master device D1 can implement measures to reduce the need to constantly re-execute the algorithm. For example, in one embodiment, master device D1 can aggregate multiple VLAN changes and trigger re-execution of the algorithm at a set interval (taking into account all the changes that occurred during that interval). This approach may delay correct broadcast/multicast forwarding until the re-execution is complete.
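One way to model this aggregation is sketched below; the class name, parameter names, and the 5-second interval are assumptions made for illustration, and a real implementation would likely hook into the master device's configuration event handling rather than a standalone timer.

```python
# Illustrative sketch of aggregating VLAN changes and re-executing at a set interval.
import threading

class VlanChangeAggregator:
    def __init__(self, recompute, interval_sec=5.0):
        self._recompute = recompute     # callable that re-runs the algorithm
        self._interval = interval_sec
        self._pending = []
        self._timer = None
        self._lock = threading.Lock()

    def on_vlan_change(self, change):
        # Record the change; many rapid changes within one interval result in
        # a single re-execution that takes all of them into account.
        with self._lock:
            self._pending.append(change)
            if self._timer is None:
                self._timer = threading.Timer(self._interval, self._flush)
                self._timer.start()

    def _flush(self):
        with self._lock:
            changes, self._pending = self._pending, []
            self._timer = None
        self._recompute(changes)
```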
In another embodiment, master device D1 can associate a VLAN to all stacking ports of system 200 if the VLAN is added to any device in the system. Nothing is done if a VLAN is removed. This approach will not prevent stacking system 200 from correctly forwarding broadcast/multicast packets, but it may result in some redundant/unnecessary flooding of packets. Master device D1 can subsequently trigger the algorithm at a later point in time to calculate the minimal set of VLAN associations and thus trim down the unnecessary flooding.
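A minimal sketch of this interim approach is shown below; the helper names are assumptions for illustration, and the link_bitmask structure follows the earlier sketches.

```python
# Illustrative sketch of the add-to-all-stacking-ports interim approach.
def on_vlan_added(vlan, link_bitmask):
    # Associate the newly added VLAN with every stacking link so that
    # broadcast/multicast forwarding remains correct.
    for link in link_bitmask:
        link_bitmask[link] |= 1 << vlan

def on_vlan_removed(vlan, link_bitmask):
    # Nothing is done on removal; the extra associations are trimmed the next
    # time the algorithm is re-executed.
    pass
```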
As noted with respect to
To further clarify the operation of the algorithm of
Once the device VLAN bitmasks are created (and the link VLAN bitmasks are initialized), master device D1 will process the single-source spanning trees and the non-root devices in each tree according to blocks 406-418 of
Table 2 below shows the values of the link VLAN bitmasks for links L1-L6 after the processing of tree 300:
Next, assume that master device D1 processes tree 310 of
Table 3 below shows the values of the link VLAN bitmasks for links L1-L6 after the processing of tree 310:
Next, assume that master device D1 processes tree 320 of
Table 4 below shows the values of the link VLAN bitmasks for links L1-L6 after the processing of tree 320:
Next, assume that master device D1 processes tree 330 of
Table 5 below shows the values of the link VLAN bitmasks for links L1-L6 after the processing of tree 330:
Finally, assume that master device D1 processes tree 340 of
Table 6 below shows the values of the link VLAN bitmasks for links L1-L6 after the processing of tree 340:
At this point, there are no more trees for master device D1 to process. Accordingly, the algorithm will end and Table 6 represents the final, minimal set of VLAN associations for stacking system 200. Per block 420 of
As shown, network switch 500 includes a management module 502, a switch fabric module 504, and a number of I/O modules 506(1)-506(N). Management module 502 represents the control plane of network switch 500 and thus includes one or more management CPUs 508 for managing/controlling the operation of the device. Each management CPU 508 can be a general purpose processor, such as a PowerPC, Intel, AMD, or ARM-based processor, that operates under the control of software stored in an associated memory (not shown).
Switch fabric module 504 and I/O modules 506(1)-506(N) collectively represent the data, or forwarding, plane of network switch 500. Switch fabric module 504 is configured to interconnect the various other modules of network switch 500. Each I/O module 506(1)-506(N) can include one or more input/output ports 510(1)-510(N) that are used by network switch 500 to send and receive data packets. As noted with respect to
It should be appreciated that network switch 500 is illustrative and not intended to limit embodiments of the present invention. Many other configurations having more or fewer components than switch 500 are possible.
The above description illustrates various embodiments of the present invention along with examples of how aspects of the present invention may be implemented. The above examples and embodiments should not be deemed to be the only embodiments, and are presented to illustrate the flexibility and advantages of the present invention as defined by the following claims. For example, although certain embodiments have been described with respect to particular process flows and steps, it should be apparent to those skilled in the art that the scope of the present invention is not strictly limited to the described flows and steps. Steps described as sequential may be executed in parallel, order of steps may be varied, and steps may be modified, combined, added, or omitted. As another example, although certain embodiments have been described using a particular combination of hardware and software, it should be recognized that other combinations of hardware and software are possible, and that specific operations described as being implemented in software can also be implemented in hardware and vice versa.
The specification and drawings are, accordingly, to be regarded in an illustrative rather than restrictive sense. Other arrangements, embodiments, implementations and equivalents will be evident to those skilled in the art and may be employed without departing from the spirit and scope of the invention as set forth in the following claims.
The present application claims the benefit and priority under 35 U.S.C. 119(e) of U.S. Provisional Application No. 61/825,449, filed May 20, 2013, entitled “BROADCAST AND MULTICAST TRAFFIC REDUCTION BY VLAN ASSOCIATION IN A STACKING SYSTEM.” The entire contents of this application are incorporated herein by reference for all purposes.