1. Layer 3 Routing
As known in the art, a router is a network device that interconnects multiple networks and forwards data packets between the networks (a process referred to as Layer 3, or L3, routing). To determine the best path to use in forwarding an ingress packet, a router examines the destination IP address of the packet and compares the destination IP address to routing entries in a routing table. Each routing entry corresponds to a subnet route (e.g., 192.168.2.0/24) or a host route (e.g., 192.168.2.129/32). If the destination IP address matches the subnet/host route of a particular routing entry, the router forwards the packet out of an egress port to a next hop address specified by the entry, thereby sending the packet towards its destination. In some cases, the destination IP address of an ingress packet may match multiple routing entries corresponding to multiple subnet/host routes. For example, the IP address 192.168.2.129 matches subnet routes 192.168.2.128/26 and 192.168.2.0/24, as well as host route 192.168.2.129/32. When this occurs, the router can perform its selection via longest prefix match (LPM), which means that the router will select the matched routing entry with the longest prefix (i.e., the most specific entry).
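By way of illustration, the following Python sketch performs a longest prefix match over the three example routes above. The routing table contents and next hop addresses are hypothetical and are provided solely to illustrate how the most specific matching entry is selected.

import ipaddress

# Hypothetical routing table: route prefix -> next hop (illustrative values only).
routing_table = {
    ipaddress.ip_network("192.168.2.0/24"): "10.0.0.1",
    ipaddress.ip_network("192.168.2.128/26"): "10.0.0.2",
    ipaddress.ip_network("192.168.2.129/32"): "10.0.0.3",
}

def lpm_lookup(dest_ip):
    # Collect every route whose prefix covers the destination address.
    dest = ipaddress.ip_address(dest_ip)
    matches = [net for net in routing_table if dest in net]
    if not matches:
        return None
    # Longest prefix match: the most specific (longest) prefix wins.
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(lpm_lookup("192.168.2.129"))  # "10.0.0.3": the /32 host route wins
print(lpm_lookup("192.168.2.10"))   # "10.0.0.1": only the /24 subnet route matches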
For performance reasons, many conventional routers perform the routing operations described above using a combination of software and hardware routing tables. For instance,
When an ingress packet is received at router 100, HW routing engine 108 of packet processor 106 first looks for a LPM match for the packet's destination IP address in HW routing table 110. As mentioned above, HW routing engine 108 can perform this lookup very quickly (e.g., at line rate) because of table 110's hardware design. If a match is found, HW routing engine 108 forwards the packet to the next hop specified in the matched entry, without involving management CPU 102. If a match is not found, HW routing engine 108 takes a predefined action, such as dropping the packet or sending it to management CPU 102. If sent to management CPU 102, CPU 102 can perform additional inspection/processing to determine how the packet should be forwarded (such as performing a lookup in SW routing table 104).
2. Routing Tries
In certain implementations, router 100 maintains SW routing table 104 as a binary trie (referred to as a “routing trie”), which makes traversal and searching of SW routing table 104 more efficient.
To illustrate the rules above, consider a routing table that includes two routing entries corresponding to two routes: 01001010/8 and 01010101/8 (represented in binary form). In this example, the routing trie for the table will contain three nodes: two route nodes (one for each of the two routes), and a root node that is a branch node associated with prefix 010/3 (because its two child nodes differ starting from the 4th bit). Note that if a new route 010/3 is added, the routing trie will still contain three nodes—the branch node associated with prefix 010/3 will become a route node.
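The example trie can be expressed with a simple node structure, as in the following sketch. The field names and construction are illustrative assumptions rather than a required implementation.

class TrieNode:
    # A node in a path-compressed binary routing trie.
    def __init__(self, prefix, length, is_route, left=None, right=None):
        self.prefix = prefix      # prefix bits as a string, e.g. "010"
        self.length = length      # prefix length
        self.is_route = is_route  # True for route nodes, False for branch nodes
        self.left = left          # child whose next bit after the prefix is 0
        self.right = right        # child whose next bit after the prefix is 1

# Trie for routes 01001010/8 and 01010101/8: the two route nodes diverge at the
# 4th bit, so the root is a branch node associated with prefix 010/3.
r1 = TrieNode("01001010", 8, is_route=True)
r2 = TrieNode("01010101", 8, is_route=True)
root = TrieNode("010", 3, is_route=False, left=r1, right=r2)

# Adding route 010/3 adds no node; the branch node simply becomes a route node.
root.is_route = True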
3. Multi-Packet Processor Networking Systems
While router 100 of
For example,
One inefficiency with performing L3 routing in a MPP networking system like stacking system 300 or chassis system 400 as described above pertains to the way in which the multiple HW routing tables of the system are utilized. In particular, since ingress packets may arrive at any packet processor of the system, the same set of routing entries is replicated in the HW routing table of every packet processor. As a result, the HW routing table capacity of the system is constrained by the size of the smallest HW routing table. For instance, in stacking system 300, assume that HW routing table 312(1) supports 16K entries while HW routing tables 312(2) and 312(3) support 32K entries each. In this scenario, every HW routing table 312(1)-312(3) will be limited to holding a maximum of 16K entries (since additional entries beyond 16K cannot be replicated in table 312(1)). This means that a significant percentage of the system's HW routing resources (e.g., 16K entries in tables 312(2) and 312(3) respectively) will go unused. This also means that the HW routing table capacity of the system cannot scale upward as additional switches are added.
Techniques for aggregating hardware routing resources in a system of devices are provided. In one embodiment, a device in the system of devices can divide routing entries in a software routing table of the system into a plurality of route subsets. The device can further assign each route subset in the plurality of route subsets to one or more devices in the system. The device can then install, for each route subset that is assigned to the device, routing entries in the route subset into a hardware routing table of the device.
The following detailed description and accompanying drawings provide a better understanding of the nature and advantages of particular embodiments.
In the following description, for purposes of explanation, numerous examples and details are set forth in order to provide an understanding of various embodiments. It will be evident, however, to one skilled in the art that certain embodiments can be practiced without some of these details, or can be practiced with modifications or equivalents thereof.
1. Overview
The present disclosure describes techniques for aggregating HW routing resources in a MPP networking system, such that the system is no longer limited by the capacity of the system's smallest HW routing table. In one set of embodiments, this can be achieved by dividing the routing entries in the system's SW routing table (represented as a routing trie) into a number of route subsets, assigning each route subset to one or more devices/modules in the system, and installing the route subsets that are assigned to a particular device/module in the local HW routing table of that device/module (without installing the other route subsets). In this way, the routing entries in the SW routing table can be effectively split across all of the HW routing tables in the system, rather than being replicated in each one. The system can also install, in the HW routing table of each device/module, one or more special routing entries (referred to herein as “redirection entries”) for routes that are assigned to remote devices/modules. These redirection entries can point to the remote device/module as the next hop.
When an ingress packet is received at a particular device/module, the device/module can perform a first L3 lookup to determine whether a routing entry that matches the packet's destination IP address is installed in the device/module's local HW routing table. If a matching routing entry is found, the device/module can cause the packet to be forwarded out of an egress data port of the system based on the entry.
On the other hand, if a matching routing entry is not found in the local HW routing table (which means that the routing entry is installed in the HW routing table of another device/module, referred to as the “remote” device/module), the L3 lookup can match the redirection entry noted above that identifies the remote device/module as the next hop. This, in turn, can cause the device/module to forward the packet to the remote device/module over an intra-system link (e.g., a stacking link in the case of a stacking system, or an internal switch fabric link in the case of a chassis system). Upon receiving the forwarded packet, the remote device/module can perform a second L3 lookup into its HW routing table, which will include a locally installed routing entry matching the destination IP address of the packet. The remote device/module can then forward, based on the locally installed routing entry, the packet out of an appropriate egress data port towards its destination.
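The following sketch illustrates this two-lookup flow for a hypothetical two-device system. The prefixes, next hop names, and table layout are assumptions made purely for illustration; they do not reflect the format of any particular hardware routing table.

import ipaddress

# Per-device HW tables: prefix -> ("route", next_hop) or ("redirect", remote_device).
hw_tables = {
    1: {"10.1.0.0/16": ("route", "nexthop-A"), "10.2.0.0/16": ("redirect", 2)},
    2: {"10.2.0.0/16": ("route", "nexthop-B"), "10.1.0.0/16": ("redirect", 1)},
}

def lpm(table, dest_ip):
    # Longest prefix match over the entries of one HW table.
    dest = ipaddress.ip_address(dest_ip)
    matches = [p for p in table if dest in ipaddress.ip_network(p)]
    return table[max(matches, key=lambda p: ipaddress.ip_network(p).prefixlen)]

def route_packet(ingress_device, dest_ip):
    kind, target = lpm(hw_tables[ingress_device], dest_ip)  # first L3 lookup
    if kind == "redirect":
        # Redirection entry: forward over an intra-system link to the remote
        # device, which performs the second L3 lookup in its own HW table.
        kind, target = lpm(hw_tables[target], dest_ip)
    return target  # next hop out of an egress data port of the system

print(route_packet(1, "10.2.0.5"))  # redirected to device 2 -> "nexthop-B"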
Since the SW routing table of the MPP networking system is distributed across (instead of being replicated in) the system's HW routing tables, the approach above allows the system to achieve a total HW routing table capacity that is approximately equal to the sum of the individual HW routing table capacities of its constituent devices/modules. For example, if the MPP networking system comprises three HW routing tables that each support 16K entries, the system can have a total routing table capacity of 16K+16K+16K=48K entries (minus a certain number of entries for redirection). This, in turn, enables the MPP networking system to more efficiently utilize its HW routing resources, as well as scale to support more routes as additional devices/modules (with additional HW routing tables) are added. Generally speaking, the foregoing approach will not adversely affect the system's routing performance in a significant manner since an ingress packet is processed for routing at most twice (e.g., a first time for the L3 lookup at the ingress device/module and a second time for the L3 lookup at the remote device/module, if necessary).
In certain embodiments, the assignments of route subsets to devices/modules can be implemented using a novel ownership model where each route subset is associated with an “owner set” including between zero and N owners (where N is the total number of devices/modules in the system, and where an owner set with zero owners indicates that associated routes should be installed to all devices/modules). As described in further detail below, this ownership model is advantageous because it supports redundancy (i.e., ownership of a routing entry by multiple devices/modules) and simplifies route re-assignments for load balancing purposes.
For clarity of explanation, in the sections that follow, several examples and embodiments describe the techniques of the present invention in the context of stacking systems. However, it should be appreciated that these techniques may also be applied to other types of MPP networking systems, such as chassis system 400 of
2. System Environment
In the example of
As discussed in the Background section, one inefficiency with performing L3 routing in a MPP networking system like stacking system 500 is that the same routing entries are typically replicated in each HW routing table of the system. This replication means that the HW routing table capacity of the stacking system is limited by the size of the smallest HW routing table, regardless of the size of the other HW routing tables or the total number of HW routing tables in the system.
To address the foregoing and other similar issues, each stackable switch 502(1)-502(N) of
At a high level, route division components 504(1)-504(N), route assignment components 506(1)-506(N), and route programming components 508(1)-508(N) can work in concert to distribute the routing entries in stacking system 500's SW routing table across HW routing tables 312(1)-312(N), instead of replicating the entries in each HW routing table. For instance, route division components 504(1)-504(N) can first divide the routing entries in the SW routing table into a number of route subsets. Route assignment components 506(1)-506(N) can subsequently assign each of the route subsets to one or more stackable switches 502(1)-502(N), and route programming components 508(1)-508(N) can install the routes in each route subset into the HW routing table(s) of the stackable switch(es) to which the route subset is assigned. As part of this latter step, route programming components 508(1)-508(N) can install “redirection entries” in each HW routing table 312(1)-312(N) for those routing entries that are not installed locally (i.e., are only installed in the HW routing tables of other, “remote” switches in the system). These redirection entries can point to the remote switches as the next hop, thereby allowing the host switch to know where to send packets that match those entries.
Then, at runtime of stacking system 500, HW routing engines 310(1)-310(N) of stackable switches 502(1)-502(N) (which are modified in accordance with embodiments of the present invention) can forward incoming packets in a distributed manner using the routing entries and redirection entries that have been installed in HW routing tables 312(1)-312(N) as mentioned above. For example, if a particular HW routing engine 310(X) finds a matching routing entry (using LPM) for an ingress packet in its local HW routing table 312(X), HW routing engine 310(X) can cause the packet to be forwarded out of an egress data port of stacking system 500 based on the routing entry. However, if a matching redirection entry is found in HW routing table 312(X) using LPM (instead of a matching routing entry), HW routing engine 310(X) can forward, based on the redirection entry, the packet to another stackable switch of stacking system 500 (e.g., “remote” switch 502(Y)) that does have an appropriate routing entry in its HW routing table for the packet. HW routing engine 310(Y) of remote switch 502(Y) can then match the packet against the routing entry in its local HW routing table 312(Y) and forward the packet out of an egress data port towards its destination. In this way, stacking system 500 can correctly route packets that are received at any member switch, without requiring the same routing entries to be replicated in the HW routing table of each switch.
Additional details regarding the operation of route division components 504(1)-504(N), route assignment components 506(1)-506(N), route programming components 508(1)-508(N), and HW routing engines 310(1)-310(N) are presented in the sections that follow.
3. Dividing, Assigning, and Installing Routing Entries
It should be noted that flowchart 600 (and the other flowcharts in the present disclosure) assumes: (1) SW routing tables 306(1)-306(N) of
In alternative embodiments, the steps of flowchart 600 (and certain other flowcharts described herein) can be performed solely by the route division, assignment, and/or programming components of the master switch in stacking system 500 (i.e., master switch 502(2)). This is referred to as a “centralized” approach. With the centralized approach, there is no need for the non-master switches to execute instances of components 506-508; instead, the master switch can determine how to divide, assign, and install routes for each stackable switch in the stacking system, and can simply provide each non-master switch a list of routing entries to be installed in its local HW routing table. This centralized approach also obviates the need for synchronizing the SW routing table across switches—only a single copy of the SW routing table needs to be maintained by the master switch.
Generally speaking, as long as every stackable switch 502(1)-502(N) of stacking system 500 executes the same algorithms, the distributed approach should generate the same results (i.e., the same route installations in HW routing tables 312(1)-312(N)) as the centralized approach.
Turning now to block 602 of
In various embodiments, the number of route subsets created at block 602 is not limited by the number of stackable switches in stacking system 500. For instance, although there are N stackable switches 502(1)-502(N), the SW routing table can be divided into P route subsets, where P is less than, equal to, or greater than N. In a particular embodiment, if there are Q total routing entries in SW routing table 306(X), route division component 504(X) can divide the SW routing table such that each of the P subsets comprises approximately Q/N entries.
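A minimal sketch of this division step is shown below. It assumes the route nodes have already been collected from the routing trie (e.g., via an in-order traversal) and simply groups them into subsets of roughly equal size; the helper name and grouping strategy are illustrative assumptions only.

def divide_routes(route_nodes, num_switches):
    # Aim for approximately Q/N entries per subset, where Q = len(route_nodes)
    # and N = num_switches; the number of subsets P may then exceed N.
    target = max(1, len(route_nodes) // num_switches)
    return [route_nodes[i:i + target] for i in range(0, len(route_nodes), target)]

# Example: 10 routes divided for 3 switches yields P = 4 subsets.
print([len(s) for s in divide_routes(list(range(10)), 3)])  # [3, 3, 3, 1]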
At block 604, route assignment component 506(X) can receive the route subsets created by route division component 504(X) and can assign each route subset to an “owner set” comprising zero or more of stackable switches 502(1)-502(N). In this manner, route assignment component 506(X) can determine which subsets should be installed to which switches/HW routing tables of stacking system 500. Generally speaking, the owner sets can be non-exclusive—in other words, two different owner sets (for two different route subsets) can include the same stackable switch, or “owner.” Further, if an owner set includes zero owners, that indicates that the route subset associated with that owner set can be installed into the HW routing table of every stackable switch in the stacking system.
The ownership model described above allows for very flexible assignment of routes to switches/HW routing tables. For example, this model enables certain important routes (e.g., subnet routes that cover a large range of addresses, or the default route) to be assigned to multiple owners for redundancy and/or performance reasons. In one embodiment, routes with prefix length equal to or less than 8 can be assigned to an empty owner set (and thus be assigned to every stackable switch).
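A minimal sketch of this ownership model is shown below; the subset identifiers, switch IDs, and assignments are hypothetical.

# Owner sets: route subset -> set of owning switch IDs. An empty set means the
# subset is installed on every switch, and sets may overlap for redundancy.
owner_sets = {
    "subset-0": set(),     # e.g. the default route and prefixes of length <= 8
    "subset-1": {1},
    "subset-2": {2, 3},    # redundant ownership by two switches
}

def is_owner(subset_id, switch_id):
    owners = owner_sets[subset_id]
    return not owners or switch_id in owners

print(is_owner("subset-0", 1), is_owner("subset-2", 1))  # True False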
Further, this ownership model allows for relatively simple load balancing, which is typically needed/desired when 1) a stackable switch joins or leaves the stacking system, or 2) a stackable switch begins running out of free space in its HW routing table due to route additions. For instance, when routing entries need to be offloaded from the HW routing table of a particular stackable switch, the owner sets can simply be modified to re-shuffle route subset assignments, rather than re-dividing the SW routing table/trie. This load balancing process is described in further detail in Section 5.3 below.
Once the route subsets have been assigned to owner sets per block 604, route programming component 508(X) of each stackable switch 502(X) can install the routing entries in the route subsets owned by switch 502(X) into its local HW routing table 312(X) (block 606). As part of this step, route programming component 508(X) can also install one or more redirection entries that correspond to routing entries installed on other stackable switches (i.e., remote switches) in stacking system 500. Each of these redirection entries can include, as its next hop address, the identity/address of the remote switch. In this way, stackable switch 502(X) can know where to forward an ingress packet for further routing if its local HW routing table 312(X) does not include an actual routing entry matching the destination IP address of the packet.
To clarify the processing at block 606,
At block 702, route programming component 508(X) can enter a loop for each route node R in SW routing table 306(X), which is represented as a routing trie. There are different ways in which loop 702 can be implemented, such as via a pre-order (i.e., parent first) or post-order (i.e., children first) traversal of the SW routing trie. The particular traversal method chosen may depend on the hardware design of HW routing table 312(X) (e.g., some TCAMs require a particular traversal order between child and parent nodes).
Within loop 702, route programming component 508(X) can first check whether R has no owner (i.e., is assigned to an owner set with zero owners, which means the route is assigned to every stackable switch) or whether stackable switch 502(X) is in R's owner set (block 704). If so, route programming component 508(X) can install a routing entry for R into HW routing table 312(X) with an action to forward packets to R's next hop address (block 706), and can proceed to the end of the loop iteration (block 712).
On the other hand, if the check at block 704 fails, route programming component 508(X) can proceed to check whether R and a parent route node in the SW routing trie share a common owner (block 708). As used herein, the term “parent route node” refers to the closest ancestor node in the SW routing trie that is a route node. This may not be the direct parent node of R in the trie if the direct parent node is a branch node.
If R and the parent route node do share a common owner, that means the HW entry for the parent node will cover the route corresponding to R. As a result, there is no need to install anything into HW routing table 312(X) for R and route programming component 508(X) can proceed to the end of the loop iteration (block 712).
If R and the parent route node do not share a common owner, route programming component 508(X) can install a redirection entry for R into HW routing table 312(X) with an action to forward packets to a stackable switch in R's owner set (block 710). As discussed previously, this redirection entry is different from the routing entry installed at block 706 because the redirection entry will simply cause stackable switch 502(X) to forward, via one or more stacking links, matching packets to the route owner for further L3 processing, rather than out of an egress data port of stacking system 500.
In certain embodiments, route programming component 508(X) may determine at block 710 that R has multiple owners. In this scenario, route programming component 508(X) can select an owner at random for inclusion as the next hop in the redirection entry. Alternatively, route programming component 508(X) can select an owner based on one or more criteria for, e.g., optimization purposes. For example, the criteria can include shortest path (i.e., least hop count), maximum stacking port bandwidth, and/or the like.
Finally, the current loop iteration can end (block 712) and route programming component 508(X) can return to block 702 to process additional route nodes in the SW routing trie (until the entire trie has been traversed).
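The decision logic of blocks 704-710 can be summarized with the following sketch. It assumes that each route node exposes its owner set, next hop, and closest parent route node, and that the HW routing table is modeled as a simple dictionary; these are illustrative assumptions rather than the actual hardware programming interface.

def program_routes(route_nodes, switch_id, hw_table):
    for r in route_nodes:                                  # loop of block 702
        if not r.owners or switch_id in r.owners:
            # Block 706: R has no owner (installed everywhere) or this switch
            # owns R, so install a normal routing entry.
            hw_table[r.prefix] = ("route", r.next_hop)
        elif r.parent_route and (r.owners & r.parent_route.owners):
            # Block 708: R shares an owner with its parent route node, so the
            # entry installed for the parent already covers R; install nothing.
            continue
        else:
            # Block 710: install a redirection entry pointing to one of R's
            # owners (chosen arbitrarily here; selection criteria are discussed above).
            hw_table[r.prefix] = ("redirect", next(iter(r.owners)))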
It should be appreciated that flowchart 700 is illustrative and various modifications/alternative implementations are possible. For instance, in a particular embodiment, flowchart 700 can be modified to reduce the number of entries installed in HW routing table 312(X), thereby achieving “hardware compression.” This can involve, e.g., qualifying the routing entry installation performed at block 706 such that, if R's next hop is equal to its parent route node's next hop, the routing entry is not installed (since the parent's routing entry should cover R).
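Under this modification, the installation at block 706 could be qualified roughly as in the following sketch (under the same illustrative assumptions as above):

def should_skip_install(r):
    # Hardware compression: if R's next hop equals its parent route node's next
    # hop, the parent's entry already forwards matching packets correctly.
    return r.parent_route is not None and r.next_hop == r.parent_route.next_hop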
One caveat with the modification above is that, if the next hop of any existing route node in the SW routing trie changes, that route node and all of its direct child route nodes should be examined/adjusted to add or remove routing entries as needed from the appropriate HW routing tables. The definition of “direct child routes” and how to perform this adjustment are discussed in Section 5.1 below.
4. Runtime Packet Forwarding Flow
Once the route division, assignment, and programming components of stackable switches 502(1)-502(N) have carried out the processing of
Starting with block 802 of
At block 808, HW routing engine 310(X) can determine whether the LPM matched entry in HW routing table 312(X) is a routing entry (i.e., an entry installed per block 706 of
On the other hand, if the LPM matched entry at block 808 is a redirection entry, HW routing engine 310(X) can forward the packet out of a stacking port 314(X) of stackable switch 502(X) towards the remote stackable switch identified as the next hop within the redirection entry (e.g., switch 502(Y)) (block 812). Unlike the forwarding at block 810, the forwarding at block 812 typically will not include any modifications to the L2 or L3 headers of the packet. Flowchart 800 can then proceed to
At blocks 814 and 816 of
Finally, HW routing engine 310(Y) can cause the packet to be forwarded out of an egress data port 316(Y) of stackable switch 502(Y) towards a next hop identified by the matched routing entry (block 820). The forwarding at block 820 will generally include the same packet modifications described with respect to block 810 of
To illustrate the processing of
At step (4) (reference numeral 908), the HW routing engine of stackable switch 502(2) receives the forwarded packet and determines (based on, e.g., a special stack tag affixed to the packet) that it needs to perform a second L3 lookup for the packet in its local HW routing table. Based on this second L3 lookup, the HW routing engine of stackable switch 502(2) matches the actual routing entry for the packet's destination IP address and determines the egress switch/port for forwarding the packet out of stacking system 500. Stackable switch 502(2) then forwards the packet over one or more stacking links to the egress switch (in this case, stackable switch 502(N)) (step (5), reference numeral 910), which subsequently sends the packet out of an appropriate egress data port to the next hop destination (step (6), reference numeral 912).
5. Re-Programming the HW Routing Tables
While stacking system 500 is running and performing L3 routing of packets per flowchart 800 of
Techniques for handling each of these scenarios are described in turn below.
5.1 Route Deletion
If a route node to be deleted from the SW routing trie is not installed in the HW routing table of any stackable switch in stacking system 500, the route node can be simply removed from the SW routing trie without further processing. However, if the route node is installed in one or more HW routing tables, the installed HW entries should be removed and the direct child routes of the deleted route node should be adjusted in the HW tables.
Starting with block 1002, route programming component 508(X) can remove the routing entry to be deleted from the SW routing trie (i.e., entry B) from HW routing table 312(X). Route programming component 508(X) can then enter a loop for each direct child route node (i.e., C) of B in the SW routing trie (block 1004). As used herein, “direct child route” C of a route node B is a route node in the sub-trie of the SW routing trie that is rooted by B, where there are no route nodes between B and C. For example, in routing trie 200 of
In one embodiment, route programming component 508(X) can traverse all of the direct child routes of B via a pre-order traversal of the sub-trie rooted by B. For instance, the following is a pseudo code listing of an exemplary recursive function for performing this pre-order traversal. The “node” input parameter is the root node of the sub-trie to be traversed.
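The original listing is not reproduced here; the following is a minimal sketch of one such recursive pre-order traversal, assuming each trie node exposes is_route, left, and right attributes. Recursion stops at the first route node encountered along each path, so only direct child routes are visited.

def visit_direct_child_routes(node, visit):
    # Pre-order traversal of the sub-trie rooted at 'node'.
    if node is None:
        return
    if node.is_route:
        visit(node)   # a direct child route; do not descend any further
        return
    visit_direct_child_routes(node.left, visit)
    visit_direct_child_routes(node.right, visit)

# For a deleted route node B, its direct child routes are found by invoking the
# function on each of B's children, e.g.:
#   visit_direct_child_routes(B.left, handle_child)
#   visit_direct_child_routes(B.right, handle_child)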
In other embodiments, other types of traversal methods may be used (e.g., post-order, etc.).
Within loop 1004, route programming component 508(X) can first check whether C has no owner or stackable switch 502(X) is C's owner (block 1006). If so, route programming component 508(X) can determine that a routing entry for C is installed in HW routing table 312(X) and that no changes are needed for the installed entry. Accordingly, route programming component 508(X) can proceed to the end of the current loop iteration (block 1014).
If the check at block 1006 fails, route programming component 508(X) can move on to checking whether C and a parent route node of B (i.e., A) share a common owner (block 1008). If so, route programming component 508(X) can remove the routing entry for C from HW routing table 312(X) (if such an entry is already installed), since the HW entry for A will cover C (block 1010). Route programming component 508(X) can then proceed to the end of the current loop iteration (block 1014).
On the other hand, if route programming component 508(X) determines that C and A do not share a common owner at block 1008, route programming component 508(X) can install a routing entry for C into HW routing table 312(X) (if such an entry is not already installed), since the removed entry for B was previously used to cover C (block 1012). This installation logic is similar to blocks 708 and 710 of
Finally, the current loop iteration can end (block 1014) and route programming component 508(X) can return to block 1004 to process additional direct child routes for B (until all of the direct child routes have been traversed).
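The following sketch summarizes this adjustment for one switch, assuming B is the route node being deleted, A is B's parent route node (possibly None), and direct_child_routes() enumerates B's direct child routes (e.g., using the traversal sketched above); the data model is the same illustrative one used earlier.

def delete_route(b, switch_id, hw_table):
    hw_table.pop(b.prefix, None)                  # block 1002: remove B's HW entry
    a = b.parent_route
    for c in direct_child_routes(b):              # loop of block 1004
        if not c.owners or switch_id in c.owners:
            continue                              # block 1006: C's entry stays as-is
        if a is not None and (c.owners & a.owners):
            hw_table.pop(c.prefix, None)          # block 1010: A's entry covers C
        else:
            # Block 1012: B no longer covers C, so install an entry for C here
            # (a redirection entry pointing to one of C's owners).
            hw_table[c.prefix] = ("redirect", next(iter(c.owners)))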
It should be appreciated that flowchart 1000 is illustrative and various modifications/alternative implementations are possible. For example, if flowchart 1000 is implemented using the centralized approach discussed previously, the master switch of stacking system 500 can execute flowchart 1000 for each stackable switch that has route B installed in HW. In this embodiment, flowchart 1000 can be optimized such that the master switch runs it only once (e.g., the algorithm can loop through all stackable switches before block 1006). One of ordinary skill in the art will recognize other modifications, variations, and alternatives.
5.2 Route Addition
When a route node is added to the SW routing trie and is assigned to a route subset, the new route should generally be installed into the HW routing tables of the owners of the route subset.
At block 1102, route programming component 508(X) can execute blocks 704-710 of
5.3 Load Balancing
As noted above, load balancing of routes in stacking system 500 is typically needed/desired when 1) a stackable switch joins or leaves system 500, or 2) a stackable switch runs out of free entries in its HW routing table due to route additions. One way to perform this load balancing is to re-partition the SW routing trie into different route subsets. However, this approach is complex and potentially time-consuming (if many HW additions and/or deletions are required).
A better approach, which is enabled by the ownership model discussed in previous sections, is to adjust the owner sets that are associated with the route subsets in order to redistribute route load. For example, assume that the routes of the SW routing trie are divided into P route subsets, where P is larger or smaller than the number of stackable switches in the system (i.e., N). Each route subset corresponds to an owner set, and thus there are P owner sets. In this scenario, the owners in each owner set 1-P can be dynamically added or removed to balance each switch's HW load. Significantly, this load balancing method does not require re-partitioning of the SW routing trie, and thus is simpler and more efficient than the re-partitioning approach. However, in some embodiments, all stackable switches that have changed ownership may need to re-traverse the SW routing trie in order to adjust their HW routing entries.
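A minimal sketch of such an owner set adjustment is shown below; the subset identifiers and switch IDs are hypothetical.

def move_ownership(owner_sets, subset_id, from_switch, to_switch):
    # Rebalance load by editing the owner set only; the SW routing trie itself
    # is not re-partitioned.
    owner_sets[subset_id].discard(from_switch)
    owner_sets[subset_id].add(to_switch)
    # Only the switches whose ownership changed need to re-traverse the SW
    # routing trie and adjust their HW routing entries.
    return {from_switch, to_switch}

owner_sets = {"subset-2": {2, 3}}
print(move_ownership(owner_sets, "subset-2", 3, 1))  # {1, 3}; owner set is now {1, 2}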
It should be noted that the simplest approach for carrying out this adjustment process is to remove all of the HW routing entries for each switch that has changed owner sets and to re-program the HW routing tables of those switches via flowchart 700 of
At block 1202 of flowchart 1200, route programming component 508(X) can execute flowchart 700 of
At block 1204, route programming component 508(X) can traverse the SW routing trie and remove the installed entries in HW routing table 312(X) that have not been marked per block 1202.
Finally, at block 1206, route programming component 508(X) can traverse the SW routing trie once again and can install routing entries into HW routing table 312(X) that are marked in the SW routing trie but have not yet been installed. In some embodiments, the order of blocks 1204 and 1206 can be reversed depending on different concerns. For example, removing installed entries (per block 1204) before adding new entries (per block 1206) may cause some incoming data traffic to not match any of the entries in the HW routing table. On the other hand, adding new entries before removing installed entries requires a sufficient amount of temporary space in the HW routing table, which may not be available. One way to address both of these concerns is to implement a hybrid approach that can calculate the amount of temporary space required and can determine whether to execute block 1206 before 1204 (or vice versa) based on that space requirement and the space actually available in the table.
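The three passes of flowchart 1200 can be sketched as follows for a single switch, assuming desired_entries() yields the entries that the installation logic of blocks 704-710 would produce for that switch; the helper name and data model are illustrative assumptions.

def rebalance(switch_id, hw_table, desired_entries):
    marked = dict(desired_entries(switch_id))   # block 1202: mark, but do not install yet
    for prefix in list(hw_table):               # block 1204: remove entries that are not marked
        if prefix not in marked:
            del hw_table[prefix]
    for prefix, entry in marked.items():        # block 1206: install marked entries not yet present
        if prefix not in hw_table:
            hw_table[prefix] = entry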
The above description illustrates various embodiments of the present invention along with examples of how aspects of the present invention may be implemented. The above examples and embodiments should not be deemed to be the only embodiments, and are presented to illustrate the flexibility and advantages of the present invention as defined by the following claims. For example, although certain embodiments have been described with respect to particular process flows and steps, it should be apparent to those skilled in the art that the scope of the present invention is not strictly limited to the described flows and steps. Steps described as sequential may be executed in parallel, order of steps may be varied, and steps may be modified, combined, added, or omitted. As another example, although certain embodiments have been described using a particular combination of hardware and software, it should be recognized that other combinations of hardware and software are possible, and that specific operations described as being implemented in software can also be implemented in hardware and vice versa.
The specification and drawings are, accordingly, to be regarded in an illustrative rather than restrictive sense. Other arrangements, embodiments, implementations and equivalents will be evident to those skilled in the art and may be employed without departing from the spirit and scope of the invention as set forth in the following claims.
The present application claims the benefit and priority under 35 U.S.C. 119(e) of U.S. Provisional Application No. 61/971,429, filed Mar. 27, 2014, entitled “TECHNIQUES FOR AGGREGATING HARDWARE RESOURCES IN A MULTI-PACKET PROCESSSOR NETWORKING SYSTEM.” The entire contents of this provisional application are incorporated herein by reference for all purposes.