The present invention relates generally to communication networks, and particularly to high-performance network topologies.
Computer networks, such as data centers or High-Performance Computing (HPC) compute-node clusters, typically use network elements such as switches and routers that are interconnected using various interconnection topologies. Some known topologies include, for example, mesh, Fat-Tree (FT) and Dragonfly (DF).
The Dragonfly topology is described, for example, by Kim et al., in “Technology-Driven, Highly-Scalable Dragonfly Topology,” Proceedings of the 2008 International Symposium on Computer Architecture, Jun. 21-25, 2008, pages 77-88, which is incorporated herein by reference. U.S. Patent Application Publication 2010/0049942 to Kim et al., whose disclosure is incorporated herein by reference, describes a Dragonfly processor interconnect network that comprises a plurality of processor nodes, a plurality of routers, each router directly coupled to a plurality of terminal nodes, the routers coupled to one another and arranged into a group, and a plurality of groups of routers, such that each group is connected to each other group via at least one direct connection.
Jiang et al. describe indirect global adaptive routing (IAR) schemes in Dragonfly networks, in which the adaptive routing decision uses information that is not directly available at the source router, in “Indirect Adaptive Routing on Large Scale Interconnection Networks,” Proceedings of the 2009 International Symposium on Computer Architecture, Jun. 20-24, 2009, pages 220-231, which is incorporated herein by reference.
Garcia et al. describe a routing/flow-control scheme for Dragonfly networks, which decouples the routing and the deadlock avoidance mechanisms, in “On-the-Fly Adaptive Routing in High-Radix Hierarchical Networks,” Proceedings of the 2012 International Conference on Parallel Processing (ICPP), Sep. 10-13, 2012, which is incorporated herein by reference.
Prisacari et al. investigate indirect routing over Dragonfly networks, in “Performance implications of remote-only load balancing under adversarial traffic in Dragonflies,” Proceedings of the 8th International Workshop on Interconnection Network Architecture: On-Chip, Multi-Chip, Jan. 22, 2014, which is incorporated herein by reference.
Matsuoka describes techniques for implementing high-bandwidth collective communications, in “You Don't Really Need Big Fat Switches Anymore—Almost,” Information Processing Society of Japan (IPSJ) Journal, 2008, which is incorporated herein by reference.
An embodiment of the present invention that is described herein provides a communication network including multiple nodes, which are arranged in groups such that the nodes in each group are interconnected in a bipartite topology and the groups are interconnected in a mesh topology. The nodes are configured to convey traffic between source hosts and respective destination hosts by routing packets among the nodes on paths that do not traverse any intermediate hosts other than the source and destination hosts.
In some embodiments, the nodes are configured to prevent cyclic flow-control deadlocks by modifying a virtual lane value of a packet, which originates in a source group and is destined to a destination group, when the packet traverses an intermediate group. In an embodiment, the nodes are configured to prevent the cyclic flow-control deadlocks using no more than two different virtual lane values for any packet.
In a disclosed embodiment, a given node is configured to modify the virtual lane value of the packet in response to detecting that the packet enters the given node in an inbound direction from the source group and is to exit the given node in an outbound direction en-route to the destination group. In an embodiment, a given node is configured to define a set of permitted onward routes for the packet depending on the virtual lane value of the packet, and to choose an onward route for the packet from among the permitted onward routes.
In some embodiments, a given node is configured to permit routing of the packet via intermediate groups if the virtual lane value of the packet has not been modified, and to prohibit routing of the packet via intermediate groups if the virtual lane value of the packet has been modified. In an embodiment, a given node is configured to permit onward routing of a received packet, which originates in a source group and is destined to a destination group, via an intermediate group only upon verifying that the received packet did not previously traverse any intermediate group.
In some embodiments, a given node is configured to define a set of permitted onward routes for a packet depending on a virtual lane value of the packet and on a port of the given node via which the packet was received. In an embodiment, the given node is configured to store two or more forwarding tables, and to define the set of permitted onward routes by choosing one of the forwarding tables depending on the virtual lane value and the port. In a disclosed embodiment, the given node is configured to represent at least two of the forwarding tables using a single table, and to access the single table based on the virtual lane value. In another embodiment, the nodes are configured to select routes for the packets, while giving first priority to direct routes from a source group to a destination group, and second priority to indirect routes that traverse an intermediate group en-route from the source group to the destination group.
In some embodiments, a node that lies on a routing path is configured to identify a compromised ability to forward the packets onwards along the routing path, and to send to one or more preceding nodes along the routing path a notification that requests the preceding nodes to find an alternative routing path for the packets. In an embodiment, the node is configured to specify in the notification a value that specifies which type of node is to consume the notification. In an embodiment, the routing path includes first and second preceding nodes of a same type, and a preceding node along the routing path is configured to replace the value with a different value, so as to cause one of the first and second preceding nodes to pass the notification and the other of the first and second preceding nodes to consume the notification.
There is additionally provided, in accordance with an embodiment of the present invention, a method for communication, including, in a network that includes multiple nodes, which are arranged in groups such that the nodes in each group are interconnected in a bipartite topology and the groups are interconnected in a mesh topology, defining between source hosts and respective destination hosts paths that do not traverse any intermediate hosts other than the source and destination hosts. Traffic is conveyed between the source hosts and the respective destination hosts by routing packets among the nodes on the paths.
The present invention will be more fully understood from the following detailed description of the embodiments thereof, taken together with the drawings in which:
Embodiments of the present invention that are described herein provide improved communication network configurations and associated methods. In some embodiments, a communication network, e.g., an Infiniband™ network, comprises multiple nodes such as switches or routers. Some of the nodes are connected to user endpoints, referred to as hosts, and the network serves the hosts by routing packets among the nodes.
The nodes are arranged in multiple groups, such that the nodes in each group are interconnected in a bipartite topology, and the groups are interconnected in a mesh topology. The term “bipartite topology” means that the nodes of a given group are divided into two subsets, such that all intra-group links connect a node of one subset with a node of the other subset. One non-limiting example of a bipartite topology is the Fat-Tree (FT) topology. The term “mesh topology” means that every group is connected to every other group using at least one direct inter-group link.
In some disclosed embodiments, the nodes prevent cyclic flow-control deadlocks (also referred to as credit loops) from occurring, by modifying the Virtual Lane (VL) values of the packets as they traverse the network. In an embodiment, when a packet is routed from a source group to a destination group via an intermediate group, one of the nodes in the intermediate group modifies the packet VL. Since the nodes typically apply flow control independently per VL, this technique prevents credit loops from occurring. In an example embodiment, the VL modification is performed by the “reflecting node,” i.e., the node that changes the packet direction from inbound (from the mesh network into the intermediate group) to outbound (from the intermediate group back to the mesh network). In these embodiments, it is possible to prevent credit loops by using no more than two VL values.
In some embodiments, the nodes carry out a progressive Adaptive Routing (AR) process that is based on the above-described VL modification mechanism. The disclosed AR process aims to allow each node maximal flexibility in making routing decisions, while meeting the constraint that a packet should not traverse more than one intermediate group.
In an embodiment, when a node receives a packet, the node checks the packet VL to determine whether the packet has already traversed an intermediate group. If the packet VL has not been modified, the node concludes that the packet did not traverse any intermediate group yet. In such a case, the node is free to route the packet either directly to the destination group or via an intermediate group. If the packet VL has been modified, the node concludes that the packet has already traversed an intermediate group, and therefore must choose a direct route to the destination group. In this way, the VL modification mechanism, which is used for preventing credit loops, is also used for progressive AR. This technique is also applicable in other routing schemes in which a given node is permitted to choose a route from among a set of possible routes.
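By way of non-limiting illustration, the following sketch (in Python, using assumed data structures that are not part of the present disclosure) expresses the route-set restriction described above: an unmodified VL leaves both direct and indirect routes available, whereas a modified VL restricts the node to direct routes toward the destination group.

```python
# Illustrative sketch only; route objects and the vl_modified flag are assumptions.
def permitted_routes(vl_modified, direct_routes, indirect_routes):
    if vl_modified:
        # The packet already traversed an intermediate group: only direct routes
        # to the destination group remain permitted.
        return list(direct_routes)
    # The packet did not traverse an intermediate group yet: all routes are permitted.
    return list(direct_routes) + list(indirect_routes)
```

A node would then apply its normal selection policy over the returned set, e.g., giving first priority to direct routes and second priority to indirect routes, as noted above.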
Other disclosed embodiments perform AR using inter-node AR Notification (ARN) signaling. In these embodiments, when a node detects an event that compromises its ability to forward the packets over the next link, the detecting node sends an ARN packet backwards along the path. The ARN typically requests preceding nodes along the route to modify the route so as not to traverse the detecting node.
The networks and associated methods described herein reduce the node cost, size and power consumption. For example, since the disclosed techniques prevent credit loops using no more than two VL values, they enable considerable reduction in the number of memory buffers needed in the nodes. The disclosed techniques are also highly scalable, and enable an increase in bandwidth utilization. Moreover, the disclosed techniques may be implemented using existing building blocks designed for Fat-Tree (FT) networks.
Network 20 comprises multiple network nodes 24, also referred to herein simply as nodes for brevity. Nodes 24 typically comprise packet switches or routers. Nodes 24 are arranged in multiple groups 28. Groups 28 are connected to one another using network links 32, e.g., optical fibers, each connected between a port of a node in one group and a port in a node of another group. Links 32 are referred to herein as inter-group links or global links.
The set of links 32 is referred to herein collectively as an inter-group network or global network. In the disclosed embodiments, the inter-group network has a mesh topology, i.e., every group 28 is connected to every other group 28 using at least one direct inter-group link 32. Put in another way, any pair of groups 28 comprise at least one respective pair of nodes 24 (one node in each group) that are connected to one another using a direct inter-group link 32.
The nodes within each group 28 are interconnected by network links 36, e.g., optical fibers. Each link 36 is connected between respective ports of two nodes within a given group 28. Links 36 are referred to herein as intra-group links or local links, and the set of links 36 in a given group 28 is referred to herein collectively as an intra-group network or local network. Within each group 28, nodes 24 are connected in a bipartite graph topology. In such a topology, the nodes are divided into two subsets, such that all links 36 connect a node of one subset and a node of the other subset. In other words, no direct links connect between nodes of the same subset.
In an example embodiment, the nodes in each group are connected in a Fat Tree (FT) topology. Alternatively, however, the intra-group network may be implemented using any other suitable bipartite topology. The bipartite topology may be full (i.e., every node in one subset is directly connected to every node in the other subset) or partial. The two subsets may be of the same size or of different sizes. In a partial bipartite topology, not every node of one subset is connected to every node of the other subset. A bipartite topology may be partial by design, e.g., in order to save cost, or as a result of link failure. In some bipartite topologies, a certain pair of nodes may be connected by two or more local links in parallel. In the context of the present patent application and in the claims, all such variations are regarded as bipartite topologies.
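By way of non-limiting illustration, the following Python sketch (using hypothetical node and link representations that are not part of the present disclosure) expresses the two structural properties defined above: a bipartite intra-group topology and a mesh inter-group topology.

```python
# Illustrative sketch only; node and link representations are assumptions.
from itertools import combinations

def is_bipartite(intra_links, subset_a, subset_b):
    """True if every intra-group link connects a node of one subset with a node of the other."""
    return all((u in subset_a and v in subset_b) or (u in subset_b and v in subset_a)
               for u, v in intra_links)

def is_mesh(groups, inter_group_links):
    """True if every pair of groups is connected by at least one direct inter-group link."""
    linked = {frozenset(pair) for pair in inter_group_links}
    return all(frozenset(pair) in linked for pair in combinations(groups, 2))
```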
Within a given group 28, one subset of nodes 24 (referred to as spine nodes) connect the group to the mesh network using links 32. The other subset of nodes 24 (referred to as leaf nodes) are connected to hosts 38, also referred to as endpoints or clients. Hosts 38 may comprise any suitable type of computing devices, such as servers. In the example of
An inset at the bottom-left of the figure shows a simplified view of the internal configuration of node 24, in an example embodiment. The other nodes typically have a similar structure. In this example, node 24 comprises multiple ports 40 for connecting to links 32 and/or 36 and/or endpoints 38, a switch fabric 44 that is configured to forward packets between ports 40, and a processor 48 that carries out the methods described herein.
In the embodiments described herein, network 20 operates in accordance with the InfiniBand™ standard. Infiniband communication is specified, for example, in “InfiniBand™ Architecture Specification,” Volume 1, Release 1.2.1, November, 2007, which is incorporated herein by reference. In particular, section 7.6 of this specification addresses Virtual Lanes (VL) mechanisms, section 7.9 addresses flow control, and chapter 14 addresses subnet management (SM) issues. In alternative embodiments, however, network 20 may operate in accordance with any other suitable communication protocol or standard, such as IPv4, IPv6 (which both support ECMP) and “controlled Ethernet.”
In some embodiments, network 20 is associated with a certain Infiniband subnet, and is managed by a processor referred to as a subnet manager (SM). The SM tasks may be carried out, for example, by software running on one or more of processors 48 of nodes 24, and/or on a separate processor. Typically, the SM configures switch fabrics 44 and/or processors 48 in the various nodes 24 to carry out the methods described herein.
The configurations of network 20 and node 24 shown in
The different elements of nodes 24 may be implemented using any suitable hardware, such as in an Application-Specific Integrated Circuit (ASIC) or Field-Programmable Gate Array (FPGA). In some embodiments, some elements of nodes 24 can be implemented using software, or using a combination of hardware and software elements. In some embodiments, processors 48 comprise general-purpose processors, which are programmed in software to carry out the functions described herein. The software may be downloaded to the processors in electronic form, over a network, for example, or it may, alternatively or additionally, be provided and/or stored on non-transitory tangible media, such as magnetic, optical, or electronic memory.
As can be seen in
Path 50A is the shortest path, traversing a local link, then a global link, and finally another local link. Such a path is most likely to have the smallest latency, especially since the traffic is routed directly from group 28A to group 28D without traversing any intermediate group. This type of path is referred to herein as “Type-A.” For a given source and destination, there may exist zero, one, or multiple Type-A paths.
Path 50B traverses a local link, then two global links (via group 28E—serving in this case as an intermediate group), and finally another local link. In this path, in intermediate group 28E the traffic traverses only one of the spine nodes (denoted Y) without reaching the leaf nodes. The traffic is “reflected” by this spine node, and therefore spine node Y in this scenario is referred to as a “reflecting node.” This type of path is referred to herein as “Type-B.” For a given source and destination, there may exist zero, one, or multiple Type-B paths.
Path 50C traverses a local link, then a global link to intermediate group 28E, then two local links inside group 28E, and finally another local link. In this path, in intermediate group 28E the traffic traverses two spine nodes and one leaf node (denoted X). In intermediate group 28E, the traffic is “reflected” by this leaf node, i.e., leaf node X serves as a reflecting node in path 50C. This type of path is referred to herein as “Type-C.” For a given source and destination, there typically exist multiple Type-C paths.
It should be noted that in the disclosed embodiments, the routing paths do not traverse any endpoint 38 other than the source and destination. Path 50C, for example, traverses a leaf node denoted X in group 28E that is connected to an endpoint denoted S2. Nevertheless, the traffic does not exit node X toward S2; it is only routed by node X from one spine node to another within the group.
Note also that the choice of routing paths is typically constrained to traverse no more than a single intermediate group. This constraint maintains small latency, while still enabling a high degree of resilience and flexibility for load balancing.
In Infiniband, and in some other network protocols, network nodes employ credit-based flow control to regulate the traffic over the network links and avoid overflow in the node buffers. In some practical scenarios, a cyclic flow-control deadlock may occur when all the ports along some closed-loop path wait for credits, and thus halt the traffic. This sort of cyclic flow-control deadlock scenario is sometimes referred to as a credit loop.
In some embodiments, nodes 24 prevent credit loops by modifying the Virtual Lanes (VLs) of the packets as they route the packets via network 20. Since the nodes typically buffer traffic and apply flow control independently per VL, modifying the VL of the packets at some point along a cyclic path will prevent credit loops from occurring on this path.
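As a non-limiting illustration of why per-VL independence breaks such cycles, the sketch below (an assumption made for clarity, not the actual flow-control implementation) keeps a separate credit counter per VL, so that exhausted credits on one VL do not block traffic carried on another VL.

```python
# Illustrative sketch only: credit-based flow control maintained independently per VL.
class PortCredits:
    def __init__(self, credits_per_vl, num_vls=2):
        self.credits = {vl: credits_per_vl for vl in range(num_vls)}

    def can_send(self, vl):
        # Traffic on one VL waits only for credits of that VL.
        return self.credits[vl] > 0

    def consume(self, vl):
        assert self.can_send(vl), "sender must wait for credits on this VL"
        self.credits[vl] -= 1

    def replenish(self, vl, n=1):
        # Credits returned by the receiver as buffer space on this VL is freed.
        self.credits[vl] += n
```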
As demonstrated above, when packets are routed via an intermediate group, one of the nodes in the intermediate group serves as a reflecting node. The reflecting node receives the packets in the inbound direction (from the global network into the local network), and forwards the packets in the outbound direction (from the local network back into the global network). The reflecting node may comprise a spine node (e.g., node Y in path 50B) or a leaf node (e.g., node X in path 50C).
In some embodiments, a given node modifies the VL of the packets for which it serves as a reflecting node. For other packets, which pass through the node but are not reflected, the node retains the existing VL.
Consider, for example, path 50B in
The node checks whether it serves as a reflecting node for this packet, at a reflection checking step 64. Typically, the node checks whether the packet has arrived in the inbound direction (from the direction of the global network) and is to be forwarded in the outbound direction (back toward the global network). In a spine node, the node typically checks whether the packet arrived over a global link and is to be forwarded over a global link. In a leaf node, the node typically checks whether the packet arrived from a spine node and is to be forwarded to a spine node.
If the node concludes that it serves as a reflecting node, the node modifies the packet VL, at a VL modification step 68. The node then routes the packet, at a routing step 72. If, at step 64, the node concludes that it does not serve as a reflecting node, the node does not change the packet VL, and the method proceeds directly to routing step 72.
Using this technique, nodes 24 are able to eliminate credit loops while using no more than two VL values. In the present example the nodes use VL=0 and VL=1, but any other suitable VL values can be used. In an example embodiment, throughout network 20 packets are initially assigned VL=0, 2, 4 or 6. Reflecting nodes increment the VL by one, i.e., change VL=0 to VL=1, VL=2 to VL=3, VL=4 to VL=5, and VL=6 to VL=7.
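The reflecting-node rule of steps 64 and 68 may be summarized by the following non-limiting sketch (the packet model and the direction flags are assumptions made for illustration).

```python
# Illustrative sketch only: a reflecting node increments an even VL; other nodes leave it unchanged.
from dataclasses import dataclass

@dataclass
class Packet:
    vl: int  # hypothetical minimal packet model

def maybe_reflect_vl(packet, arrived_inbound, exits_outbound):
    # "Inbound" and "outbound" are relative to the intermediate group: a spine node checks
    # global-link-to-global-link forwarding, a leaf node checks spine-to-spine forwarding.
    if arrived_inbound and exits_outbound and packet.vl % 2 == 0:
        packet.vl += 1  # e.g., VL=0 -> VL=1, VL=2 -> VL=3, VL=4 -> VL=5, VL=6 -> VL=7
    return packet
```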
Alternatively, any other suitable VL modification can be used. For example, routing schemes that allow up to two intermediate groups can be implemented using three VL values, and routing schemes that allow up to three intermediate groups can be implemented using four VL values. Alternatively, it is possible to allow two intermediate groups, e.g., in case of link failure, using only two VL values, with some risk of credit loops.
In some embodiments, nodes 24 carry out a progressive Adaptive Routing (AR) process that is based on the VL modification mechanism described above. The rationale behind the disclosed process is the following:
At a checking step 88, the node checks, based on the packet VL, whether or not the packet already passed through an intermediate group before arriving at the node. As explained above, if the VL is still the originally-assigned VL, the node concludes that the packet did not go through an intermediate group. If the packet VL has been modified, the node concludes that the packet already traversed an intermediate group.
If the packet did not go through an intermediate group yet, the node permits applying AR to the packet, at an AR activation step 92. If the packet already passed through an intermediate group, the node prohibits applying AR to the packet, at an AR deactivation step 96.
At a selective AR step, the node chooses a route for the packet and forwards the packet over the chosen route. If AR is permitted (at step 92), the node is free to choose either a direct path to the destination group, or an indirect route that goes through an intermediate group. If AR is disabled (at step 96), the node has to choose a direct route to the destination group.
The method of
The above-described AR scheme can be implemented in nodes 24 in various ways. In an example embodiment, each node 24 holds three forwarding tables denoted pLFT0, pLFT1, pLFT2. For a given packet, the node chooses the appropriate pLFT based on the port (or equivalently the receive queue—RQ) via which the packet arrived, and on the VL value of the packet. The node then accesses the chosen pLFT, which specifies the AR options (if any) for onward routing of the packet.
Consider, for example, the bottom of the figure, which specifies the routing scheme in a leaf node. A packet entering a leaf node from “below” (i.e., from a local host 38) is routed using pLFT0. If the packet is addressed to a local host served by the leaf node, the node forwards it locally to that host. Otherwise, the node forwards the packet to the global network using adaptive routing, using either a type-A, type-B or type-C path, in descending order of priority. All routing options are valid because, since the packet arrived from a local host, it did not traverse any intermediate group yet.
A packet entering the leaf node from “above” (i.e., from a spine node) is routed using pLFT1. In this scenario there are two options: If the packet is addressed to a local host 38, it is forwarded on a local link. Otherwise, the node concludes that the packet is in the process of being forwarded over a type-C path (in which the present leaf node is the reflecting node). As such, the leaf node forwards the packet to the global network to continue the type-C path. No “AR Up A” or “AR Up B” options are available in this case.
The top of the figure specifies the routing of a packet by a spine node. For a spine node, the routing decision sometimes depends on the packet VL, as well. A packet entering the spine node from “above” (from the global network) is routed using pLFT1, regardless of VL. If the packet is addressed to a local host 38 in the group, the node routes it over the appropriate local link. Otherwise, the node forwards the packet back to the global network using adaptive routing, using either a type-B or type-C path, in descending order of priority.
A packet entering the spine node from “below” (on a local link 36 from a leaf node) is routed depending on its VL value. If VL=0 (indicating the packet did not traverse an intermediate group yet), the node routes the packet using pLFT0. All routing options are valid (local routing in the group, or routing to the global network using either a type-A, type-B or type-C path, in descending order of priority).
If VL=1 (indicating the packet already traversed an intermediate group), the spine node routes the packet using pLFT2. In this case, the spine node concludes that the packet is in the process of being forwarded over a type-C path. The only choice in this case is onward routing to the global network using a type-C path.
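The table-selection rules described above may be summarized by the following non-limiting sketch (node and port representations are assumptions; the pLFT contents themselves are not shown).

```python
# Illustrative sketch only: choosing a forwarding table (pLFT) from the node type,
# the direction of the arrival port, and the packet VL.
def select_plft(node_is_leaf, arrived_from_above, vl_modified):
    if node_is_leaf:
        # Leaf node: packets from a local host use pLFT0, packets from a spine use pLFT1.
        return "pLFT1" if arrived_from_above else "pLFT0"
    if arrived_from_above:
        # Spine node, packet from the global network: pLFT1 regardless of VL.
        return "pLFT1"
    # Spine node, packet from a leaf node: pLFT0 if the VL is unmodified (e.g., VL=0),
    # pLFT2 if the VL was already modified by a reflecting node (e.g., VL=1).
    return "pLFT2" if vl_modified else "pLFT0"
```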
In some embodiments, it is possible to combine pLFT0 and pLFT2 in a single forwarding table, in order to save memory. When accessing the combined table, the node is permitted to access the entire table if VL=0, and only the “Up C” option if VL=1.
The forwarding table implementation of
The above technique is not limited to AR, and can be used with any other suitable routing scheme in which a given node is permitted to choose a route from among a set of possible routes. One non-limiting example of such a routing scheme is Equal-Cost Multi-Path (ECMP) routing, in which the traffic is typically distributed to multiple ports by computing a hash function over the packet source and destination addresses. In an example embodiment, a node along the routing path decides whether (and to what extent) to allow ECMP based on the VL value of the packet. In one embodiment, if VL=0, then the node permits ECMP among paths of types A, B and C. If VL=1, the node permits ECMP among paths of type C only. Other possible examples are random or weighted-random forwarding over a list of allowed ports.
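As a non-limiting illustration of such VL-gated ECMP, the following sketch (the hash choice and the port lists are assumptions) distributes packets only over the path types permitted by the packet VL.

```python
# Illustrative sketch only: ECMP restricted by VL, using a hash of the source and
# destination addresses to pick one of the permitted output ports.
import hashlib

def ecmp_select(src, dst, vl, ports_type_a, ports_type_b, ports_type_c):
    # VL unmodified (e.g., VL=0): paths of types A, B and C are all permitted.
    # VL modified (e.g., VL=1): only type-C continuation ports remain permitted.
    candidates = (ports_type_a + ports_type_b + ports_type_c) if vl == 0 else list(ports_type_c)
    digest = hashlib.md5(("%s-%s" % (src, dst)).encode()).hexdigest()
    return candidates[int(digest, 16) % len(candidates)]
```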
In some embodiments, nodes 24 of network 20 carry out adaptive routing using inter-node AR notifications. In these embodiments, a node 24 along a routing path may detect an event that compromises its ability to forward the packets over the next link (e.g., link-fault, congestion or head-of-line time-out). This event is referred to herein as an AR event. Upon detecting an AR event, the detecting node generates a packet referred to as an AR Notification (ARN), and sends the ARN backwards along the path. The ARN typically indicates the AR event to the preceding nodes along the path, and requests the preceding nodes to modify the route so as not to traverse the detecting node.
When a preceding node receives the ARN, it typically checks whether it is in a position to modify the path, and whether the traffic in question is permitted to undergo AR. If so, the node consumes the ARN and modifies the path. Otherwise, the node forwards the ARN to the previous node along the route. Further aspects of routing using ARN are addressed in U.S. patent application Ser. No. 13/754,921, filed Jan. 31, 2013, entitled “Adaptive Routing Using Inter-Switch Notifications,” which is assigned to the assignee of the present patent application and whose disclosure is incorporated herein by reference.
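By way of non-limiting illustration, the consume-or-forward decision described above may be sketched as follows (field and parameter names are assumptions, not the actual ARN format).

```python
# Illustrative sketch only: a node consumes an ARN if the notification is addressed to its
# node type and it can actually re-route the affected traffic; otherwise it forwards the
# ARN backwards to the preceding node along the path.
def handle_arn(node_type, can_modify_path, ar_permitted, arn_consumer_type):
    if node_type == arn_consumer_type and can_modify_path and ar_permitted:
        return "consume"   # re-route the traffic, do not forward the ARN further
    return "forward"       # pass the ARN to the previous node along the route
```

The concrete examples below, with string values “1”, “2” and “3”, correspond to different values of the consumer-type indication in this sketch.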
The ARN packet indicates the type of path, the type of node and pLFT of the detecting node, the type and pLFT of the consuming node, and a string value “1”. This ARN indicates to the leaf node that there may exist other type-A paths leading to the same desired destination group (group 28D) via other spine nodes. In response, the leaf node may attempt routing the packet via another spine node in group 28A.
As another example, consider path 50B (a type-B path). In this example, spine node Y in group 28E (serving as an intermediate group) discovers that the next hop (the global link to group 28D) is unusable. Node Y thus sends an ARN packet to leaf node T, which is the preceding node on path 50B. This ARN (with string value “2”) indicates to leaf node T that there may exist other type-B paths leading to group 28D via other spine nodes (possibly in other intermediate groups). In response, the leaf node may attempt routing the packet via another spine node.
As yet another example, consider path 50C (a type-C path). In this scenario, a spine node Z in group 28E (serving as an intermediate group) discovers that the next hop (the global link to group 28D) is unusable. Node Z sends an ARN packet to leaf node X in the same group, which is the preceding node on path 50C. This ARN (with string value “3”) indicates to leaf node X that there may exist other type-C paths leading to group 28D via other spine nodes in group 28E. In response, the leaf node may attempt routing the packet via another spine node in the group. As before, AR is performed using type-A, type-B or type-C paths, in descending order of priority.
On path 50C, the ARN packet is forwarded backwards four times (denoted ‘1’, ‘2’, ‘3’, ‘4’ in the figure) all the way back to the first leaf node on the path. Generally, the string value in the ARN indicates which type of node (leaf or spine) should consume the ARN, and which forwarding table that node should use. The embodiment of
Consider, for example, path 50C in
The scheme of
It will thus be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and sub-combinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art. Documents incorporated by reference in the present patent application are to be considered an integral part of the application except that to the extent any terms are defined in these incorporated documents in a manner that conflicts with the definitions made explicitly or implicitly in the present specification, only the definitions in the present specification should be considered.
Number | Name | Date | Kind |
---|---|---|---|
4312064 | Bench et al. | Jan 1982 | A |
5480500 | Richard et al. | Jan 1996 | A |
6115385 | Vig | Sep 2000 | A |
6169741 | LeMaire et al. | Jan 2001 | B1 |
6532211 | Rathonyi et al. | Mar 2003 | B1 |
6553028 | Tang et al. | Apr 2003 | B1 |
6665297 | Hariguchi et al. | Dec 2003 | B1 |
6775268 | Wang et al. | Aug 2004 | B1 |
6804532 | Moon et al. | Oct 2004 | B1 |
6831918 | Kavak | Dec 2004 | B1 |
6912604 | Tzeng et al. | Jun 2005 | B1 |
6950428 | Horst et al. | Sep 2005 | B1 |
7010607 | Bunton | Mar 2006 | B1 |
7076569 | Bailey et al. | Jul 2006 | B1 |
7234001 | Simpson et al. | Jun 2007 | B2 |
7286535 | Ishikawa et al. | Oct 2007 | B2 |
7676597 | Kagan et al. | Mar 2010 | B2 |
7746854 | Ambe et al. | Jun 2010 | B2 |
7936770 | Frattura et al. | May 2011 | B1 |
7969980 | Florit et al. | Jun 2011 | B1 |
8094569 | Gunukula et al. | Jan 2012 | B2 |
8175094 | Bauchot et al. | May 2012 | B2 |
8195989 | Lu et al. | Jun 2012 | B1 |
8213315 | Crupnicoff et al. | Jul 2012 | B2 |
8401012 | Underwood et al. | Mar 2013 | B2 |
8489718 | Brar et al. | Jul 2013 | B1 |
8495194 | Brar et al. | Jul 2013 | B1 |
8576715 | Bloch et al. | Nov 2013 | B2 |
8605575 | Gunukula et al. | Dec 2013 | B2 |
8621111 | Marr et al. | Dec 2013 | B2 |
8755389 | Poutievski et al. | Jun 2014 | B1 |
8774063 | Beecroft | Jul 2014 | B2 |
8908704 | Koren et al. | Dec 2014 | B2 |
9014006 | Haramaty et al. | Apr 2015 | B2 |
9042234 | Liljenstolpe et al. | May 2015 | B1 |
20020013844 | Garrett et al. | Jan 2002 | A1 |
20020026525 | Armitage | Feb 2002 | A1 |
20020039357 | Lipasti et al. | Apr 2002 | A1 |
20020136163 | Kawakami et al. | Sep 2002 | A1 |
20020138645 | Shinomiya et al. | Sep 2002 | A1 |
20020165897 | Kagan et al. | Nov 2002 | A1 |
20030039260 | Fujisawa | Feb 2003 | A1 |
20030065856 | Kagan et al. | Apr 2003 | A1 |
20030079005 | Myers et al. | Apr 2003 | A1 |
20030223453 | Stoler et al. | Dec 2003 | A1 |
20040111651 | Mukherjee et al. | Jun 2004 | A1 |
20040202473 | Nakamura et al. | Oct 2004 | A1 |
20050013245 | Sreemanthula et al. | Jan 2005 | A1 |
20050157641 | Roy | Jul 2005 | A1 |
20050259588 | Preguica | Nov 2005 | A1 |
20060126627 | Diouf | Jun 2006 | A1 |
20060182034 | Klinker et al. | Aug 2006 | A1 |
20060291480 | Cho et al. | Dec 2006 | A1 |
20070058536 | Vaananen et al. | Mar 2007 | A1 |
20070058646 | Hermoni | Mar 2007 | A1 |
20070070998 | Sethuram et al. | Mar 2007 | A1 |
20070091911 | Watanabe et al. | Apr 2007 | A1 |
20070223470 | Stahl | Sep 2007 | A1 |
20070237083 | Oh et al. | Oct 2007 | A9 |
20080002690 | Ver Steeg et al. | Jan 2008 | A1 |
20080112413 | Pong | May 2008 | A1 |
20080165797 | Aceves | Jul 2008 | A1 |
20080189432 | Abali et al. | Aug 2008 | A1 |
20080298248 | Roeck et al. | Dec 2008 | A1 |
20090103534 | Malledant et al. | Apr 2009 | A1 |
20090119565 | Park et al. | May 2009 | A1 |
20100039959 | Gilmartin | Feb 2010 | A1 |
20100049942 | Kim | Feb 2010 | A1 |
20100111529 | Zeng et al. | May 2010 | A1 |
20100141428 | Mildenberger et al. | Jun 2010 | A1 |
20100216444 | Mariniello et al. | Aug 2010 | A1 |
20100284404 | Gopinath et al. | Nov 2010 | A1 |
20100315958 | Luo et al. | Dec 2010 | A1 |
20110019673 | Fernandez | Jan 2011 | A1 |
20110085440 | Owens et al. | Apr 2011 | A1 |
20110085449 | Jeyachandran et al. | Apr 2011 | A1 |
20110090784 | Gan | Apr 2011 | A1 |
20110164496 | Loh et al. | Jul 2011 | A1 |
20110225391 | Burroughs et al. | Sep 2011 | A1 |
20110249679 | Lin et al. | Oct 2011 | A1 |
20110255410 | Yamen et al. | Oct 2011 | A1 |
20110265006 | Morimura et al. | Oct 2011 | A1 |
20110299529 | Olsson et al. | Dec 2011 | A1 |
20120020207 | Corti et al. | Jan 2012 | A1 |
20120082057 | Welin et al. | Apr 2012 | A1 |
20120144064 | Parker | Jun 2012 | A1 |
20120144065 | Parker et al. | Jun 2012 | A1 |
20120147752 | Ashwood-Smith et al. | Jun 2012 | A1 |
20120207175 | Raman et al. | Aug 2012 | A1 |
20120300669 | Zahavi | Nov 2012 | A1 |
20120314706 | Liss | Dec 2012 | A1 |
20130044636 | Koponen et al. | Feb 2013 | A1 |
20130071116 | Ong | Mar 2013 | A1 |
20130083701 | Tomic et al. | Apr 2013 | A1 |
20130114599 | Arad | May 2013 | A1 |
20130170451 | Krause et al. | Jul 2013 | A1 |
20130242745 | Umezuki | Sep 2013 | A1 |
20130301646 | Bogdanski et al. | Nov 2013 | A1 |
20130315237 | Kagan et al. | Nov 2013 | A1 |
20130322256 | Bader et al. | Dec 2013 | A1 |
20130336116 | Vasseur et al. | Dec 2013 | A1 |
20140043959 | Owens et al. | Feb 2014 | A1 |
20140140341 | Bataineh et al. | May 2014 | A1 |
20140192646 | Mir et al. | Jul 2014 | A1 |
20140211631 | Haramaty et al. | Jul 2014 | A1 |
20140313880 | Lu et al. | Oct 2014 | A1 |
20140328180 | Kim et al. | Nov 2014 | A1 |
20140343967 | Baker | Nov 2014 | A1 |
20150030033 | Vasseur et al. | Jan 2015 | A1 |
20150052252 | Gilde et al. | Feb 2015 | A1 |
20150098466 | Haramaty et al. | Apr 2015 | A1 |
20150124815 | Beliveau et al. | May 2015 | A1 |
20150163144 | Koponen et al. | Jun 2015 | A1 |
20150194215 | Douglas et al. | Jul 2015 | A1 |
20150195204 | Haramaty et al. | Jul 2015 | A1 |
20150372898 | Haramaty et al. | Dec 2015 | A1 |
20150372916 | Haramaty et al. | Dec 2015 | A1 |
20160014636 | Bahr et al. | Jan 2016 | A1 |
20160182378 | Basavaraja et al. | Jun 2016 | A1 |
20160294715 | Raindel et al. | Oct 2016 | A1 |
Entry |
---|
Kim et al., “Technology-Driven, Highly-Scalable Dragonfly Topology,” Proceedings of the 2008 International Symposium on Computer Architecture, pp. 77-88, Jun. 21-25, 2008. |
Jiang et al., “Indirect Adaptive Routing on Large Scale Interconnection Networks,” Proceedings of the 2009 International Symposium on Computer Architecture, pp. 220-231, Jun. 20-24, 2009. |
Garcia et al., “On-the-Fly Adaptive Routing in High-Radix Hierarchical Networks,” Proceedings of the 2012 International Conference on Parallel Processing (ICPP), pp. 279-288, Sep. 10-13, 2012. |
Matsuoka, S., “You Don't Really Need Big Fat Switches Anymore—Almost,” Information Processing Society of Japan (IPSJ) Journal, 6 pages, year 2008. |
Prisacari et al., “Performance implications of remote-only load balancing under adversarial traffic in Dragonflies”, Proceedings of the 8th International Workshop on Interconnection Network Architecture: On-Chip, Multi-Chip, 4 pages, Jan. 22, 2014. |
“InfiniBand™ Architecture Specification,” vol. 1, Release 1.2.1, sections 7.6, 7.9 and 14, Nov. 2007. |
Leiserson, C E., “Fat-Trees: Universal Networks for Hardware Efficient Supercomputing”, IEEE Transactions on Computers, vol. C-34, No. 10, pp. 892-901, Oct. 1985. |
Ohring et al., “On Generalized Fat Trees”, Proceedings of the 9th International Symposium on Parallel Processing, pp. 37-44, Santa Barbara, USA, Apr. 25-28, 1995. |
Zahavi, E., “D-Mod-K Routing Providing Non-Blocking Traffic for Shift Permutations on Real Life Fat Trees”, CCIT Technical Report #776, Technion—Israel Institute of Technology, Haifa, Israel, Aug. 2010. |
Yuan et al., “Oblivious Routing for Fat-Tree Based System Area Networks with Uncertain Traffic Demands”, Proceedings of ACM SIGMETRICS—the International Conference on Measurement and Modeling of Computer Systems, pp. 337-348, San Diego, USA, Jun. 12-16, 2007. |
U.S. Appl. No. 14/662,259 Office Action dated Sep. 22, 2016. |
Dally et al., “Deadlock-Free Message Routing in Multiprocessor Interconnection Networks”, IEEE Transactions on Computers, vol. C-36, No. 5, May 1987, pp. 547-553. |
U.S. Appl. No. 14/745,488 Office Action dated Dec. 6, 2016. |
Minkenberg et al., “Adaptive Routing in Data Center Bridges”, Proceedings of 17th IEEE Symposium on High Performance Interconnects, New York, USA, pp. 33-41, Aug. 25-27, 2009. |
Kim et al., “Adaptive Routing in High-Radix Clos Network”, Proceedings of the 2006 ACM/IEEE Conference on Supercomputing (SC2006), Tampa, USA, Nov. 2006. |
Afek et al., “Sampling and Large Flow Detection in SDN”, SIGCOMM '15, pp. 345-346, Aug. 17-21, 2015, London, UK. |
Culley et al., “Marker PDU Aligned Framing for TCP Specification”, IETF Network Working Group, RFC 5044, Oct. 2007. |
Shah et al., “Direct Data Placement over Reliable Transports”, IETF Network Working Group, RFC 5041, Oct. 2007. |
Martinez et al., “Supporting fully adaptive routing in InfiniBand networks”, Proceedings of the International Parallel and Distributed Processing Symposium (IPDPS'03), Apr. 22-26, 2003. |
Joseph, S., “Adaptive routing in distributed decentralized systems: NeuroGrid, Gnutella & Freenet”, Proceedings of Workshop on Infrastructure for Agents, MAS and Scalable MAS, Montreal, Canada, 11 pages, year 2001. |
Gusat et al., “R3C2: Reactive Route & Rate Control for CEE”, Proceedings of 18th IEEE Symposium on High Performance Interconnects, New York, USA, pp. 50-57, Aug. 10-27, 2010. |
Wu et al., “DARD: Distributed adaptive routing for datacenter networks”, Proceedings of IEEE 32nd International Conference Distributed Computing Systems, pp. 32-41, Jun. 18-21, 2012. |
Ding et al., “Level-wise scheduling algorithm for fat tree interconnection networks”, Proceedings of the 2006 ACM/IEEE Conference on Supercomputing (SC 2006), 9 pages, Nov. 2006. |
U.S. Appl. No. 14/046,976 Office Action dated Jun. 2, 2015. |
Li et al., “Multicast Replication Using Dual Lookups in Large Packet-Based Switches”, 2006 IET International conference on Wireless, Mobile and Multimedia Networks, pp. 1-3, Nov. 6-9, 2006. |
Nichols et al., “Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers”, Network Working Group, RFC 2474, 20 pages, Dec. 1998. |
Microsoft., “How IPv4 Multicasting Works”, 22 pages, Mar. 28, 2003. |
Suchara et al., “Network Architecture for Joint Failure Recovery and Traffic Engineering”, Proceedings of the ACM SIGMETRICS joint international conference on Measurement and modeling of computer systems, pp. 97-108, Jun. 7-11, 2011. |
IEEE 802.1Q, “IEEE Standard for Local and metropolitan area networks Virtual Bridged Local Area Networks”, IEEE Computer Society, 303 pages, May 19, 2006. |
Plummer, D., “An Ethernet Address Resolution Protocol,” Network Working Group, Request for Comments (RFC) 826, 10 pages, Nov. 1982. |
Hinden et al., “IP Version 6 Addressing Architecture,” Network Working Group, Request for Comments (RFC) 2373, 26 pages, Jul. 1998. |
U.S. Appl. No. 12/910,900 Office Action dated Apr. 9, 2013. |
U.S. Appl. No. 14/046,976 Office Action dated Jan. 14, 2016. |
Nkposong et al., “Experiences with BGP in Large Scale Data Centers: Teaching an old protocol new tricks”, 44 pages, Jan. 31, 2014. |
“Equal-cost multi-path routing”, Wikipedia, 2 pages, Oct. 13, 2014. |
Thaler et al., “Multipath Issues in Unicast and Multicast Next-Hop Selection”, Network Working Group, RFC 2991, 9 pages, Nov. 2000. |
Glass et al., “The turn model for adaptive routing”, Journal of the ACM, vol. 41, No. 5, pp. 874-903, Sep. 1994. |
Mahalingam et al., “VXLAN: A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks”, Internet Draft, 20 pages, Aug. 22, 2012. |
Sinha et al., “Harnessing TCP's Burstiness with Flowlet Switching”, 3rd ACM SIGCOMM Workshop on Hot Topics in Networks (HotNets), 6 pages, Nov. 11, 2004. |
Vishnu et al., “Hot-Spot Avoidance With Multi-Pathing Over InfiniBand: An MPI Perspective”, Seventh IEEE International Symposium on Cluster Computing and the Grid (CCGrid'07), 8 pages, year 2007. |
NOWLAB—Network Based Computing Lab, 2 pages, years 2002-2015 http://nowlab.cse.ohio-state.edu/publications/conf-presentations/2007/vishnu-ccgrid07.pdf. |
Alizadeh et al.,“CONGA: Distributed Congestion-Aware Load Balancing for Datacenters”, Cisco Systems, 12 pages, Aug. 9, 2014. |
Geoffray et al., “Adaptive Routing Strategies for Modern High Performance Networks”, 16th IEEE Symposium on High Performance Interconnects (HOTI '08), pp. 165-172, Aug. 26-28, 2008. |
Anderson et al., “On the Stability of Adaptive Routing in the Presence of Congestion Control”, IEEE INFOCOM, 11 pages, 2003. |
Perry et al., “Fastpass: A Centralized “Zero-Queue” Datacenter Network”, M.I.T. Computer Science & Artificial Intelligence Lab, 12 pages, year 2014. |
U.S. Appl. No. 14/732,853 Office Action dated Jan. 26, 2017. |
Number | Date | Country | |
---|---|---|---|
20160028613 A1 | Jan 2016 | US |