Embodiments described herein relate generally to communication networks, and particularly to methods and systems for efficient interconnection among network elements.
Various topologies suitable for deploying large-scale networks are known in the art. For example, U.S. Pat. No. 9,699,067 describes a communication network that includes multiple nodes, which are arranged in groups such that the nodes in each group are interconnected in a bipartite topology and the groups are interconnected in a mesh topology. The nodes are configured to convey traffic between source hosts and respective destination hosts by routing packets among the nodes on paths that do not traverse any intermediate hosts other than the source and destination hosts.
U.S. patent application publication 2018/0302288 describes a method for networking nodes in a data center network structure, including connecting at least ten base units each including connected nodes with southbound connections of a multi-host NIC controller having northbound a higher total bandwidth than southbound, the controllers configured as dragonfly switches; connecting the ten base units with their respective controllers in a modified Petersen graph form as an intragroup network to build a super unit including three groups, where each controller uses three northbound connections for a direct connection to three other base units of the super unit, and in which two base units of each group are connected via a respective one of a fourth northbound connection to one of the other groups, and a remaining base unit not being part of one of the groups is adapted for using three northbound connections for direct connection to one base unit in each group.
An embodiment that is described herein provides a design tool for network interconnection, including a user interface and a processor. The user interface includes an input device and an output device. The processor is coupled to the input device and to the output device, and is configured to receive via the input device design parameters including: (i) a number G of groups of network elements, (ii) a number S of spines associated with each group, and (iii) a number P of ports that each spine has for connecting to other spines, using short-cable connections or long-cable connections. The processor is further configured to determine an interconnection plan by specifying connections among spines belonging to different groups, in a clique scheme or in a bipartite scheme, so that for given values of G, S and P, (i) a number of the long-cable connections among the spines is minimized, and (ii) a number of inter-group connections is balanced among the G groups up to a deviation of a single connection, and to output to the output device instructions for applying the interconnection plan.
In some embodiments, the processor is configured to determine the interconnection plan by dividing the G groups into multiple disjoint subsets, and to determine short-cable connections among spines belonging to groups within each subset and among spines belonging to groups in pairs of the subsets. In other embodiments, the processor is configured to divide the G groups into a number Ns of disjoint subsets, each subset including a number Ng of the groups, so that when a first condition, in which P divides G, is satisfied, Ns=G/P and Ng=P; when a second condition, in which (P+1) divides G, is satisfied, Ns=G/(P+1) and Ng=P+1; and when a third condition, in which P divides (G−1), is satisfied, Ns=(G−1)/P and Ng=P. In yet other embodiments, in response to identifying that one of the first and the second conditions is satisfied, the processor is configured to specify connections: (i) among Ng spines in producing the clique scheme, and (ii) among 2·Ng spines in producing the bipartite scheme.
In an embodiment, in response to identifying that the third condition is satisfied, the processor is configured to specify connections (i) among Ng+1 spines in producing the clique scheme, and (ii) among 2·Ng spines in producing the bipartite scheme. In another embodiment, the processor is configured to define one or more virtual groups, each virtual group including S virtual spines, so that the number of groups, including the virtual groups, satisfies at least one of the first, second and third conditions, and to specify the connections by omitting connections to the virtual spines. In yet another embodiment, the processor is configured to determine the interconnection plan by specifying spine connections in a number Ceiling[S/Ns] of subnetworks, wherein the subnetworks correspond to different respective permutations of the G groups.
In some embodiments, the processor is configured to determine the interconnection plan by constraining the number of inter-group connections between any pair of the groups to lie between Floor[Nc] and Ceiling[Nc], wherein Nc=P·S/(G−1). In other embodiments, the processor is configured to determine the interconnection plan by allocating spines to multiple racks, so that the spines of each clique scheme reside in a same rack, and the spines of each bipartite scheme reside in a same rack. In yet other embodiments, the processor is configured to identify unconnected ports in spines of the clique schemes or of the bipartite schemes, and to determine short-cable connections among unconnected ports of spines in a same rack, and long-cable connections among unconnected ports of spines belonging to different racks.
There is additionally provided, in accordance with an embodiment that is described herein, a method for designing a network interconnection, including receiving design parameters including: (i) a number G of groups of network elements, (ii) a number S of spines associated with each group, and (iii) a number P of ports that each spine has for connecting to other spines, using short-cable connections or long-cable connections. An interconnection plan is determined by specifying connections among spines belonging to different groups, in a clique scheme or in a bipartite scheme, so that for given values of G, S and P, (i) a number of the long-cable connections among the spines is minimized, and (ii) a number of inter-group connections is balanced among the G groups up to a deviation of a single connection. The interconnection plan is applied by connecting among the spines in accordance with the connections specified in the interconnection plan.
There is additionally provided, in accordance with an embodiment that is described herein, a communication network, including a number G of groups of network elements, each group includes a number S of spines, and each spine includes a number P of ports for connecting to other spines, using short-cable connections or long-cable connections. Spines belonging to different groups are interconnected in a clique scheme or in a bipartite scheme. Each clique scheme and each bipartite scheme is assigned to reside in a rack, so that spines having a port among the P ports that is not connected in a clique or a bipartite scheme are interconnected using short-cable connections within a common rack, and using long-cable connections between different racks. For given values of G, S and P, (i) a number of the long-cable connections among the spines is minimized, and (ii) a number of inter-group connections between pairs of groups equals Floor[P·S/(G−1)] or Ceiling[P·S/(G−1)].
These and other embodiments will be more fully understood from the following detailed description of the embodiments thereof, taken together with the drawings in which:
In various computing systems, such as data centers, a large number of network nodes communicate with one another over a packet network. The underlying network typically comprises network elements such as switches or routers that connect to one another and to the network nodes using physical links. Typically, the network topology employed has a major impact on the system performance in terms of data rate, latency and cost.
Embodiments that are described herein provide systems and methods for designing and implementing a large-scale network that aims to reduce interconnection cost by minimizing the required number of long-cable connections.
The disclosed embodiments refer mainly to a hierarchical topology in which network nodes connect directly to leaf network elements, which in turn connect to spine network elements. In the context of the present disclosure and in the claims, leaf and spine network elements are also referred to respectively as “leaves” and “spines,” for brevity.
The lengths of the physical links in the network depend on actual physical distances between the connected elements. The usable length of a physical link is limited by the data rate transmitted over the link. For example, electrical cables support data rates of 100 Gbps up to a distance of about 2.5 meters. Longer connections at the same rate require the use of optical cables, which are significantly more costly than electrical cables.
In some embodiments, the network elements comprising the network are divided into multiple predefined groups. Due to physical limitations, network elements belonging to different groups typically reside at large distances from one another and therefore require long-cable connections.
In the disclosed embodiments, spines are mounted within racks, separately from the leaves. Spines belonging to the same rack are interconnected using short-cable connections, whereas spines that reside in different racks are interconnected using long-cable connections. Inter-group connections are made by connecting between spines of the respective groups. The ports of a spine that are used for inter-group connections are also referred to herein as “global ports.”
In some applications, physical parameters of the network are constrained to predefined values. Such parameters comprise, for example, “G”—the number of groups, “S”—the number of spines per group, “P”—the number of global ports per spine, and “R”—the number of spines per rack. Designing an interconnection plan for a large-scale network, given an arbitrary combination of the design parameters, is a challenging task.
In some embodiments, a method is disclosed for planning and implementing an interconnection plan among the spines. The interconnection plan specifies connections among spines belonging to different groups that provide a balanced connectivity among the groups. The spines are interconnected in clique and bipartite schemes, and are allocated to racks. The interconnection plan is designed so that (i) a number of the long-cable connections among the spines is minimized, and (ii) the number of inter-group connections is balanced among the G groups up to a deviation of a single connection.
In some embodiments, depending on G and P, the plurality of the G groups is divided into Ns disjoint subsets of size Ng, wherein short-cable connections are determined among spines belonging to groups within each subset and among spines belonging to groups in pairs of the subsets. In some embodiments, virtual groups are added to satisfy a condition between G and P, and G is increased accordingly. Virtual spines of the virtual groups, however, are not actually connected.
In an embodiment, the interconnection plan specifies spines connections in a number Ceiling[S/Ns] of subnetworks, wherein the subnetworks correspond to different respective permutations of the G groups.
In some embodiments, the interconnection plan specifies allocation of spines belonging to clique and bipartite schemes to racks. Global ports of spines belonging to clique or bipartite structures that remain unused are connected using short-cable connections within the same rack, and using long-cable connections between different racks.
In the disclosed techniques, an efficient network topology having a minimal number of long-cable connections among spines is designed and implemented, thus reducing the interconnection cost significantly. The underlying design method is flexible and is suitable for various combinations of design parameters.
In computing system 20, network nodes 24 communicate with one another over a hierarchical network comprising a first level of leaf network elements 28 and a second level of spine network elements 32, 34 and 36. For the sake of brevity, leaf and spine network elements are also referred to herein simply as “leaves” and “spines” respectively. In the present example, each leaf is coupled to M network nodes 24 using M respective links 30. Leaves 28 and spines 32 may comprise any suitable network element such as a switch or router, and may operate in accordance with any suitable communication protocol such as, for example, Ethernet or InfiniBand.
Computing system 20 may be used in various applications in which a large number of nodes or processors communicate with one another, for example, in computing clusters such as High-Performance Computing (HPC) and large-scale data centers. In practical implementations, computing system 20 supports high-rate communication among thousands of network nodes.
The network elements comprising the network in
In
In the description that follows, “short links” are also referred to as “short-cable links” and “long links” are also referred to as “long-cable links.”
In the context of the present disclosure and in the claims, the terms “short-cable connection” and “long-cable connection” refer to physical links connecting spines within the same cabinet and between different cabinets, respectively. Passive copper cables support reliable data delivery at a data rate of 100 Gbps along distances up to about 2.5 meters. This means that for such a data rate, copper-based short-cable connections can be used up to 2.5 meters, and optical long-cable connections are used for distances above 2.5 meters. Alternatively, other data rates and corresponding cable lengths can also be used.
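By way of a non-limiting illustration of this distinction, the following Python sketch classifies a spine-to-spine link as a short-cable (passive copper) or long-cable (optical) connection. The 2.5-meter threshold follows the 100 Gbps figure quoted above; the function name and the reach table are assumptions made for this example rather than elements of the disclosed design tool.

```python
# Illustrative sketch: choosing between a short-cable (copper) and a
# long-cable (optical) connection based on link distance and data rate.
COPPER_REACH_METERS = {100: 2.5}  # data rate (Gbps) -> maximum copper reach

def cable_type(distance_m: float, rate_gbps: int = 100) -> str:
    """Return 'short-cable' (copper) or 'long-cable' (optical) for a link."""
    reach = COPPER_REACH_METERS.get(rate_gbps)
    if reach is None:
        raise ValueError(f"no copper-reach figure known for {rate_gbps} Gbps")
    return "short-cable" if distance_m <= reach else "long-cable"

print(cable_type(2.0))   # short-cable
print(cable_type(15.0))  # long-cable
```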
In some embodiments, a path between two leaves belonging to a common group passes via a single spine of the common group. For example, leaves 28A in group 40A may communicate with one another via one or more parallel connections, wherein each of the parallel connections passes via a respective spine such as 32A, 32B or 32C. In an embodiment, a path between leaves belonging to different groups passes via two spines associated with the respective groups. For example, the left side leaf 28A in group 40A may connect to the right side leaf 28C of group 40C via spines 32A and 34A in cabinet 44A. Alternative paths comprise spines 32B and 34B in cabinet 44B, and spines 36A and 36B in cabinet 44C.
In some embodiments, spines located in the same cabinet may be interconnected using a clique scheme, a bipartite scheme or both. In a fully connected clique scheme, every two spines are interconnected. In a fully connected bipartite scheme, spines are interconnected between two disjoint groups of the spines, so that each spine connects to all of the spines in the opposite group.
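As a non-limiting illustration of the two schemes, the following Python sketch enumerates the point-to-point links of a fully connected clique scheme and a fully connected bipartite scheme. Spines are represented by arbitrary labels, and the helper names are assumptions made for this example.

```python
# Illustrative sketch: links of a fully connected clique and bipartite scheme.
from itertools import combinations, product

def clique_connections(spines):
    """Every two spines in the set are interconnected."""
    return list(combinations(spines, 2))

def bipartite_connections(left, right):
    """Each spine connects to all spines of the opposite group."""
    return list(product(left, right))

# A clique of 4 spines uses 6 links; a 4x4 bipartite scheme uses 16 links.
assert len(clique_connections(range(4))) == 6
assert len(bipartite_connections(range(4), range(4, 8))) == 16
```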
In some embodiments, depending on the underlying design parameters, spines in different cabinets may be connected using long links 48. This may occur when spines in different cabinets have global ports not participating in the clique and bipartite schemes.
In
In some embodiments, the number of global ports available exceeds the number of intra-clique connections. This may happen, for example, when COND1 is satisfied. In general, when constructing a clique scheme using P spines, each of which has P′≥P global ports, P′−P+1 global ports remain unused in each of the clique spines. In such embodiments, global ports not participating in the clique scheme can be used for making multiple connections between spines of the clique, or for connecting between different clique structures within the same cabinet, using short-cable connections. Alternatively, spines having spare global ports in different cabinets can be connected using long-cable connections.
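The following minimal Python sketch restates this port count; each spine of a P-spine clique consumes P−1 ports inside the clique, leaving P′−P+1 spare. The function name and parameter names are assumptions made for the example.

```python
# Illustrative sketch: spare global ports per spine in a P-spine clique,
# when each spine provides P' >= P global ports.
def spare_ports_per_clique_spine(P: int, P_tag: int) -> int:
    """Unused global ports per spine; each spine uses P-1 ports in the clique."""
    assert P_tag >= P - 1, "a full clique needs at least P-1 ports per spine"
    return P_tag - (P - 1)

print(spare_ports_per_clique_spine(P=5, P_tag=5))  # 1 spare port per spine
```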
In some embodiments, the number of global ports per spine is insufficient for connecting each spine in a bipartite scheme to all the spines in the opposite group of the two disjoint groups comprising the bipartite scheme. This may happen, for example, when COND2 is satisfied. In an embodiment, a bipartite scheme comprises two disjoint groups of P spines, wherein each spine has only P′<P global ports. In such an embodiment, a fully connected bipartite scheme cannot be constructed, and therefore a partial bipartite scheme is constructed instead.
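As a non-limiting illustration, the following Python sketch builds such a partial bipartite scheme in which each spine has fewer global ports than spines in the opposite group. The round-robin choice of peers is an assumption made for this example; the text above does not specify which links are omitted.

```python
# Illustrative sketch: a partial bipartite scheme, connecting each spine to
# only ports_per_spine spines of the opposite group, spread evenly.
def partial_bipartite_connections(left, right, ports_per_spine: int):
    """Connect each left spine to ports_per_spine right spines, round-robin."""
    links = []
    n = len(right)
    for i, spine in enumerate(left):
        for k in range(ports_per_spine):
            links.append((spine, right[(i + k) % n]))
    return links

# Two groups of 6 spines with only 5 global ports each: 30 of the 36 possible links.
links = partial_bipartite_connections(range(6), range(6, 12), 5)
print(len(links))  # 30
```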
In the ingress direction, the packet processor applies various processing to packets received in the network element via ports 88, such as verifying the correctness of the data in the packet payload, packet classification and prioritization, and routing. The packet processor typically checks certain fields in the packet headers for the purpose of packet classification and routing. The header fields contain addressing information, such as source and destination addresses and port numbers, and the underlying network protocol used. The packet processor stores processed packets that are awaiting transmission in one or more queues in memory buffer 86.
In the egress direction, packet processor 82 schedules the transmission of packets stored in the queues in memory buffer 86 via respective output ports using any suitable arbitration scheme, such as, for example, a round-robin scheduling scheme.
Design tool 104 comprises a processor 106 and a memory 108, and may be implemented in any suitable computer such as a server or a laptop. The processor is coupled to a user output device 112, such as a display, and to a user input device 116, such as a keyboard. Memory 108 may comprise any suitable type of memory of any suitable storage technology. Memory 108 may be coupled to the processor using a local bus, or using a suitable interface such as USB, for example.
Processor 106 typically runs various programs and applications that are installed, for example, in memory 108. In the present example, processor 106 runs a design tool application for designing an interconnection plan among spine network elements. The interconnection plan may be designed for connecting among spines 32, 34 and 36 in
In some embodiments, designer 102 provides the design tool with design parameters specifying the design requirements. In response to the parameters, processor 106 generates an interconnection plan, and outputs to output device 112 readable instructions for implementing the interconnection plan. The designer (or some other user) allocates spines to cabinets and connects among the spines within each cabinet and possibly between cabinets, in accordance with the interconnection plan.
The example design tool above, in which the user interfaces with processor 106 using a keyboard and a display, is a non-limiting example. In alternative embodiments, other suitable interfacing methods and devices can also be used. For example, the processor may read the design parameters from a file in memory.
The configurations of computing system 20, network node 24, network element 80 and design tool 104 shown in
In some embodiments, some of the functions of processor 70, packet processor 82, and/or processor 106 may be carried out by a general-purpose processor, which is programmed in software to carry out the functions described herein. The software may be downloaded to the processor in electronic form, over a network, for example, or it may, alternatively or additionally, be provided and/or stored on non-transitory tangible media, such as magnetic, optical, or electronic memory.
In some embodiments, the global interconnections among the spines in
To determine the subnetwork interconnections among the G groups, the plurality of the G groups is divided into multiple disjoint subsets 150. In the example of
Let Ns denote the number of subsets 150, and let Ng denote the number of groups in each subset. The values of Ns and Ng depend on the parameters G and P. Table 1 summarizes three conditions between G and P, denoted COND1, COND2 and COND3, and corresponding expressions for calculating Ns and Ng.
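As a non-limiting illustration, the following Python sketch selects Ns and Ng from G and P according to the three conditions described above (COND1: P divides G; COND2: (P+1) divides G; COND3: P divides (G−1)). The function name and the order in which the conditions are tested are assumptions made for this example.

```python
# Illustrative sketch: deriving (Ns, Ng) from G and P per COND1-COND3.
def subset_parameters(G: int, P: int):
    """Return (condition, Ns, Ng), or None if no condition holds."""
    if G % P == 0:            # COND1: P divides G
        return ("COND1", G // P, P)
    if G % (P + 1) == 0:      # COND2: (P+1) divides G
        return ("COND2", G // (P + 1), P + 1)
    if (G - 1) % P == 0:      # COND3: P divides (G-1)
        return ("COND3", (G - 1) // P, P)
    return None               # virtual groups are needed (see below)

print(subset_parameters(20, 5))  # ('COND1', 4, 5)
print(subset_parameters(21, 5))  # ('COND3', 4, 5)
# For G=20, P=4 both COND1 and COND2 hold; the design tool may select either one.
```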
Numerical values of Ns and Ng for example values of G and P are given in Table 2.
As shown in
The plurality of G groups can be represented, for example, by a list of indices {1 . . . G}. For each of the G and P combinations (G=20, P=5), (G=20, P=4) and (G=21, P=5) of Table 2 above, a list of indices {1 . . . 20} can be partitioned into four disjoint subsets {1 . . . 5}-150A, {6 . . . 10}-150B, {11 . . . 15}-150C and {16 . . . 20}-150D. This partition is given by way of example and other suitable partitions can also be used. In the present example, clique structure 154 can be implemented using clique scheme 60 of
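A minimal Python sketch of this partition is given below: the index list {1 . . . G} is split into consecutive, disjoint subsets of Ng groups each, reproducing the example partition quoted above. The helper name is an assumption made for this example.

```python
# Illustrative sketch: partitioning the group indices {1..G} into disjoint
# subsets of Ng consecutive groups each.
def partition_groups(G: int, Ng: int):
    """Split [1..G] into consecutive chunks of Ng groups; leftovers are omitted."""
    indices = list(range(1, G + 1))
    usable = len(indices) - len(indices) % Ng
    return [indices[i:i + Ng] for i in range(0, usable, Ng)]

print(partition_groups(20, 5))  # [[1..5], [6..10], [11..15], [16..20]]
print(partition_groups(21, 5))  # under COND3, the 21st group stays outside the subsets
```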
In some embodiments, the number of global ports per spine does not match the number of connections in a clique or bipartite structure. For example, when P divides G (COND1 is satisfied), each spine in the clique structure has one port among its P ports that remains unconnected. As another example, when (P+1) divides G (COND2 is satisfied), the P global ports per spine are insufficient for constructing a fully connected bipartite scheme, in which each spine would need to connect to the P+1 spines of the opposite group.
When P divides (G−1) (COND3 is satisfied), one of the G groups is not partitioned into any of the subsets. In
In the subnetwork topology, each of the G groups contributes one spine for constructing a clique structure, and Ns−1 spines for constructing a bipartite structure. A subnetwork in which each of the clique schemes has P spines and each of the bipartite schemes has 2·P spines is referred to herein as a “full subnetwork.” For constructing a full subnetwork, the number of spines per group (S) should satisfy S≥Ns, wherein the value of Ns depends on G and P as depicted in Table 1 above. The number of spines required for constructing a full subnetwork is given by Ns·G.
Although in the example of
In some embodiments, the parameters G and P do not satisfy any of the three conditions COND1, COND2 and COND3 of Table 1. In such embodiments, before partitioning the plurality of groups into the disjoint subsets, one or more virtual groups are defined, wherein each virtual group comprises S virtual spines. Let Gv denote the number of virtual groups, and let G′ denote the total number of groups, i.e., G′=(G+Gv). In an embodiment, the number of virtual groups Gv is selected so that G′ and P satisfy one of the conditions COND1, COND2 and COND3 of Table 1. The subnetwork interconnection is then designed, as described above, using the parameters G′, P and S, wherein G′ replaces the original value G. In such embodiments, short-cable connections that involve virtual spines belonging to the virtual groups are omitted. Using virtual groups may increase the number of unconnected ports in the subnetwork.
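As a non-limiting illustration of this step, the following Python sketch finds the smallest number Gv of virtual groups for which G′=G+Gv satisfies one of the three conditions. The helper names are assumptions made for this example.

```python
# Illustrative sketch: adding virtual groups until G' = G + Gv satisfies
# COND1, COND2, or COND3 of Table 1.
def satisfies_condition(G: int, P: int) -> bool:
    """True if G and P satisfy COND1, COND2, or COND3."""
    return G % P == 0 or G % (P + 1) == 0 or (G - 1) % P == 0

def add_virtual_groups(G: int, P: int):
    """Return (G', Gv), where G' is the smallest augmented group count."""
    Gv = 0
    while not satisfies_condition(G + Gv, P):
        Gv += 1
    return G + Gv, Gv

print(add_virtual_groups(22, 5))  # (24, 2): 24 is divisible by P+1=6, so COND2 holds
```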
In some embodiments, connecting global ports in constructing a subnetwork is carried out sequentially among the subsets so as to distribute unconnected ports in a balanced manner among the subsets. For example, the subsets (150 in
In some embodiments, Ns and S satisfy the condition Ns<S for the given design parameters G, P and S. In such embodiments, the method described above for designing and implementing an interconnection plan for one subnetwork can be used for constructing multiple subnetworks. In general, the number of subnetworks equals Ceiling[S/Ns]. When Ns divides S, a number (S/Ns) of full subnetworks can be constructed. Otherwise, one of the subnetworks is constructed as a “partial subnetwork” comprising a smaller number of spines than a full subnetwork. In general, a partial subnetwork typically comprises a smaller number of clique and/or bipartite schemes than a corresponding full subnetwork.
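The following minimal Python sketch expresses this subnetwork count, using math.ceil for the Ceiling[.] operator in the text; the function name is an assumption made for this example.

```python
# Illustrative sketch: number of subnetworks from S spines per group when each
# full subnetwork consumes Ns spines per group.
import math

def subnetwork_count(S: int, Ns: int):
    """Return (total subnetworks, whether the last one is only partial)."""
    total = math.ceil(S / Ns)
    partial = (S % Ns != 0)
    return total, partial

print(subnetwork_count(8, 4))   # (2, False): two full subnetworks (e.g., G=20, P=5, S=8)
print(subnetwork_count(10, 4))  # (3, True): two full subnetworks and one partial
```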
In some embodiments, in order to balance the usage of spines across groups, different permutations of the G groups {1 . . . G} are generated for the different subnetworks. For example, for G=20, P=5 and S=8, two full subnetworks can be constructed. The first subnetwork is constructed, for example, using the four subsets {1,2,3,4,5}, {6,7,8,9,10}, {11,12,13,14,15} and {16,17,18,19,20}. The four subsets for the second subnetwork are given, for example, by {1,4,7,10,13}, {16,19,2,5,8}, {11,14,17,20,3}, and {6,9,12,15,18}.
Deriving a permutation of the list {1 . . . G} can be done using any suitable method. In an example embodiment, the permutation may be defined by scanning the list {1 . . . G} cyclically while jumping over X indices at a time, wherein X and G are coprime integers, i.e., the only positive integer that divides both X and G is 1. Generating permuted lists of {1 . . . G} using X jumps so that X and G are coprime integers, results in a balanced usage of spines across the groups.
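A minimal Python sketch of this cyclic scan is given below: the list {1 . . . G} is traversed while jumping X indices at a time, with X and G coprime so that every index is visited exactly once. The helper name is an assumption made for this example.

```python
# Illustrative sketch: permuting [1..G] by cyclically jumping X positions,
# where X and G are coprime.
from math import gcd

def coprime_permutation(G: int, X: int):
    """Return the permutation of [1..G] obtained by X-index cyclic jumps."""
    assert gcd(G, X) == 1, "X and G must be coprime for a full permutation"
    return [1 + (i * X) % G for i in range(G)]

perm = coprime_permutation(20, 3)
print(perm)  # [1, 4, 7, 10, 13, 16, 19, 2, 5, ...]
assert sorted(perm) == list(range(1, 21))  # every group appears exactly once
```

Splitting this permutation into consecutive subsets of Ng=5 groups reproduces the four subsets of the second subnetwork in the G=20, P=5, S=8 example above.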
In some embodiments, the number of connections assigned to each pair of the G groups is balanced among the groups. Since S spines having P ports per spine are available per group, an average number Nc=P·S/(G−1) of connections can be applied between pairs of groups, using global ports of spines. Let NcMin and NcMax denote the minimal and maximal numbers of group-to-group connections. In some embodiments, NcMin and NcMax are calculated as NcMin=Floor[Nc] and NcMax=Ceiling[Nc], so that the numbers of group-to-group connections are balanced up to a deviation of a single connection. In some embodiments, the number of inter-group connections in a subnetwork, before handling any remaining unconnected global ports, is constrained not to exceed NcMin.
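As a non-limiting illustration, the following Python sketch computes the average and the Floor/Ceiling bounds on the number of inter-group connections per pair of groups; the function name is an assumption made for this example.

```python
# Illustrative sketch: average and bounded numbers of inter-group connections.
import math

def connection_bounds(G: int, S: int, P: int):
    """Return (Nc, NcMin, NcMax) for the inter-group connection balance."""
    Nc = P * S / (G - 1)
    return Nc, math.floor(Nc), math.ceil(Nc)

# G=20, S=8, P=5: Nc = 40/19 ~ 2.1, so each pair of groups gets 2 or 3 connections.
print(connection_bounds(20, 8, 5))
```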
The method begins with processor 106 receiving design parameters from designer 102 via input device 116, at an input step 200. In the present example, the parameters refer to design requirements of the network in
At a condition checking step 204, the processor checks whether the parameters G and P satisfy one of conditions COND1, COND2 and COND3 of Table 1.
In some embodiments, at step 204 multiple conditions are satisfied simultaneously. For example, the parameters G=20, P=4 satisfy both the conditions COND1 and COND2. In such embodiments, the processor selects one of the satisfied conditions using any suitable method, e.g., arbitrarily.
In some embodiments, the received parameters G and P satisfy none of conditions COND1, COND2 and COND3 in Table 1. A non-limiting example of such parameters is G=22, P=5. In such cases, the processor defines one or more virtual groups, wherein each virtual group comprises S virtual spines, and increases G accordingly, as described above with reference to
At a subnetwork construction step 208, the processor produces an interconnection plan of one subnetwork. The actual interconnection plan depends on the design parameters and the condition that was selected at step 204. The interconnection plan specifies interconnections among spines in clique and bipartite structures, as well as allocation of the spines into racks. When virtual groups were defined at step 200, the processor omits from the interconnection plan connections specified for virtual spines of these virtual groups. The processor determines the interconnection plan, including allocation of spines into racks, so that the subnetwork interconnect comprises a minimal number of long-cable connections. Detailed methods for determining interconnections for one subnetwork are described, e.g., with reference to
At an output step 212, the processor provides instructions to a user, e.g., via output device 112, for applying the designed interconnection plan. The processor provides, for example, readable instructions for connecting among spines and for allocating spines into racks. In some embodiments, the processor assigns interconnected clique and bipartite structures in a rack, and when the overall number of spines exceeds the rack capacity R, opens a subsequent rack. Note that in general, all of the spines of a clique structure are allocated to a common rack. Similarly, all of the spines of a bipartite structure are allocated to a common rack.
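As a non-limiting illustration of this allocation rule, the following Python sketch packs each clique or bipartite structure whole into racks of capacity R spines, opening a subsequent rack whenever a structure would not fit. The representation of structures as lists of spine labels and the greedy packing order are assumptions made for this example.

```python
# Illustrative sketch: allocating clique/bipartite structures to racks of
# capacity R, keeping each structure entirely within a single rack.
def allocate_to_racks(structures, R: int):
    """Greedily pack each structure (list of spines) whole into racks of size R."""
    racks, current = [], []
    for spines in structures:
        if len(spines) > R:
            raise ValueError("a structure larger than the rack capacity cannot fit")
        if len(current) + len(spines) > R:   # open a subsequent rack
            racks.append(current)
            current = []
        current.extend(spines)
    if current:
        racks.append(current)
    return racks

# Example: four 5-spine cliques and one 10-spine bipartite, rack capacity R=20.
structures = [[f"c{i}.{j}" for j in range(5)] for i in range(4)] + [[f"b{j}" for j in range(10)]]
print([len(r) for r in allocate_to_racks(structures, 20)])  # [20, 10]
```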
At a loop handling step 220, the processor checks whether interconnection for the last subnetwork has been designed, and loops back to step 208 when planning an interconnection for a subsequent subnetwork is required. For example, at step 220, the processor checks whether there are any spines available among the total S·G spines that do not belong to any of the subnetworks already processed, and if so, loops back to step 208. Alternatively, the processor may calculate the total number of subnetworks, Ceiling[S/Ns], and use it for handling the loop.
When at step 220 the interconnection of the last subnetwork has been processed, the processor proceeds to a second-phase step 224, at which the processor specifies interconnections among global ports that were not yet connected. Global ports may remain unconnected in the subnetworks, depending on the design parameters G, P and S. Unconnected global ports may remain, for example, when the G groups contain one or more virtual groups.
Specifically, at step 224, the processor specifies connections and provides instructions for connecting the remaining global ports (e.g., separately for each subnetwork) based on the following rules, which take into consideration the minimal (NcMin) and maximal (NcMax) numbers of connections between groups:
The spines in
The subnetwork in
In the present example, since G and P satisfy COND1 of Table 1, each spine in clique structures 60 has a global port, unused in the clique interconnections. These unconnected ports are connected within the same clique structure and between different clique structures within a rack. Short-cable connections among such unused global ports are denoted 52A in the figure.
The spines of each subnetwork are mounted within one or more racks 250. In the present example, each rack 250 comprises up to 20 spines. The spines of the interconnected clique and bipartite structures can be allocated to the racks in various ways. For example, in some embodiments, clique and bipartite structures are allocated to racks in the same order in which they are created. In such embodiments, all of the clique structures of a subnetwork are allocated to racks after allocating all of the bipartite structures of that subnetwork to racks. In
In the example of
The embodiments described above are given by way of example, and other suitable embodiments can also be used.
It will be appreciated that the embodiments described above are cited by way of example, and that the following claims are not limited to what has been particularly shown and described hereinabove. Rather, the scope includes both combinations and sub-combinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art. Documents incorporated by reference in the present patent application are to be considered an integral part of the application except that to the extent any terms are defined in these incorporated documents in a manner that conflicts with the definitions made explicitly or implicitly in the present specification, only the definitions in the present specification should be considered.
Number | Name | Date | Kind |
---|---|---|---|
4312064 | Bench et al. | Jan 1982 | A |
6115385 | Vig | Sep 2000 | A |
6169741 | Lemaire et al. | Jan 2001 | B1 |
6480500 | Erimli et al. | Nov 2002 | B1 |
6532211 | Rathonyi et al. | Mar 2003 | B1 |
6553028 | Tang et al. | Apr 2003 | B1 |
6614758 | Wong | Sep 2003 | B2 |
6665297 | Hariguchi et al. | Dec 2003 | B1 |
6775268 | Wang et al. | Aug 2004 | B1 |
6795886 | Nguyen | Sep 2004 | B1 |
6804532 | Moon et al. | Oct 2004 | B1 |
6807175 | Jennings et al. | Oct 2004 | B1 |
6831918 | Kavak | Dec 2004 | B1 |
6912589 | Jain et al. | Jun 2005 | B1 |
6912604 | Tzeng et al. | Jun 2005 | B1 |
6950428 | Horst et al. | Sep 2005 | B1 |
7010607 | Bunton | Mar 2006 | B1 |
7076569 | Bailey et al. | Jul 2006 | B1 |
7221676 | Green et al. | May 2007 | B2 |
7234001 | Simpson et al. | Jun 2007 | B2 |
7274869 | Pan et al. | Sep 2007 | B1 |
7286535 | Ishikawa et al. | Oct 2007 | B2 |
7401157 | Costantino et al. | Jul 2008 | B2 |
7676597 | Kagan et al. | Mar 2010 | B2 |
7746854 | Ambe et al. | Jun 2010 | B2 |
7899930 | Turner et al. | Mar 2011 | B1 |
7924837 | Shabtay et al. | Apr 2011 | B1 |
7936770 | Frattura et al. | May 2011 | B1 |
7969980 | Florit et al. | Jun 2011 | B1 |
8094569 | Gunukula et al. | Jan 2012 | B2 |
8175094 | Bauchot et al. | May 2012 | B2 |
8195989 | Lu et al. | Jun 2012 | B1 |
8213315 | Crupnicoff et al. | Jul 2012 | B2 |
8401012 | Underwood et al. | Mar 2013 | B2 |
8489718 | Brar et al. | Jul 2013 | B1 |
8495194 | Brar et al. | Jul 2013 | B1 |
8570865 | Goldenberg | Oct 2013 | B2 |
8576715 | Bloch et al. | Nov 2013 | B2 |
8605575 | Gunukula et al. | Dec 2013 | B2 |
8621111 | Marr et al. | Dec 2013 | B2 |
8625427 | Terry et al. | Jan 2014 | B1 |
8681641 | Sajassi et al. | Mar 2014 | B1 |
8755389 | Poutievski et al. | Jun 2014 | B1 |
8774063 | Beecroft | Jul 2014 | B2 |
8867356 | Bloch et al. | Oct 2014 | B2 |
8873567 | Mandal et al. | Oct 2014 | B1 |
8908704 | Koren et al. | Dec 2014 | B2 |
9014006 | Haramaty et al. | Apr 2015 | B2 |
9042234 | Liljenstolpe et al. | May 2015 | B1 |
9231888 | Bogdanski et al. | Jan 2016 | B2 |
9385949 | Vershkov et al. | Jul 2016 | B2 |
9571400 | Mandal et al. | Feb 2017 | B1 |
10200294 | Shpiner et al. | Feb 2019 | B2 |
20010043564 | Bloch et al. | Nov 2001 | A1 |
20010043614 | Viswanadhham et al. | Nov 2001 | A1 |
20020009073 | Furukawa et al. | Jan 2002 | A1 |
20020013844 | Garrett et al. | Jan 2002 | A1 |
20020026525 | Armitage | Feb 2002 | A1 |
20020039357 | Lipasti et al. | Apr 2002 | A1 |
20020071439 | Reeves et al. | Jun 2002 | A1 |
20020085586 | Tzeng | Jul 2002 | A1 |
20020136163 | Kawakami et al. | Sep 2002 | A1 |
20020138645 | Shinomiya et al. | Sep 2002 | A1 |
20020141412 | Wong | Oct 2002 | A1 |
20020165897 | Kagan et al. | Nov 2002 | A1 |
20020176363 | Durinovic-Johri et al. | Nov 2002 | A1 |
20030016624 | Bare | Jan 2003 | A1 |
20030039260 | Fujisawa | Feb 2003 | A1 |
20030065856 | Kagan et al. | Apr 2003 | A1 |
20030079005 | Myers et al. | Apr 2003 | A1 |
20030097438 | Bearden et al. | May 2003 | A1 |
20030223453 | Stoler et al. | Dec 2003 | A1 |
20040024903 | Costatino et al. | Feb 2004 | A1 |
20040062242 | Wadia et al. | Apr 2004 | A1 |
20040111651 | Mukherjee et al. | Jun 2004 | A1 |
20040202473 | Nakamura et al. | Oct 2004 | A1 |
20050013245 | Sreemanthula et al. | Jan 2005 | A1 |
20050154790 | Nagata | Jul 2005 | A1 |
20050157641 | Roy | Jul 2005 | A1 |
20050259588 | Preguica | Nov 2005 | A1 |
20060126627 | Diouf | Jun 2006 | A1 |
20060143300 | See et al. | Jun 2006 | A1 |
20060182034 | Klinker et al. | Aug 2006 | A1 |
20060215645 | Kangyu | Sep 2006 | A1 |
20060291480 | Cho et al. | Dec 2006 | A1 |
20070030817 | Arunachalam et al. | Feb 2007 | A1 |
20070058536 | Vaananen et al. | Mar 2007 | A1 |
20070058646 | Hermoni | Mar 2007 | A1 |
20070070998 | Sethuram et al. | Mar 2007 | A1 |
20070091911 | Watanabe et al. | Apr 2007 | A1 |
20070104192 | Yoon et al. | May 2007 | A1 |
20070183418 | Riddoch et al. | Aug 2007 | A1 |
20070223470 | Stahl | Sep 2007 | A1 |
20070237083 | Oh et al. | Oct 2007 | A9 |
20080002690 | Ver Steeg et al. | Jan 2008 | A1 |
20080101378 | Krueger | May 2008 | A1 |
20080112413 | Pong | May 2008 | A1 |
20080165797 | Aceves | Jul 2008 | A1 |
20080186981 | Seto et al. | Aug 2008 | A1 |
20080189432 | Abali et al. | Aug 2008 | A1 |
20080267078 | Farinacci et al. | Oct 2008 | A1 |
20080298248 | Roeck et al. | Dec 2008 | A1 |
20090010159 | Brownell et al. | Jan 2009 | A1 |
20090022154 | Kiribe et al. | Jan 2009 | A1 |
20090097496 | Nakamura et al. | Apr 2009 | A1 |
20090103534 | Malledant et al. | Apr 2009 | A1 |
20090119565 | Park et al. | May 2009 | A1 |
20090262741 | Jungck et al. | Oct 2009 | A1 |
20100020796 | Park et al. | Jan 2010 | A1 |
20100039959 | Gilmartin | Feb 2010 | A1 |
20100049942 | Kim et al. | Feb 2010 | A1 |
20100111529 | Zeng et al. | May 2010 | A1 |
20100141428 | Mildenberger et al. | Jun 2010 | A1 |
20100216444 | Mariniello et al. | Aug 2010 | A1 |
20100284404 | Gopinath et al. | Nov 2010 | A1 |
20100290385 | Ankaiah et al. | Nov 2010 | A1 |
20100290458 | Assarpour et al. | Nov 2010 | A1 |
20100315958 | Luo et al. | Dec 2010 | A1 |
20110019673 | Fernandez | Jan 2011 | A1 |
20110080913 | Liu et al. | Apr 2011 | A1 |
20110085440 | Owens et al. | Apr 2011 | A1 |
20110085449 | Jeyachandran et al. | Apr 2011 | A1 |
20110090784 | Gan | Apr 2011 | A1 |
20110164496 | Loh et al. | Jul 2011 | A1 |
20110164518 | Daraiseh et al. | Jul 2011 | A1 |
20110225391 | Burroughs et al. | Sep 2011 | A1 |
20110249679 | Lin et al. | Oct 2011 | A1 |
20110255410 | Yamen et al. | Oct 2011 | A1 |
20110265006 | Morimura et al. | Oct 2011 | A1 |
20110299529 | Olsson et al. | Dec 2011 | A1 |
20120020207 | Corti et al. | Jan 2012 | A1 |
20120075999 | Ko et al. | Mar 2012 | A1 |
20120082057 | Welin et al. | Apr 2012 | A1 |
20120144064 | Parker et al. | Jun 2012 | A1 |
20120144065 | Parker et al. | Jun 2012 | A1 |
20120147752 | Ashwood-Smith et al. | Jun 2012 | A1 |
20120163797 | Wang | Jun 2012 | A1 |
20120170582 | Abts et al. | Jul 2012 | A1 |
20120207175 | Raman et al. | Aug 2012 | A1 |
20120287791 | Xi et al. | Nov 2012 | A1 |
20120300669 | Zahavi | Nov 2012 | A1 |
20120314706 | Liss | Dec 2012 | A1 |
20130044636 | Koponen et al. | Feb 2013 | A1 |
20130071116 | Ong | Mar 2013 | A1 |
20130083701 | Tomic et al. | Apr 2013 | A1 |
20130114599 | Arad | May 2013 | A1 |
20130114619 | Wakumoto | May 2013 | A1 |
20130159548 | Vasseur et al. | Jun 2013 | A1 |
20130170451 | Krause et al. | Jul 2013 | A1 |
20130204933 | Cardona et al. | Aug 2013 | A1 |
20130208720 | Ellis et al. | Aug 2013 | A1 |
20130242745 | Umezuki | Sep 2013 | A1 |
20130259033 | Hefty | Oct 2013 | A1 |
20130297757 | Han et al. | Nov 2013 | A1 |
20130301646 | Bogdanski et al. | Nov 2013 | A1 |
20130315237 | Kagan et al. | Nov 2013 | A1 |
20130322256 | Bader et al. | Dec 2013 | A1 |
20130329727 | Rajagopalan et al. | Dec 2013 | A1 |
20130336116 | Vasseur et al. | Dec 2013 | A1 |
20130336164 | Yang et al. | Dec 2013 | A1 |
20140016457 | Enyedi et al. | Jan 2014 | A1 |
20140022942 | Han et al. | Jan 2014 | A1 |
20140043959 | Owens et al. | Feb 2014 | A1 |
20140059440 | Sasaki | Feb 2014 | A1 |
20140105034 | Sun | Apr 2014 | A1 |
20140140341 | Bataineh et al. | May 2014 | A1 |
20140169173 | Naouri et al. | Jun 2014 | A1 |
20140192646 | Mir et al. | Jul 2014 | A1 |
20140198636 | Thayalan et al. | Jul 2014 | A1 |
20140211631 | Haramaty et al. | Jul 2014 | A1 |
20140211808 | Koren et al. | Jul 2014 | A1 |
20140269305 | Nguyen | Sep 2014 | A1 |
20140313880 | Lu et al. | Oct 2014 | A1 |
20140328180 | Kim et al. | Nov 2014 | A1 |
20140343967 | Baker | Nov 2014 | A1 |
20150030033 | Vasseur et al. | Jan 2015 | A1 |
20150052252 | Gilde et al. | Feb 2015 | A1 |
20150092539 | Sivabalan et al. | Apr 2015 | A1 |
20150098466 | Haramaty et al. | Apr 2015 | A1 |
20150124815 | Beliveau et al. | May 2015 | A1 |
20150127797 | Attar et al. | May 2015 | A1 |
20150131663 | Brar et al. | May 2015 | A1 |
20150163144 | Koponen et al. | Jun 2015 | A1 |
20150172070 | Csaszar | Jun 2015 | A1 |
20150194215 | Douglas et al. | Jul 2015 | A1 |
20150195204 | Haramaty et al. | Jul 2015 | A1 |
20150249590 | Gusat et al. | Sep 2015 | A1 |
20150295858 | Chrysos et al. | Oct 2015 | A1 |
20150372898 | Haramaty et al. | Dec 2015 | A1 |
20150372916 | Haramaty et al. | Dec 2015 | A1 |
20160012004 | Arimilli et al. | Jan 2016 | A1 |
20160014636 | Bahr et al. | Jan 2016 | A1 |
20160028613 | Haramaty et al. | Jan 2016 | A1 |
20160043933 | Gopalarathnam | Feb 2016 | A1 |
20160080120 | Unger et al. | Mar 2016 | A1 |
20160080321 | Pan et al. | Mar 2016 | A1 |
20160182378 | Basavaraja et al. | Jun 2016 | A1 |
20160294715 | Raindel et al. | Oct 2016 | A1 |
20170054445 | Wang | Feb 2017 | A1 |
20170054591 | Hyoudou et al. | Feb 2017 | A1 |
20170068669 | Levy et al. | Mar 2017 | A1 |
20170070474 | Haramaty et al. | Mar 2017 | A1 |
20170180243 | Haramaty et al. | Jun 2017 | A1 |
20170187614 | Haramaty et al. | Jun 2017 | A1 |
20170244630 | Levy et al. | Aug 2017 | A1 |
20170270119 | Kfir et al. | Sep 2017 | A1 |
20170286292 | Levy et al. | Oct 2017 | A1 |
20170331740 | Levy et al. | Nov 2017 | A1 |
20170358111 | Madsen | Dec 2017 | A1 |
20180026878 | Zahavi et al. | Jan 2018 | A1 |
20180139132 | Edsall et al. | May 2018 | A1 |
20180302288 | Schmatz | Oct 2018 | A1 |
20200042667 | Swaminathan | Feb 2020 | A1 |
Number | Date | Country |
---|---|---|
2016105446 | Jun 2016 | WO |
Entry |
---|
U.S. Appl. No. 15/050,480 office action dated Apr. 9, 2020. |
Leiserson, C E., “Fat-Trees: Universal Networks for Hardware Efficient Supercomputing”, IEEE Transactions on Computers, vol. C-34, No. 10, pp. 892-901, Oct. 1985. |
Ohring et al., “On Generalized Fat Trees”, Proceedings of the 9th International Symposium on Parallel Processing, pp. 37-44, Santa Barbara, USA, Apr. 25-28, 1995. |
Zahavi, E., “D-Mod-K Routing Providing Non-Blocking Traffic for Shift Permutations on Real Life Fat Trees”, CCIT Technical Report #776, Technion—Israel Institute of Technology, Haifa, Israel, Aug. 2010. |
Yuan et al., “Oblivious Routing for Fat-Tree Based System Area Networks with Uncertain Traffic Demands”, Proceedings of ACM SIGMETRICS—the International Conference on Measurement and Modeling of Computer Systems, pp. 337-348, San Diego, USA, Jun. 12-16, 2007. |
Matsuoka S., “You Don't Really Need Big Fat Switches Anymore—Almost”, IPSJ SIG Technical Reports, vol. 2003, No. 83, pp. 157-162, year 2003. |
Kim et al., “Technology-Driven, Highly-Scalable Dragonfly Topology”, 35th International Symposium on Computer Architecture, pp. 77-78, Beijing, China, Jun. 21-25, 2008. |
Jiang et al., “Indirect Adaptive Routing on Large Scale Interconnection Networks”, 36th International Symposium on Computer Architecture, pp. 220-231, Austin, USA, Jun. 20-24, 2009. |
Minkenberg et al., “Adaptive Routing in Data Center Bridges”, Proceedings of 17th IEEE Symposium on High Performance Interconnects, New York, USA, pp. 33-41, Aug. 25-27, 2009. |
Kim et al., “Adaptive Routing in High-Radix Clos Network”, Proceedings of the 2006 ACM/IEEE Conference on Supercomputing (SC2006), Tampa, USA, Nov. 2006. |
Infiniband Trade Association, “InfiniBand™ Architecture Specification vol. 1”, Release 1.2.1, Nov. 2007. |
Culley et al., “Marker PDU Aligned Framing for TCP Specification”, IETF Network Working Group, RFC 5044, Oct. 2007. |
Shah et al., “Direct Data Placement over Reliable Transports”, IETF Network Working Group, RFC 5041, Oct. 2007. |
Martinez et al., “Supporting fully adaptive routing in Infiniband networks”, Proceedings of the International Parallel and Distributed Processing Symposium (IPDPS'03), Apr. 22-26, 2003. |
Joseph, S., “Adaptive routing in distributed decentralized systems: NeuroGrid, Gnutella & Freenet”, Proceedings of Workshop on Infrastructure for Agents, MAS and Scalable MAS, Montreal, Canada, 11 pages, year 2001. |
Gusat et al., “R3C2: Reactive Route & Rate Control for CEE”, Proceedings of 18th IEEE Symposium on High Performance Interconnects, New York, USA, pp. 50-57, Aug. 10-27, 2010. |
Wu et al., “DARD: Distributed adaptive routing datacenter networks”, Proceedings of IEEE 32nd International Conference Distributed Computing Systems, pp. 32-41, Jun. 18-21, 2012. |
Ding et al., “Level-wise scheduling algorithm for fat tree interconnection networks”, Proceedings of the 2006 ACM/IEEE Conference on Supercomputing (SC 2006), 9 pages, Nov. 2006. |
Prisacari et al., “Performance implications of remote-only load balancing under adversarial traffic in Dragonflies”, Proceedings of the 8th International Workshop on Interconnection Network Architecture: On-Chip, Multi-Chip, 4 pages, Jan. 22, 2014. |
Li et al., “Multicast Replication Using Dual Lookups in Large Packet-Based Switches”, 2006 IET International Conference on Wireless, Mobile and Multimedia Networks, pp. 1-3, Nov. 6-9, 2006. |
Nichols et al., “Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers”, Network Working Group, RFC 2474, 20 pages, Dec. 1998. |
Microsoft., “How IPv4 Multicasting Works”, 22 pages, Mar. 28, 2003. |
Suchara et al., “Network Architecture for Joint Failure Recovery and Traffic Engineering”, Proceedings of the ACM SIGMETRICS joint international conference on Measurement and modeling of computer systems, pp. 97-108, Jun. 7-11, 2011. |
IEEE 802.1Q, “IEEE Standard for Local and metropolitan area networks Virtual Bridged Local Area Networks”, IEEE Computer Society, 303 pages, May 19, 2006. |
Plummer, D., “An Ethernet Address Resolution Protocol,” Network Working Group, Request for Comments (RFC) 826, 10 pages, Nov. 1982. |
Hinden et al., “IP Version 6 Addressing Architecture,” Network Working Group, Request for Comments (RFC) 2373, 26 pages, Jul. 1998. |
Garcia et al., “On-the-Fly Adaptive Routing in High-Radix Hierarchical Networks,” Proceedings of the 2012 International Conference on Parallel Processing (ICPP), pp. 279-288, Sep. 10-13, 2012. |
Dally et al., “Deadlock-Free Message Routing in Multiprocessor Interconnection Networks”, IEEE Transactions on Computers, vol. C-36, No. 5, May 1987, pp. 547-553. |
Nkposong et al., “Experiences with BGP in Large Scale Data Centers: Teaching an old protocol new tricks”, 44 pages, Jan. 31, 2014. |
“Equal-cost multi-path routing”, Wikipedia, 2 pages, Oct. 13, 2014. |
Thaler et al., “Multipath Issues in Unicast and Multicast Next-Hop Selection”, Network Working Group, RFC 2991, 9 pages, Nov. 2000. |
Glass et al., “The turn model for adaptive routing”, Journal of the ACM, vol. 41, No. 5, pp. 874-903, Sep. 1994. |
Mahalingam et al., “VXLAN: A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks”, Internet Draft, 20 pages, Aug. 22, 2012. |
Sinha et al., “Harnessing TCP's Burstiness with Flowlet Switching”, 3rd ACM SIGCOMM Workshop on Hot Topics in Networks (HotNets), 6 pages, Nov. 11, 2004. |
Vishnu et al., “Hot-Spot Avoidance With Multi-Pathing Over InfiniBand: An MPI Perspective”, Seventh IEEE International Symposium on Cluster Computing and the Grid (CCGrid'07), 8 pages, year 2007. |
NOWLAB—Network Based Computing Lab, 2 pages, years 2002-2015 http://nowlab.cse.ohio-state.edu/publications/conf-presentations/2007/vishnu-ccgrid07.pdf. |
Alizadeh et al.,“CONGA: Distributed Congestion-Aware Load Balancing for Datacenters”, Cisco Systems, 12 pages, Aug. 9, 2014. |
Geoffray et al., “Adaptive Routing Strategies for Modern High Performance Networks”, 16th IEEE Symposium on High Performance Interconnects (HOTI '08), pp. 165-172, Aug. 26-28, 2008. |
Anderson et al., “On the Stability of Adaptive Routing in the Presence of Congestion Control”, IEEE INFOCOM, 11 pages, 2003. |
Perry et al., “Fastpass: A Centralized “Zero-Queue” Datacenter Network”, M.I.T. Computer Science & Artificial Intelligence Lab, 12 pages, year 2014. |
Afek et al., “Sampling and Large Flow Detection in SDN”, SIGCOMM '15, pp. 345-346, Aug. 17-21, 2015, London, UK. |
Amante et al., “IPv6 Flow Label Specification”, Request for Comments: 6437, 15 pages, Nov. 2011. |
U.S. Appl. No. 15/050,480 office action dated Jun. 28, 2019. |
U.S. Appl. No. 15/896,088 office action dated Jun. 12, 2019. |
U.S. Appl. No. 15/356,588 Advisory Action dated May 23, 2019. |
U.S. Appl. No. 15/356,588 office action dated Aug. 12, 2019. |
U.S. Appl. No. 15/218,028 office action dated Jun. 26, 2019. |
Zahavi et al., “Distributed Adaptive Routing for Big-Data Applications Running on Data Center Networks,” Proceedings of the Eighth ACM/IEEE Symposium on Architectures for Networking and Communication Systems, New York, USA, pp. 99-110, Oct. 29-30, 2012. |
Levy et al., U.S. Appl. No. 15/896,088, filed Feb. 14, 2018. |
Shpiner et al., “Dragonfly+: Low Cost Topology for Scaling Datacenters”, IEEE 3rd International Workshop on High-Performance Interconnection Networks in the Exascale and Big-Data Era (HiPINEB), pp. 1-9, Feb. 2017. |
U.S. Appl. No. 15/356,588 office action dated Feb. 7, 2019. |
U.S. Appl. No. 15/218,028 office action dated Feb. 6, 2019. |
Cao et al., “Implementation Method for High-radix Fat-tree Deterministic Source-routing Interconnection Network”, Computer Science, vol. 39, Issue 12, pp. 33-37, 2012. |