The present invention relates generally to communication networks, and particularly to methods and systems for routing over multiple paths.
Various communication systems support traffic delivery to a given destination over multiple paths. When configured with multiple paths, a network router or switch typically selects one of the paths for each incoming packet. Multiple paths are used for example in adaptive routing (AR), in which packets may be re-routed based on the network state.
Methods for multi-path routing in various network topologies are known in the art. For example, U.S. Pat. No. 8,576,715, whose disclosure is incorporated herein by reference, describes a method for communication that includes routing a first packet, which belongs to a given packet flow, over a first routing path through a communication network. A second packet, which follows the first packet in the given packet flow, is routed using a time-bounded Adaptive Routing (AR) mode, by evaluating a time gap between the first and second packets, routing the second packet over the first routing path if the time gap does not exceed a predefined threshold, and, if the time gap exceeds the predefined threshold, selecting a second routing path through the communication network that is potentially different from the first routing path, and routing the second packet over the second routing path.
U.S. Patent Application Publication 2012/0144064, whose disclosure is incorporated herein by reference, describes a dragonfly processor interconnect network that comprises a plurality of processor nodes and a plurality of routers. The routers are operable to adaptively route data by selecting from among a plurality of network paths from a target node to a destination node in the dragonfly network, based on network congestion information from neighboring routers and failed network link information from neighboring routers.
As another example, U.S. Patent Application Publication 2012/0144065, whose disclosure is incorporated herein by reference, describes a dragonfly processor interconnect network that comprises a plurality of processor nodes and a plurality of routers. The routers are operable to route data by selecting from among a plurality of network paths from a target node to a destination node in the dragonfly network based on one or more routing tables.
An embodiment of the present invention provides a network element that includes one or more interfaces and circuitry. The interfaces are configured to connect to a communication network. The circuitry is configured to assign multiple egress interfaces corresponding to respective different paths via the communication network for routing packets to a given destination-address group, to hold, for the given destination-address group, respective state information for each of multiple sets of hash results, to receive via an ingress interface a packet destined to the given destination-address group, to calculate a given hash result for the packet and identify a given set of hash results in which the given hash result falls, and to forward the packet via one of the multiple egress interfaces in accordance with the state information corresponding to the given destination-address group and the given set of hash results.
In some embodiments, the circuitry is configured to hold the state information per set of hash results in multiple hash tables that correspond to multiple respective destination-address groups, to identify the given destination-address group to which the packet is destined, to select a hash table corresponding to the given destination-address group, and to retrieve the state information from the selected hash table. In other embodiments, the circuitry is configured to hold a mapping between the destination-address groups and the respective hash tables, and to select the hash table by matching a destination address of the packet to the given destination-address group and choosing the hash table based on the given destination-address group using the mapping.
In yet other embodiments, the circuitry is configured to match the destination address by identifying that the destination address belongs to the given destination-address group. In yet further embodiments, the state information includes at least an egress interface selected from among the assigned egress interfaces, and timeout information.
In an embodiment, the circuitry is configured to assign the multiple egress interfaces to be used in equal-cost multi-path (ECMP) routing. In another embodiment, the circuitry is configured to assign the multiple egress interfaces to be used in adaptive routing (AR). In yet another embodiment, the circuitry is configured to assign the multiple egress interfaces to be used in the OpenFlow protocol. In still another embodiment, the circuitry is configured to assign the multiple egress interfaces to be used in link aggregation (LAG).
There is additionally provided, in accordance with an embodiment of the present invention, a method including assigning in a network element multiple egress interfaces corresponding to respective different paths via a communication network for routing packets to a given destination-address group. For the given destination-address group, respective state information is held for each of multiple sets of hash results. A packet destined to the given destination-address group is received via an ingress interface. A given hash result is calculated for the packet and a set of hash results in which the given hash result falls is identified. The packet is forwarded via one of the assigned egress interfaces in accordance with the state information corresponding to the given destination-address group and the given set of hash results.
The present invention will be more fully understood from the following detailed description of the embodiments thereof, taken together with the drawings in which:
Multi-path routing refers to sending traffic to a destination via a communication network over multiple different paths. Consider, for example, a router that configures a group of multiple egress interfaces to support multi-path routing to a given destination, e.g., at initialization. When the router receives a packet that is addressed to this destination, the router forwards the packet via one of the egress interfaces that were assigned to multi-path routing.
In principle, the router can forward all the packets addressed to the same destination via the same egress interface, and re-route all these packets to an alternative egress interface, e.g., upon congestion. This routing scheme, however, may result in non-uniform traffic distribution and possibly overload the alternative egress interface.
Embodiments of the present disclosure that are described herein provide improved methods and systems for multi-path routing with efficient traffic distribution.
A large network is sometimes divided into smaller subnetworks that are each assigned a dedicated group of destination addresses. Thus, to route a packet to a given destination, a router should typically match the destination address of the packet to the destination-address group of the relevant subnetwork and forward the packet accordingly.
In some embodiments, to distribute packets that belong to different flows among the egress interfaces assigned to multi-path routing, the router calculates a hash function over certain fields in the packet headers, e.g., fields that identify the flow, and uses the hash result to choose an egress interface for the packet. In some embodiments, at least some of the possible outcomes of the hash function are arranged in groups that are referred to herein as hash-sets. A hash-set may comprise one or more outcomes of the hash function, and the hash result calculated over the packet typically belongs to a single hash-set. Hash-sets are sometimes referred to as “buckets”.
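By way of a rough illustration, the following Python sketch shows how a hash over flow-identifying fields can be folded into a small number of hash-sets. The function names, the CRC-based hash and the number of hash-sets are all hypothetical; the embodiments do not prescribe a particular hash function.

```python
import zlib

NUM_HASH_SETS = 16  # hypothetical number of hash-sets ("buckets")

def flow_hash(src_ip, dst_ip, src_port, dst_port, protocol):
    """Hash the header fields that identify the flow (a five-tuple here)."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{protocol}".encode()
    return zlib.crc32(key)  # any deterministic hash function would do

def hash_set_of(fields):
    """Map the hash result to the single hash-set in which it falls."""
    return flow_hash(*fields) % NUM_HASH_SETS

# Packets of the same flow always fall in the same hash-set:
flow = ("10.0.0.1", "10.1.0.9", 4242, 80, "TCP")
assert hash_set_of(flow) == hash_set_of(flow)
```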
In some embodiments, the router holds respective state information for each of multiple pairs of destination-address-group value and hash-set. The state information comprises a selected egress interface and possibly other information such as a timeout counter. For routing a packet, the router finds a destination-address group to which the destination address of the packet belongs, and a hash-set in which the hash result of the packet falls, and forwards the packet in accordance with the respective state information.
In some embodiments, the data structure in which the router holds the state information comprises a routing table and multiple hash tables. The routing table maps destination-address groups to respective hash tables, and each of the hash tables holds state information per hash-set. In an embodiment, the router retrieves the state information in two phases. In the first phase, the router matches the destination address of the packet to one of the destination-address groups in the routing table, and extracts from the routing table a pointer to the respective hash table. In the second phase, the router identifies, within the hash table selected in the first phase, a hash-set in which the hash result of the packet falls, and retrieves the respective state information.
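A minimal sketch of this two-phase lookup follows, with Python dictionaries standing in for the hardware routing and hash tables and with purely hypothetical contents:

```python
# Phase 1: the routing table maps a destination-address group (here, an
# IP prefix string) to one of the hash tables. Several groups may share
# a hash table.
routing_table = {
    "10.1.0.0/16": "hash_table_0",
    "10.2.0.0/16": "hash_table_1",
    "10.3.0.0/16": "hash_table_0",  # two groups pointing at the same table
}

# Phase 2: each hash table holds per-hash-set state; here just the
# selected egress interface (real entries may also hold timeout info).
hash_tables = {
    "hash_table_0": {0: "eth1", 1: "eth2", 2: "eth1", 3: "eth3"},
    "hash_table_1": {0: "eth2", 1: "eth3", 2: "eth3", 3: "eth1"},
}

def lookup_state(dst_group, hash_set):
    table_ptr = routing_table[dst_group]     # phase 1: group -> table pointer
    return hash_tables[table_ptr][hash_set]  # phase 2: hash-set -> state

print(lookup_state("10.3.0.0/16", 2))  # -> "eth1"
```

Because several destination-address groups may share a hash table, updating one per-hash-set entry changes the forwarding of all groups that point to that table.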
In the disclosed techniques, the routing decision depends on state information that relates to both the destination address of the packet and to the flow identification, and therefore the flows are efficiently distributed among the multiple paths even when re-routed due to congestion or other failures.
Network 20 may comprise an IP network in accordance with version 4 or version 6 of the IP protocol. Alternatively, network 20 may comprise an InfiniBand network or an Ethernet network. Further alternatively, network 20 may comprise any other suitable packet network, or a combination of multiple packet networks, operating in accordance with any suitable protocols and data rates.
Nodes 32 exchange flows of packets with one another over network 20. Each flow originates in a certain source node, ends in a certain destination node, and travels a certain path that traverses multiple routers and links.
The term “flow” refers to a sequence of packets that transfers data between a pair of end nodes. A flow can be identified, for example, by one or more attributes from among: a source/destination address such as an IP address, higher-layer source/destination identifiers such as TCP or UDP port numbers, and a higher-layer protocol identifier such as TCP or UDP. Different flows may be processed differently, such as by assigning a per-flow quality-of-service level. In some cases, although not necessarily, the packets in a given flow are required to arrive in the same order they were sent. Flows can generally be defined at various granularity levels. Typically, finer-granularity flows require the router to hold more state information, such as a selected path and a time-bound timer per flow.
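For instance, the choice of granularity amounts to choosing which attributes enter the flow key; the dictionary-based packet representation below is illustrative only:

```python
# Coarse flow key: all packets between a pair of hosts form one flow.
def coarse_key(pkt):
    return (pkt["src_ip"], pkt["dst_ip"])

# Fine flow key: every TCP/UDP connection is a distinct flow; finer
# granularity means more per-flow state (e.g., a selected path and a
# time-bound timer per flow).
def fine_key(pkt):
    return (pkt["src_ip"], pkt["dst_ip"],
            pkt["src_port"], pkt["dst_port"], pkt["protocol"])

pkt = {"src_ip": "10.0.0.1", "dst_ip": "10.1.0.9",
       "src_port": 4242, "dst_port": 80, "protocol": "TCP"}
print(coarse_key(pkt))  # ('10.0.0.1', '10.1.0.9')
```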
In the present example, network 20 comprises multiple subnetworks 36 (also referred to as “subnets” for brevity). Each subnetwork typically comprises routers similar to routers 24 or other network elements that route packets through the subnetwork. Subnetworks can be used for dividing a large network into smaller subnetworks that may be easier to manage. Typically, each subnetwork is assigned a dedicated group of addresses that are used for routing packets within the subnetwork. For example, in IP networks, addresses that are used for accessing network nodes or other elements in a given subnetwork have an identical most-significant bit-group in their IP address. The IP address is thus divided logically into two fields—a network or routing prefix, and a host identifier that identifies a specific host or network interface within the subnetwork.
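As a concrete illustration of the prefix/host split (the addresses are hypothetical), Python's standard ipaddress module exposes both parts:

```python
import ipaddress

subnet = ipaddress.ip_network("192.0.2.0/24")  # routing prefix of a subnetwork
addr = ipaddress.ip_address("192.0.2.17")      # address of a node within it

print(addr in subnet)                   # True: the prefix bits match
print(int(addr) & int(subnet.hostmask)) # 17: the host-identifier part
```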
A network node in one subnetwork may communicate with another network node in a different subnetwork. For example, node 32A attached to SUBNET0 may communicate with node 32B attached to SUBNET1 or with node 32C attached to SUBNET2. Node 32A can also communicate with elements in SUBNET3 and SUBNET4.
Network 20 supports communication between pairs of network nodes over multiple paths. For example, ROUTER1 can deliver traffic to ROUTER5 over three different paths, via ROUTER2, ROUTER3 or ROUTER4.
Delivery over multiple paths can be used in various applications and related protocols. For example, each of the paths from ROUTER1 to ROUTER5, i.e., ROUTER1-ROUTER2-ROUTER5, ROUTER1-ROUTER3-ROUTER5 and ROUTER1-ROUTER4-ROUTER5, includes the same number of routers and therefore can be used in equal-cost multi-path (ECMP) routing, for example, to load-balance the traffic sent via these multiple paths. The same group of three paths can be used in other applications, such as, for example, in adaptive routing (AR), in which packets delivered through a congested path are re-routed to another path in the group. Other example applications that use multi-path routing include the OpenFlow protocol, in which a controller external to the routers/switches manages their forwarding rules, and link aggregation (LAG).
Router 24 comprises multiple ports or interfaces 52 that connect to the network. An input port or ingress interface serves for receiving packets from the network, whereas an output port or egress interface serves for transmitting packets to the network. A given ingress interface may receive packets that belong to multiple different flows. Router 24 comprises an input/output queues unit 56 that queues incoming packets and packets to be transmitted.
A forwarding module 60 comprises forwarding rules for the incoming packets. In some embodiments, the router selects an egress interface for a received packet by applying one or more forwarding rules based on certain fields in the packet headers. A control unit 64 manages the various tasks of router 24. Among other tasks, control unit 64 configures forwarding module 60 and manages the routing over multiple paths.
The forwarding rules implemented in forwarding module 60 are typically based at least on the destination address of the packet. For example, a packet should be forwarded to a given subnetwork when the destination address of the packet matches or belongs to the destination-address group of the given subnetwork.
As will be described in detail below, in some embodiments, the router calculates a hash result over certain fields of the packet headers using a suitable hash function. In an embodiment, the possible outcomes of the hash function are divided into multiple sets that are referred to herein as hash-sets. A hash-set comprises one or more hash outcomes of the hash function. The router identifies a hash-set in which the hash result of the packet falls, and forwards the packet based on the destination-address group that matches the destination address of the packet, and on the hash-set.
Using this forwarding technique, different flows that are destined to the same subnetwork are distributed efficiently among the multiple paths. In addition, by calculating the same hash function over the same packet fields, in-order packet delivery per flow can be guaranteed. Moreover, different flows traversing a congested path can be re-routed to different egress interfaces, which results in efficient distribution of the traffic through the network even under congestion conditions.
The network and router configurations described above are example configurations, which are chosen purely for the sake of conceptual clarity. In alternative embodiments, any other suitable network and router configurations can be used.
Certain router elements may be implemented using hardware/firmware, such as using one or more Application-Specific Integrated Circuits (ASICs) or Field-Programmable Gate Arrays (FPGAs). Alternatively, some router elements may be implemented in software or using a combination of hardware/firmware and software elements.
In some embodiments, certain router functions, such as certain functions of control unit 64, may be implemented using a processor, which is programmed in software to carry out the functions described herein. The software may be downloaded to the processor in electronic form, over a network, for example, or it may, alternatively or additionally, be provided and/or stored on non-transitory tangible media, such as magnetic, optical, or electronic memory.
As noted above, the router comprises multiple interfaces 52 and other elements. In the description that follows and in the claims, the term “circuitry” refers to all the elements of the router excluding the interfaces. In the present example, the circuitry comprises input/output queues unit 56, forwarding module 60 and control unit 64.
In some embodiments, the forwarding module comprises a hash function 100, which the forwarding module applies to certain fields in the packet headers to produce a respective hash result. Forwarding module 60 may apply hash function 100 to one or more fields that identify the flow to which the packet belongs, such as fields of the five-tuple comprising the source and destination addresses, the source and destination port numbers and the underlying communication protocol. Alternatively or additionally, other suitable packet fields can also be used for calculating the hash result for the packet.
Typically, hash function 100 has multiple possible outcomes. In an embodiment, the forwarding module divides these possible outcomes into respective sets that are referred to herein as hash-sets. Each hash-set may comprise one or more hash outcomes. A given hash result typically falls within a single hash-set. As will be described below, the forwarding module makes routing decisions based on state information that is held for each of multiple pairs of destination-address-group value and hash-set.
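Since a hash-set may comprise one or more outcomes, one plausible arrangement, shown here as a sketch rather than a mandated layout, is to divide the outcome space into contiguous ranges of unequal size:

```python
import bisect

# Hypothetical: hash outcomes 0..255 divided into four hash-sets of
# unequal size; each entry is one past the last outcome of its set.
SET_BOUNDARIES = [64, 128, 224, 256]

def hash_set_index(hash_result):
    """Return the single hash-set in which the hash result falls."""
    return bisect.bisect_right(SET_BOUNDARIES, hash_result % 256)

print(hash_set_index(70))   # -> 1 (falls in [64, 128))
print(hash_set_index(230))  # -> 3 (falls in [224, 256))
```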
In the present example, forwarding module 60 comprises a routing table 104 and multiple hash tables 108.
Routing table 104 maps each subnetwork or destination-address group to a respective hash table 108. Different subnetworks may correspond to the same hash table.
Hash table 108 stores respective state information for each hash-set. Alternatively, a hash table may store state information for only a subset of the hash-sets.
Forwarding module 60 comprises a decision module 116 that receives the retrieved state and makes a forwarding decision accordingly. The state information may comprise, for example, a selected egress interface and information used for adaptive routing, such as one or more timers for time-bounded adaptive routing.
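A state entry along these lines might be sketched as follows; the field names are assumptions, since the text only calls for a selected egress interface plus timing information:

```python
from dataclasses import dataclass

@dataclass
class HashSetState:
    egress_interface: str    # currently selected egress interface
    last_packet_time: float  # when this hash-set last carried a packet
    timeout: float           # gap beyond which re-routing is permitted

# One entry per (destination-address group, hash-set) pair, e.g.:
state = HashSetState(egress_interface="eth2",
                     last_packet_time=0.0, timeout=0.05)
```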
At a hash table selection step 208, forwarding module 60 uses the pointer from routing table 104 to select a respective hash table 108. The description that follows assumes that the selected hash table holds state information per hash-set, as described above.
At a hash calculation step 212, the forwarding module calculates a hash result over one or more fields in the packet headers, as described above. At a state retrieval step 216, the forwarding module finds a hash-set in which the hash result falls, and retrieves the respective state information.
At a decision step 220, decision module 116 accepts the retrieved state and decides on the next-hop egress interface for the packet based on the state. The method then terminates.
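Tying the decision step to the time-bounded adaptive-routing behavior described in the background, a hedged sketch follows. Here decide_egress() and choose_least_congested() are hypothetical names, and the policy shown (keep the stored interface within the timeout, otherwise allow re-routing) is one plausible reading:

```python
import time

def decide_egress(state, assigned_interfaces, choose_least_congested):
    """Step 220 sketch: pick the next-hop egress interface from the state.

    Within the timeout the stored interface is kept, preserving per-flow
    packet order; after a long enough gap the hash-set may be re-routed.
    """
    now = time.monotonic()
    if now - state.last_packet_time > state.timeout:
        # The inter-packet gap is large enough: any in-flight packets of
        # this hash-set have drained, so switching paths keeps ordering.
        state.egress_interface = choose_least_congested(assigned_interfaces)
    state.last_packet_time = now
    return state.egress_interface

# Example with a trivial stand-in policy that picks the first interface:
# decide_egress(state, ["eth1", "eth2", "eth3"], lambda ifaces: ifaces[0])
```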
The forwarding module configuration described above is an example configuration, which is chosen purely for the sake of conceptual clarity. In alternative embodiments, any other suitable forwarding module configuration can be used.
Although the data structure in the example above comprises a routing table and multiple hash tables, in alternative embodiments the state information per destination-address group and hash-set may be held in any other suitable data structure.
It will be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and sub-combinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art. Documents incorporated by reference in the present patent application are to be considered an integral part of the application except that to the extent any terms are defined in these incorporated documents in a manner that conflicts with the definitions made explicitly or implicitly in the present specification, only the definitions in the present specification should be considered.
This application claims the benefit of U.S. Provisional Patent Application 62/016,141, filed Jun. 24, 2014, whose disclosure is incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
4312064 | Bench et al. | Jan 1982 | A |
6115385 | Vig | Sep 2000 | A |
6169741 | LeMaire et al. | Jan 2001 | B1 |
6480500 | Erimli et al. | Nov 2002 | B1 |
6532211 | Rathonyi et al. | Mar 2003 | B1 |
6553028 | Tang et al. | Apr 2003 | B1 |
6665297 | Hariguchi | Dec 2003 | B1 |
6775268 | Wang et al. | Aug 2004 | B1 |
6804532 | Moon et al. | Oct 2004 | B1 |
6831918 | Kavak | Dec 2004 | B1 |
6912604 | Tzeng et al. | Jun 2005 | B1 |
6950428 | Horst et al. | Sep 2005 | B1 |
7010607 | Bunton | Mar 2006 | B1 |
7076569 | Bailey et al. | Jul 2006 | B1 |
7234001 | Simpson et al. | Jun 2007 | B2 |
7286535 | Ishikawa et al. | Oct 2007 | B2 |
7676597 | Kagan et al. | Mar 2010 | B2 |
7746854 | Ambe et al. | Jun 2010 | B2 |
7936770 | Frattura et al. | May 2011 | B1 |
7969980 | Florit et al. | Jun 2011 | B1 |
8094569 | Gunukula et al. | Jan 2012 | B2 |
8175094 | Bauchot et al. | May 2012 | B2 |
8195989 | Lu et al. | Jun 2012 | B1 |
8401012 | Underwood et al. | Mar 2013 | B2 |
8489718 | Brar et al. | Jul 2013 | B1 |
8495194 | Brar et al. | Jul 2013 | B1 |
8576715 | Bloch et al. | Nov 2013 | B2 |
8605575 | Gunukula et al. | Dec 2013 | B2 |
8621111 | Marr et al. | Dec 2013 | B2 |
8755389 | Poutievski | Jun 2014 | B1 |
8774063 | Beecroft | Jul 2014 | B2 |
8873567 | Mandal et al. | Oct 2014 | B1 |
8908704 | Koren et al. | Dec 2014 | B2 |
9014006 | Haramaty et al. | Apr 2015 | B2 |
9042234 | Liljenstolpe et al. | May 2015 | B1 |
9571400 | Mandal et al. | Feb 2017 | B1 |
20020013844 | Garrett et al. | Jan 2002 | A1 |
20020026525 | Armitage | Feb 2002 | A1 |
20020039357 | Lipasti et al. | Apr 2002 | A1 |
20020071439 | Reeves et al. | Jun 2002 | A1 |
20020136163 | Kawakami et al. | Sep 2002 | A1 |
20020138645 | Shinomiya et al. | Sep 2002 | A1 |
20020165897 | Kagan et al. | Nov 2002 | A1 |
20030016624 | Bare | Jan 2003 | A1 |
20030039260 | Fujisawa | Feb 2003 | A1 |
20030065856 | Kagan et al. | Apr 2003 | A1 |
20030079005 | Myers et al. | Apr 2003 | A1 |
20030223453 | Stoler et al. | Dec 2003 | A1 |
20040111651 | Mukherjee et al. | Jun 2004 | A1 |
20040202473 | Nakamura et al. | Oct 2004 | A1 |
20050013245 | Sreemanthula et al. | Jan 2005 | A1 |
20050157641 | Roy | Jul 2005 | A1 |
20050259588 | Preguica | Nov 2005 | A1 |
20060126627 | Diouf | Jun 2006 | A1 |
20060182034 | Klinker et al. | Aug 2006 | A1 |
20060291480 | Cho et al. | Dec 2006 | A1 |
20070058536 | Vaananen et al. | Mar 2007 | A1 |
20070058646 | Hermoni | Mar 2007 | A1 |
20070070998 | Sethuram et al. | Mar 2007 | A1 |
20070091911 | Watanabe et al. | Apr 2007 | A1 |
20070223470 | Stahl | Sep 2007 | A1 |
20070237083 | Oh et al. | Oct 2007 | A9 |
20080002690 | Ver Steeg et al. | Jan 2008 | A1 |
20080112413 | Pong | May 2008 | A1 |
20080165797 | Aceves | Jul 2008 | A1 |
20080189432 | Abali et al. | Aug 2008 | A1 |
20080267078 | Farinacci et al. | Oct 2008 | A1 |
20080298248 | Roeck et al. | Dec 2008 | A1 |
20090103534 | Malledant et al. | Apr 2009 | A1 |
20090119565 | Park et al. | May 2009 | A1 |
20100039959 | Gilmartin | Feb 2010 | A1 |
20100049942 | Kim et al. | Feb 2010 | A1 |
20100111529 | Zeng et al. | May 2010 | A1 |
20100141428 | Mildenberger et al. | Jun 2010 | A1 |
20100216444 | Mariniello et al. | Aug 2010 | A1 |
20100284404 | Gopinath et al. | Nov 2010 | A1 |
20100290385 | Ankaiah et al. | Nov 2010 | A1 |
20100315958 | Luo et al. | Dec 2010 | A1 |
20110019673 | Fernandez Gutierrez | Jan 2011 | A1 |
20110085440 | Owens et al. | Apr 2011 | A1 |
20110085449 | Jeyachandran et al. | Apr 2011 | A1 |
20110164496 | Loh et al. | Jul 2011 | A1 |
20110225391 | Burroughs | Sep 2011 | A1 |
20110249679 | Lin et al. | Oct 2011 | A1 |
20110255410 | Yamen et al. | Oct 2011 | A1 |
20110265006 | Morimura et al. | Oct 2011 | A1 |
20110299529 | Olsson et al. | Dec 2011 | A1 |
20120020207 | Corti et al. | Jan 2012 | A1 |
20120082057 | Welin et al. | Apr 2012 | A1 |
20120144064 | Parker et al. | Jun 2012 | A1 |
20120144065 | Parker et al. | Jun 2012 | A1 |
20120147752 | Ashwood-Smith et al. | Jun 2012 | A1 |
20120163797 | Wang | Jun 2012 | A1 |
20120207175 | Raman | Aug 2012 | A1 |
20120300669 | Zahavi | Nov 2012 | A1 |
20120314706 | Liss | Dec 2012 | A1 |
20130044636 | Koponen | Feb 2013 | A1 |
20130071116 | Ong | Mar 2013 | A1 |
20130083701 | Tomic et al. | Apr 2013 | A1 |
20130114599 | Arad | May 2013 | A1 |
20130114619 | Wakumoto | May 2013 | A1 |
20130170451 | Krause et al. | Jul 2013 | A1 |
20130208720 | Ellis et al. | Aug 2013 | A1 |
20130242745 | Umezuki | Sep 2013 | A1 |
20130301646 | Bogdanski et al. | Nov 2013 | A1 |
20130315237 | Kagan et al. | Nov 2013 | A1 |
20130322256 | Bader et al. | Dec 2013 | A1 |
20130336116 | Vasseur et al. | Dec 2013 | A1 |
20140043959 | Owens et al. | Feb 2014 | A1 |
20140140341 | Bataineh et al. | May 2014 | A1 |
20140192646 | Mir et al. | Jul 2014 | A1 |
20140313880 | Lu et al. | Oct 2014 | A1 |
20140328180 | Kim | Nov 2014 | A1 |
20140343967 | Baker | Nov 2014 | A1 |
20150030033 | Vasseur et al. | Jan 2015 | A1 |
20150052252 | Gilde et al. | Feb 2015 | A1 |
20150092539 | Sivabalan et al. | Apr 2015 | A1 |
20150098466 | Haramaty et al. | Apr 2015 | A1 |
20150124815 | Beliveau | May 2015 | A1 |
20150163144 | Koponen et al. | Jun 2015 | A1 |
20150194215 | Douglas | Jul 2015 | A1 |
20160014636 | Bahr et al. | Jan 2016 | A1 |
20160080120 | Unger et al. | Mar 2016 | A1 |
20160182378 | Basavaraja | Jun 2016 | A1 |
Number | Date | Country |
---|---|---|
2016105446 | Jun 2016 | WO |
Entry |
---|
U.S. Appl. No. 14/662,259 Office Action dated Sep. 22, 2016. |
Afek et al., “Sampling and Large Flow Detection in SDN”, SIGCOMM '15, pp. 345-346, Aug. 17-21, 2015, London, UK. |
U.S. Appl. No. 14/745,488 Office Action dated Dec. 6, 2016. |
U.S. Appl. No. 14/337,334 Office Action dated Oct. 20, 2016. |
Dally et al., “Deadlock-Free Message Routing in Multiprocessor Interconnection Networks”, IEEE Transactions on Computers, vol. C-36, No. 5, May 1987, pp. 547-553. |
Prisacari et al., “Performance implications of remote-only load balancing under adversarial traffic in Dragonflies”, Proceedings of the 8th International Workshop on Interconnection Network Architecture: On-Chip, Multi-Chip, 4 pages, Jan. 22, 2014. |
Garcia et al., “On-the-Fly Adaptive Routing in High-Radix Hierarchical Networks,” Proceedings of the 2012 International Conference on Parallel Processing (ICPP), pp. 279-288, Sep. 10-13, 2012. |
“Equal-cost multi-path routing”, Wikipedia, 2 pages, Oct. 13, 2014. |
Thaler et al., “Multipath Issues in Unicast and Multicast Next-Hop Selection”, Network Working Group, RFC 2991, 9 pages, Nov. 2000. |
Nkposong et al., “Experiences with BGP in Large Scale Data Centers: Teaching an old protocol new tricks”, 44 pages, Jan. 31, 2014. |
Mahalingam et al., “VXLAN: A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks”, Internet Draft, 20 pages, Aug. 22, 2012. |
Sinha et al., “Harnessing TCP's Burstiness with Flowlet Switching”, 3rd ACM SIGCOMM Workshop on Hot Topics in Networks (HotNets), 6 pages, Nov. 11, 2004. |
Vishnu et al., “Hot-Spot Avoidance With Multi-Pathing Over InfiniBand: An MPI Perspective”, Seventh IEEE International Symposium on Cluster Computing and the Grid (CCGrid'07), 8 pages, year 2007. |
Nowlab—Network Based Computing Lab, 2 pages, years 2002-2015, http://nowlab.cse.ohio-state.edu/publications/conf-presentations/2007/vishnu-ccgrid07.pdf. |
Alizadeh et al., “CONGA: Distributed Congestion-Aware Load Balancing for Datacenters”, Cisco Systems, 12 pages, Aug. 9, 2014. |
Geoffray et al., “Adaptive Routing Strategies for Modern High Performance Networks”, 16th IEEE Symposium on High Performance Interconnects (HOTI '08), pp. 165-172, Aug. 26-28, 2008. |
Anderson et al., “On the Stability of Adaptive Routing in the Presence of Congestion Control”, IEEE INFOCOM, 11 pages, 2003. |
Perry et al., “Fastpass: A Centralized “Zero-Queue” Datacenter Network”, M.I.T. Computer Science & Artificial Intelligence Lab, 12 pages, year 2014. |
Glass et al., “The turn model for adaptive routing”, Journal of the ACM, vol. 41, No. 5, pp. 874-903, Sep. 1994. |
Leiserson, C E., “Fat-Trees: Universal Networks for Hardware Efficient Supercomputing”, IEEE Transactions on Computers, vol. C-34, No. 10, pp. 892-901, Oct. 1985. |
Ohring et al., “On Generalized Fat Trees”, Proceedings of the 9th International Symposium on Parallel Processing, pp. 37-44, Santa Barbara, USA, Apr. 25-28, 1995. |
Zahavi, E., “D-Mod-K Routing Providing Non-Blocking Traffic for Shift Permutations on Real Life Fat Trees”, CCIT Technical Report #776, Technion—Israel Institute of Technology, Haifa, Israel, Aug. 2010. |
Yuan et al., “Oblivious Routing for Fat-Tree Based System Area Networks with Uncertain Traffic Demands”, Proceedings of ACM SIGMETRICS—the International Conference on Measurement and Modeling of Computer Systems, pp. 337-348, San Diego, USA, Jun. 12-16, 2007. |
Matsuoka S., “You Don't Really Need Big Fat Switches Anymore—Almost”, IPSJ SIG Technical Reports, vol. 2003, No. 83, pp. 157-162, year 2003. |
Kim et al., “Technology-Driven, Highly-Scalable Dragonfly Topology”, 35th International Symposium on Computer Architecture, pp. 77-78, Beijing, China, Jun. 21-25, 2008. |
Jiang et al., “Indirect Adaptive Routing on Large Scale Interconnection Networks”, 36th International Symposium on Computer Architecture, pp. 220-231, Austin, USA, Jun. 20-24, 2009. |
Minkenberg et al., “Adaptive Routing in Data Center Bridges”, Proceedings of 17th IEEE Symposium on High Performance Interconnects, New York, USA, pp. 33-41, Aug. 25-27, 2009. |
Kim et al., “Adaptive Routing in High-Radix Clos Network”, Proceedings of the 2006 ACM/IEEE Conference on Supercomputing (SC2006), Tampa, USA, Nov. 2006. |
Infiniband Trade Association, “InfiniBand™ Architecture Specification vol. 1”, Release 1.2.1, Nov. 2007. |
Culley et al., “Marker PDU Aligned Framing for TCP Specification”, IETF Network Working Group, RFC 5044, Oct. 2007. |
Shah et al., “Direct Data Placement over Reliable Transports”, IETF Network Working Group, RFC 5041, Oct. 2007. |
Martinez et al., “Supporting fully adaptive routing in Infiniband networks”, Proceedings of the International Parallel and Distributed Processing Symposium (IPDPS'03), Nice, France, 10 pages, Apr. 22-26, 2003. |
Joseph, S., “Adaptive routing in distributed decentralized systems: NeuroGrid, Gnutella & Freenet”, Proceedings of Workshop on Infrastructure for Agents, MAS and Scalable MAS, Montreal, Canada, 11 pages, year 2001. |
Gusat et al., “R3C2: Reactive Route & Rate Control for CEE”, Proceedings of 18th IEEE Symposium on High Performance Interconnects, New York, USA, pp. 50-57, Aug. 10-27, 2010. |
Wu et al., “DARD: Distributed adaptive routing for datacenter networks”, Proceedings of IEEE 32nd International Conference on Distributed Computing Systems, pp. 32-41, Jun. 18-21, 2012. |
Ding et al., “Level-wise scheduling algorithm for fat tree interconnection networks”, Proceedings of the 2006 ACM/IEEE Conference on Supercomputing (SC 2006), 9 pages, Nov. 2006. |
U.S. Appl. No. 14/046,976 Office Action dated Jun. 2, 2015. |
Li et al., “Multicast Replication Using Dual Lookups in Large Packet-Based Switches”, 2006 IET International Conference on Wireless, Mobile and Multimedia Networks, pp. 1-3, Nov. 6-9, 2006. |
Nichols et al., “Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers”, Network Working Group, RFC 2474, 20 pages, Dec. 1998. |
Microsoft., “How IPv4 Multicasting Works”, 22 pages, Mar. 28, 2003. |
Suchara et al., “Network Architecture for Joint Failure Recovery and Traffic Engineering”, Proceedings of the ACM SIGMETRICS joint international conference on Measurement and modeling of computer systems, pp. 97-108, Jun. 7-11, 2011. |
IEEE 802.1Q, “IEEE Standard for Local and metropolitan area networks Virtual Bridged Local Area Networks”, IEEE Computer Society, 303 pages, May 19, 2006. |
Plummer, D., “An Ethernet Address Resolution Protocol,” Network Working Group, Request for Comments (RFC) 826, 10 pages, Nov. 1982. |
Hinden et al., “IP Version 6 Addressing Architecture,” Network Working Group, Request for Comments (RFC) 2373, 26 pages, Jul. 1998. |
U.S. Appl. No. 12/910,900 Office Action dated Apr. 9, 2013. |
Haramaty et al., U.S. Appl. No. 14/745,488, filed Jun. 22, 2015. |
U.S. Appl. No. 14/046,976 Office Action dated Jan. 14, 2016. |
U.S. Appl. No. 14/970,608 Office Action dated May 30, 2017. |
U.S. Appl. No. 14/673,892 Office Action dated Jun. 1, 2017. |
Number | Date | Country | |
---|---|---|---|
20150372916 A1 | Dec 2015 | US |
Number | Date | Country | |
---|---|---|---|
62016141 | Jun 2014 | US |