The present invention relates generally to computer networks, and specifically to process-to-process message communication over computer networks.
Parallel computation algorithms often entail frequent sending of short data messages between processors over a communication network. Efficient management of inter-processor messages is discussed, for example, in “Efficient Algorithms for All-to-All Communications in Multiport Message-Passing Systems,” Bruck et al., IEEE Transactions On Parallel And Distributed Systems, Vol. 8, No. 11, November 1997, wherein the authors present efficient algorithms for two all-to-all communication operations in message-passing systems.
The Message Passing Interface (MPI) is the de-facto standard for message handling in distributed computing. The standard is defined by the Message Passing Interface Forum, and includes point-to-point message-passing, collective communications, group and communicator concepts, process topologies, environmental management, process creation and management, one-sided communications, extended collective operations, external interfaces, I/O, some miscellaneous topics, and a profiling interface. The latest publication of the standard is “MPI: A Message-Passing Interface Standard Version 3.0,” Message Passing Interface Forum, Sep. 21, 2012. For summaries of some of the main topics, see, for example, chapters 1, 3.1 through 3.4, 5.1, 6.1 and 7.1. Another commonly used distributed processing framework is OpenSHMEM; see, for example, “Introducing OpenSHMEM: SHMEM for the PGAS community,” Chapman et al., Proceedings of the Fourth Conference on Partitioned Global Address Space Programming Model, October 2010 (ISBN: 978-1-4503-0461-0).
An embodiment of the present invention that is described herein provides an apparatus including one or more ports for connecting to a communication network, processing circuitry and a message aggregation circuit (MAC). The processing circuitry is configured to communicate messages over the communication network via the one or more ports. The MAC is configured to receive messages, which originate in one or more source processes and are destined to one or more destination processes, to aggregate two or more of the messages that share a common destination into an aggregated message, and to send the aggregated message using the processing circuitry over the communication network.
In an embodiment, the apparatus further includes a host interface for connecting to one or more local processors, and the MAC is configured to receive one or more of the messages from the one or more local processors over the host interface. Additionally or alternatively, the MAC is configured to receive one or more of the messages from one or more remote processors over the communication network, via the ports.
In a disclosed embodiment, the two or more messages share a common destination network node, and the MAC is configured to cause the processing circuitry to send the aggregated message to the common destination network node. In another embodiment, the two or more messages share a common destination path via the network, and the MAC is configured to cause the processing circuitry to send the aggregated message to the common destination path. In an embodiment, the MAC is configured to compress the messages by joining messages that are destined to neighboring address ranges defined in the common destination.
In an example embodiment, the MAC is configured to terminate aggregation of the aggregated message responsive to expiry of a timeout. In another embodiment, the MAC is configured to terminate aggregation of the aggregated message responsive to a total size of the aggregated message reaching a predefined limit. In yet another embodiment, the MAC is configured to terminate aggregation of the aggregated message responsive to receiving an aggregation termination request. Typically, the MAC is configured to aggregate the messages as part of transport-layer processing.
In some embodiments, the messages include at least read requests, and the MAC is configured to aggregate at least the read requests into the aggregated message, and, upon receiving one or more aggregated responses in response to the aggregated message, to disaggregate the one or more aggregated responses at least into multiple read responses that correspond to the read requests. In some embodiments, the MAC is configured to aggregate in the aggregated message one or more additional messages in addition to the read requests.
In some embodiments, the messages include at least one message type selected from a group of types consisting of Remote Direct Memory Access (RDMA) READ messages, RDMA WRITE messages, and RDMA ATOMIC messages. In some embodiments, the one or more ports, the processing circuitry and the MAC are included in a network device.
There is additionally provided, in accordance with an embodiment of the present invention, an apparatus including one or more ports for connecting to a communication network, processing circuitry and a message disaggregation circuit (MDC). The processing circuitry is configured to communicate messages over the communication network via the one or more ports. The MDC is configured to receive from the processing circuitry an aggregated message, which was aggregated from two or more messages originating in one or more source processes and destined to one or more destination processes, to disaggregate the aggregated message into the two or more messages, and to send the two or more messages to the one or more destination processes.
Typically, the MDC is configured to disaggregate the aggregated message as part of transport-layer processing. In some embodiments, the aggregated message includes at least read requests, the MDC is configured to disaggregate the aggregated message into at least the read requests, and the apparatus further includes a message aggregation circuit (MAC) configured to receive read responses corresponding to the read requests, to aggregate the read responses into one or more aggregated responses, and to send the one or more aggregated responses using the processing circuitry over the communication network.
In an embodiment, the MAC is configured to group the read responses in the one or more aggregated responses in a grouping that differs from the grouping of the read requests in the aggregated message. In some embodiments, the messages include at least one message type selected from a group of types consisting of Remote Direct Memory Access (RDMA) READ messages, RDMA WRITE messages, and RDMA ATOMIC messages. In some embodiments, the one or more ports, the processing circuitry and the MDC are included in a network device.
There is further provided, in accordance with an embodiment of the present invention, a method including communicating messages, which originate in one or more source processes and are destined to one or more destination processes, over a communication network. Two or more of the messages, which share a common destination, are aggregated into an aggregated message. The aggregated message is sent over the communication network.
There is also provided, in accordance with an embodiment of the present invention, a method including communicating messages over a communication network, including receiving an aggregated message, which was aggregated from two or more messages originating in one or more source processes and destined to one or more destination processes. The aggregated message is disaggregated into the two or more messages. The two or more messages are sent to the one or more destination processes.
The present invention will be more fully understood from the following detailed description of the embodiments thereof, taken together with the drawings in which:
Parallel algorithms that generate large numbers of small data packets with a point-to-point communication semantic, such as graph algorithms, often utilize only a very small portion of the available network bandwidth. Small data packets are defined herein as packets whose payload is similar in size to, or smaller than, the associated network headers sent in the packet. The poor network utilization, sometimes on the order of single-digit percentages of the available bandwidth, is caused by the bandwidth needed to transfer the network headers being similar to or greater than the bandwidth needed for the payload, and by limits on the rate at which the network hardware can process messages and packets.
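As a purely illustrative, back-of-the-envelope example (the payload and header sizes are assumptions chosen only to indicate orders of magnitude): with a per-message payload of P=8 bytes and roughly H=100 bytes of combined link, network and transport headers per packet, the link utilization is approximately

$$\frac{P}{P+H}=\frac{8}{8+100}\approx 7\%,$$

whereas aggregating N=64 such messages into a single packet that carries one set of headers plus a hypothetical h=4-byte per-message sub-header yields approximately

$$\frac{N\cdot P}{N\cdot P+H+N\cdot h}=\frac{64\cdot 8}{64\cdot 8+100+64\cdot 4}\approx 59\%.$$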
The main contributors to the performance degradation are:
1. Large per-message software overheads, in addition to the protocol overheads.
2. Limited rate at which messages can be processed by the network-interface controller.
3. Limited rate at which packets can be processed by a switch.
4. Large packet overheads relative to network packet payload size.
Embodiments of the present invention that are disclosed herein provide methods and systems for aggregating egress messages, which may reduce the overhead and improve the multi-computer system performance. In some embodiments, a Message Aggregation Circuit (MAC) is added to the egress path of network devices; the MAC may aggregate messages that share the same destination, allowing the network device to send a smaller number of larger aggregated messages, reducing the total cost of the message overhead.
In some embodiments, aggregation is performed by a network adapter in a compute node, wherein the network adapter aggregates messages generated by processes running in the compute node. This sort of aggregation is sometimes referred to as “source aggregation.” In other embodiments, aggregation is carried out by a network switch, which aggregates messages received over the network. This sort of aggregation is sometimes referred to as “intermediate aggregation.” Hybrid aggregation schemes, in which an aggregated message is formed from both locally generated messages and messages received over the network, are also possible. For a given message, the process generating the message is referred to herein as a “source process” and the process to which the message is destined is referred to as a “destination process”. Generally, the disclosed aggregation techniques may be carried out in any suitable type of network device, e.g., network adapter, switch, router, hub, gateway, network-connected Graphics Processing Unit (GPU), and the like.
The term “common destination” used for aggregation may refer to, for example, a common destination compute node, or a common destination path via the network. When aggregating messages destined to a common destination compute node, individual messages in the aggregated message may be addressed to different processors and/or processes in the common destination compute node. When aggregating messages destined to a common destination path, individual messages in the aggregated message may be addressed to different compute nodes, processors and/or processes reachable via the common destination path.
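By way of a non-limiting illustration, the following C sketch shows one possible way of forming such an aggregation key; the structure fields, the routing-table stub and the key encoding are assumptions made for the sake of the example, not features of any particular implementation.

```c
#include <stdint.h>

/* Hypothetical aggregation-key selection: messages that map to the same key
 * are candidates for placement in the same aggregated message.             */
enum agg_mode { AGG_BY_DEST_NODE, AGG_BY_NEXT_HOP };

struct msg_desc {
    uint32_t dest_node;     /* destination compute node                     */
    uint32_t dest_process;  /* destination process (deliberately not keyed) */
};

/* Stub routing lookup, assumed for illustration: 256 nodes per leaf switch. */
static uint32_t route_next_hop(uint32_t dest_node)
{
    return dest_node >> 8;
}

static uint64_t aggregation_key(const struct msg_desc *m, enum agg_mode mode)
{
    if (mode == AGG_BY_DEST_NODE)             /* common destination node     */
        return ((uint64_t)AGG_BY_DEST_NODE << 32) | m->dest_node;
    /* Common destination path: aggregate everything that leaves through the
     * same next hop, even if the final destination nodes differ.            */
    return ((uint64_t)AGG_BY_NEXT_HOP << 32) | route_next_hop(m->dest_node);
}
```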
In some embodiments, the aggregation of egress messages to create an aggregated message may stop when a time limit has expired, or when a buffer size has been reached. In other embodiments, the aggregation may stop when a minimum bandwidth specification is met.
In some embodiments, an aggregation hierarchy is implemented, wherein messages within an aggregated message may be further aggregated; e.g., messages that write to neighboring segments of a memory may be aggregated into a larger message that writes into the combined memory space (such aggregation will sometimes be referred to as aggregated message compression).
Other embodiments of the present invention comprise a message disaggregation circuit (MDC), which is configured to break the aggregated messages back into the discrete original messages.
In some embodiments, aggregation is done based on the next hop in the network fabric. For example, if a network adapter sends messages to a plurality of destinations, but a group of the messages is first sent to the same switch in the communication network, the network adapter may aggregate the group of messages and send the aggregated message to the switch, which may then disaggregate the aggregated message and send the original messages to the corresponding destinations. In some embodiments, various switches in the communication network may aggregate and disaggregate messages.
Thus, in embodiments, the efficiency of message communication between network elements may be enhanced by sharing the communication overhead between groups of messages that are aggregated.
More details will be disclosed in the System Description hereinbelow, with reference to example embodiments.
Parallel computing systems, in which computers that run a shared task communicate with each other over a communication network, typically comprise network-connected devices such as Network-Interface Controllers (NICs), Host Channel Adapters (HCAs), switches, routers, hubs and so on. The computers that run the shared task are typically connected to the network through a network adapter (a NIC in Ethernet nomenclature, an HCA in InfiniBand™ nomenclature, or similar for other communication networks); however, the parallel computing tasks may also be run by computers that are coupled to other network elements such as switches.
Messages that the computers send to each other are typically sent in egress packets, which may or may not be acknowledged, using communication protocols such as Dynamic Connection (DC), Reliable Connection (RC) and others.
Remote Compute Node 106 comprises a Host Processor 114 that runs parallel computing processes 116, and a Network Adapter 118 that is configured to communicate messages over Network 104.
When Parallel-Computing System 100 runs a parallel computing job, processes throughout the system may communicate messages with peer processes. For example, one or more processes 110 running on Host 108 may send messages to one or more processes 116 that run on Host 114. Such messages may be short, and, as the overhead for each message is large (relative to the message size), may adversely affect the system performance if sent separately. As noted above, the process generating a certain message is referred to as the source process of that message, and the process to which the message is destined is referred to as the destination process.
According to the example embodiment illustrated in
In some embodiments, when the MAC aggregates multiple messages having the same destination, the MAC may strip off the common destination fields of the messages (sending a single destination header instead), and possibly strip off additional header fields. Typically, however, the MAC will not strip off header fields that are not shared by the individual messages, e.g., source identification (when relevant).
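A minimal sketch of one possible aggregated-message layout is given below; the field names and sizes are illustrative assumptions only. The shared destination information appears once in a common header, while each constituent message retains, in a small sub-header, only the fields that are not shared (e.g., source and destination process identifiers and length).

```c
#include <stdint.h>

/* Hypothetical on-the-wire layout of an aggregated message (illustrative). */
struct agg_header {           /* sent once per aggregated message           */
    uint32_t dest_node;       /* the shared (stripped-off) destination      */
    uint16_t msg_count;       /* number of constituent messages             */
    uint16_t total_len;       /* total bytes of sub-headers and payloads    */
};

struct sub_header {           /* precedes each constituent message          */
    uint32_t src_process;     /* not shared, therefore kept per message     */
    uint32_t dest_process;    /* may differ between constituent messages    */
    uint16_t len;             /* payload length of this constituent message */
    uint16_t flags;
};

/* Wire layout: [agg_header][sub_header][payload] ... [sub_header][payload] */
```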
At the destination, Network Adapter 118 of Remote Compute Node 106 comprises a Packet Processing circuit 124 and a Message Disaggregation Circuit (MDC) 126. The Packet Processing circuit sends ingress messages to the MDC. If an ingress message is an aggregated message, the MDC reconstructs the original messages by disaggregating the aggregated message into separate messages, and then sends the messages back to the Packet Processing circuit, which may send them to Host 114.
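Continuing the illustrative layout sketched above (again, all names and field sizes are assumptions rather than a definitive format), disaggregation at the MDC may be viewed as a simple walk over the length-delimited constituent messages:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Re-declares the hypothetical layout sketched above, for self-containment. */
struct agg_header { uint32_t dest_node; uint16_t msg_count; uint16_t total_len; };
struct sub_header { uint32_t src_process; uint32_t dest_process; uint16_t len; uint16_t flags; };

/* Assumed delivery callback, e.g., handing the message back to packet
 * processing for forwarding to the destination process.                     */
typedef void (*deliver_fn)(uint32_t dest_process, const uint8_t *payload, size_t len);

static int disaggregate(const uint8_t *buf, size_t buf_len, deliver_fn deliver)
{
    struct agg_header ah;
    size_t off = sizeof(ah);

    if (buf_len < sizeof(ah))
        return -1;
    memcpy(&ah, buf, sizeof(ah));

    for (uint16_t i = 0; i < ah.msg_count; i++) {
        struct sub_header sh;
        if (off + sizeof(sh) > buf_len)
            return -1;                           /* malformed aggregate      */
        memcpy(&sh, buf + off, sizeof(sh));
        off += sizeof(sh);
        if (off + sh.len > buf_len)
            return -1;
        deliver(sh.dest_process, buf + off, sh.len); /* one original message */
        off += sh.len;
    }
    return 0;
}
```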
As would be appreciated, Network Adapters 112 and 118 illustrated in
Moreover, the aggregation (grouping) of read responses at the target network adapter may differ from the original aggregation (grouping) of read requests at the source network adapter. In one simplified example, the source network adapter may aggregate two requests “req0” and “req1” into an aggregated message and send a third request “req2” individually. In response, the target network adapter may send a response to req0 (denoted “res0”) individually, and aggregate the responses to req1 and req2 (denoted “res1” and “res2”) in an aggregated response message.
In embodiments, atomic reads and writes may also be aggregated. In yet other embodiments, multiple transaction types may be combined into a single aggregated message.
In an embodiment, the MAC may be implemented as a separate dedicated block on a device (e.g., a processor (such as a CPU or GPU) or an FPGA) connected to a standard network adapter that does not include a MAC. In some embodiments a single process may run on Host 114. In an embodiment, a single process runs on Host 108, and the MAC aggregates messages that the single process generates (and that are destined to the same Remote Compute Node). In some embodiments, Compute Node 102 and/or Compute Node 106 comprise more than one Host and/or more than one Network Adapter; in an embodiment, processes 110 may run on a peer device such as a GPU or an FPGA.
In an embodiment, Packet Processing circuit 124 detects aggregated messages, and sends to the MDC only packets that need to be disaggregated. In another embodiment, MDC 126 sends the disaggregated messages directly to Host 114.
The Message Classifier receives messages to specified destinations from the packet processing circuit, and checks whether the messages should and could be aggregated (examples of messages that should not be aggregated and of messages that cannot be aggregated will be described hereinbelow, with reference to
Aggregation Circuits 202 are configured to store aggregated messages. Typically, the Aggregation Circuit adds metadata to the message, e.g., to specify message boundaries. When a new message is to be added to an aggregated message, the Aggregation Circuit adds the new message to the stored aggregated message and may modify the metadata accordingly. In embodiments, an Aggregation Circuit that aggregates messages with a specified destination is marked with the destination ID.
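The following non-limiting C sketch illustrates one way an Aggregation Circuit might append a message together with boundary metadata; the buffer size, the two-byte length prefix and the field names are assumptions chosen for clarity, not a definitive implementation.

```c
#include <stdint.h>
#include <string.h>

#define AGG_BUF_SIZE 4096        /* illustrative buffer size                 */

struct aggregation_circuit {
    uint32_t dest_id;            /* destination this circuit is allocated to */
    uint16_t msg_count;          /* metadata: number of messages stored      */
    uint16_t used;               /* metadata: bytes used in buf[]            */
    uint64_t alloc_time_ns;      /* time at which the circuit was allocated  */
    uint8_t  buf[AGG_BUF_SIZE];
};

/* Append one message; a two-byte length prefix marks the message boundary.  */
static int agg_append(struct aggregation_circuit *c,
                      const uint8_t *msg, uint16_t len)
{
    if ((uint32_t)c->used + 2u + len > AGG_BUF_SIZE)
        return -1;                        /* full: caller should flush first  */
    c->buf[c->used]     = (uint8_t)(len & 0xff);
    c->buf[c->used + 1] = (uint8_t)(len >> 8);
    memcpy(c->buf + c->used + 2, msg, len);
    c->used += (uint16_t)(2 + len);
    c->msg_count++;                       /* update the boundary metadata     */
    return 0;
}
```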
Aggregation Control circuit 204 is configured to determine if any of the Aggregation Circuits should be deallocated (e.g., emptied and made ready to be reallocated). (Example criteria for this decision will be described hereinbelow, with reference to
In summary, Message-Processing circuit 122 receives messages from Packet Processing circuit 120 and stores some of the messages in Aggregation Circuits which are allocated to specified message destinations. An Aggregation Control circuit empties the Aggregation Circuits through a Multiplexor and an Egress Queue, the latter sending aggregated messages back to the Packet Processing circuit. The number of aggregated messages may be smaller than the number of the non-aggregated messages, improving overall performance.
As would be appreciated, the message aggregation circuit structure illustrated in
If, in step 304, there is no Aggregation Circuit with a destination ID matching the specified destination, the Message Classifier will enter a Check-Aggregation-Needed step 308, and check whether the message should be aggregated. In some embodiments, only messages to predefined destinations should be aggregated; in an embodiment, predefined ranges of destinations may be defined, and any message to a destination that is not within the specified ranges should not be aggregated. In another embodiment, aggregation is a property of the egress queue. In some other embodiments, messages with a size exceeding a predefined threshold should not be aggregated, and in yet other embodiments an application may indicate which messages should (or should not) be aggregated, and when the aggregation should stop.
If, in step 308, the message should not be aggregated, the Message Classifier enters a Post-Message step 310, and posts the message in Egress Queue 208 (
After either step 306 or step 310, the Message Classifier reenters step 302, to handle the next message.
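By way of illustration only, the Check-Aggregation-Needed decision of step 308 described above might resemble the following sketch; the configuration fields, the destination-range check, the size threshold and the application opt-out flag are hypothetical examples of the criteria listed above.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical criteria for the Check-Aggregation-Needed decision (step 308);
 * the range check, size threshold and application hint are examples only.   */
struct classifier_cfg {
    uint32_t dest_range_lo;        /* only destinations in [lo, hi] aggregate */
    uint32_t dest_range_hi;
    uint16_t max_aggregatable_len; /* larger messages bypass aggregation      */
};

static bool should_aggregate(const struct classifier_cfg *cfg,
                             uint32_t dest_id, uint16_t msg_len,
                             bool app_opt_out /* per-message application hint */)
{
    if (app_opt_out)
        return false;
    if (dest_id < cfg->dest_range_lo || dest_id > cfg->dest_range_hi)
        return false;
    if (msg_len > cfg->max_aggregatable_len)
        return false;
    return true;
}
```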
If, in step 404, the aggregated message is not greater than the preset threshold, the Aggregation Control circuit enters a Check-Timeout step 408, and checks whether a preset time limit, measured from the time at which the Aggregation Circuit was allocated, has been reached. In some embodiments, step 408 is useful to guarantee a maximum latency specification. If the preset time limit has been reached, the Aggregation Control circuit enters Post-Message step 406, to post the message and reallocate the Aggregation Circuit. If, in step 408, the time limit has not been reached, the Aggregation Control circuit enters a Check-Bandwidth step 410. In some embodiments, a minimum bandwidth is specified, and message aggregation should guarantee a bandwidth equal to or greater than the specified minimum. In an embodiment, the bandwidth is measured and, if the specified minimum is met, the aggregation may be relaxed (e.g., to shorten the latency). In step 410, if the measured bandwidth is greater than a predefined threshold (which is typically higher than the specified minimum bandwidth by some margin), the Aggregation Control circuit enters Post-Message step 406. If, in step 410, the bandwidth is not higher (or not sufficiently higher) than the specified minimum, none of the deallocation criteria is met, and the Aggregation Control circuit enters an Increment-i step 412 to increment the destination index, and then reenters step 402, to check the next Aggregation Circuit (the Aggregation Control circuit also enters step 412 after Post-Message step 406).
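A minimal sketch of the per-circuit deallocation decision (steps 404, 408 and 410) is shown below; the threshold fields and the bandwidth-measurement input are illustrative assumptions, not a definitive implementation.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative per-circuit state and policy; field names are assumptions.   */
struct agg_state {
    bool     allocated;
    uint32_t bytes;              /* current size of the aggregated message   */
    uint64_t alloc_time_ns;      /* time at which the circuit was allocated  */
};

struct agg_policy {
    uint32_t max_bytes;          /* step 404: size threshold                 */
    uint64_t max_latency_ns;     /* step 408: time limit                     */
    uint64_t bw_relax_bytes_ps;  /* step 410: bandwidth above which to relax */
};

/* Returns true when the circuit should be posted (flushed) and reallocated. */
static bool should_deallocate(const struct agg_state *s, const struct agg_policy *p,
                              uint64_t now_ns, uint64_t measured_bw_bytes_ps)
{
    if (!s->allocated || s->bytes == 0)
        return false;
    if (s->bytes >= p->max_bytes)                         /* step 404 */
        return true;
    if (now_ns - s->alloc_time_ns >= p->max_latency_ns)   /* step 408 */
        return true;
    if (measured_bw_bytes_ps > p->bw_relax_bytes_ps)      /* step 410 */
        return true;    /* bandwidth target already met, relax to cut latency */
    return false;
}
```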
As would be appreciated, flowcharts 300 and 400, illustrated in
In the description hereinabove, messages with shared destination may be aggregated. In some embodiments, messages within the aggregated message may be further aggregated, according to criteria other than destination ID, for further performance improvement. For example, an aggregated message to processes in a remote host may comprise messages to the same process running in the host. In some embodiments, messages to the same process are further aggregated within the aggregated message to the host, saving overhead in the destination (such secondary aggregation is also referred to as “aggregated message compression”).
In some embodiments, data that is written to neighboring segments in a memory of the destination processor may be aggregated; e.g., a message to write data in addresses 0-63 may be aggregated with a message to write data in addresses 64-127, to form a message that writes data in addresses 0-127 (within the aggregated message to the host).
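The following sketch illustrates this kind of compression for write-type messages; the descriptor fields are assumptions, and the payloads of the two writes are assumed to already lie contiguously in the aggregation buffer, so that only the descriptors need to be merged.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical descriptor of a write-type message within an aggregate.      */
struct write_msg {
    uint64_t addr;    /* start address in the destination memory             */
    uint32_t len;     /* number of bytes written                             */
};

/* Aggregated-message compression: if b starts exactly where a ends, join the
 * two writes into a single write covering the combined range.               */
static bool try_merge_writes(struct write_msg *a, const struct write_msg *b)
{
    if (a->addr + a->len != b->addr)
        return false;             /* not adjacent: keep as separate messages */
    a->len += b->len;             /* e.g., [0..63] + [64..127] -> [0..127]   */
    return true;
}
```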
In some embodiments of the present invention, messages are aggregated based on the next-hop node in the message propagation. For example, if a Compute Node sends messages to a plurality of different peer computers, but a group of messages is first routed through a shared switch (“first hop”), the compute node may aggregate the messages that share the same first hop. The switch will comprise a disaggregation circuit, to disaggregate the messages, and forward the disaggregated messages to their destinations. In some embodiments, the switch may comprise a message aggregation circuit, to aggregate egress messages, including disaggregated messages sent from the previous hop and other messages. In embodiments, multiple switches may comprise disaggregation and aggregation circuits and, hence, message aggregation and disaggregation are distributed in both the network adapters and the network switches of the parallel computing system.
In some embodiments, the network adapters may be partially synchronized by sending messages to similar destinations at similar time slots—this increases the probability that the messages will be aggregated at the next hop within a given timeframe.
Switch 506 may comprise a Message Aggregation Circuit 512, which is configured to aggregate egress messages. According to the example embodiment illustrated in
As would be appreciated, the structure of switch 504, illustrated in
In various embodiments, aggregation is carried out in various communication layers, such as the Transport layer, Network layer and Link layer, wherein aggregation at a deeper layer may result in more efficient aggregation. For example, when aggregating at the Transport layer, a single network acknowledgment acknowledges completion of work posted by multiple processes; the MAC therefore needs to record which of the multiple per-process work requests were completed by the single acknowledgement.
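One possible, purely illustrative form of such bookkeeping is sketched below, assuming a hypothetical record that lists the per-process work requests carried by each aggregated packet; all names, sizes and the printed output are assumptions made for the example.

```c
#include <stdint.h>
#include <stdio.h>

#define MAX_WR_PER_AGG 32

/* Hypothetical bookkeeping: one transport-level acknowledgement covers work
 * requests posted by several processes, so the record lists every constituent
 * work request carried by the aggregated packet.                            */
struct agg_completion_record {
    uint32_t psn;                 /* sequence number of the aggregated packet */
    uint16_t wr_count;
    struct { uint32_t process_id; uint64_t wr_id; } wr[MAX_WR_PER_AGG];
};

/* Stub for illustration; a real device would signal each process's
 * completion queue instead of printing.                                      */
static void complete_work_request(uint32_t process_id, uint64_t wr_id)
{
    printf("process %u: work request %llu completed\n",
           process_id, (unsigned long long)wr_id);
}

/* On receiving the single acknowledgement, fan the completion out to every
 * per-process work request recorded for that aggregate.                     */
static void handle_aggregate_ack(const struct agg_completion_record *rec)
{
    for (uint16_t i = 0; i < rec->wr_count; i++)
        complete_work_request(rec->wr[i].process_id, rec->wr[i].wr_id);
}
```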
In an embodiment, aggregation may involve out-of-order completion; in this case, the MAC should complete the aggregation only upon receiving the full sequence of completions, or else report the out-of-order completion to the requesting source.
It should be mentioned that aggregation and disaggregation may be used both for one-sided Remote-Direct-Memory-Access (RDMA) transactions and for message SEND operations; note, though, that address aggregation may not be applicable to a SEND operation, which may not have an associated address. Some messages (e.g., RDMA READ and WRITE) may be regarded as “address-based,” in which case the aggregation, too, may be based on the addresses of the messages. Other messages may not be address-based.
The configuration of Network Adapters 112 and 118, and their components, e.g., MAC 122, MDC 126; the components of MAC 122 (e.g., Message Classifier 200, Aggregation Circuits 202, Aggregation Control 204, Multiplexor 206 and Egress Queue 208); and the methods of flowcharts 300 and 400, illustrated in
In some embodiments, Host 108, Host 114, and certain elements of the Network Adapters and the Switches may be implemented using one or more general-purpose programmable processors, which are programmed in software to carry out the functions described herein. The software may be downloaded to the processors in electronic form, over a network, for example, or it may, alternatively or additionally, be provided and/or stored on non-transitory tangible media, such as magnetic, optical, or electronic memory.
Although the embodiments described herein mainly address message aggregation in parallel computing systems, the methods and systems described herein can also be used in other applications, such as PCIe and/or CXL tunneling.
It will be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and sub-combinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.
This application claims the benefit of U.S. Provisional Patent Application 62/961,232, filed Jan. 15, 2020, whose disclosure is incorporated herein by reference.
Other Publications:

Mellanox Technologies, "InfiniScale IV: 36-port 40GB/s Infiniband Switch Device", pp. 1-2, year 2009.
Mellanox Technologies Inc., "Scaling 10Gb/s Clustering at Wire-Speed", pp. 1-8, year 2006.
IEEE 802.1D Standard "IEEE Standard for Local and Metropolitan Area Networks—Media Access Control (MAC) Bridges", IEEE Computer Society, pp. 1-281, Jun. 9, 2004.
IEEE 802.1AX Standard "IEEE Standard for Local and Metropolitan Area Networks—Link Aggregation", IEEE Computer Society, pp. 1-163, Nov. 3, 2008.
Turner et al., "Multirate Clos Networks", IEEE Communications Magazine, pp. 1-11, Oct. 2003.
Thayer School of Engineering, "A Slightly Edited Local Copy of Elements of Lectures 4 and 5", Dartmouth College, pp. 1-5, Jan. 15, 1998, http://people.seas.harvard.edu/˜jones/cscie129/nu_lectures/lecture11/switching/clos_network/clos_network.html.
"MPI: A Message-Passing Interface Standard," Message Passing Interface Forum, version 3.1, pp. 1-868, Jun. 4, 2015.
Coti et al., "MPI Applications on Grids: a Topology Aware Approach," Proceedings of the 15th International European Conference on Parallel and Distributed Computing (EuroPar'09), pp. 1-12, Aug. 2009.
Petrini et al., "The Quadrics Network (QsNet): High-Performance Clustering Technology," Proceedings of the 9th IEEE Symposium on Hot Interconnects (HotI'01), pp. 1-6, Aug. 2001.
Sancho et al., "Efficient Offloading of Collective Communications in Large-Scale Systems," Proceedings of the 2007 IEEE International Conference on Cluster Computing, pp. 1-10, Sep. 17-20, 2007.
Nudelman et al., U.S. Appl. No. 17/120,321, filed Dec. 14, 2020.
InfiniBand Architecture Specification, vol. 1, Release 1.2.1, pp. 1-1727, Nov. 2007.
Deming, "Infiniband Architectural Overview", Storage Developer Conference, pp. 1-70, year 2013.
Fugger et al., "Reconciling fault-tolerant distributed computing and systems-on-chip", Distributed Computing, vol. 24, Issue 6, pp. 323-355, Jan. 2012.
Wikipedia, "System on a chip", pp. 1-4, Jul. 6, 2018.
Villavieja et al., "On-chip Distributed Shared Memory", Computer Architecture Department, pp. 1-10, Feb. 3, 2011.
Ben-Moshe et al., U.S. Appl. No. 16/750,019, filed Jan. 23, 2020.
Bruck et al., "Efficient Algorithms for All-to-All Communications in Multiport Message-Passing Systems", IEEE Transactions on Parallel and Distributed Systems, vol. 8, No. 11, pp. 1143-1156, Nov. 1997.
Gainaru et al., "Using InfiniBand Hardware Gather-Scatter Capabilities to Optimize MPI All-to-All", EuroMPI '16, Edinburgh, United Kingdom, pp. 1-13, year 2016.
Pjesivac-Grbovic et al., "Performance analysis of MPI collective operations", Cluster Computing, pp. 1-25, 2007.
Bruck et al., "Efficient Algorithms for All-to-All Communications in Multiport Message-Passing Systems", Proceedings of the Sixth Annual ACM Symposium on Parallel Algorithms and Architectures, pp. 298-309, Aug. 1, 1994.
Chiang et al., "Toward supporting data parallel programming on clusters of symmetric multiprocessors", Proceedings of the International Conference on Parallel and Distributed Systems, pp. 607-614, Dec. 14, 1998.
Danalis et al., "PTG: an abstraction for unhindered parallelism", 2014 Fourth International Workshop on Domain-Specific Languages and High-Level Frameworks for High Performance Computing, pp. 1-10, Nov. 17, 2014.
Cosnard et al., "Symbolic Scheduling of Parameterized Task Graphs on Parallel Machines," Combinatorial Optimization book series (COOP, vol. 7), pp. 217-243, year 2000.
Jeannot et al., "Automatic Multithreaded Parallel Program Generation for Message Passing Multiprocessors using Parameterized Task Graphs", World Scientific, pp. 1-8, Jul. 23, 2001.
Stone, "An Efficient Parallel Algorithm for the Solution of a Tridiagonal Linear System of Equations," Journal of the Association for Computing Machinery, vol. 10, No. 1, pp. 27-38, Jan. 1973.
Kogge et al., "A Parallel Algorithm for the Efficient Solution of a General Class of Recurrence Equations," IEEE Transactions on Computers, vol. C-22, No. 8, pp. 786-793, Aug. 1973.
Hoefler et al., "Message Progression in Parallel Computing—To Thread or not to Thread?", 2008 IEEE International Conference on Cluster Computing, pp. 1-10, Tsukuba, Japan, Sep. 29-Oct. 1, 2008.
Wikipedia, "Loop unrolling," pp. 1-9, last edited Sep. 9, 2020, downloaded from https://en.wikipedia.org/wiki/Loop_unrolling.
Chapman et al., "Introducing OpenSHMEM: SHMEM for the PGAS Community," Proceedings of the Fourth Conference on Partitioned Global Address Space Programming Model, pp. 1-4, Oct. 2010.
Priest et al., "You've Got Mail (YGM): Building Missing Asynchronous Communication Primitives", IEEE International Parallel and Distributed Processing Symposium Workshops, pp. 221-230, year 2019.
Wikipedia, "Nagle's algorithm", pp. 1-4, Dec. 12, 2019.
U.S. Appl. No. 16/430,457 Office Action dated Jul. 9, 2021.
Yang et al., "SwitchAgg: A Further Step Toward In-Network Computing," 2019 IEEE International Conference on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking, pp. 36-45, Dec. 2019.
EP Application # 20216972 Search Report dated Jun. 11, 2021.
U.S. Appl. No. 16/782,118 Office Action dated Jun. 3, 2021.
U.S. Appl. No. 16/789,458 Office Action dated Jun. 10, 2021.
U.S. Appl. No. 16/750,019 Office Action dated Jun. 15, 2021.
U.S. Appl. No. 16/782,118 Office Action dated Nov. 8, 2021.
"Message Passing Interface (MPI): History and Evolution," Virtual Workshop, Cornell University Center for Advanced Computing, NY, USA, pp. 1-2, year 2021, as downloaded from https://cvw.cac.cornell.edu/mpi/history.
Pacheco, "A User's Guide to MPI," Department of Mathematics, University of San Francisco, CA, USA, pp. 1-51, Mar. 30, 1998.
Wikipedia, "Message Passing Interface," pp. 1-16, last edited Nov. 7, 2021, as downloaded from https://en.wikipedia.org/wiki/Message_Passing_Interface.
EP Application # 21183290.2 Search Report dated Dec. 8, 2021.
U.S. Appl. No. 16/782,118 Office Action dated Jun. 15, 2022.
U.S. Appl. No. 16/782,118 Office Action dated Sep. 7, 2022.
U.S. Appl. No. 17/495,824 Office Action dated Jan. 27, 2023.