Small message aggregation

Information

  • Patent Grant
  • Patent Number
    11,750,699
  • Date Filed
    Wednesday, January 13, 2021
  • Date Issued
    Tuesday, September 5, 2023
Abstract
An apparatus includes one or more ports for connecting to a communication network, processing circuitry and a message aggregation circuit (MAC). The processing circuitry is configured to communicate messages over the communication network via the one or more ports. The MAC is configured to receive messages, which originate in one or more source processes and are destined to one or more destination processes, to aggregate two or more of the messages that share a common destination into an aggregated message, and to send the aggregated message using the processing circuitry over the communication network.
Description
FIELD OF THE INVENTION

The present invention relates generally to computer networks, and specifically to process-to-process message communication over computer networks.


BACKGROUND

Parallel computation algorithms often entail frequent sending of short data messages between processors over a communication network. Efficient management of inter-processor messages is discussed, for example, in “Efficient Algorithms for All-to-All Communications in Multiport Message-Passing Systems,” Bruck et al., IEEE Transactions On Parallel And Distributed Systems, Vol. 8, No. 11, November 1997, wherein the authors present efficient algorithms for two all-to-all communication operations in message-passing systems.


The Message Passing Interface (MPI) is the de-facto standard for message handling in distributed computing. The standard is defined by the Message Passing Interface Forum, and includes point-to-point message-passing, collective communications, group and communicator concepts, process topologies, environmental management, process creation and management, one-sided communications, extended collective operations, external interfaces, I/O, some miscellaneous topics, and a profiling interface. The latest publication of the standard is “MPI: A Message-Passing Interface Standard Version 3.0,” Message Passing Interface Forum, Sep. 21, 2012. For summaries of some of the main topics, see, for example, chapters 1, 3.1 through 3.4, 5.1, 6.1 and 7.1. Another commonly used distributed processing framework is OpenSHMEM; see, for example, “Introducing OpenSHMEM: SHMEM for the PGAS community,” Chapman et al., Proceedings of the Fourth Conference on Partitioned Global Address Space Programming Model, October 2010 (ISBN: 978-1-4503-0461-0).


SUMMARY

An embodiment of the present invention that is described herein provides an apparatus including one or more ports for connecting to a communication network, processing circuitry and a message aggregation circuit (MAC). The processing circuitry is configured to communicate messages over the communication network via the one or more ports. The MAC is configured to receive messages, which originate in one or more source processes and are destined to one or more destination processes, to aggregate two or more of the messages that share a common destination into an aggregated message, and to send the aggregated message using the processing circuitry over the communication network.


In an embodiment, the apparatus further includes a host interface for connecting to one or more local processors, and the MAC is configured to receive one or more of the messages from the one or more local processors over the host interface. Additionally or alternatively, the MAC is configured to receive one or more of the messages from one or more remote processors over the communication network, via the ports.


In a disclosed embodiment, the two or more messages share a common destination network node, and the MAC is configured to cause the processing circuitry to send the aggregated message to the common destination network node. In another embodiment, the two or more messages share a common destination path via the network, and the MAC is configured to cause the processing circuitry to send the aggregated message to the common destination path. In an embodiment, the MAC is configured to compress the messages by joining messages that are destined to neighboring address ranges defined in the common destination.


In an example embodiment, the MAC is configured to terminate aggregation of the aggregated message responsive to expiry of a timeout. In another embodiment, the MAC is configured to terminate aggregation of the aggregated message responsive to a total size of the aggregated message reaching a predefined limit. In yet another embodiment, the MAC is configured to terminate aggregation of the aggregated message responsive to receiving an aggregation termination request. Typically, the MAC is configured to aggregate the messages as part of transport-layer processing.


In some embodiments, the messages include at least read requests, and the MAC is configured to aggregate at least the read requests into the aggregated message, and, upon receiving one or more aggregated responses in response to the aggregated message, to disaggregate the one or more aggregated responses at least into multiple read responses that correspond to the read requests. In some embodiments, the MAC is configured to aggregate in the aggregated message one or more additional messages in addition to the read requests.


In some embodiments, the messages include at least one message type selected from a group of types consisting of Remote Direct Memory Access (RDMA) READ messages, RDMA WRITE messages, and RDMA ATOMIC messages. In some embodiments, the one or more ports, the processing circuitry and the MAC are included in a network device.


There is additionally provided, in accordance with an embodiment of the present invention, an apparatus including one or more ports for connecting to a communication network, processing circuitry and a message disaggregation circuit (MDC). The processing circuitry is configured to communicate messages over the communication network via the one or more ports. The MDC is configured to receive from the processing circuitry an aggregated message, which was aggregated from two or more messages originating in one or more source processes and destined to one or more destination processes, to disaggregate the aggregated message into the two or more messages, and to send the two or more messages to the one or more destination processes.


Typically, the MDC is configured to disaggregate the aggregated message as part of transport-layer processing. In some embodiments, the aggregated message includes at least read requests, the MDC is configured to disaggregate the aggregated message into at least the read requests, and the apparatus further includes a message aggregation circuit (MAC) configured to receive read responses corresponding to the read requests, to aggregate the read responses into one or more aggregated responses, and to send the one or more aggregated responses using the processing circuitry over the communication network.


In an embodiment, the MAC is configured to group the read responses in the one or more aggregated responses in a grouping that differs from the grouping of the read requests in the aggregated message. In some embodiments, the messages include at least one message type selected from a group of types consisting of Remote Direct Memory Access (RDMA) READ messages, RDMA WRITE messages, and RDMA ATOMIC messages. In some embodiments, the one or more ports, the processing circuitry and the MDC are included in a network device.


There is further provided, in accordance with an embodiment of the present invention, a method including communicating messages, which originate in one or more source processes and are destined to one or more destination processes, over a communication network. Two or more of the messages, which share a common destination, are aggregated into an aggregated message. The aggregated message is sent over the communication network.


There is also provided, in accordance with an embodiment of the present invention, a method including communicating messages over a communication network, including receiving an aggregated message, which was aggregated from two or more messages originating in one or more source processes and destined to one or more destination processes. The aggregated message is disaggregated into the two or more messages. The two or more messages are sent to the one or more destination processes.





The present invention will be more fully understood from the following detailed description of the embodiments thereof, taken together with the drawings in which:



FIG. 1 is a block diagram that schematically illustrates a Parallel-Computing System, in which a plurality of compute nodes exchange messages over a communication network;



FIG. 2 is a block diagram that schematically illustrates a Message-Aggregation Circuit (MAC), in accordance with an embodiment of the present invention;



FIG. 3 is a flowchart that schematically illustrates a method for sending messages to aggregation circuits, in accordance with an embodiment of the present invention;



FIG. 4 is a flowchart that schematically illustrates a method for deallocating and emptying aggregation circuits, in accordance with an embodiment of the present invention; and



FIG. 5 is a block diagram that schematically illustrates the structure of a parallel computing system, with distributed message aggregation and disaggregation, in accordance with an embodiment of the present invention.





DETAILED DESCRIPTION OF EMBODIMENTS
Overview

Parallel algorithms that generate large numbers of small data packets with point-to-point communication semantics, such as graph algorithms, often utilize a very small portion of the available network bandwidth. Small data packets are defined here as packets whose payload is similar in size to, or smaller than, the associated network headers sent in the packet. The poor network utilization, sometimes on the order of single-digit percentages, is caused by the bandwidth needed to transfer the network headers being similar to or greater than that needed for the payload, and by limits on the rate at which network hardware can communicate data over the network.
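For intuition, the following back-of-the-envelope calculation (with illustrative header and payload sizes that are assumed for the example, not taken from this disclosure) shows why such small packets use only a fraction of the link bandwidth, and how aggregation recovers most of it:

```python
# Illustrative only: the header and payload sizes below are assumptions
# chosen for the example, not values specified by the embodiments.
HEADER_BYTES = 70   # assumed combined link/network/transport header overhead
PAYLOAD_BYTES = 32  # assumed "small message" payload

def payload_utilization(payload: int, header: int) -> float:
    """Fraction of transmitted bytes that carry application payload."""
    return payload / (payload + header)

# A single small message: most of the wire bytes are header.
print(f"{payload_utilization(PAYLOAD_BYTES, HEADER_BYTES):.1%}")        # ~31.4%

# Aggregating 16 such payloads behind one shared header raises the fraction.
print(f"{payload_utilization(16 * PAYLOAD_BYTES, HEADER_BYTES):.1%}")   # ~88.0%
```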


The main contributors to the performance degradation are:


1. Large per-message protocol-processing overheads.


2. Limited rate at which messages can be processed by the network-interface controller.


3. Limited rate at which packets can be processed by a switch.


4. Large packet overheads relative to network packet payload size.


Embodiments of the present invention that are disclosed herein provide methods and systems for aggregating egress messages, which may reduce the overhead and improve the multi-computer system performance. In some embodiments, a Message Aggregation Circuit (MAC) is added to the egress path of network devices; the MAC may aggregate messages that share the same destination, allowing the network device to send a smaller number of larger aggregated messages, reducing the total cost of the message overhead.


In some embodiments, aggregation is performed by a network adapter in a compute node, wherein the network adapter aggregates messages generated by processes running in the compute node. This sort of aggregation is sometimes referred to as “source aggregation.” In other embodiments, aggregation is carried out by a network switch, which aggregates messages received over the network. This sort of aggregation is sometimes referred to as “intermediate aggregation.” Hybrid aggregation schemes, in which an aggregated message is formed from both locally generated messages and messages received over the network, are also possible. For a given message, the process generating the message is referred to herein as a “source process” and the process to which the message is destined is referred to as a “destination process”. Generally, the disclosed aggregation techniques may be carried out in any suitable type of network device, e.g., network adapter, switch, router, hub, gateway, network-connected Graphics Processing unit (GPU), and the like.


The term “common destination” used for aggregation may refer to, for example, a common destination compute node, or a common destination path via the network. When aggregating messages destined to a common destination compute node, individual messages in the aggregated message may be addressed to different processors and/or processes in the common destination compute node. When aggregating messages destined to a common destination path, individual messages in the aggregated message may be addressed to different compute nodes, processors and/or processes reachable via the common destination path.


In some embodiments, the aggregation of egress messages to create an aggregated message may stop when a time limit has expired, or when a buffer size has been reached. In other embodiments, the aggregation may stop when a minimum bandwidth specification is met.


In some embodiments, an aggregation hierarchy is implemented, wherein messages within an aggregated message may be further aggregated; e.g., messages that write to neighboring segments of a memory may be aggregated to a larger message that writes into the combined memory space (such aggregation will be sometimes referred to as aggregated message compression).


Other embodiments of the present invention comprise a message disaggregation circuit (MDC), which is configured to break the aggregated messages back into the discrete original messages.


In some embodiments, aggregation is done based on the next hop in the network fabric. For example, if a network adapter sends messages to a plurality of destinations, but a group of the messages is first sent to the same switch in the communication network, the network adapter may aggregate the group of messages and send the aggregated message to the switch, which may then disaggregate the aggregated message and send the original messages to the corresponding destinations. In some embodiments, various switches in the communication network may aggregate and disaggregate messages.


Thus, in embodiments, the efficiency of message communication between network elements may be enhanced by sharing the communication overhead between groups of messages that are aggregated.


More details will be disclosed in the System Description hereinbelow, with reference to example embodiments.


System Description

Parallel computing systems, in which computers that run a shared task communicate with each other over a communication network, typically comprise network-connected devices such as Network-Interface Controllers (NICs), Host Channel Adapters (HCAs), switches, routers, hubs and so on. The computers that run the shared task are typically connected to the network through a network adapter (a NIC in Ethernet nomenclature, an HCA in InfiniBand™ nomenclature, or similar for other communication networks); however, the parallel computing tasks may also be run by computers that are coupled to other network elements such as switches.


Messages that the computers send to each other are typically sent by egress packets, which may or may not be acknowledged, using communication protocols such as Dynamic Connection (DC), Reliable Connection (RC) and others.



FIG. 1 is a block diagram that schematically illustrates a Parallel-Computing System 100, in which a plurality of compute nodes exchange messages over a communication network. The Parallel-Computing System comprises a Compute Node 102, a Communication Network 104 (e.g., InfiniBand™ (IB), or Ethernet) and a Remote Compute Node 106, wherein both compute nodes 102 and 106 (and typically many other compute nodes that are not shown) are coupled to each other through the communication network. Compute Node 102 comprises a Host Processor 108 that runs parallel computing processes 110 and a Network Adapter 112 that is configured to communicate messages over the communication network with peer compute nodes, including Remote Compute Node 106.


Remote Compute Node 106 comprises a Host Processor 114 that runs parallel computing processes 116, and a Network Adapter 118 that is configured to communicate messages over Network 104.


When Parallel-Computing System 100 runs a parallel computing job, processes throughout the system may communicate messages with peer processes. For example, one or more processes 110 running on Host 108 may send messages to one or more processes 116 that run on Host 114. Such messages may be short, and, as the overhead for each message is large (relative to the message size), may adversely affect the system performance if sent separately. As noted above, the process generating a certain message is referred to as the source process of that message, and the process to which the message is destined is referred to as the destination process.


According to the example embodiment illustrated in FIG. 1, Network Adapter 112 comprises processing circuitry (in the present example a Packet Processing circuit 120) and a Message Aggregation Circuit (MAC) 122. When Packet Processing circuit 120 receives, from Host 108, messages to be sent over the communication network, the Packet Processing circuit sends the messages to the MAC. The MAC checks the destination of the messages and may aggregate a plurality of messages that are destined to the same peer host (same destination compute node) into a single aggregated message. The MAC then sends the aggregated message to the Packet-Processing circuit, which communicates the message over the network. Thus, the overhead associated with the sending of a message is shared between a plurality of messages, mitigating the ensuing performance degradation.


In some embodiments, when the MAC aggregates multiple messages having the same destination, the MAC may strip off the common destination fields of the messages (sending a single destination header instead), and possibly strip off additional header fields. Typically, however, the MAC will not strip off header fields that are not shared by the individual messages, e.g., source identification (when relevant).
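As a purely illustrative sketch (the field names, sizes and layout below are assumptions made for the example, not the actual format used by MAC 122), an aggregated message could carry a single destination header followed by length-delimited member messages:

```python
import struct

# Hypothetical aggregated-message layout, for illustration only:
#   aggregated header: destination id (4 bytes), message count (2 bytes)
#   per message:       length (2 bytes) followed by the payload bytes
AGG_HDR = struct.Struct("!IH")
MSG_HDR = struct.Struct("!H")

def aggregate(dest_id: int, payloads: list[bytes]) -> bytes:
    """Join messages sharing a destination under one destination header."""
    body = b"".join(MSG_HDR.pack(len(p)) + p for p in payloads)
    return AGG_HDR.pack(dest_id, len(payloads)) + body

packed = aggregate(dest_id=7, payloads=[b"hello", b"world"])
```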


At the destination, Network Adapter 118 of Remote Compute Node 106 comprises a Packet Processing circuit 124 and a Message Disaggregation Circuit (MDC) 126. The Packet Processing circuit sends ingress messages to the MDC. If any message is aggregated, the MDC reconstructs the original messages by disaggregating the aggregated message to separate messages, and then sends the messages back to the Packet Processing circuit, which may send the messages to Host 114.
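The receiving side reverses the operation; the sketch below is the counterpart of the hypothetical format above (again, the layout and names are assumptions for illustration, not the MDC's actual wire format):

```python
import struct

# Same hypothetical layout as in the aggregation sketch above:
#   destination id (4 bytes), message count (2 bytes),
#   then each message as a 2-byte length followed by its payload.
AGG_HDR = struct.Struct("!IH")
MSG_HDR = struct.Struct("!H")

def disaggregate(aggregated: bytes) -> tuple[int, list[bytes]]:
    """Recover the destination id and the original member messages."""
    dest_id, count = AGG_HDR.unpack_from(aggregated, 0)
    offset, messages = AGG_HDR.size, []
    for _ in range(count):
        (length,) = MSG_HDR.unpack_from(aggregated, offset)
        offset += MSG_HDR.size
        messages.append(aggregated[offset:offset + length])
        offset += length
    return dest_id, messages
```

Round-tripping a few short payloads through the two sketches reproduces the original messages and destination identifier.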


As would be appreciated, Network Adapters 112 and 118 illustrated in FIG. 1 and described hereinabove are shown by way of example. In alternative embodiments, for example, either or both of Network Adapters 112 and 118 may comprise both a MAC and an MDC, for bidirectional communication of aggregated messages. In some embodiments, aggregation of read operations is supported: both the source and destination network adapters comprise MDCs; read requests are aggregated into a single message that is processed at the target network adapter; multiple read responses are aggregated there into a single message, which is disaggregated when arriving back at the source network adapter. In such embodiments, additional messages may be aggregated at the source network adapter together with the multiple read requests.


Moreover, the aggregation (grouping) of read responses at the target network adapter may differ from the original aggregation (grouping) of read requests at the source network adapter. In one simplified example, the source network adapter may aggregate two requests “req0” and “req1” into an aggregated message and send a third requests “req2” individually. In response, the target network adapter may send a response to req0 (denoted “res0”) individually, and aggregate the responses to req1 and req2 (denoted “res1” and “res2”) in an aggregated response message.


In embodiments, atomic reads and writes may also be aggregated. In yet other embodiments, multiple transaction types may be combined into a single aggregated message.


In an embodiment, the MAC may be implemented as a separate dedicated block on a device (e.g., a processor (such as a CPU or GPU) or an FPGA) connected to a standard network adapter that does not include a MAC. In some embodiments, a single process may run on Host 114. In an embodiment, a single process runs on Host 108, and the MAC aggregates messages that the single process generates (and that are destined to the same Remote Compute Node). In some embodiments, Compute Node 102 and/or Compute Node 106 comprise more than one Host and/or more than one Network Adapter; in an embodiment, processes 110 may run on a peer device such as a GPU or an FPGA.


In an embodiment, Packet Processing circuit 124 detects aggregated messages, and sends to the MDC only packets that need to be disaggregated. In another embodiment, MDC 126 sends the disaggregated messages directly to Host 114.



FIG. 2 is a block diagram that schematically illustrates Message-Aggregation Circuit (MAC) 122, in accordance with an embodiment of the present invention. The MAC comprises a Message Classifier 200, which is configured to classify messages according to destinations; a plurality of Aggregation Circuits 202, which may be allocated to aggregate messages for given destinations; an Aggregation Control circuit 204, which is configured to control the aggregation; a Multiplexor 206, which is configured to select an aggregated message from the plurality of Aggregation Circuits 202; and, an Egress Queue 208, which is configured to temporarily store aggregated messages until the messages are handled by Packet-Processing 120 (FIG. 1).


The Message Classifier receives messages to specified destinations from the packet processing circuit, and checks whether the messages should and could be aggregated (examples of messages that should not be aggregated and of messages that cannot be aggregated will be described hereinbelow, with reference to FIG. 3). If the message should not or cannot be aggregated, the Message Classifier sends the message directly to Egress Queue 208. If the message should and could be aggregated, the Message Classifier sends the message to the one of Aggregation Circuits 202 that is allocated to messages with the destination specified for the current message, or, if no Aggregation Circuit is allocated to the specified destination, the Message Classifier allocates a free Aggregation Circuit and sends the message thereto.


Aggregation Circuits 202 are configured to store aggregated messages. Typically, the Aggregation Circuit adds metadata to the message, e.g., to specify message boundaries. When a new message is to be added to an aggregated message, the Aggregation Circuit adds the new message to the stored aggregated message and may modify the metadata accordingly. In embodiments, an Aggregation Circuit that aggregates messages with a specified destination is marked with the destination ID.
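A minimal software model of such an Aggregation Circuit (the field names are assumptions chosen for the sketch, not the circuit's actual registers) might track the destination ID, the allocation time and per-message boundary metadata as follows:

```python
import time
from dataclasses import dataclass, field

@dataclass
class AggregationCircuit:
    """Holds one in-progress aggregated message for a single destination."""
    dest_id: int
    alloc_time: float = field(default_factory=time.monotonic)
    boundaries: list[int] = field(default_factory=list)   # per-message lengths
    buffer: bytearray = field(default_factory=bytearray)  # aggregated payload

    def add(self, payload: bytes) -> None:
        # Append the new message and update the boundary metadata.
        self.boundaries.append(len(payload))
        self.buffer.extend(payload)

    @property
    def size(self) -> int:
        return len(self.buffer)
```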


Aggregation Control circuit 204 is configured to determine if any of the Aggregation Circuits should be deallocated (e.g., emptied and made ready to be reallocated). (Example criteria for this decision will be described hereinbelow, with reference to FIG. 4.) The Aggregation Control circuit controls Multiplexor 206 to forward the aggregated message from the Aggregation Circuit that is to be deallocated to Egress Queue 208, which, in turn, sends the aggregated messages to the Packet Processing circuit.


In summary, MAC 122 receives messages from Packet Processing circuit 120 and stores some of the messages in Aggregation Circuits that are allocated to specified message destinations. The Aggregation Control circuit empties the Aggregation Circuits through the Multiplexor and the Egress Queue, the latter sending aggregated messages back to the Packet Processing circuit. The number of aggregated messages may be smaller than the number of the non-aggregated messages, improving overall performance.


As would be appreciated, the message aggregation circuit structure illustrated in FIG. 2 and described hereinabove is cited by way of example; the present invention is by no means limited to the described embodiment. In alternative embodiments, for example, there is no Egress Queue, and the MAC sends the aggregated messages directly to buffers in the Packet Processing circuit. In an embodiment, Message Classifier 200 and/or Aggregation Control circuit 204 are distributed in the Aggregation Circuits.



FIG. 3 is a flowchart 300 that schematically illustrates a method for sending messages to Aggregation Circuits, in accordance with an embodiment of the present invention. The flowchart may be executed, for example, by Message Classifier 200 (FIG. 2). The flowchart starts at a Get-Next-Message step 302, wherein the Message Classifier receives a message from Packet Processing circuit 120 (FIG. 1). The message specifies a destination to which the message should be sent. The Message Classifier then, in a Check-if-Aggregation-Circuit-Exists step 304, checks if the destination ID of one of Aggregation Circuits 202 (FIG. 2) matches the specified destination. If so, the Message Classifier enters an Add-Message step 306, and sends the message to the corresponding Aggregation Circuit.


If, in step 304, there is no Aggregation Circuit with a destination ID matching the specified destination, the Message Classifier will enter a Check-Aggregation-Needed step 308, and check if the message should be aggregated. In some embodiments, only messages to predefined destinations should be aggregated; in an embodiment, predefined ranges of destinations may be defined, and any message to a destination that is not within the specified ranges should not be aggregated. In another embodiment, aggregation is a property of the egress queue. In some other embodiments, messages with a size exceeding a predefined threshold should not be aggregated, and in yet other embodiments an application may indicate which messages should (or should not) be aggregated, and when the aggregation should stop.


If, in step 308, the message should not be aggregated, the Message Classifier enters a Post-Message step 310, and posts the message in Egress Queue 208 (FIG. 2). If, in step 308, the message should be aggregated, the Message Classifier enters a Check-Free-Aggregation-Circuit step 312 and checks if there are Aggregation Circuits 202 (FIG. 2) that are not allocated. If so, the Message Classifier, in an Add-Message-New step 314, allocates an available Aggregation Circuit to the specified destination and sends the message to the new Aggregation Circuit. If, in step 312, there are no available Aggregation Circuits, the Message Classifier enters Post-Message step 310, and sends the message to the Egress Queue (in some embodiments, if no aggregation circuit is available, the message is temporarily stored in an Ingress Queue or in a dedicated queue).


After step 306, step 310 or step 314, the Message Classifier reenters step 302, to handle the next message.
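The classification flow of FIG. 3 can be modeled, for illustration, by the single-threaded sketch below; the policy helper, the queue objects and the circuit limit are assumptions introduced for the example:

```python
# Sketch of the FIG. 3 flow; not an actual hardware implementation.
MAX_CIRCUITS = 4
circuits: dict[int, list[bytes]] = {}   # destination ID -> pending payloads
egress_queue: list[bytes] = []

def should_aggregate(dest_id: int, payload: bytes) -> bool:
    # Placeholder policy (threshold assumed): aggregate only short messages.
    return len(payload) < 64

def classify(dest_id: int, payload: bytes) -> None:
    if dest_id in circuits:                       # step 304: circuit exists
        circuits[dest_id].append(payload)         # step 306: add to circuit
    elif not should_aggregate(dest_id, payload):  # step 308: aggregation needed?
        egress_queue.append(payload)              # step 310: post directly
    elif len(circuits) < MAX_CIRCUITS:            # step 312: free circuit available?
        circuits[dest_id] = [payload]             # step 314: allocate and add
    else:
        egress_queue.append(payload)              # step 310: no free circuit
```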



FIG. 4 is a flowchart 400 that schematically illustrates a method for deallocating and emptying Aggregation Circuits, in accordance with an embodiment of the present invention. The flowchart is executed by Aggregation Control circuit 204 (FIG. 2), which checks the aggregated messages against deallocation criteria. The flowchart starts at a Set-Destination step 402, wherein the Aggregation Control circuit defines the destination ID of the aggregation circuit to be checked, according to an index i. Next, in a Check-Size step 404, the Aggregation Control circuit checks the Aggregation Circuit (whose destination ID equals destination(i)) against a message-size criterion. For example, the accumulated size of the aggregated message is compared to a predefined threshold. If the message size is greater than the threshold, the Aggregation Control circuit enters a Post-Message step 406, wherein the Aggregation Control circuit posts the aggregated message that is stored in the aggregation circuit in the Egress Queue, and deallocates the aggregation circuit. In another example, an aggregation termination request may be embedded in the message.


If, in step 404, the aggregated message is not greater than the preset threshold, the Aggregation Control circuit enters a Check-Timeout step 408, and checks if a preset time limit, measured from the time at which the Aggregation Circuit was allocated, has been reached. In some embodiments, step 408 is useful to guarantee a maximum latency specification. If the preset time limit has been reached, the Aggregation Control circuit enters Post-Message step 406, to post the message and deallocate the Aggregation Circuit. If, in step 408, the time limit has not been reached, the Aggregation Control circuit enters a Check-Bandwidth step 410. In some embodiments, a minimum bandwidth is specified, and message aggregation should guarantee a bandwidth equal to or greater than the specified minimum. In an embodiment, the bandwidth is measured and, if the specified minimum is met, the aggregation may be relaxed (e.g., to shorten the latency). In step 410, if the measured bandwidth is higher than a predefined threshold (which is typically higher than the specified minimum bandwidth by some margin), the Aggregation Control circuit enters Post-Message step 406. If, in step 410, the bandwidth is not higher (or not sufficiently higher) than the specified minimum, none of the deallocation criteria is met, and the Aggregation Control circuit enters an Increment-i step 412 to increment the destination index, and then reenters step 402, to check the next Aggregation Circuit (the Aggregation Control circuit also enters step 412 after Post-Message step 406).
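The deallocation criteria of FIG. 4 can likewise be modeled, for illustration, by a predicate such as the one below; the thresholds and the bandwidth probe are assumed values and names, not parameters specified by the embodiments:

```python
import time

# Sketch of the FIG. 4 deallocation decision; values are illustrative only.
SIZE_LIMIT_BYTES = 1024   # assumed aggregated-message size threshold
TIME_LIMIT_SEC = 0.0005   # assumed maximum aggregation latency
BW_MARGIN = 1.2           # deallocate early if measured/minimum exceeds this

def should_deallocate(agg_size: int, alloc_time: float,
                      measured_bw: float, min_bw: float) -> bool:
    if agg_size > SIZE_LIMIT_BYTES:                      # step 404: size criterion
        return True
    if time.monotonic() - alloc_time > TIME_LIMIT_SEC:   # step 408: timeout criterion
        return True
    if measured_bw > BW_MARGIN * min_bw:                 # step 410: bandwidth already met
        return True
    return False
```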


As would be appreciated, flowcharts 300 and 400, illustrated in FIGS. 3, 4 and described above, are example embodiments that are depicted merely for the sake of conceptual clarity. Other flowcharts may be used in alternative embodiments. For example, the order of checks 304, 308, 312 (FIG. 3) and of checks 404, 408, 410 (FIG. 4) may be different; in some embodiments, Packet Processing circuit 120 sends to MAC 122 only messages that may be aggregated, and, hence, step 308 (FIG. 3) may not be needed. In other embodiments, the classification and/or the aggregation-control circuits are distributed in the Aggregation Circuits, and the flowchart is replaced by suitable independent flowcharts for each of the Aggregation Circuits.


Hierarchical Aggregation

In the description hereinabove, messages with a shared destination may be aggregated. In some embodiments, messages within the aggregated message may be further aggregated, according to criteria other than destination ID, for further performance improvement. For example, an aggregated message to processes in a remote host may comprise several messages to the same process running in that host. In some embodiments, messages to the same process are further aggregated within the aggregated message to the host, saving overhead at the destination (such secondary aggregation is also referred to as "aggregated message compression").


In some embodiments, data that is written to neighboring segments in a memory of the destination processor may be aggregated; e.g., a message to write data in addresses 0-63 may be aggregated with a message to write data in addresses 64-127, to form a message that writes data in addresses 0-127 (within the aggregated message to the host).
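A minimal sketch of this compression step, assuming writes are represented as (start address, data) pairs, might merge adjacent ranges as follows:

```python
# Illustrative only: merge writes to neighboring address ranges into one write.
def compress_writes(writes: list[tuple[int, bytes]]) -> list[tuple[int, bytes]]:
    merged: list[tuple[int, bytes]] = []
    for addr, data in sorted(writes):
        if merged and merged[-1][0] + len(merged[-1][1]) == addr:
            prev_addr, prev_data = merged[-1]
            merged[-1] = (prev_addr, prev_data + data)   # extend the previous write
        else:
            merged.append((addr, data))
    return merged

# A write to addresses 0-63 and a write to addresses 64-127 become one
# write covering addresses 0-127, as in the example above.
assert compress_writes([(64, bytes(64)), (0, bytes(64))]) == [(0, bytes(128))]
```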


Next-Hop Aggregation

In some embodiments of the present invention, messages are aggregated based on the next-hop node in the message propagation. For example, if a Compute Node sends messages to a plurality of different peer computers, but a group of messages is routed through a first shared switch (a "first hop"), the compute node may aggregate the messages that share the same first hop. The switch will comprise a disaggregation circuit, to disaggregate the messages and forward the disaggregated messages to their destinations. In some embodiments, the switch may comprise a message aggregation circuit, to aggregate egress messages, including disaggregated messages sent from the previous hop and other messages. In embodiments, multiple switches may comprise disaggregation and aggregation circuits and, hence, message aggregation and disaggregation are distributed in both the network adapters and the network switches of the parallel computing system.


In some embodiments, the network adapters may be partially synchronized by sending messages to similar destinations at similar time slots—this increases the probability that the messages will be aggregated at the next hop within a given timeframe.



FIG. 5 is a block diagram that schematically illustrates the structure of a parallel computing system 500, with distributed message aggregation and disaggregation, in accordance with an embodiment of the present invention. Source Network Adapter 112, comprising Packet Processing circuit 120 and Message Aggregation Circuit 122, communicates messages with Destination Network Adapter 118, which comprises Packet Processing circuit 124 and Message Disaggregation Circuit 126 (all defined and described with reference to FIG. 1). The messages traverse a Communication Network 502, comprising fabric and Switches 504. Each Switch 504 comprises an Ingress Processing circuit 506, which is configured to process ingress packets, and an Egress Processing circuit 508, which is configured to process egress packets. When the switch receives an aggregated message from an upstream switch (or, for the first switch, from the Source Network Adapter), the switch may disaggregate the message if the switch is the destination of the aggregated message (typically in a next-hop aggregation). If so, Ingress Processing 506 sends the aggregated message to a Message Disaggregation Circuit (MDC) 510, which disaggregates the message and sends a plurality of disaggregated messages back to Ingress Processing 506.


Switch 504 may comprise a Message Aggregation Circuit 512, which is configured to aggregate egress messages. According to the example embodiment illustrated in FIG. 5, Egress Processing circuit 508 sends egress messages to MAC 512, which may aggregate messages, based on a same next hop and/or a same destination, and send the aggregated messages back to Egress Processing 508, which communicates the aggregated messages to the next hop. (It should be noted that next-hop aggregation may only be applied if the next-hop switch comprises a disaggregation circuit.)


As would be appreciated, the structure of switch 504, illustrated in FIG. 5 and described hereinabove, including MDC 510 and MAC 512, is cited by way of example; other structures may be used in alternative embodiments. For example, in some embodiments, the switch does not comprise an MDC (and, hence, does not support next-hop aggregated ingress messages). In other embodiments, the switch does not comprise a MAC, and does not aggregate egress messages (it does, however, relay aggregated ingress messages). In some embodiments, a mix of switches may be used, with varying disaggregation and aggregation capabilities.


In various embodiments, aggregation is carried out in various communication layers, such as the Transport layer, the Network layer and the Link layer, wherein aggregation at a deeper layer may result in more efficient aggregation. For example, when aggregating at the Transport layer, a single network acknowledgment acknowledges completion of work posted by multiple processes; the MAC needs to record which of the multiple per-process work requests were completed by the single acknowledgement.
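For illustration, the bookkeeping that maps one transport-layer acknowledgment back to several per-process work requests might look like the following sketch; the packet sequence number and work-request identifiers are assumed abstractions, not the transport's actual data structures:

```python
from collections import defaultdict

# Maps a packet sequence number to the (process id, work-request id) pairs
# whose messages were aggregated into that packet. Illustrative only.
pending: dict[int, list[tuple[int, int]]] = defaultdict(list)

def record_aggregation(psn: int, work_requests: list[tuple[int, int]]) -> None:
    """Remember which per-process work requests ride in packet 'psn'."""
    pending[psn].extend(work_requests)

def on_ack(psn: int) -> list[tuple[int, int]]:
    """A single acknowledgment completes every work request in the packet."""
    return pending.pop(psn, [])
```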


In an embodiment, aggregation may include out-of-order completion; in this case the MAC should complete the aggregation only upon receiving a full sequence of completions, or else report the out-of-order completion to the requesting source.


It should be mentioned that aggregation and disaggregation may be used both for one-sided Remote-Direct-Memory-Access (RDMA) transactions and for message SEND operations; note, though, that address aggregation may not be applicable to a SEND operation, which may not have an associated address. Some messages (e.g., RDMA READ and WRITE) may be regarded as "address-based," in which case the aggregation, too, may be based on the addresses of the messages. Other messages may not be address-based.


The configuration of Network Adapters 112 and 118, and their components, e.g., MAC 122 and MDC 126; the components of MAC 122 (e.g., Message Classifier 200, Aggregation Circuits 202, Aggregation Control 204, Multiplexor 206 and Egress Queue 208); and the methods of flowcharts 300 and 400, illustrated in FIGS. 1 through 5, are example configurations and flowcharts that are depicted purely for the sake of conceptual clarity. Any other suitable configurations and flowcharts can be used in alternative embodiments. The network adapters, switches and components thereof may be implemented using suitable hardware, such as one or more Application-Specific Integrated Circuits (ASICs) or Field-Programmable Gate Arrays (FPGAs), using software, or using a combination of hardware and software elements.


In some embodiments, Host 108, Host 114, and certain elements of the Network Adapters and the Switches may be implemented using one or more general-purpose programmable processors, which are programmed in software to carry out the functions described herein. The software may be downloaded to the processors in electronic form, over a network, for example, or it may, alternatively or additionally, be provided and/or stored on non-transitory tangible media, such as magnetic, optical, or electronic memory.


Although the embodiments described herein mainly address message aggregation in parallel computing systems, the methods and systems described herein can also be used in other applications, such as PCIe and/or CXL tunneling.


It will be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and sub-combinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.

Claims
  • 1. A network switch, comprising: one or more ports, for connecting to a communication network;processing circuitry, configured to communicate messages over the communication network via the one or more ports; anda message aggregation circuit (MAC), which is configured to:receive messages, which originate in one or more source processes, and, are destined to one or more destination processes;aggregate two or more of the received messages, that are received over two or more different ones of the ports from the communication network, and which share a common destination, and comprising respective fields specifying the common destination, into an aggregated message by joining the two or more received messages, including removing the fields specifying the common destination from the two or more received messages, and including in the aggregated message a single header indicative of the common destination, the single header replacing the removed fields; andsend the aggregated message using the processing circuitry, via one of the ports to the communication network en-route to the common destination.
  • 2. The network switch according to claim 1, further comprising a host interface for connecting to one or more local processors, wherein the MAC is configured to receive one or more of the messages from the one or more local processors over the host interface.
  • 3. The network switch according to claim 1, wherein the MAC is configured to receive one or more of the messages from one or more remote processors over the communication network, via the ports.
  • 4. The network switch according to claim 1, wherein the two or more messages share a common destination network node, and wherein the MAC is configured to cause the processing circuitry to send the aggregated message to the common destination network node.
  • 5. The network switch according to claim 1, wherein the two or more messages share a common destination path via the network, and wherein the MAC is configured to cause the processing circuitry to send the aggregated message to the common destination path.
  • 6. The network switch according to claim 1, wherein the MAC is configured to compress the messages by joining messages that are destined to neighboring address ranges defined in the common destination.
  • 7. The network switch according to claim 1, wherein the MAC is configured to terminate aggregation of the aggregated message responsive to expiry of a timeout.
  • 8. The network switch according to claim 1, wherein the MAC is configured to terminate aggregation of the aggregated message responsive to a total size of the aggregated message reaching a predefined limit.
  • 9. The network switch according to claim 1, wherein the MAC is configured to terminate aggregation of the aggregated message responsive to receiving an aggregation termination request.
  • 10. The network switch according to claim 1, wherein the MAC is configured to aggregate the messages as part of transport-layer processing.
  • 11. The network switch according to claim 1, wherein the messages comprise at least read requests, and wherein the MAC is configured to: aggregate at least the read requests into the aggregated message; andupon receiving one or more aggregated responses in response to the aggregated message, disaggregate the one or more aggregated responses at least into multiple read responses that correspond to the read requests.
  • 12. The network switch according to claim 11, wherein the MAC is configured to aggregate in the aggregated message one or more additional messages in addition to the read requests.
  • 13. The network switch according to claim 1, wherein the messages comprise at least one message type selected from a group of types consisting of: Remote Direct Memory Access (RDMA) READ messages;RDMA WRITE messages; andRDMA ATOMIC messages.
  • 14. A network switch, comprising: one or more ports, for connecting to a communication network;processing circuitry, configured to communicate messages over the communication network via the one or more ports; anda message disaggregation circuit (MDC), which is configured to:receive from the processing circuitry an aggregated message that was received from the communication network via one of the ports, the aggregated message, which was formed by joining two or more messages originating in one or more source processes and destined to multiple destination processes;disaggregate the aggregated message into the two or more messages which were previously joined, by separating each of the previously joined messages and reconstructing each separated message into its original message, and including in the separated original message the original field specifying the destination; andsend the two or more separated messages via two or more different ones of the ports to the communication network, en-route to the multiple destination processes.
  • 15. The network switch according to claim 14, wherein the MDC is configured to disaggregate the aggregated message as part of transport-layer processing.
  • 16. The network switch according to claim 14, wherein the aggregated message comprises at least read requests;wherein the MDC is configured to disaggregate the aggregated message into at least the read requests;and wherein the apparatus further comprises a message aggregation circuit (MAC) configured to receive read responses corresponding to the read requests, to aggregate the read responses into one or more aggregated responses, and to send the one or more aggregated responses using the processing circuitry over the communication network.
  • 17. The network switch according to claim 16, wherein the MAC is configured to group the read responses in the one or more aggregated responses in a grouping that differs from the grouping of the read requests in the aggregated message.
  • 18. The network switch according to claim 14, wherein the messages comprise at least one message type selected from a group of types consisting of: Remote Direct Memory Access (RDMA) READ messages;RDMA WRITE messages; andRDMA ATOMIC messages.
  • 19. A method of switching, comprising: communicating messages, which originate in one or more source processes and are received from at least two ports, and which are destined to one or more destination processes, over a communication network;aggregating two or more of the received messages, that are received over two or more different ones of the ports and share a common destination, and comprising respective fields specifying the common destination, into an aggregated message by joining the two or more received messages, including removing the fields specifying the common destination from the two or more received messages, and including in the aggregated message a single header indicative of the common destination, the single header replacing removed fields; andsending the aggregated message over the communication network, via one of the ports to the communication network en-route to the common destination.
  • 20. The method according to claim 19, and comprising receiving one or more of the messages from one or more local processors over a host interface.
  • 21. The method according to claim 19, and comprising receiving one or more of the messages from one or more remote processors over the communication network.
  • 22. The method according to claim 19, wherein the two or more messages share a common destination network node, and wherein sending the aggregated message comprises sending the aggregated message to the common destination network node.
  • 23. The method according to claim 19, wherein the two or more messages share a common destination path via the network, and wherein sending the aggregated message comprises sending the aggregated message to the common destination path.
  • 24. The method according to claim 19, wherein aggregating the messages comprises compressing the messages by joining messages that are destined to neighboring address ranges defined in the common destination.
  • 25. The method according to claim 19, wherein aggregating the messages comprises terminating aggregation of the aggregated message responsive to expiry of a timeout.
  • 26. The method according to claim 19, wherein aggregating the messages comprises terminating aggregation of the aggregated message responsive to a total size of the aggregated message reaching a predefined limit.
  • 27. The method according to claim 19, wherein aggregating the messages comprises terminating aggregation of the aggregated message responsive to receiving an aggregation termination request.
  • 28. The method according to claim 19, wherein aggregating the messages is performed as part of transport-layer processing.
  • 29. The method according to claim 19, wherein the messages comprise at least read requests, wherein aggregating the messages comprises aggregating at least the read requests into the aggregated message, and wherein the method further comprises, upon receiving one or more aggregated responses in response to the aggregated message, disaggregating the one or more aggregated responses at least into multiple read responses that correspond to the read requests.
  • 30. The method according to claim 29, wherein aggregating the messages comprises aggregating in the aggregated message one or more additional messages in addition to the read requests.
  • 31. The method according to claim 19, wherein the messages comprise at least one message type selected from a group of types consisting of: Remote Direct Memory Access (RDMA) READ messages;RDMA WRITE messages; andRDMA ATOMIC messages.
  • 32. The method according to claim 19, wherein communicating and aggregating the messages are performed in a network device.
  • 33. A method of switching, comprising: communicating messages over a communication network, including receiving an aggregated message, which was aggregated by joining two or more messages originating in one or more source processes, received over the communication network via one port of a plurality of ports, and destined to one or more destination processes;disaggregating the aggregated message by separating the previously joined two or more messages into the two or more messages and reconstructing each separated message into its original message, and including in the separated original message the original field specifying the destination; andsending the two or more separated messages via at least two of the plurality of ports to the communication network en-route to the one or more destination processes.
  • 34. The method according to claim 33, wherein disaggregation of the aggregated message is performed as part of transport-layer processing.
  • 35. The method according to claim 33, wherein the aggregated message comprises at least read requests;wherein disaggregating the aggregated message comprises disaggregating the aggregated message into at least the read requests;and wherein the method further comprises receiving read responses corresponding to the read requests, aggregating the read responses into one or more aggregated responses, and sending the one or more aggregated responses over the communication network.
  • 36. The method according to claim 35, wherein aggregating the read responses comprises grouping the read responses in the one or more aggregated responses in a grouping that differs from the grouping of the read requests in the aggregated message.
  • 37. The method according to claim 33, wherein the messages comprise at least one message type selected from a group of types consisting of: Remote Direct Memory Access (RDMA) READ messages;RDMA WRITE messages; andRDMA ATOMIC messages.
  • 38. The method according to claim 33, wherein communicating the messages and disaggregating the aggregated message are performed in a network device.
  • 39. The network switch according to claim 1, wherein the MAC is configured to aggregate the messages as part of an out-of-order completion processing.
  • 40. The network switch according to claim 14, wherein the MDC is configured to disaggregate the aggregated message as part of the out-of-order completion processing.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Patent Application 62/961,232, filed Jan. 15, 2020, whose disclosure is incorporated herein by reference.

US Referenced Citations (257)
Number Name Date Kind
4933969 Marshall et al. Jun 1990 A
5068877 Near et al. Nov 1991 A
5325500 Bell et al. Jun 1994 A
5353412 Douglas et al. Oct 1994 A
5404565 Gould et al. Apr 1995 A
5408469 Opher et al. Apr 1995 A
5606703 Brady et al. Feb 1997 A
5944779 Blum Aug 1999 A
6041049 Brady Mar 2000 A
6115394 Balachandran Sep 2000 A
6370502 Wu et al. Apr 2002 B1
6438137 Turner Aug 2002 B1
6483804 Muller et al. Nov 2002 B1
6507562 Kadansky et al. Jan 2003 B1
6728862 Wilson Apr 2004 B1
6857004 Howard et al. Feb 2005 B1
6937576 Di Benedetto et al. Aug 2005 B1
7102998 Golestani Sep 2006 B1
7124180 Ranous Oct 2006 B1
7164422 Wholey, III et al. Jan 2007 B1
7171484 Krause et al. Jan 2007 B1
7313582 Bhanot et al. Dec 2007 B2
7327693 Rivers et al. Feb 2008 B1
7336646 Muller Feb 2008 B2
7346698 Hannaway Mar 2008 B2
7555549 Campbell et al. Jun 2009 B1
7613774 Caronni et al. Nov 2009 B1
7636424 Halikhedkar et al. Dec 2009 B1
7636699 Stanfill Dec 2009 B2
7738443 Kumar Jun 2010 B2
7760743 Shokri et al. Jul 2010 B2
8213315 Crupnicoff et al. Jul 2012 B2
8255475 Kagan et al. Aug 2012 B2
8380880 Gulley et al. Feb 2013 B2
8510366 Anderson et al. Aug 2013 B1
8645663 Kagan et al. Feb 2014 B2
8738891 Karandikar et al. May 2014 B1
8761189 Shachar et al. Jun 2014 B2
8768898 Trimmer et al. Jul 2014 B1
8775698 Archer et al. Jul 2014 B2
8811417 Bloch et al. Aug 2014 B2
9110860 Shahar Aug 2015 B2
9189447 Faraj Nov 2015 B2
9294551 Froese et al. Mar 2016 B1
9344490 Bloch et al. May 2016 B2
9456060 Pope et al. Sep 2016 B2
9563426 Bent et al. Feb 2017 B1
9626329 Howard Apr 2017 B2
9756154 Jiang Sep 2017 B1
10015106 Florissi et al. Jul 2018 B1
10158702 Bloch et al. Dec 2018 B2
10284383 Bloch et al. May 2019 B2
10296351 Kohn et al. May 2019 B1
10305980 Gonzalez et al. May 2019 B1
10318306 Kohn et al. Jun 2019 B1
10425350 Florissi Sep 2019 B1
10521283 Shuler et al. Dec 2019 B2
10528518 Graham et al. Jan 2020 B2
10541938 Timmerman et al. Jan 2020 B1
10547553 Shattah et al. Jan 2020 B2
10621489 Appuswamy et al. Apr 2020 B2
11088971 Brody et al. Aug 2021 B2
20020010844 Noel et al. Jan 2002 A1
20020035625 Tanaka Mar 2002 A1
20020150094 Cheng et al. Oct 2002 A1
20020150106 Kagan et al. Oct 2002 A1
20020152315 Kagan et al. Oct 2002 A1
20020152327 Kagan et al. Oct 2002 A1
20020152328 Kagan et al. Oct 2002 A1
20020165897 Kagan et al. Nov 2002 A1
20030018828 Craddock et al. Jan 2003 A1
20030061417 Craddock et al. Mar 2003 A1
20030065856 Kagan et al. Apr 2003 A1
20030120835 Kale et al. Jun 2003 A1
20040030745 Boucher et al. Feb 2004 A1
20040062258 Grow et al. Apr 2004 A1
20040078493 Blumrich et al. Apr 2004 A1
20040120331 Rhine et al. Jun 2004 A1
20040123071 Stefan et al. Jun 2004 A1
20040252685 Kagan et al. Dec 2004 A1
20040260683 Chan et al. Dec 2004 A1
20050097300 Gildea et al. May 2005 A1
20050122329 Janus Jun 2005 A1
20050129039 Biran et al. Jun 2005 A1
20050131865 Jones et al. Jun 2005 A1
20050223118 Tucker et al. Oct 2005 A1
20050281287 Ninomi et al. Dec 2005 A1
20060282838 Gupta et al. Dec 2006 A1
20070127396 Jain et al. Jun 2007 A1
20070127525 Sarangam et al. Jun 2007 A1
20070162236 Lamblin et al. Jul 2007 A1
20080040792 Larson Feb 2008 A1
20080104218 Liang et al. May 2008 A1
20080126564 Wilkinson May 2008 A1
20080168471 Benner et al. Jul 2008 A1
20080181260 Vonog et al. Jul 2008 A1
20080192750 Ko et al. Aug 2008 A1
20080219159 Chateau et al. Sep 2008 A1
20080244220 Lin et al. Oct 2008 A1
20080263329 Archer et al. Oct 2008 A1
20080288949 Bohra et al. Nov 2008 A1
20080298380 Rittmeyer et al. Dec 2008 A1
20080307082 Cai et al. Dec 2008 A1
20090037377 Archer et al. Feb 2009 A1
20090063816 Arimilli et al. Mar 2009 A1
20090063817 Arimilli et al. Mar 2009 A1
20090063891 Arimilli et al. Mar 2009 A1
20090182814 Tapolcai et al. Jul 2009 A1
20090240838 Berg et al. Sep 2009 A1
20090247241 Gollnick et al. Oct 2009 A1
20090292905 Faraj Nov 2009 A1
20090296699 Hefty Dec 2009 A1
20090327444 Archer et al. Dec 2009 A1
20100017420 Archer et al. Jan 2010 A1
20100049836 Kramer Feb 2010 A1
20100074098 Zeng et al. Mar 2010 A1
20100095086 Eichenberger et al. Apr 2010 A1
20100185719 Howard Jul 2010 A1
20100241828 Yu et al. Sep 2010 A1
20100274876 Kagan et al. Oct 2010 A1
20100329275 Johnsen et al. Dec 2010 A1
20110060891 Jia Mar 2011 A1
20110066649 Berlyant et al. Mar 2011 A1
20110093258 Xu et al. Apr 2011 A1
20110119673 Bloch et al. May 2011 A1
20110173413 Chen et al. Jul 2011 A1
20110219208 Asaad Sep 2011 A1
20110238956 Arimilli et al. Sep 2011 A1
20110258245 Blocksome et al. Oct 2011 A1
20110276789 Chambers et al. Nov 2011 A1
20120063436 Thubert et al. Mar 2012 A1
20120117331 Krause et al. May 2012 A1
20120131309 Johnson May 2012 A1
20120254110 Takemoto Oct 2012 A1
20130117548 Grover et al. May 2013 A1
20130159410 Lee et al. Jun 2013 A1
20130159568 Shahar et al. Jun 2013 A1
20130215904 Zhou et al. Aug 2013 A1
20130250756 Johri Sep 2013 A1
20130312011 Kumar et al. Nov 2013 A1
20130318525 Palanisamy et al. Nov 2013 A1
20130336292 Kore et al. Dec 2013 A1
20140019574 Cardona Jan 2014 A1
20140033217 Vajda et al. Jan 2014 A1
20140040542 Kim et al. Feb 2014 A1
20140047341 Breternitz et al. Feb 2014 A1
20140095779 Forsyth et al. Apr 2014 A1
20140122831 Uliel et al. May 2014 A1
20140136811 Fleischer et al. May 2014 A1
20140189308 Hughes et al. Jul 2014 A1
20140211804 Makikeni et al. Jul 2014 A1
20140258438 Ayoub Sep 2014 A1
20140280420 Khan Sep 2014 A1
20140281370 Khan Sep 2014 A1
20140362692 Wu et al. Dec 2014 A1
20140365548 Mortensen Dec 2014 A1
20150074373 Sperber et al. Mar 2015 A1
20150106578 Warfield et al. Apr 2015 A1
20150143076 Khan May 2015 A1
20150143077 Khan May 2015 A1
20150143078 Khan et al. May 2015 A1
20150143079 Khan May 2015 A1
20150143085 Khan May 2015 A1
20150143086 Khan May 2015 A1
20150154058 Miwa et al. Jun 2015 A1
20150178211 Hiramoto et al. Jun 2015 A1
20150180785 Annamraju Jun 2015 A1
20150188987 Reed et al. Jul 2015 A1
20150193271 Archer et al. Jul 2015 A1
20150212972 Boettcher et al. Jul 2015 A1
20150261720 Kagan et al. Sep 2015 A1
20150269116 Raikin Sep 2015 A1
20150278347 Meyer et al. Oct 2015 A1
20150347012 Dewitt et al. Dec 2015 A1
20150365494 Cardona Dec 2015 A1
20150379022 Puig et al. Dec 2015 A1
20160055225 Xu et al. Feb 2016 A1
20160092362 Barron Mar 2016 A1
20160105494 Reed et al. Apr 2016 A1
20160112531 Milton et al. Apr 2016 A1
20160117277 Raindel et al. Apr 2016 A1
20160119244 Wang et al. Apr 2016 A1
20160179537 Kunzman et al. Jun 2016 A1
20160219009 French Jul 2016 A1
20160248656 Anand et al. Aug 2016 A1
20160283422 Crupnicoff et al. Sep 2016 A1
20160294793 Larson Oct 2016 A1
20160299872 Vaidyanathan et al. Oct 2016 A1
20160342568 Burchard et al. Nov 2016 A1
20160352598 Reinhardt Dec 2016 A1
20160364350 Sanghi et al. Dec 2016 A1
20170063613 Bloch et al. Mar 2017 A1
20170093715 McGhee et al. Mar 2017 A1
20170116154 Palmer et al. Apr 2017 A1
20170187496 Shalev et al. Jun 2017 A1
20170187589 Pope et al. Jun 2017 A1
20170187629 Shalev et al. Jun 2017 A1
20170187846 Shalev et al. Jun 2017 A1
20170192782 Valentine et al. Jul 2017 A1
20170199844 Burchard et al. Jul 2017 A1
20170255501 Shuler Sep 2017 A1
20170262517 Horowitz et al. Sep 2017 A1
20170308329 A et al. Oct 2017 A1
20170344589 Kafai et al. Nov 2017 A1
20180004530 Vorbach Jan 2018 A1
20180046901 Xie et al. Feb 2018 A1
20180047099 Bonig et al. Feb 2018 A1
20180089278 Bhattacharjee et al. Mar 2018 A1
20180091442 Chen et al. Mar 2018 A1
20180097721 Matsui et al. Apr 2018 A1
20180115529 Munger Apr 2018 A1
20180173673 Daglis et al. Jun 2018 A1
20180262551 Demeyer et al. Sep 2018 A1
20180278549 Mula Sep 2018 A1
20180285316 Thorson et al. Oct 2018 A1
20180287928 Levi et al. Oct 2018 A1
20180302324 Kasuya Oct 2018 A1
20180321912 Li et al. Nov 2018 A1
20180321938 Boswell et al. Nov 2018 A1
20180349212 Liu et al. Dec 2018 A1
20180367465 Levi Dec 2018 A1
20180375781 Chen et al. Dec 2018 A1
20190018805 Benisty Jan 2019 A1
20190026250 Das Sarma et al. Jan 2019 A1
20190044889 Serres Feb 2019 A1
20190065208 Liu et al. Feb 2019 A1
20190068501 Schneder et al. Feb 2019 A1
20190102179 Fleming et al. Apr 2019 A1
20190102338 Tang et al. Apr 2019 A1
20190102640 Balasubramanian Apr 2019 A1
20190114533 Ng et al. Apr 2019 A1
20190121388 Knowles et al. Apr 2019 A1
20190138638 Pal et al. May 2019 A1
20190141133 Rajan May 2019 A1
20190147092 Pal et al. May 2019 A1
20190149486 Bohrer et al. May 2019 A1
20190149488 Bansal May 2019 A1
20190171612 Shahar Jun 2019 A1
20190235866 Das Sarma et al. Aug 2019 A1
20190278737 Kozomora Sep 2019 A1
20190303168 Fleming, Jr. et al. Oct 2019 A1
20190303263 Fleming, Jr. et al. Oct 2019 A1
20190324431 Celia et al. Oct 2019 A1
20190339688 Celia et al. Nov 2019 A1
20190347099 Eapen et al. Nov 2019 A1
20190369994 Parandeh Afshar et al. Dec 2019 A1
20190377580 Vorbach Dec 2019 A1
20190379714 Levi et al. Dec 2019 A1
20200005859 Chen et al. Jan 2020 A1
20200034145 Bainville et al. Jan 2020 A1
20200057748 Danilak Feb 2020 A1
20200103894 Celia et al. Apr 2020 A1
20200106828 Elias et al. Apr 2020 A1
20200137013 Jin et al. Apr 2020 A1
20200265043 Graham et al. Aug 2020 A1
20200274733 Graham et al. Aug 2020 A1
20210203621 Ylisirniö Jul 2021 A1
Non-Patent Literature Citations (46)
Entry
Mellanox Technologies, “InfiniScale IV: 36-port 40Gb/s InfiniBand Switch Device”, pp. 1-2, year 2009.
Mellanox Technologies Inc., “Scaling 10Gb/s Clustering at Wire-Speed”, pp. 1-8, year 2006.
IEEE 802.1D Standard “IEEE Standard for Local and Metropolitan Area Networks—Media Access Control (MAC) Bridges”, IEEE Computer Society, pp. 1-281, Jun. 9, 2004.
IEEE 802.1AX Standard “IEEE Standard for Local and Metropolitan Area Networks—Link Aggregation”, IEEE Computer Society, pp. 1-163, Nov. 3, 2008.
Turner et al., “Multirate Clos Networks”, IEEE Communications Magazine, pp. 1-11, Oct. 2003.
Thayer School of Engineering, “A Slightly Edited Local Copy of Elements of Lectures 4 and 5”, Dartmouth College, pp. 1-5, Jan. 15, 1998 http://people.seas.harvard.edu/˜jones/cscie129/nu_lectures/lecture11/switching/clos_network/clos_network.html.
“MPI: A Message-Passing Interface Standard,” Message Passing Interface Forum, version 3.1, pp. 1-868, Jun. 4, 2015.
Coti et al., “MPI Applications on Grids: a Topology Aware Approach,” Proceedings of the 15th International European Conference on Parallel and Distributed Computing (EuroPar'09), pp. 1-12, Aug. 2009.
Petrini et al., “The Quadrics Network (QsNet): High-Performance Clustering Technology,” Proceedings of the 9th IEEE Symposium on Hot Interconnects (HotI'01), pp. 1-6, Aug. 2001.
Sancho et al., “Efficient Offloading of Collective Communications in Large-Scale Systems,” Proceedings of the 2007 IEEE International Conference on Cluster Computing, pp. 1-10, Sep. 17-20, 2007.
Nudelman et al., U.S. Appl. No. 17/120,321, filed Dec. 14, 2020.
InfiniBand Architecture Specification, vol. 1, Release 1.2.1, pp. 1-1727, Nov. 2007.
Deming, “Infiniband Architectural Overview”, Storage Developer Conference, pp. 1-70, year 2013.
Fugger et al., “Reconciling fault-tolerant distributed computing and systems-on-chip”, Distributed Computing, vol. 24, Issue 6, pp. 323-355, Jan. 2012.
Wikipedia, “System on a chip”, pp. 1-4, Jul. 6, 2018.
Villavieja et al., “On-chip Distributed Shared Memory”, Computer Architecture Department, pp. 1-10, Feb. 3, 2011.
Ben-Moshe et al., U.S. Appl. No. 16/750,019, filed Jan. 23, 2020.
Bruck et al., “Efficient Algorithms for All-to-All Communications in Multiport Message-Passing Systems”, IEEE Transactions on Parallel and Distributed Systems, vol. 8, No. 11, pp. 1143-1156, Nov. 1997.
Gainaru et al., “Using InfiniBand Hardware Gather-Scatter Capabilities to Optimize MPI All-to-All”, EuroMPI '16, Edinburgh, United Kingdom, pp. 1-13, year 2016.
Pjesivac-Grbovic et al., “Performance analysis of MPI collective operations”, Cluster Computing, pp. 1-25, 2007.
Bruck et al., “Efficient Algorithms for All-to-All Communications in Multiport Message-Passing Systems”, Proceedings of the sixth annual ACM symposium on Parallel algorithms and architectures, pp. 298-309, Aug. 1, 1994.
Chiang et al., “Toward supporting data parallel programming on clusters of symmetric multiprocessors”, Proceedings International Conference on Parallel and Distributed Systems, pp. 607-614, Dec. 14, 1998.
Danalis et al., “PTG: an abstraction for unhindered parallelism”, 2014 Fourth International Workshop on Domain-Specific Languages and High-Level Frameworks for High Performance Computing, pp. 1-10, Nov. 17, 2014.
Cosnard et al., “Symbolic Scheduling of Parameterized Task Graphs on Parallel Machines,” Combinatorial Optimization book series (COOP, vol. 7), pp. 217-243, year 2000.
Jeannot et al., “Automatic Multithreaded Parallel Program Generation for Message Passing Multiprocessors using Parameterized Task Graphs”, World Scientific, pp. 1-8, Jul. 23, 2001.
Stone, “An Efficient Parallel Algorithm for the Solution of a Tridiagonal Linear System of Equations,” Journal of the Association for Computing Machinery, vol. 10, No. 1, pp. 27-38, Jan. 1973.
Kogge et al., “A Parallel Algorithm for the Efficient Solution of a General Class of Recurrence Equations,” IEEE Transactions on Computers, vol. C-22, No. 8, pp. 786-793, Aug. 1973.
Hoefler et al., “Message Progression in Parallel Computing—To Thread or not to Thread?”, 2008 IEEE International Conference on Cluster Computing, pp. 1-10, Tsukuba, Japan, Sep. 29-Oct. 1, 2008.
Wikipedia, “Loop unrolling,” pp. 1-9, last edited Sep. 9, 2020 downloaded from https://en.wikipedia.org/wiki/Loop_unrolling.
Chapman et al., “Introducing OpenSHMEM: SHMEM for the PGAS Community,” Proceedings of the Fourth Conference on Partitioned Global Address Space Programming Model, pp. 1-4, Oct. 2010.
Priest et al., “You've Got Mail (YGM): Building Missing Asynchronous Communication Primitives”, IEEE International Parallel and Distributed Processing Symposium Workshops, pp. 221-230, year 2019.
Wikipedia, “Nagle's algorithm”, pp. 1-4, Dec. 12, 2019.
U.S. Appl. No. 16/430,457 Office Action dated Jul. 9, 2021.
Yang et al., “SwitchAgg: A Further Step Toward In-Network Computing,” 2019 IEEE International Conference on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking, pp. 36-45, Dec. 2019.
EP Application # 20216972 Search Report dated Jun. 11, 2021.
U.S. Appl. No. 16/782,118 Office Action dated Jun. 3, 2021.
U.S. Appl. No. 16/789,458 Office Action dated Jun. 10, 2021.
U.S. Appl. No. 16/750,019 Office Action dated Jun. 15, 2021.
U.S. Appl. No. 16/782,118 Office Action dated Nov. 8, 2021.
“Message Passing Interface (MPI): History and Evolution,” Virtual Workshop, Cornell University Center for Advanced Computing, NY, USA, pp. 1-2, year 2021, as downloaded from https://cvw.cac.cornell.edu/mpi/history.
Pacheco, “A User's Guide to MPI,” Department of Mathematics, University of San Francisco, CA, USA, pp. 1-51, Mar. 30, 1998.
Wikipedia, “Message Passing Interface,” pp. 1-16, last edited Nov. 7, 2021, as downloaded from https://en.wikipedia.org/wiki/Message_Passing_Interface.
EP Application # 21183290.2 Search Report dated Dec. 8, 2021.
U.S. Appl. No. 16/782,118 Office Action dated Jun. 15, 2022.
U.S. Appl. No. 16/782,118 Office Action dated Sep. 7, 2022.
U.S. Appl. No. 17/495,824 Office Action dated Jan. 27, 2023.
Related Publications (1)
Number Date Country
20210218808 A1 Jul 2021 US
Provisional Applications (1)
Number Date Country
62961232 Jan 2020 US