System and method for facilitating efficient message matching in a network interface controller (NIC)

Information

  • Patent Grant
  • Patent Number
    11,882,025
  • Date Filed
    Monday, March 23, 2020
  • Date Issued
    Tuesday, January 23, 2024
Abstract
A network interface controller (NIC) capable of performing message passing interface (MPI) list matching is provided. The NIC can include a host interface, a network interface, and a hardware list-processing engine (LPE). The host interface can couple the NIC to a host device. The network interface can couple the NIC to a network. During operation, the LPE can receive a match request and perform MPI list matching based on the received match request.
Description
BACKGROUND
Field

This is generally related to the technical field of networking. More specifically, this disclosure is related to systems and methods for facilitating high-speed MPI (message passing interface) list matching in a network interface controller (NIC).


Related Art

As network-enabled devices and applications become progressively more ubiquitous, various types of traffic as well as the ever-increasing network load continue to demand more performance from the underlying network architecture. For example, applications such as high-performance computing (HPC), media streaming, and Internet of Things (IoT) can generate different types of traffic with distinctive characteristics. As a result, in addition to conventional network performance metrics such as bandwidth and delay, network architects continue to face challenges such as scalability, versatility, and efficiency.


SUMMARY

A network interface controller (NIC) capable of performing message passing interface (MPI) list matching is provided. The NIC can include a host interface, a network interface, and a hardware list-processing engine (LPE). The host interface can couple the NIC to a host device. The network interface can couple the NIC to a network. During operation, the LPE can receive a match request and perform MPI list matching based on the received match request.





BRIEF DESCRIPTION OF THE FIGURES


FIG. 1 shows an exemplary network.



FIG. 2A shows an exemplary NIC chip with a plurality of NICs.



FIG. 2B shows an exemplary architecture of a NIC.



FIG. 3A shows an exemplary architecture of a processing engine.



FIG. 3B shows an exemplary operation pipeline of a matching engine.



FIG. 4A illustrates exemplary match-request queues.



FIG. 4B shows an exemplary block diagram of a persistent list entry cache (PLEC).



FIG. 5 shows a flow chart of performing list matching in a NIC.



FIG. 6 shows an exemplary computer system equipped with a NIC that facilitates MPI list matching.





In the figures, like reference numerals refer to the same figure elements.


DETAILED DESCRIPTION

Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the present invention is not limited to the embodiments shown.


Overview


The present disclosure describes systems and methods that facilitate efficient list matching in a network interface controller (NIC). The NIC implements a hardware list-processing engine (LPE) coupled to a memory unit, which can achieve high-speed list matching. The LPE can perform atomic search and search-with-delete operations in the various lists defined by the message passing interface (MPI) protocol and can dispatch list operations to the correct matching units. To enhance speed, multiple processing engines can be used, and each processing engine can include multiple memory banks, which are interconnected with the matching engines using a crossbar. In addition, the LPE achieves list-matching acceleration by separating the queues of the endpoint network interfaces. The list-matching hardware can reduce latency by overlapping the match-attempt pipeline stage with the match termination condition, and can use a unified search pipeline for the priority and unexpected lists and for network search and host append commands. The LPE hardware can also use a unified processing pipeline to search persistent list entries pertaining to an unordered network interface as well as entries pertaining to an ordered network interface. The NIC can also process MPI messages efficiently, using either the "eager" protocol or the "rendezvous" protocol.


One embodiment provides a NIC capable of performing MPI list matching.


The NIC can include a host interface, a network interface, and a hardware LPE. The host interface can couple the NIC to a host device. The network interface can couple the NIC to a network. During operation, the LPE can receive a match request and perform MPI list matching based on the received match request.


In a variation on this embodiment, the match request can include a match request corresponding to a command received via the host interface or a match request corresponding to a message received via the network interface.


In a further variation, the NIC can include a first set of match-request queues for match requests corresponding to received commands and a second set of match-request queues for match requests corresponding to received messages. The number of queues in the first or second set of match-request queues corresponds to the number of physical endpoints supported by the NIC.


In a further variation, the message is an MPI message.


In a further variation, the message is based on an eager protocol or a rendezvous protocol associated with MPI.


In a variation on this embodiment, the hardware list-processing engine can include a plurality of processing elements, and a respective processing element can include a plurality of matching engines and a plurality of memory banks storing one or more lists, wherein the memory banks are interconnected with the matching engines using a crossbar.


In a further variation, a respective matching engine can include a unified search pipeline for searching the one or more lists, and the one or more lists can include a priority list and an unexpected list.


In a further variation, a respective matching engine can include a single pipeline stage to perform, in parallel, a match operation on a previous match request and a computation to determine a current read or write address.


In a variation on this embodiment, the hardware list-processing engine can include a persistent list entry cache to store previously matched list entries to enable fast searches.


In a variation on this embodiment, the list-processing engine can perform atomic search operations in a plurality of lists.


In this disclosure, the description in conjunction with FIG. 1 is associated with the network architecture, and the descriptions in conjunction with FIG. 2A and onward provide more details on the architecture and operations associated with a NIC that supports efficient MPI list matching.



FIG. 1 shows an exemplary network. In this example, a network 100 of switches, which can also be referred to as a “switch fabric,” can include switches 102, 104, 106, 108, and 110. Each switch can have a unique address or ID within switch fabric 100. Various types of devices and networks can be coupled to a switch fabric. For example, a storage array 112 can be coupled to switch fabric 100 via switch 110; an InfiniBand (IB) based HPC network 114 can be coupled to switch fabric 100 via switch 108; a number of end hosts, such as host 116, can be coupled to switch fabric 100 via switch 104; and an IP/Ethernet network 118 can be coupled to switch fabric 100 via switch 102. In general, a switch can have edge ports and fabric ports. An edge port can couple to a device that is external to the fabric. A fabric port can couple to another switch within the fabric via a fabric link. Typically, traffic can be injected into switch fabric 100 via an ingress port of an edge switch, and leave switch fabric 100 via an egress port of another (or the same) edge switch. An ingress link can couple a NIC of an edge device (for example, an HPC end host) to an ingress edge port of an edge switch. Switch fabric 100 can then transport the traffic to an egress edge switch, which in turn can deliver the traffic to a destination edge device via another NIC.


Exemplary NIC Architecture



FIG. 2A shows an exemplary NIC chip with a plurality of NICs. With reference to the example in FIG. 1, a NIC chip 200 can be a custom application-specific integrated circuit (ASIC) designed for host 116 to work with switch fabric 100. In this example, chip 200 can provide two independent NICs 202 and 204. A respective NIC of chip 200 can be equipped with a host interface (HI) (e.g., an interface for connecting to the host processor) and one high-speed network interface (HNI) for communicating with a link coupled to switch fabric 100 of FIG. 1. For example, NIC 202 can include an HI 210 and an HNI 220, and NIC 204 can include an HI 211 and an HNI 221.


In some embodiments, HI 210 can be a peripheral component interconnect (PCI) or a peripheral component interconnect express (PCIe) interface. HI 210 can be coupled to a host via a host connection 201, which can include N (e.g., N can be 16 in some chips) PCIe Gen 4 lanes capable of operating at signaling rates up to 25 Gbps per lane. HNI 220 can facilitate a high-speed network connection 203, which can communicate with a link in switch fabric 100 of FIG. 1. HNI 220 can operate at aggregate rates of either 100 Gbps or 200 Gbps using M (e.g., M can be 4 in some chips) full-duplex serial lanes. Each of the M lanes can operate at 25 Gbps or 50 Gbps based on non-return-to-zero (NRZ) modulation or pulse amplitude modulation 4 (PAM4), respectively. HNI 220 can support the Institute of Electrical and Electronics Engineers (IEEE) 802.3 Ethernet-based protocols as well as an enhanced frame format that provides support for higher rates of small messages.


NIC 202 can support one or more of: point-to-point message passing based on message passing interface (MPI), remote memory access (RMA) operations, offloading and progression of bulk data collective operations, and Ethernet packet processing. When the host issues an MPI message, NIC 202 can match the corresponding message type. Furthermore, NIC 202 can implement both eager protocol and rendezvous protocol for MPI, thereby offloading the corresponding operations from the host.


Furthermore, the RMA operations supported by NIC 202 can include PUT, GET, and atomic memory operations (AMO). NIC 202 can provide reliable transport. For example, if NIC 202 is a source NIC, NIC 202 can provide a retry mechanism for idempotent operations. Furthermore, a connection-based error detection and retry mechanism can be used for ordered operations that may manipulate a target state. The hardware of NIC 202 can maintain the state necessary for the retry mechanism. In this way, NIC 202 can remove the burden from the host (e.g., the software). The policy that dictates the retry mechanism can be specified by the host via the driver software, thereby ensuring flexibility in NIC 202.


Furthermore, NIC 202 can facilitate triggered operations, a general-purpose mechanism for offloading and progression of dependent sequences of operations, such as bulk data collectives. NIC 202 can support an application programming interface (API) (e.g., libfabric API) that facilitates fabric communication services provided by switch fabric 100 of FIG. 1 to applications running on host 116. NIC 202 can also support a low-level network programming interface, such as Portals API. In addition, NIC 202 can provide efficient Ethernet packet processing, which can include efficient transmission if NIC 202 is a sender, flow steering if NIC 202 is a target, and checksum computation. Moreover, NIC 202 can support virtualization (e.g., using containers or virtual machines).



FIG. 2B shows an exemplary architecture of a NIC. In NIC 202, the port macro of HNI 220 can facilitate low-level Ethernet operations, such as physical coding sublayer (PCS) and media access control (MAC). In addition, NIC 202 can provide support for link layer retry (LLR). Incoming packets can be parsed by parser 228 and stored in buffer 229. Buffer 229 can be a PFC buffer provisioned to buffer a threshold amount (e.g., one microsecond) of delay bandwidth. HNI 220 can also include control transmission unit 224 and control reception unit 226 for managing outgoing and incoming packets, respectively.


NIC 202 can include a command queue (CQ) unit 230. CQ unit 230 can be responsible for fetching and issuing host-side commands. CQ unit 230 can include command queues 232 and schedulers 234. Command queues 232 can include two independent sets of queues for initiator commands (PUT, GET, etc.) and target commands (Append, Search, etc.), respectively. Command queues 232 can be implemented as circular buffers. In some embodiments, command queues 232 can be maintained in the main memory of the host. Applications running on the host can write to command queues 232 directly. Schedulers 234 can include two separate schedulers for initiator commands and target commands, respectively. The initiator commands are sorted into flow queues 236 based on a hash function, with one of flow queues 236 allocated to each unique flow. CQ unit 230 can further include a triggered operations module (or logic block) 238, which is responsible for queuing and dispatching triggered commands.
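The hash-based sorting into flow queues can be illustrated with a short C sketch. The patent does not specify the hash function, the hashed fields, or the queue count, so the mixing constants, field names, and NUM_FLOW_QUEUES below are illustrative assumptions; the only property that matters is that all commands of one flow land in the same queue, preserving their order.

```c
#include <stdint.h>

#define NUM_FLOW_QUEUES 64  /* hypothetical count; not specified in the text */

/* Hash a command's flow identifiers to a flow queue so that all
 * commands belonging to one flow map to the same queue. */
static unsigned flow_queue_index(uint32_t src_id, uint32_t dst_id,
                                 uint32_t traffic_class)
{
    uint32_t h = src_id * 0x9E3779B1u;               /* multiplicative mix */
    h ^= dst_id + 0x7F4A7C15u + (h << 6) + (h >> 2);
    h ^= traffic_class * 0x85EBCA6Bu;
    return h % NUM_FLOW_QUEUES;
}
```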


Outbound transfer engine (OXE) 240 can pull commands from flow queues 236 in order to process them for dispatch. OXE 240 can include an address translation request unit (ATRU) 244 that can send address translation requests to address translation unit (ATU) 212. ATU 212 can provide virtual to physical address translation on behalf of different engines, such as OXE 240, inbound transfer engine (IXE) 250, and event engine (EE) 216. ATU 212 can maintain a large translation cache 214. ATU 212 can either perform translation itself or may use host-based address translation services (ATS). OXE 240 can also include message chopping unit (MCU) 246, which can fragment a large message into packets of sizes corresponding to a maximum transmission unit (MTU). MCU 246 can include a plurality of MCU modules. When an MCU module becomes available, the MCU module can obtain the next command from an assigned flow queue. The data received from the host can be written into data buffer 242. The MCU module can then send the packet header, the corresponding traffic class, and the packet size to traffic shaper 248. Shaper 248 can determine which requests presented by MCU 246 can proceed to the network.
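As a rough model of the MCU's fragmentation step, the following C sketch chops a message into MTU-sized pieces. The callback interface and names are hypothetical; header generation, traffic-class handling, and buffering are not modeled.

```c
#include <stddef.h>

typedef void (*emit_fn)(size_t offset, size_t len, void *ctx);

/* Report each MTU-sized fragment of a msg_len-byte message through
 * the callback; the final fragment carries whatever remains.
 * mtu must be nonzero. */
static void fragment_message(size_t msg_len, size_t mtu,
                             emit_fn emit, void *ctx)
{
    for (size_t off = 0; off < msg_len; off += mtu) {
        size_t len = msg_len - off;
        if (len > mtu)
            len = mtu;
        emit(off, len, ctx);
    }
}
```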


Subsequently, the selected packet can be sent to packet and connection tracking (PCT) 270. PCT 270 can store the packet in a queue 274. PCT 270 can also maintain state information for outbound commands and update the state information as responses are returned. PCT 270 can also maintain packet state information (e.g., allowing responses to be matched to requests), message state information (e.g., tracking the progress of multi-packet messages), initiator completion state information, and retry state information (e.g., maintaining the information required to retry a command if a request or response is lost). If a response is not returned within a threshold time, the corresponding command can be stored in retry buffer 272. PCT 270 can facilitate connection management for initiator and target commands based on source tables 276 and target tables 278, respectively. For example, PCT 270 can update its source tables 276 to track the necessary state for reliable delivery of the packet and message completion notification. PCT 270 can forward outgoing packets to HNI 220, which stores the packets in outbound queue 222.


NIC 202 can also include an IXE 250, which provides packet processing if NIC 202 is a target or a destination. IXE 250 can obtain the incoming packets from HNI 220. Parser 256 can parse the incoming packets and pass the corresponding packet information to a list processing engine (LPE) 264 or a message state table (MST) 266 for matching. LPE 264 can match incoming messages to buffers. LPE 264 can determine the buffer and start address to be used by each message. LPE 264 can also manage a pool of list entries 262 used to represent buffers and unexpected messages. MST 266 can store matching results and the information required to generate target side completion events. MST 266 can be used by unrestricted operations, including multi-packet PUT commands, and single-packet and multi-packet GET commands.


Subsequently, parser 256 can store the packets in packet buffer 254. IXE 250 can obtain the results of the matching for conflict checking. DMA write and AMO module 252 can then issue updates to the memory generated by write and AMO operations. If a packet includes a command that generates target side memory read operations (e.g., a GET response), the packet can be passed to the OXE 240. NIC 202 can also include an event engine (EE) 216, which can receive requests to generate event notifications from other modules or units in NIC 202. An event notification can specify that either a full event or a counting event is generated. EE 216 can manage event queues, located within host processor memory, to which it writes full events. EE 216 can forward counting events to CQ unit 230.


MPI List Matching


In MPI, send/receive operations are identified with an envelope that can include a number of parameters, such as source, destination, message ID, and communicator. The envelope can be used to match a given message to its corresponding user buffer. The whole list of buffers posted by a given process can be referred to as the matching list, and the process of finding in the matching list the buffer corresponding to a given message is referred to as list matching or tag matching.
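The envelope-based matching just described can be sketched in C as follows. The field widths and the wildcard encoding are illustrative assumptions; MPI itself defines wildcards such as MPI_ANY_SOURCE and MPI_ANY_TAG, which this sketch mimics with a single ANY_VALUE sentinel. The destination field is omitted because it selects the endpoint before matching begins.

```c
#include <stdbool.h>
#include <stdint.h>

#define ANY_VALUE 0xFFFFFFFFu  /* wildcard, analogous to MPI_ANY_* */

/* The envelope fields used for matching once a message has reached
 * its destination endpoint. */
struct envelope {
    uint32_t source;
    uint32_t tag;   /* message ID */
    uint32_t comm;  /* communicator */
};

/* A posted buffer matches a message when the communicators agree and
 * each remaining field agrees or is a wildcard. */
static bool envelope_match(const struct envelope *posted,
                           const struct envelope *msg)
{
    return posted->comm == msg->comm &&
           (posted->source == ANY_VALUE || posted->source == msg->source) &&
           (posted->tag == ANY_VALUE || posted->tag == msg->tag);
}
```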


In some embodiments, the NIC can provide hardware acceleration of MPI list matching, and the list-processing engine in the NIC can include a plurality (e.g., 2048) of physical endpoints. Each physical endpoint can include four lists: "priority," "overflow," "unexpected," and "software request." The software request list can provide a graceful transition from hardware offload to software-managed lists. The priority, overflow, and request lists contain entries that include match criteria and memory descriptor information. The unexpected list contains header information of messages for which a list entry has not been set up in advance. The LPE block of the NIC can include memory storage for a number (e.g., 64 k) of list entries, divided among match entries (for the matching interface), list entries (for the non-matching interface), and unexpected list entries.
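A minimal C sketch of the data layout implied by this paragraph, assuming 2048 endpoints and a 64 k-entry pool shared among the lists; the field set and the index-linked encoding are illustrative, not the actual hardware layout.

```c
#include <stdint.h>

#define NUM_ENDPOINTS    2048
#define NUM_LIST_ENTRIES (64 * 1024)
#define NIL              0xFFFFu  /* end-of-list marker (index reserved) */

/* One pooled entry: match criteria plus a memory descriptor, linked
 * by index so entries can be divided among the endpoint lists. */
struct list_entry {
    uint64_t match_bits;
    uint64_t ignore_bits;
    uint64_t buf_addr;  /* memory descriptor: start address */
    uint32_t buf_len;   /*                    and length    */
    uint16_t next;      /* index of the next entry, or NIL  */
};

/* Each physical endpoint owns the heads of its four lists. */
struct endpoint {
    uint16_t priority_head;
    uint16_t overflow_head;
    uint16_t unexpected_head;
    uint16_t sw_request_head;
};

static struct list_entry entry_pool[NUM_LIST_ENTRIES];
static struct endpoint   endpoints[NUM_ENDPOINTS];
```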


In some embodiments, the LPE block of the NIC can be divided into multiple (e.g., four) processing engines, thus enabling the LPE to exploit process-level parallelism in applications or workloads. Each processing engine can access a subset of the list entries. For example, if the LPE block includes a total of 64 k list entries and there are four processing engines, each processing engine can access 16 k list entries. Software can be responsible for allocating physical endpoints to processing engines to provide load balancing.


The LPE can include two list-matching interfaces: one interface receiving target-side commands from the CQ unit, and the other receiving message-match requests from the IXE. The IXE sends the first packet of each message to the LPE, and the LPE searches the appropriate lists. If a matching entry is found, it can be unlinked and returned to the IXE; otherwise, the header may be appended to the unexpected list. Each interface can be matching or non-matching, depending on the setting of the physical endpoint. CQ command requests and IXE network requests are called match requests in both cases.
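The search-unlink-or-park behavior can be modeled in C as below. This is a software sketch assuming singly linked lists and a pre-allocated header entry for the unexpected case; the hardware performs the equivalent operation atomically per physical endpoint.

```c
#include <stddef.h>
#include <stdint.h>

struct le { uint64_t bits; struct le *next; };
struct ep_lists { struct le *priority; struct le *unexpected; };

/* Search-with-delete on the priority list: unlink and return the first
 * matching entry; on a miss, append the message header (hdr_entry) to
 * the unexpected list and return NULL. */
static struct le *match_or_park(struct ep_lists *ep, uint64_t msg_bits,
                                struct le *hdr_entry)
{
    for (struct le **pp = &ep->priority; *pp != NULL; pp = &(*pp)->next) {
        if ((*pp)->bits == msg_bits) {
            struct le *hit = *pp;
            *pp = hit->next;   /* unlink the matched entry */
            return hit;        /* returned to the IXE */
        }
    }
    struct le **tail = &ep->unexpected;
    while (*tail != NULL)
        tail = &(*tail)->next;
    hdr_entry->next = NULL;    /* miss: park the header as unexpected */
    *tail = hdr_entry;
    return NULL;
}
```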


In some embodiments, the interfaces for MPI may be initialized in the disabled state, and message matching of incoming traffic occurs only in the hardware offload state. More specifically, the processing engine can perform atomic search and search-with-delete operations in the priority, overflow, and unexpected lists. During the search, the processing engine can dispatch list operations to the correct matching unit.



FIG. 3A shows an exemplary architecture of a processing engine. In this example, processing engine 300 can include a plurality of matching engines (e.g., matching engines 302 and 304) and four memory banks (e.g., memory banks 306, 308, 310, and 312).


In some embodiments, processing engine 300 can include up to eight matching engines. The memory banks can be interconnected to the matching engines using a crossbar to minimize bank conflicts and obtain high parallelism and utilization of the matching engines. Each matching engine can generate a memory address to any of the memory banks in processing engine 300. The multiple matching engines (e.g., matching engines 302 and 304) operate independently of each other. However, these multiple matching engines need to arbitrate for access to the memory banks.
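A per-bank arbiter of the kind described (e.g., 8:1 when there are eight matching engines) might behave like the following round-robin sketch. The rotating-priority policy is an assumption; the text does not name the arbitration algorithm.

```c
#include <stdbool.h>

#define NUM_ENGINES 8

/* Per-bank arbiter: grant one of up to eight requesting matching
 * engines, rotating priority so that no engine starves. */
struct bank_arbiter { unsigned last_grant; };

static int arbitrate(struct bank_arbiter *arb,
                     const bool request[NUM_ENGINES])
{
    for (unsigned i = 1; i <= NUM_ENGINES; i++) {
        unsigned eng = (arb->last_grant + i) % NUM_ENGINES;
        if (request[eng]) {
            arb->last_grant = eng;
            return (int)eng;
        }
    }
    return -1;  /* no engine requested this bank this cycle */
}
```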



FIG. 3B shows an exemplary operation pipeline of a matching engine. Matching-engine pipeline 320 can include a number of stages: a setup stage 322, a read-address-and-match stage 324, a read-data stage 326, a correct-read-data stage 328, a mux-match-entry stage 330, a write-address stage 332, and a data-write stage 334.


At setup stage 322, the matching engine captures the match request information from the ready-request queue (RRQ). At read-address-and-match stage 324, the matching engine initiates the read request in each memory bank. Each matching engine can have logic that decides whether to make a read or write request and to which memory bank. In some embodiments, each memory bank can have an arbiter used to select a matching engine and multiplex the address. Note that, if there are eight parallel matching engines, the arbiter can be an 8:1 arbiter. In parallel with the read-address computation, read-address-and-match stage 324 also checks whether there is a match on the previous match entry. If there is, it prepares the write update (computes a new offset or deletes an entry). The address and data are then registered to the memory bank at write-address stage 332 and data-write stage 334. At write-address stage 332, the matching engine starts the write access; and at data-write stage 334, the matching engine completes the write operation.


At read-data stage 326, the read data is registered on the output of each memory bank. At correct-read-data stage 328, the read data is corrected at the memory bank. At mux-match-entry stage 330, a multiplexer at each matching engine captures the match entry, which includes the new current address. A number of inner loops are performed, with each loop including read-address-and-match stage 324, read-data stage 326, correct-read-data stage 328, and mux-match-entry stage 330. For the case with four memory banks, each inner loop of matching-engine pipeline 320 can take four cycles. Each matching engine includes space to hold the result of each operation. An arbiter selects a result from the multiple matching engines to send to the output arbiter block. When the output arbiter block consumes a result, the matching engine that produced the result can fetch another command from the RRQ.
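The inner loop can be modeled sequentially in C as below. The sketch collapses the four hardware stages into one loop iteration; in the actual pipeline, the match check on one entry overlaps the address computation for the next, which is the latency optimization noted in the next paragraph.

```c
#include <stdint.h>

#define NIL 0xFFFFu

struct entry { uint64_t match_bits; uint16_t next; };

/* Sequential model of the inner loop: each iteration corresponds to
 * one pass through the read-address-and-match, read-data,
 * correct-read-data, and mux-match-entry stages. */
static int search_list(const struct entry *mem, uint16_t head,
                       uint64_t msg_bits)
{
    uint16_t cur = head;
    while (cur != NIL) {
        const struct entry *e = &mem[cur];  /* read-data stage */
        if (e->match_bits == msg_bits)
            return (int)cur;                /* match: terminate search */
        cur = e->next;                      /* mux stage: new current address */
    }
    return -1;                              /* list exhausted, no match */
}
```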


The pipeline shown in FIG. 3B can provide a number of advantages. First, overlapping the match-attempt pipeline stage with the match termination condition (e.g., at read-address-and-match stage 324) can reduce latency in the matching engine. Second, pipeline 320 can provide a unified search pipeline for searching the priority and unexpected lists and for serving both network searches and host append commands.


In some embodiments, to increase parallelism and avoid blocking by endpoint and traffic class, the NIC can provide list-matching acceleration by separation of queues, with each endpoint network interface having its own queue. More specifically, the match-request queues ensure that, for matching interfaces, only one operation per physical endpoint is processed at a time; and for non-matching interfaces, concurrent access to certain persistent list entries can be allowed. Within a physical endpoint, command requests need to be performed in the order that they arrive, and network match requests need to be performed in the order that they arrive. However, there is no ordering requirement between commands and network requests. The separated queues also ensure that requests from one physical endpoint cannot be blocked by requests from another physical endpoint. Similarly, requests in one traffic class cannot block requests in other traffic classes.



FIG. 4A illustrates exemplary match-request queues. Match-request-queuing block 400 can include two ranks of queues. The first rank of queues includes CQ match-request queues (MRQs) 402 for queuing CQ commands and IXE match-request queues 404 for queuing IXE requests, with each queue indexed by the physical portal index. Each physical endpoint corresponds to a CQ match-request queue and an IXE match-request queue. For a NIC supporting 2048 physical endpoints, CQ match-request queues 402 can include 2048 CQ match-request queues, and IXE match-request queues 404 can include 2048 IXE match-request queues.


One or more arbitrators 406 can be used to select between CQ match-request queues 402 and IXE match-request queues 404, and to select among the plurality of queues in each type of queue. In some embodiments, a standard arbitration mechanism (e.g., round-robin) can be used for arbitration.


When a match request is dequeued from one of these queues, a lookup table 408 is inspected to determine the processing engine (PE) for the physical portal index. Lookup table 408 can be an array of flops that holds the processing engine number for each physical endpoint and can be accessed in parallel. The match request is then enqueued in an appropriate processing-engine/traffic-class match request queue, which belongs to the second rank of queues (processing-engine/traffic-class (PE/TC) MRQs 410) unless it is an IXE request that matches in the persistent list entry (LE) cache (PLEC) 412. A detailed discussion of PLEC 412 follows. An arbitrator 414 can select among PE/TC MRQs 410, and a multiplexer 416 can multiplex the output of arbitrator 414 and PLEC 412.
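The lookup-table step can be sketched as a flat array indexed by the physical portal index, as below. The PE and TC counts are assumptions (the text elsewhere mentions four processing engines), and the queue-index arithmetic is illustrative of one MRQ per (processing engine, traffic class) pair.

```c
#include <stdint.h>

#define NUM_ENDPOINTS 2048
#define NUM_PES       4   /* processing engines, per the earlier example */
#define NUM_TCS       4   /* assumed traffic-class count */

/* Processing-engine map: one PE number per physical endpoint, written
 * by software when it load-balances endpoints across engines. */
static uint8_t pe_map[NUM_ENDPOINTS];  /* values in [0, NUM_PES) */

/* Pick the second-rank queue for a dequeued match request: one MRQ
 * exists per (processing engine, traffic class) pair. */
static unsigned pe_tc_mrq_index(unsigned portal_idx, unsigned tc)
{
    return (unsigned)pe_map[portal_idx] * NUM_TCS + tc;
}
```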


In some embodiments, to further increase the list-matching speed, the system can also use a unified processing pipeline to search persistent list entries pertaining to an unordered network interface and to search entries pertaining to an ordered network interface. More specifically, the PLEC enables very fast, one-unit delay lookups.



FIG. 4B shows an exemplary block diagram of a persistent list entry cache (PLEC). The PLEC stores a number of entries (e.g., up to 256), matching on the physical portal index. When a physical endpoint has an entry in the cache, the PLEC allows its physical endpoint match request queue to be dequeued at a full rate without blocking.


When an IXE match-request queue (MRQ) is dequeued for a physical endpoint that matches in the PLEC, the PLEC forwards the list entry (LE) to the memory block that stores the match requests. When the CQ MRQ is dequeued, or when the IXE MRQ is dequeued and misses in the PLEC, a blocked bit is set for the physical endpoint. The PLEC maintains a blocked bit for each physical endpoint, ensuring that matching requests and commands are processed atomically, while non-matching IXE requests to qualified persistent list entries are satisfied without blocking.


The PLEC intercepts IXE requests that match in its cache before they are enqueued in the processing-engine/traffic-class queue. When a persistent list entry is copied from the cache, a dequeue is not initiated from the processing-engine/traffic-class queue on that cycle so that the persistent list entry (LE) may advance through the pipeline to the memory of the physical endpoint. More specifically, when a PLEC hit occurs, a dequeue from the PE/TC MRQ is suppressed in order to create a bubble in the pipeline. The dequeue is suppressed as the PLEC memory (i.e., the LE cache) is read so that the PLEC data is available when the bubble occurs. The LE from the PLEC and its match-request ID can be forwarded to the memory block of the physical endpoint.


The PLEC receives allocation and de-allocation requests from the processing engines. An allocation request arrives when a processing engine matches a network request with a persistent LE on the priority list that has events relating to packet matching disabled, in a non-matching, non-space-checking physical endpoint. An allocation request for a physical endpoint that hits an existing entry in the PLEC has no effect. Otherwise, an entry is allocated. If the cache is full, an entry is evicted using round-robin selection. When a processing engine unlinks a cacheable list entry, it sends a de-allocation request to the PLEC. If the PLEC contains an entry with a matching physical endpoint, the PLEC evicts the corresponding entry.
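The allocation and de-allocation behavior described here can be modeled with a small cache in C. The sketch assumes 256 slots keyed only on the physical portal index, and uses the round-robin eviction the text describes; the cached LE payload is omitted, and the hardware lookup is parallel rather than a loop.

```c
#include <stdbool.h>
#include <stdint.h>

#define PLEC_ENTRIES 256

struct plec_slot { bool valid; uint16_t portal_idx; /* + cached LE */ };

static struct plec_slot plec[PLEC_ENTRIES];
static unsigned evict_next;  /* round-robin eviction pointer */

static int plec_find(uint16_t portal_idx)
{
    for (unsigned i = 0; i < PLEC_ENTRIES; i++)
        if (plec[i].valid && plec[i].portal_idx == portal_idx)
            return (int)i;
    return -1;
}

/* Allocation: a hit leaves the cache unchanged; otherwise take a free
 * slot, or evict one round-robin when the cache is full. */
static void plec_allocate(uint16_t portal_idx)
{
    if (plec_find(portal_idx) >= 0)
        return;                             /* hit: no effect */
    for (unsigned i = 0; i < PLEC_ENTRIES; i++) {
        if (!plec[i].valid) {
            plec[i] = (struct plec_slot){ true, portal_idx };
            return;
        }
    }
    plec[evict_next] = (struct plec_slot){ true, portal_idx };
    evict_next = (evict_next + 1) % PLEC_ENTRIES;
}

/* De-allocation: evict the entry for this endpoint if present. */
static void plec_deallocate(uint16_t portal_idx)
{
    int i = plec_find(portal_idx);
    if (i >= 0)
        plec[i].valid = false;
}
```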


The LPE block on the NIC plays an important role in processing MPI messages. As discussed before, MPI implements the "eager" protocol for handling small messages and the "rendezvous" protocol for handling large messages. More specifically, "eager" implies that the data is sent along with the PUT command (message). The system software sets an upper limit on the size of eager messages; messages exceeding that limit must be sent using the rendezvous protocol.


In the software implementation of the eager protocol, data is delivered to a system buffer, from which the data must be copied to a user buffer. Although this approach reduces synchronization, it is expensive in terms of memory capacity and memory bandwidth. In some embodiments, the NIC can provide a mechanism for the eager messages to be written directly to the user's buffer, in cases where the target address can be determined quickly.


More specifically, when the LPE receives the first request packet (which contains the MPI message envelope), it searches the physical endpoint's priority list for a matching buffer. The matching can be performed based on the source, a set of match bits carried in the message, and buffer-specific match and ignore bits. The matched list entry contains information, including the start address, length, translation context, and various attributes, describing where the PUT data (i.e., the eager message) is to be written, thus allowing the direct memory access (DMA) dispatch logic to write data directly into the user buffer. If no match is found in the priority list, the LPE searches the overflow list for a description of the memory parameters into which it can write the PUT data, and appends a list entry describing the message to the unexpected list.
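The match-bits/ignore-bits comparison can be expressed in one line of C, in the style of Portals-like matching; the 64-bit width is an assumption, and the source check (described above) is handled separately.

```c
#include <stdbool.h>
#include <stdint.h>

/* A list entry matches when every bit not masked off by ignore_bits
 * agrees between the entry's match_bits and the message's bits. */
static bool le_matches(uint64_t match_bits, uint64_t ignore_bits,
                       uint64_t msg_match_bits)
{
    return ((match_bits ^ msg_match_bits) & ~ignore_bits) == 0;
}
```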


In the software implementation of the rendezvous protocol, the bulk data transfer is delayed until the target address is known. While this approach reduces the use of system memory, it requires software intervention in order to ensure progression. In some embodiments, the rendezvous protocol is offloaded to the NIC, providing strong progression.


More specifically, when transferring large MPI messages, the initiator can send a small initial message containing the MPI envelope used for matching and a modest amount of eager data. On completion of the match operation, the target performs a GET to transfer the bulk data to the user buffer. This can enhance the network performance, because bulk data is delivered as GET responses, which are unordered. The network can adaptively route them on a packet-by-packet basis.
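The offloaded rendezvous sequence at the target can be sketched as follows, assuming hypothetical helpers deliver_eager() and issue_get() that stand in for the NIC's DMA and GET machinery; the message layout is illustrative.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helpers standing in for the NIC's DMA and GET logic. */
void deliver_eager(void *dst, const void *src, size_t len);
void issue_get(uint64_t remote_addr, void *local_dst, size_t len);

struct rndv_msg {
    uint64_t    initiator_addr;  /* where the bulk data lives at the sender */
    size_t      bulk_len;
    const void *eager_data;      /* modest eager portion carried in the message */
    size_t      eager_len;
};

/* On completion of the match, the target writes the eager portion and
 * pulls the remaining bulk data with a GET; the GET responses are
 * unordered and can be adaptively routed packet by packet. */
static void on_rendezvous_match(const struct rndv_msg *m, void *user_buf)
{
    deliver_eager(user_buf, m->eager_data, m->eager_len);
    issue_get(m->initiator_addr,
              (char *)user_buf + m->eager_len, m->bulk_len);
}
```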


If a rendezvous request ends up on the unexpected list, the GET is launched when the user process posts the matching append. Launch of the rendezvous GET is the same for both cases; it is the completion of the match of a rendezvous PUT request that triggers the launch.


This is a valuable offload. MPI applications are expected to post non-blocking receives early and then return to computation. Offloading rendezvous to the NIC ensures good overlap of computation and communication. The NIC performs the match and asynchronously instantiates the bulk data movement, thus providing strong progression.



FIG. 5 shows a flow chart of performing list matching in a NIC. During operation, the NIC may receive a match request (operation 502). The match request can be a command from the CQ for manipulating the lists or updating the physical endpoint state, or a message-match request from the IXE. The match request can be enqueued to an appropriate MRQ based on its type (operation 504). An arbitrator selects an MRQ to dequeue a match request (operation 506) and sends the dequeued match request to a lookup table, also referred to as the processing engine map, to determine a processing engine for processing the match request (operation 508). The determination can be based on the physical portal index (i.e., the identification of the physical endpoint).


The match request is also sent to the PLEC (operation 510), which attempts to find a match (operation 512). In response to a match found in the PLEC, the PLEC outputs the matching entry (operation 514). Otherwise, the match request is sent to a PE/TC MRQ (operation 516). An arbitrator selects a PE/TC MRQ to dequeue (operation 518). In some embodiments, the arbitration may occur in two steps. At the first step, a ready processing engine is selected using round-robin. At the second step, a ready TC within that processing engine can be selected using a weighted round-robin arbitration, with each TC having a predetermined weight factor.
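The second arbitration step can be sketched as a credit-based weighted round-robin in C. The weights, the TC count, and the credit-refresh policy are assumptions; the text specifies only that each TC has a predetermined weight factor.

```c
#include <stdbool.h>

#define NUM_TCS 4  /* assumed traffic-class count */

static const unsigned tc_weight[NUM_TCS] = {4, 2, 1, 1};  /* assumed weights */
static unsigned tc_credit[NUM_TCS];
static unsigned last_tc;

/* Select a ready traffic class by weighted round-robin: a TC is
 * eligible while it has credit; when no eligible TC remains, credits
 * refresh from the weights. Returns the chosen TC, or -1 if none is
 * ready. */
static int wrr_select_tc(const bool ready[NUM_TCS])
{
    for (int pass = 0; pass < 2; pass++) {
        for (unsigned i = 1; i <= NUM_TCS; i++) {
            unsigned tc = (last_tc + i) % NUM_TCS;
            if (ready[tc] && tc_credit[tc] > 0) {
                tc_credit[tc]--;
                last_tc = tc;
                return (int)tc;
            }
        }
        for (unsigned tc = 0; tc < NUM_TCS; tc++)  /* refresh credits */
            tc_credit[tc] = tc_weight[tc];
    }
    return -1;  /* no TC has a queued request */
}
```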


The dequeued request from the PE/TC MRQ is sent to the corresponding processing engine, which in turn searches for the matching list entry (operation 520). The matching operations of the processing engine are shown in FIG. 3B.


Exemplary Computer System



FIG. 6 shows an exemplary computer system equipped with a NIC that facilitates MPI list matching. Computer system 650 includes a processor 652, a memory device 654, and a storage device 656. Memory device 654 can include a volatile memory device (e.g., a dual in-line memory module (DIMM)). Furthermore, computer system 650 can be coupled to a keyboard 662, a pointing device 664, and a display device 666. Storage device 656 can store an operating system 670. An application 672 can operate on operating system 670.


Computer system 650 can be equipped with a host interface coupling a NIC 620 that facilitates efficient MPI list matching. NIC 620 can provide one or more HNIs to computer system 650. NIC 620 can be coupled to a switch 602 via one of the HNIs. NIC 620 can include a list-processing logic block 630, as described in conjunction with FIG. 2B. List-processing logic block 630 can include a match-request queue (MRQ) logic block 632 that stores to-be-processed match requests, a PLEC logic block 634 that facilitates fast lookups, and a processing engine 636 for matching an incoming match request to a list entry stored in the memory banks.


In summary, the present disclosure describes a NIC that facilitates MPI list matching. The NIC can include a host interface, a network interface, and a hardware LPE. The host interface can couple the NIC to a host device. The network interface can couple the NIC to a network. The hardware LPE can achieve high-speed list matching. A high degree of parallelism can be achieved by implementing multiple processing engines (PEs) and multiple memory banks within each processing engine. Because each processing engine or TC is allocated its own queue, the system prevents a processing engine or TC from blocking the queues of other processing engines or TCs. In the hardware list-processing engine, the match pipeline stage and the match termination condition overlap to reduce latency. The NIC also enables the offloading of the processing of MPI messages, including both eager and rendezvous messages.


The methods and processes described above can be performed by hardware logic blocks, modules, or apparatus. The hardware logic blocks, modules, or apparatus can include, but are not limited to, application-specific integrated circuit (ASIC) chips, field-programmable gate arrays (FPGAs), dedicated or shared processors that execute a piece of code at a particular time, and other programmable-logic devices now known or later developed. When the hardware logic blocks, modules, or apparatus are activated, they perform the methods and processes included within them.


The methods and processes described herein can also be embodied as code or data, which can be stored in a storage device or computer-readable storage medium. When a processor reads and executes the stored code or data, the processor can perform these methods and processes.


The foregoing descriptions of embodiments of the present invention have been presented for purposes of illustration and description only. They are not intended to be exhaustive or to limit the present invention to the forms disclosed. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. Additionally, the above disclosure is not intended to limit the present invention. The scope of the present invention is defined by the appended claims.

Claims
  • 1. A network interface controller (NIC), comprising: a host interface to couple a host device; a network interface to couple a network; and a hardware list-processing engine (LPE) to: receive a match request; and perform message passing interface (MPI) list matching based on the received match request, wherein the LPE comprises a persistent list entry cache (PLEC) to store previously matched list entries, and wherein performing the MPI list matching comprises bypassing a processing pipeline comprising a lookup table and a number of match-request queues in response to a matched entry being found in the PLEC for the received match request.
  • 2. The network interface controller of claim 1, wherein the match request comprises: a match request corresponding to a command received via the host interface; or a match request corresponding to a message received via the network interface.
  • 3. The network interface controller of claim 2, wherein the LPE is further to: maintain a first set of match-request queues for match requests corresponding to received commands; and maintain a second set of match-request queues for match requests corresponding to received messages; wherein a number of queues in the first or second set of match-request queues corresponds to a number of physical endpoints supported by the network interface controller.
  • 4. The network interface controller of claim 2, wherein the message is an MPI message.
  • 5. The network interface controller of claim 4, wherein the message is based on an eager protocol or a rendezvous protocol associated with MPI.
  • 6. The network interface controller of claim 1, wherein the LPE further comprises a plurality of processing elements; and wherein a respective processing element comprises a plurality of matching engines and a plurality of memory banks storing one or more lists, wherein the memory banks are interconnected with the matching engines using a crossbar.
  • 7. The network interface controller of claim 6, wherein a respective matching engine comprises a unified search pipeline for searching the one or more lists, and wherein the one or more lists comprise a priority list and an unexpected list.
  • 8. The network interface controller of claim 6, wherein a respective matching engine comprises a single pipeline stage to perform, in parallel, a match operation on a previous match request and a computation to determine a current read or write address.
  • 9. The network interface controller of claim 1, wherein the LPE is further to perform atomic search operations in a plurality of lists.
  • 10. A method, comprising: receiving, by a network interface controller (NIC), a match request, wherein the NIC comprises a host interface to couple a host device and a network interface to couple a network; and performing, by a hardware list-processing engine (LPE) in the NIC, message passing interface (MPI) list matching based on the received match request, wherein the LPE comprises a persistent list entry cache (PLEC) to store previously matched list entries, and wherein performing the MPI list matching comprises bypassing a processing pipeline comprising a lookup table and a number of match-request queues in response to a matched entry being found in the PLEC for the received match request.
  • 11. The method of claim 10, wherein the match request comprises: a match request corresponding to a command received via the host interface; or a match request corresponding to a message received via the network interface.
  • 12. The method of claim 11, further comprising: enqueuing, by the LPE, match requests corresponding to received commands in a first set of match-request queues; and enqueuing, by the LPE, match requests corresponding to received messages in a second set of match-request queues; wherein a number of queues in the first or second set of match-request queues corresponds to a number of physical endpoints supported by the NIC.
  • 13. The method of claim 11, wherein the message is an MPI message.
  • 14. The method of claim 13, wherein the message is based on an eager protocol or a rendezvous protocol associated with MPI.
  • 15. The method of claim 10, wherein performing the MPI list matching comprises: selecting, from a plurality of processing elements within the hardware list-processing engine, a processing element to process the request; and selecting, from a plurality of matching engines within a respective processing element, a matching engine to perform a match operation, wherein a plurality of memory banks storing one or more lists are interconnected with the plurality of matching engines using a crossbar.
  • 16. The method of claim 15, wherein a respective matching engine comprises a unified search pipeline for searching the one or more lists, and wherein the one or more lists comprise a priority list and an unexpected list.
  • 17. The method of claim 15, wherein a respective matching engine performs the match operation on a previous match request, in parallel, with a computation to determine a current read or write address.
  • 18. The method of claim 10, wherein performing the MPI list matching comprises performing atomic search operations in a plurality of lists.
PCT Information
Filing Document Filing Date Country Kind
PCT/US2020/024311 3/23/2020 WO
Publishing Document Publishing Date Country Kind
WO2020/236295 11/26/2020 WO A
US Referenced Citations (427)
Number Name Date Kind
4807118 Lin et al. Feb 1989 A
5138615 Lamport et al. Aug 1992 A
5457687 Newman Oct 1995 A
5937436 Watkins Aug 1999 A
5960178 Cochinwala et al. Sep 1999 A
5970232 Passint et al. Oct 1999 A
5983332 Watkins Nov 1999 A
6112265 Harriman et al. Aug 2000 A
6230252 Passint et al. May 2001 B1
6246682 Roy et al. Jun 2001 B1
6493347 Sindhu et al. Dec 2002 B2
6545981 Garcia et al. Apr 2003 B1
6633580 Toerudbakken et al. Oct 2003 B1
6674720 Passint et al. Jan 2004 B1
6714553 Poole et al. Mar 2004 B1
6728211 Peris et al. Apr 2004 B1
6732212 Sugahara et al. May 2004 B2
6735173 Enoski et al. May 2004 B1
6894974 Aweva et al. May 2005 B1
7023856 Washabaugh et al. Apr 2006 B1
7133940 Blightman et al. Nov 2006 B2
7218637 Best et al. May 2007 B1
7269180 Bly et al. Sep 2007 B2
7305487 Blumrich et al. Dec 2007 B2
7337285 Tanoue Feb 2008 B2
7397797 Alfieri et al. Jul 2008 B2
7430559 Lomet Sep 2008 B2
7441006 Biran et al. Oct 2008 B2
7464174 Ngai Dec 2008 B1
7483442 Torudbaken et al. Jan 2009 B1
7562366 Pope et al. Jul 2009 B2
7593329 Kwan et al. Sep 2009 B2
7596628 Aloni et al. Sep 2009 B2
7620791 Wentzlaff et al. Nov 2009 B1
7633869 Morris et al. Dec 2009 B1
7639616 Manula et al. Dec 2009 B1
7734894 Wentzlaff et al. Jun 2010 B1
7774461 Tanaka et al. Aug 2010 B2
7782869 Chitlur Srinivasa Aug 2010 B1
7796579 Bruss Sep 2010 B2
7856026 Finan et al. Dec 2010 B1
7933282 Gupta et al. Apr 2011 B1
7953002 Opsasnick May 2011 B2
7975120 Sabbatini, Jr. et al. Jul 2011 B2
8014278 Subramanian et al. Sep 2011 B1
8023521 Woo et al. Sep 2011 B2
8050180 Judd Nov 2011 B2
8077606 Mark Dec 2011 B1
8103788 Miranda Jan 2012 B1
8160085 Voruganti et al. Apr 2012 B2
8175107 Yalagandula et al. May 2012 B1
8249072 Sugumar Aug 2012 B2
8281013 Mundkur et al. Oct 2012 B2
8352727 Chen et al. Jan 2013 B2
8353003 Noehring et al. Jan 2013 B2
8443151 Tang et al. May 2013 B2
8473783 Andrade et al. Jun 2013 B2
8543534 Alves et al. Sep 2013 B2
8619793 Lavian et al. Dec 2013 B2
8626957 Blumrich et al. Jan 2014 B2
8650582 Archer et al. Feb 2014 B2
8706832 Blocksome Apr 2014 B2
8719543 Kaminski et al. May 2014 B2
8811183 Anand et al. Aug 2014 B1
8948175 Bly et al. Feb 2015 B2
8966457 Ebcioglu Feb 2015 B2
8971345 McCanne et al. Mar 2015 B1
9001663 Attar et al. Apr 2015 B2
9053012 Northcott et al. Jun 2015 B1
9088496 Vaidya et al. Jul 2015 B2
9094327 Jacobs et al. Jul 2015 B2
9178782 Matthews et al. Nov 2015 B2
9208071 Talagala et al. Dec 2015 B2
9218278 Talagala et al. Dec 2015 B2
9231876 Mir et al. Jan 2016 B2
9231888 Bogdanski et al. Jan 2016 B2
9239804 Kegel et al. Jan 2016 B2
9269438 Nachimuthu et al. Feb 2016 B2
9276864 Pradeep Mar 2016 B1
9436651 Underwood et al. Sep 2016 B2
9455915 Sinha et al. Sep 2016 B2
9460178 Bashyam et al. Oct 2016 B2
9479426 Munger et al. Oct 2016 B2
9496991 Plamondon et al. Nov 2016 B2
9544234 Markine Jan 2017 B1
9548924 Pettit et al. Jan 2017 B2
9594521 Blagodurov et al. Mar 2017 B2
9635121 Mathew et al. Apr 2017 B2
9742855 Shuler et al. Aug 2017 B2
9762488 Previdi et al. Sep 2017 B2
9762497 Kishore et al. Sep 2017 B2
9830273 Bk et al. Nov 2017 B2
9838500 Ilan et al. Dec 2017 B1
9853900 Mula et al. Dec 2017 B1
9887923 Chorafakis et al. Feb 2018 B2
10003544 Liu et al. Jun 2018 B2
10009270 Stark et al. Jun 2018 B1
10031857 Menachem et al. Jul 2018 B2
10050896 Yang et al. Aug 2018 B2
10061613 Brooker et al. Aug 2018 B1
10063481 Jiang et al. Aug 2018 B1
10089220 McKelvie et al. Oct 2018 B1
10169060 Vincent et al. Jan 2019 B1
10178035 Dillon Jan 2019 B2
10200279 Aljaedi Feb 2019 B1
10218634 Aldebert et al. Feb 2019 B2
10270700 Burnette et al. Apr 2019 B2
10305772 Zur et al. May 2019 B2
10331590 MacNamara et al. Jun 2019 B2
10353833 Hagspiel et al. Jul 2019 B2
10454835 Contavalli et al. Oct 2019 B2
10498672 Graham et al. Dec 2019 B2
10567307 Fairhurst et al. Feb 2020 B2
10728173 Agrawal et al. Jul 2020 B1
10802828 Volpe et al. Oct 2020 B1
10817502 Talagala et al. Oct 2020 B2
11128561 Matthews et al. Sep 2021 B1
11271869 Agrawal et al. Mar 2022 B1
11416749 Bshara et al. Aug 2022 B2
11444886 Stawitzky et al. Sep 2022 B1
20010010692 Sindhu et al. Aug 2001 A1
20010047438 Forin Nov 2001 A1
20020174279 Wynne et al. Nov 2002 A1
20030016808 Hu et al. Jan 2003 A1
20030041168 Musoll Feb 2003 A1
20030110455 Baumgartner et al. Jun 2003 A1
20030174711 Shankar Sep 2003 A1
20030200363 Futral Oct 2003 A1
20030223420 Ferolito Dec 2003 A1
20040008716 Stiliadis Jan 2004 A1
20040059828 Hooper et al. Mar 2004 A1
20040095882 Hamzah et al. May 2004 A1
20040133634 Luke et al. Jul 2004 A1
20040223452 Santos et al. Nov 2004 A1
20050021837 Haselhorst et al. Jan 2005 A1
20050047334 Paul et al. Mar 2005 A1
20050088969 Carlsen et al. Apr 2005 A1
20050091396 Nilakantan et al. Apr 2005 A1
20050108444 Flauaus et al. May 2005 A1
20050108518 Pandya May 2005 A1
20050152274 Simpson Jul 2005 A1
20050182854 Pinkerton et al. Aug 2005 A1
20050270974 Mayhew Dec 2005 A1
20050270976 Yang et al. Dec 2005 A1
20060023705 Zoranovic et al. Feb 2006 A1
20060067347 Naik et al. Mar 2006 A1
20060075480 Noehring et al. Apr 2006 A1
20060174251 Pope et al. Aug 2006 A1
20060203728 Kwan et al. Sep 2006 A1
20070061433 Reynolds et al. Mar 2007 A1
20070070901 Aloni et al. Mar 2007 A1
20070198804 Moyer Aug 2007 A1
20070211746 Oshikiri et al. Sep 2007 A1
20070242611 Archer et al. Oct 2007 A1
20070268825 Corwin et al. Nov 2007 A1
20080013453 Chiang et al. Jan 2008 A1
20080013549 Okagawa et al. Jan 2008 A1
20080071757 Ichiriu et al. Mar 2008 A1
20080084864 Archer et al. Apr 2008 A1
20080091915 Moertl et al. Apr 2008 A1
20080126553 Boucher May 2008 A1
20080147881 Krishnamurthy et al. Jun 2008 A1
20080159138 Shepherd et al. Jul 2008 A1
20080253289 Naven et al. Oct 2008 A1
20090003212 Kwan et al. Jan 2009 A1
20090010157 Holmes et al. Jan 2009 A1
20090013175 Elliott Jan 2009 A1
20090055496 Garg et al. Feb 2009 A1
20090092046 Naven et al. Apr 2009 A1
20090141621 Fan et al. Jun 2009 A1
20090198958 Arimilli et al. Aug 2009 A1
20090259713 Blumrich et al. Oct 2009 A1
20090285222 Hoover et al. Nov 2009 A1
20100061241 Sindhu et al. Mar 2010 A1
20100169608 Kuo et al. Jul 2010 A1
20100172260 Kwan et al. Jul 2010 A1
20100183024 Gupta Jul 2010 A1
20100220595 Petersen Sep 2010 A1
20100274876 Kagan et al. Oct 2010 A1
20100302942 Shankar et al. Dec 2010 A1
20100316053 Miyoshi et al. Dec 2010 A1
20110051724 Scott et al. Mar 2011 A1
20110066824 Bestler Mar 2011 A1
20110072179 Lacroute et al. Mar 2011 A1
20110099326 Jung et al. Apr 2011 A1
20110110383 Yang et al. May 2011 A1
20110128959 Bando et al. Jun 2011 A1
20110158096 Leung et al. Jun 2011 A1
20110158248 Vorunganti et al. Jun 2011 A1
20110164496 Loh et al. Jul 2011 A1
20110173370 Jacobs et al. Jul 2011 A1
20110264822 Ferguson et al. Oct 2011 A1
20110276699 Pedersen Nov 2011 A1
20110280125 Jayakumar Nov 2011 A1
20110320724 Mejdrich et al. Dec 2011 A1
20120093505 Yeap et al. Apr 2012 A1
20120102506 Hopmann et al. Apr 2012 A1
20120117423 Andrade et al. May 2012 A1
20120137075 Vorbach May 2012 A1
20120144064 Parker et al. Jun 2012 A1
20120144065 Parker et al. Jun 2012 A1
20120147752 Ashwood-Smith et al. Jun 2012 A1
20120170462 Sinha Jul 2012 A1
20120170575 Mehra Jul 2012 A1
20120213118 Lindsay et al. Aug 2012 A1
20120250512 Jagadeeswaran et al. Oct 2012 A1
20120287821 Godfrey et al. Nov 2012 A1
20120297083 Ferguson et al. Nov 2012 A1
20120300669 Zahavi Nov 2012 A1
20120314707 Epps et al. Dec 2012 A1
20130010636 Regula Jan 2013 A1
20130039169 Schlansker et al. Feb 2013 A1
20130060944 Archer et al. Mar 2013 A1
20130103777 Kagan et al. Apr 2013 A1
20130121178 Mainaud et al. May 2013 A1
20130136090 Liu et al. May 2013 A1
20130182704 Jacobs et al. Jul 2013 A1
20130194927 Yamaguchi et al. Aug 2013 A1
20130203422 Masputra et al. Aug 2013 A1
20130205002 Wang et al. Aug 2013 A1
20130208593 Nandagopal Aug 2013 A1
20130246552 Underwood et al. Sep 2013 A1
20130290673 Archer et al. Oct 2013 A1
20130301645 Bogdanski et al. Nov 2013 A1
20130304988 Totolos et al. Nov 2013 A1
20130311525 Neerincx et al. Nov 2013 A1
20130329577 Suzuki et al. Dec 2013 A1
20130336164 Yang et al. Dec 2013 A1
20140019661 Hormuth et al. Jan 2014 A1
20140032695 Michels et al. Jan 2014 A1
20140036680 Lih et al. Feb 2014 A1
20140064082 Yeung et al. Mar 2014 A1
20140095753 Crupnicoff et al. Apr 2014 A1
20140098675 Frost et al. Apr 2014 A1
20140119367 Han et al. May 2014 A1
20140122560 Ramey et al. May 2014 A1
20140129664 McDaniel et al. May 2014 A1
20140133292 Yamatsu et al. May 2014 A1
20140136646 Tamir et al. May 2014 A1
20140169173 Naouri et al. Jun 2014 A1
20140185621 Decusatis et al. Jul 2014 A1
20140189174 Ajanovic et al. Jul 2014 A1
20140207881 Nussle et al. Jul 2014 A1
20140211804 Makikeni et al. Jul 2014 A1
20140226488 Shamis et al. Aug 2014 A1
20140241164 Cociglio et al. Aug 2014 A1
20140258438 Ayoub Sep 2014 A1
20140301390 Scott et al. Oct 2014 A1
20140307554 Basso et al. Oct 2014 A1
20140325013 Tamir et al. Oct 2014 A1
20140328172 Kumar et al. Nov 2014 A1
20140347997 Bergamasco et al. Nov 2014 A1
20140362698 Arad Dec 2014 A1
20140369360 Carlstrom Dec 2014 A1
20140379847 Williams Dec 2014 A1
20150003247 Mejia et al. Jan 2015 A1
20150006849 Xu et al. Jan 2015 A1
20150009823 Ganga et al. Jan 2015 A1
20150026361 Matthews et al. Jan 2015 A1
20150029848 Jain Jan 2015 A1
20150055476 Decusatis et al. Feb 2015 A1
20150055661 Boucher et al. Feb 2015 A1
20150067095 Gopal et al. Mar 2015 A1
20150089495 Persson et al. Mar 2015 A1
20150103667 Elias et al. Apr 2015 A1
20150124826 Edsall et al. May 2015 A1
20150146527 Kishore et al. May 2015 A1
20150154004 Aggarwal Jun 2015 A1
20150161064 Pope Jun 2015 A1
20150180782 Rimmer et al. Jun 2015 A1
20150186318 Kim et al. Jul 2015 A1
20150193262 Archer et al. Jul 2015 A1
20150195388 Snyder et al. Jul 2015 A1
20150208145 Parker et al. Jul 2015 A1
20150220449 Stark et al. Aug 2015 A1
20150237180 Swartzentruber et al. Aug 2015 A1
20150244617 Nakil et al. Aug 2015 A1
20150244804 Warfield et al. Aug 2015 A1
20150261434 Kagan et al. Sep 2015 A1
20150263955 Talaski et al. Sep 2015 A1
20150263994 Haramaty et al. Sep 2015 A1
20150288626 Aybay Oct 2015 A1
20150365337 Pannell Dec 2015 A1
20150370586 Cooper et al. Dec 2015 A1
20160006664 Sabato et al. Jan 2016 A1
20160012002 Arimilli et al. Jan 2016 A1
20160028613 Haramaty et al. Jan 2016 A1
20160065455 Wang et al. Mar 2016 A1
20160094450 Ghanwani et al. Mar 2016 A1
20160134518 Callon et al. May 2016 A1
20160134535 Callon May 2016 A1
20160134559 Abel et al. May 2016 A1
20160134573 Gagliardi et al. May 2016 A1
20160142318 Beecroft May 2016 A1
20160154756 Dodson et al. Jun 2016 A1
20160182383 Pedersen Jun 2016 A1
20160205023 Janardhanan Jul 2016 A1
20160226797 Aravinthan et al. Aug 2016 A1
20160254991 Eckert et al. Sep 2016 A1
20160259394 Ragavan Sep 2016 A1
20160283422 Crupnicoff et al. Sep 2016 A1
20160285545 Schmidtke et al. Sep 2016 A1
20160285677 Kashyap et al. Sep 2016 A1
20160294694 Parker et al. Oct 2016 A1
20160294926 Zur et al. Oct 2016 A1
20160301610 Amit et al. Oct 2016 A1
20160344620 G. Santos et al. Nov 2016 A1
20160381189 Caulfield et al. Dec 2016 A1
20170024263 Verplanken Jan 2017 A1
20170039063 Gopal et al. Feb 2017 A1
20170041239 Goldenberg et al. Feb 2017 A1
20170048144 Liu Feb 2017 A1
20170054633 Underwood et al. Feb 2017 A1
20170091108 Arellano et al. Mar 2017 A1
20170097840 Bridgers Apr 2017 A1
20170103108 Datta et al. Apr 2017 A1
20170118090 Pettit et al. Apr 2017 A1
20170118098 Littlejohn et al. Apr 2017 A1
20170153852 Ma et al. Jun 2017 A1
20170177541 Berman et al. Jun 2017 A1
20170220500 Tong Aug 2017 A1
20170237654 Turner et al. Aug 2017 A1
20170237671 Rimmer et al. Aug 2017 A1
20170242753 Sherlock et al. Aug 2017 A1
20170250914 Caulfield et al. Aug 2017 A1
20170251394 Johansson et al. Aug 2017 A1
20170270051 Chen et al. Sep 2017 A1
20170272331 Mendle Sep 2017 A1
20170272370 Ganga et al. Sep 2017 A1
20170286316 Doshi et al. Oct 2017 A1
20170289066 Haramaty et al. Oct 2017 A1
20170295098 Watkins et al. Oct 2017 A1
20170324664 Xu et al. Nov 2017 A1
20170371778 McKelvie et al. Dec 2017 A1
20180004705 Menachem et al. Jan 2018 A1
20180019948 Patwardhan et al. Jan 2018 A1
20180026878 Zahavi et al. Jan 2018 A1
20180077064 Wang Mar 2018 A1
20180083868 Cheng Mar 2018 A1
20180097645 Rajagopalan et al. Apr 2018 A1
20180097912 Chumbalkar et al. Apr 2018 A1
20180113618 Chan et al. Apr 2018 A1
20180115469 Erickson et al. Apr 2018 A1
20180131602 Civanlar et al. May 2018 A1
20180131678 Agarwal et al. May 2018 A1
20180150374 Ratcliff May 2018 A1
20180152317 Chang et al. May 2018 A1
20180152357 Natham et al. May 2018 A1
20180173557 Nakil et al. Jun 2018 A1
20180183724 Callard et al. Jun 2018 A1
20180191609 Caulfield et al. Jul 2018 A1
20180198736 Labonte et al. Jul 2018 A1
20180212876 Bacthu et al. Jul 2018 A1
20180212902 Steinmacher-Burow Jul 2018 A1
20180219804 Graham Aug 2018 A1
20180225238 Karguth et al. Aug 2018 A1
20180234343 Zdornov et al. Aug 2018 A1
20180254945 Bogdanski et al. Sep 2018 A1
20180260324 Marathe et al. Sep 2018 A1
20180278540 Shalev et al. Sep 2018 A1
20180287928 Levi et al. Oct 2018 A1
20180323898 Dods Nov 2018 A1
20180335974 Simionescu et al. Nov 2018 A1
20180341494 Sood et al. Nov 2018 A1
20190007349 Wang et al. Jan 2019 A1
20190018808 Beard et al. Jan 2019 A1
20190036771 Sharpless et al. Jan 2019 A1
20190042337 Dinan et al. Feb 2019 A1
20190042518 Marolia et al. Feb 2019 A1
20190044809 Willis et al. Feb 2019 A1
20190044827 Ganapathi et al. Feb 2019 A1
20190044863 Mula et al. Feb 2019 A1
20190044872 Ganapathi et al. Feb 2019 A1
20190044875 Murty et al. Feb 2019 A1
20190052327 Motozuka et al. Feb 2019 A1
20190058663 Song Feb 2019 A1
20190068501 Schneider et al. Feb 2019 A1
20190081903 Kobayashi et al. Mar 2019 A1
20190095134 Li Mar 2019 A1
20190104057 Goel et al. Apr 2019 A1
20190104206 Goel et al. Apr 2019 A1
20190108106 Aggarwal et al. Apr 2019 A1
20190108332 Ha Apr 2019 A1
20190109791 Mehra et al. Apr 2019 A1
20190121781 Kasichainula Apr 2019 A1
20190140979 Levi et al. May 2019 A1
20190146477 Cella et al. May 2019 A1
20190171612 Shahar et al. Jun 2019 A1
20190196982 Rozas et al. Jun 2019 A1
20190199646 Singh et al. Jun 2019 A1
20190253354 Caulfield et al. Aug 2019 A1
20190280978 Schmatz et al. Sep 2019 A1
20190294575 Dennison et al. Sep 2019 A1
20190306134 Shanbhogue et al. Oct 2019 A1
20190332314 Zhang et al. Oct 2019 A1
20190334624 Bernard Oct 2019 A1
20190356611 Das et al. Nov 2019 A1
20190361728 Kumar et al. Nov 2019 A1
20190379610 Srinivasan et al. Dec 2019 A1
20200036644 Belogolovy et al. Jan 2020 A1
20200084150 Burstein et al. Mar 2020 A1
20200145725 Eberle et al. May 2020 A1
20200177505 Li Jun 2020 A1
20200177521 Blumrich et al. Jun 2020 A1
20200259755 Wang et al. Aug 2020 A1
20200272579 Humphrey et al. Aug 2020 A1
20200274832 Humphrey et al. Aug 2020 A1
20200334195 Chen et al. Oct 2020 A1
20200349098 Caulfield et al. Nov 2020 A1
20200364088 Ashwathnarayan Nov 2020 A1
20210081410 Chavan et al. Mar 2021 A1
20210152494 Johnsen et al. May 2021 A1
20210263779 Haghighat et al. Aug 2021 A1
20210334206 Colgrove et al. Oct 2021 A1
20210377156 Michael et al. Dec 2021 A1
20210409351 Das et al. Dec 2021 A1
20220131768 Ganapathi et al. Apr 2022 A1
20220166705 Froese May 2022 A1
20220200900 Roweth Jun 2022 A1
20220210058 Bataineh et al. Jun 2022 A1
20220217078 Ford et al. Jul 2022 A1
20220217101 Yefet et al. Jul 2022 A1
20220245072 Roweth et al. Aug 2022 A1
20220278941 Shalev et al. Sep 2022 A1
20220309025 Chen et al. Sep 2022 A1
20230035420 Sankaran et al. Feb 2023 A1
20230046221 Pismenny et al. Feb 2023 A1
Foreign Referenced Citations (32)
Number Date Country
101729609 Jun 2010 CN
102932203 Feb 2013 CN
110324249 Oct 2019 CN
110601888 Dec 2019 CN
0275135 Jul 1988 EP
2187576 May 2010 EP
2219329 Aug 2010 EP
2947832 Nov 2015 EP
3445006 Feb 2019 EP
2003-244196 Aug 2003 JP
3459653 Oct 2003 JP
10-2012-0062864 Jun 2012 KR
10-2012-0082739 Jul 2012 KR
10-2014-0100529 Aug 2014 KR
10-2015-0026939 Mar 2015 KR
10-2015-0104056 Sep 2015 KR
10-2017-0110106 Oct 2017 KR
10-1850749 Apr 2018 KR
2001069851 Sep 2001 WO
0247329 Jun 2002 WO
2003019861 Mar 2003 WO
2004001615 Dec 2003 WO
2005094487 Oct 2005 WO
2007034184 Mar 2007 WO
2009010461 Jan 2009 WO
2009018232 Feb 2009 WO
2014092780 Jun 2014 WO
2014137382 Sep 2014 WO
2014141005 Sep 2014 WO
2018004977 Jan 2018 WO
2018046703 Mar 2018 WO
2019072072 Apr 2019 WO
Non-Patent Literature Citations (73)
Awerbuch, B., et al.; “An On-Demand Secure Routing Protocol Resilient to Byzantine Failures”; Sep. 2002; 10 pages.
Belayneh, L.W., et al.; "Method and Apparatus for Routing Data in an Inter-Nodal Communications Lattice of a Massively Parallel Computer System by Semi-Randomly Varying Routing Policies for Different Packets"; 2019; 3 pages.
Bhatele, A., et al.; “Analyzing Network Health and Congestion in Dragonfly-based Supercomputers”; May 23-27, 2016; 10 pages.
Blumrich, M.A., et al.; “Exploiting Idle Resources in a High-Radix Switch for Supplemental Storage”; Nov. 2018; 13 pages.
Chang, F., et al.; “PVW: Designing Vir PVW: Designing Virtual World Ser orld Server Infr er Infrastructur astructure”; 2010; 8 pages.
Chang, F., et al.; “PVW: Designing Virtual World Server Infrastructure”; 2010; 8 pages.
Chen, F., et al.; “Requirements for RoCEv3 Congestion Management”; Mar. 21, 2019; 8 pages.
Cisco Packet Tracer; "packet-tracer through ping"; https://www.cisco.com/c/en/us/td/docs/security/asa/asa-command-reference/l-R/cmdref2/p1.html; 2017.
Cisco; “Understanding Rapid Spanning Tree Protocol (802.1w)”; Aug. 1, 2017; 13 pages.
Eardley, P., Ed.; "Pre-Congestion Notification (PCN) Architecture"; Jun. 2009; 54 pages.
Escudero-Sahuquillo, J., et al.; “Combining Congested-Flow Isolation and Injection Throttling in HPC Interconnection Networks”; Sep. 13-16, 2011; 3 pages.
Hong, Y.; “Mitigating the Cost, Performance, and Power Overheads Induced by Load Variations in Multicore Cloud Servers”; Fall 2013; 132 pages.
Huawei; “The Lossless Network For Data Centers”; Nov. 7, 2017; 15 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024248, dated Jul. 8, 2020, 11 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US20/024332, dated Jul. 8, 2020, 13 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US20/24243, dated Jul. 9, 2020, 10 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US20/24253, dated Jul. 6, 2020, 12 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US20/24256, dated Jul. 7, 2020, 11 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US20/24257, dated Jul. 7, 2020, 10 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US20/24258, dated Jul. 7, 2020, 9 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US20/24259, dated Jul. 9, 2020, 13 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US20/24260, dated Jul. 7, 2020, 11 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US20/24268, dated Jul. 9, 2020, 11 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US20/24269, dated Jul. 9, 2020, 11 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US20/24339, dated Jul. 8, 2020, 11 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024125, dated Jul. 10, 2020, 5 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024129, dated Jul. 10, 2020, 11 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024237, dated Jul. 14, 2020, 5 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024239, dated Jul. 14, 2020, 11 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024241, dated Jul. 14, 2020, 13 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024242, dated Jul. 6, 2020, 11 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024244, dated Jul. 13, 2020, 10 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024245, dated Jul. 14, 2020, 11 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024246, dated Jul. 14, 2020, 10 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024250, dated Jul. 14, 2020, 12 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024254, dated Jul. 13, 2020, 10 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024262, dated Jul. 13, 2020, 10 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024266, dated Jul. 9, 2020, 10 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024270, dated Jul. 10, 2020, 13 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024271, dated Jul. 9, 2020, 10 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024272, dated Jul. 9, 2020, 10 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024276, dated Jul. 13, 2020, 9 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024304, dated Jul. 15, 2020, 11 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024321, dated Jul. 9, 2020, 9 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024324, dated Jul. 14, 2020, 10 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024327, dated Jul. 10, 2020, 15 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/24158, dated Jul. 6, 2020, 18 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/24251, dated Jul. 6, 2020, 11 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/24267, dated Jul. 6, 2020, 9 pages.
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US20/24303, dated Oct. 21, 2020, 9 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024311, dated Jul. 17, 2020, 8 pages.
Underwood, K.D., et al.; “A hardware acceleration unit for MPI queue processing”; Apr. 18, 2005; 10 pages.
Ramakrishnan, K., et al.; "The Addition of Explicit Congestion Notification (ECN) to IP"; RFC 3168; Sep. 2001.
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US20/24340, dated Oct. 26, 2020, 9 pages.
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US20/24342, dated Oct. 27, 2020, 10 pages.
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2020/024192, dated Oct. 23, 2020, 9 pages.
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2020/024221, dated Oct. 26, 2020, 9 pages.
International Search Report cited in PCT/US2020/024170 dated Dec. 16, 2020; 3 pages.
Maabi, S., et al.; “ERFAN: Efficient reconfigurable fault-tolerant deflection routing algorithm for 3-D Network-on-Chip”; Sep. 6-9, 2016.
Maglione-Mathey, G., et al.; “Scalable Deadlock-Free Deterministic Minimal-Path Routing Engine for InfiniBand-Based Dragonfly Networks”; Aug. 21, 2017; 15 pages.
Mamidala, A.R., et al.; “Efficient Barrier and Allreduce on Infiniband clusters using multicast and adaptive algorithms”; Sep. 20-23, 2004; 10 pages.
Mammeri, Z.; "Reinforcement Learning Based Routing in Networks: Review and Classification of Approaches"; Apr. 29, 2019; 35 pages.
Mollah, M.A., et al.; "High Performance Computing Systems. Performance Modeling, Benchmarking, and Simulation: 8th International Workshop"; Nov. 13, 2017.
Open Networking Foundation; “OpenFlow Switch Specification”; Mar. 26, 2015; 283 pages.
Prakash, P., et al.; “The TCP Outcast Problem: Exposing Unfairness in Data Center Networks”; 2011; 15 pages.
Ramakrishnan, K., et al.; “The Addition of Explicit Congestion Notification (ECN) to IP”; Sep. 2001; 63 pages.
Roth, P.C., et al.; "MRNet: A Software-Based Multicast/Reduction Network for Scalable Tools"; Nov. 15-21, 2003; 16 pages.
Silveira, J., et al.; “Preprocessing of Scenarios for Fast and Efficient Routing Reconfiguration in Fault-Tolerant NoCs”; Mar. 4-6, 2015.
Tsunekawa, K.; “Fair bandwidth allocation among LSPs for AF class accommodating TCP and UDP traffic in a Diffserv-capable MPLS network”; Nov. 17, 2005; 9 pages.
Wu, J.; “Fault-tolerant adaptive and minimal routing in mesh-connected multicomputers using extended safety levels”; Feb. 2000; 11 pages.
Xiang, D., et al.; “Fault-Tolerant Adaptive Routing in Dragonfly Networks”; Apr. 12, 2017; 15 pages.
Xiang, D., et al.; “Deadlock-Free Broadcast Routing in Dragonfly Networks without Virtual Channels”, submission to IEEE transactions on Parallel and Distributed Systems, 2015, 15 pages.
Extended European Search Report and Search Opinion received for EP Application No. 20809930.9, dated Mar. 2, 2023, 9 pages.
Extended European Search Report and Search Opinion received for EP Application No. 20810784.7, dated Mar. 9, 2023, 7 pages.
Related Publications (1)
Number Date Country
20220229800 A1 Jul 2022 US
Provisional Applications (3)
Number Date Country
62852273 May 2019 US
62852203 May 2019 US
62852289 May 2019 US