System and method for facilitating efficient event notification management for a network interface controller (NIC)

Information

  • Patent Grant
  • Patent Number
    11,991,072
  • Date Filed
    Monday, March 23, 2020
  • Date Issued
    Tuesday, May 21, 2024
Abstract
A network interface controller (NIC) capable of efficient event management is provided. The NIC can be equipped with a host interface, a first memory device, and an event management module. During operation, the host interface can couple the NIC to a host device. The event management module can identify an event associated with an event queue stored in a second memory device of the host device. The event management module can insert, into a buffer, an event notification associated with the event. The buffer can be associated with the event queue and stored in the first memory device. If the buffer has met a release criterion, the event management module can insert, via the host interface, the aggregated event notifications into the event queue.
Description
BACKGROUND
Field

This is generally related to the technical field of networking. More specifically, this disclosure is related to systems and methods for facilitating a network interface controller (NIC) with efficient event management.


Related Art

As network-enabled devices and applications become progressively more ubiquitous, various types of traffic as well as the ever-increasing network load continue to demand more performance from the underlying network architecture. For example, applications such as high-performance computing (HPC), media streaming, and Internet of Things (IoT) can generate different types of traffic with distinctive characteristics. As a result, in addition to conventional network performance metrics such as bandwidth and delay, network architects continue to face challenges such as scalability, versatility, and efficiency.


SUMMARY

A network interface controller (NIC) capable of efficient event management is provided. The NIC can be equipped with a host interface, a first memory device, and an event management logic block. During operation, the host interface can couple the NIC to a host device. The event management logic block can identify an event associated with an event queue stored in a second memory device of the host device. The event management logic block can insert, into a buffer, an event notification associated with the event. The buffer can be associated with the event queue and stored in the first memory device. If the buffer has met a release criterion, the event management logic block can insert, via the host interface, the aggregated event notifications into the event queue.





BRIEF DESCRIPTION OF THE FIGURES


FIG. 1 shows an exemplary network.



FIG. 2A shows an exemplary NIC chip with a plurality of NICs.



FIG. 2B shows an exemplary architecture of a NIC.



FIG. 3A shows an exemplary efficient notification management process in a NIC.



FIG. 3B shows an exemplary combining buffer for facilitating efficient event notification management in a NIC.



FIG. 4A shows a flow chart of an event notification combination process in a NIC.



FIG. 4B shows a flow chart of an insertion process for a combining buffer in a NIC.



FIG. 4C shows a flow chart of a timer management process for a combining buffer of a NIC.



FIG. 5 shows an exemplary computer system equipped with a NIC that facilitates efficient event notification management.





In the figures, like reference numerals refer to the same figure elements.


DETAILED DESCRIPTION

Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the present invention is not limited to the embodiments shown.


Overview


The present disclosure describes systems and methods that facilitate efficient notification management in a network interface controller (NIC). The NIC allows a host to communicate with a data-driven network.


The embodiments described herein solve the problem of under-utilization of host interface width caused by inefficient event notification transfers by (i) aggregating event notifications in a combining buffer of a NIC, and (ii) providing the aggregated event notifications via the host interface, thereby utilizing the width of the host interface for a respective event notification.


During operation, an application, which can operate on a host device of a NIC, can issue a command for a data operation (e.g., a “GET” or a “PUT” command of remote direct memory access (RDMA)) to the NIC. Consequently, the host device can transfer the command (e.g., a direct memory access (DMA) descriptor of the command) to the NIC. Upon completion of the operation, the NIC may notify the host device that the operation has been completed. In addition to the event completion notifications, the NIC may also need to provide other event notifications, such as an error notification or telemetry data obtained via measurements. Typically, the memory of the host device can maintain one or more event queues associated with different events. The NIC can send an event notification to be written into a corresponding event queue.


Upon receiving the event notification via the host interface, the host device can write the event notification into the event queue of the memory. The application can obtain the event notification from the event queue. However, an event notification may be relatively short compared to the bit width (or width) of the host interface. For example, the host interface can be a peripheral component interconnect express (PCIe) interface that can support N bytes of data transfer per clock cycle. In other words, the width of the host interface can be N bytes. However, a typical event notification may be M bytes long, where M<N. Consequently, issuing a PCIe write operation for an M-byte event notification can cause significant underutilization of the host interface.


To solve this problem, the NIC can aggregate a plurality of event notifications into a single notification and send the aggregated event notification via the host interface. The aggregated notification may have a length that can correspond to the width of the interface. As a result, the aggregated notification can efficiently utilize the width of the interface. During operation, the NIC can generate an event notification for an event and determine a corresponding event queue associated with the event. The NIC can then allocate the event notification to a combining buffer associated with the event queue. The NIC can continue to aggregate notifications associated with the event until a new event notification cannot be included in the buffer, or the new event notification fills the buffer. The NIC can then issue an interface-based write operation that inserts the event notifications from the buffer to the corresponding event queue. In this way, the NIC can utilize the width of the host interface, thereby facilitating efficient event management for the NIC.


One embodiment of the present invention provides a NIC that can be equipped with a host interface, a first memory device, and an event management logic block. During operation, the host interface can couple the NIC to a host device. The event management logic block can identify an event associated with an event queue stored in a second memory device of the host device. The event management logic block can insert, into a buffer, an event notification associated with the event. The buffer can be associated with the event queue and stored in the first memory device. If the buffer has met a release criterion, the event management logic block can insert, via the host interface, the aggregated event notifications into the event queue.


In a variation on this embodiment, the event management logic block can determine that the buffer has met the release criterion by determining that a timer for the buffer has reached a latency tolerance for the event queue.


In a further variation, the second memory device may store a plurality of event queues. A respective event queue can then be associated with a distinct latency tolerance.


In a variation on this embodiment, the buffer can be distributed across a plurality of memory logic blocks of the first memory device.


In a further variation, the granularity of the event notification may correspond to a multiple of a size of a memory logic block.


In a variation on this embodiment, if a new buffer has not been allocated for the event queue, the event management logic block can allocate the new buffer for the event queue.


In a variation on this embodiment, the release criterion can be based on one or more of: (i) the buffer not having the capacity for a new event notification, and (ii) the buffer being filled by the insertion of the event notification.


In a variation on this embodiment, if the event notification is an initial notification, the event management logic block can insert null events into a remainder of the buffer.


In a variation on this embodiment, the event management logic block can store event notifications of different sizes in the same buffer.


In a variation on this embodiment, the host interface can be a peripheral component interconnect express (PCIe) interface. The event management logic block can then insert the aggregated event notifications into the event queue based on a PCIe write.


In this disclosure, the description in conjunction with FIG. 1 is associated with the network architecture, and the description in conjunction with FIG. 2A and onward provides more details on the architecture and operations associated with a NIC that supports efficient event management.



FIG. 1 shows an exemplary network. In this example, a network 100 of switches, which can also be referred to as a “switch fabric,” can include switches 102, 104, 106, 108, and 110. Each switch can have a unique address or ID within switch fabric 100. Various types of devices and networks can be coupled to a switch fabric. For example, a storage array 112 can be coupled to switch fabric 100 via switch 110; an InfiniBand (IB) based HPC network 114 can be coupled to switch fabric 100 via switch 108; a number of end hosts, such as host 116, can be coupled to switch fabric 100 via switch 104; and an IP/Ethernet network 118 can be coupled to switch fabric 100 via switch 102. In general, a switch can have edge ports and fabric ports. An edge port can couple to a device that is external to the fabric. A fabric port can couple to another switch within the fabric via a fabric link. Typically, traffic can be injected into switch fabric 100 via an ingress port of an edge switch, and leave switch fabric 100 via an egress port of another (or the same) edge switch. An ingress link can couple a NIC of an edge device (for example, an HPC end host) to an ingress edge port of an edge switch. Switch fabric 100 can then transport the traffic to an egress edge switch, which in turn can deliver the traffic to a destination edge device via another NIC.


Exemplary NIC Architecture



FIG. 2A shows an exemplary NIC chip with a plurality of NICs. With reference to the example in FIG. 1, a NIC chip 200 can be a custom application-specific integrated circuit (ASIC) designed for host 116 to work with switch fabric 100. In this example, chip 200 can provide two independent NICs 202 and 204. A respective NIC of chip 200 can be equipped with a host interface (HI) (e.g., an interface for connecting to the host processor) and one High-speed Network Interface (HNI) for communicating with a link coupled to switch fabric 100 of FIG. 1. For example, NIC 202 can include an HI 210 and an HNI 220, and NIC 204 can include an HI 211 and an HNI 221.


In some embodiments, HI 210 can be a peripheral component interconnect (PCI) or a peripheral component interconnect express (PCIe) interface. HI 210 can be coupled to a host via a host connection 201, which can include N (e.g., N can be 16 in some chips) PCIe Gen 4 lanes capable of operating at signaling rates up to 25 Gbps per lane. HNI 220 can facilitate a high-speed network connection 203, which can communicate with a link in switch fabric 100 of FIG. 1. HNI 220 can operate at aggregate rates of either 100 Gbps or 200 Gbps using M (e.g., M can be 4 in some chips) full-duplex serial lanes. Each of the M lanes can operate at 25 Gbps or 50 Gbps based on non-return-to-zero (NRZ) modulation or pulse amplitude modulation 4 (PAM4), respectively. HNI 220 can support the Institute of Electrical and Electronics Engineers (IEEE) 802.3 Ethernet-based protocols as well as an enhanced frame format that provides support for higher rates of small messages.


NIC 202 can support one or more of: point-to-point message passing based on Message Passing Interface (MPI), remote memory access (RMA) operations, offloading and progression of bulk data collective operations, and Ethernet packet processing. When the host issues an MPI message, NIC 202 can match the corresponding message type. Furthermore, NIC 202 can implement both eager protocol and rendezvous protocol for MPI, thereby offloading the corresponding operations from the host.


Furthermore, the RMA operations supported by NIC 202 can include PUT, GET, and Atomic Memory Operations (AMO). NIC 202 can provide reliable transport. For example, if NIC 202 is a source NIC, NIC 202 can provide a retry mechanism for idempotent operations. Furthermore, a connection-based error detection and retry mechanism can be used for ordered operations that may manipulate a target state. The hardware of NIC 202 can maintain the state necessary for the retry mechanism. In this way, NIC 202 can remove the burden from the host (e.g., the software). The policy that dictates the retry mechanism can be specified by the host via the driver software, thereby ensuring flexibility in NIC 202.


Furthermore, NIC 202 can facilitate triggered operations, a general-purpose mechanism for offloading, and progression of dependent sequences of operations, such as bulk data collectives. NIC 202 can support an application programming interface (API) (e.g., libfabric API) that facilitates fabric communication services provided by switch fabric 100 of FIG. 1 to applications running on host 116. NIC 202 can also support a low-level network programming interface, such as Portals API. In addition, NIC 202 can provide efficient Ethernet packet processing, which can include efficient transmission if NIC 202 is a sender, flow steering if NIC 202 is a target, and checksum computation. Moreover, NIC 202 can support virtualization (e.g., using containers or virtual machines).



FIG. 2B shows an exemplary architecture of a NIC. In NIC 202, the port macro of HNI 220 can facilitate low-level Ethernet operations, such as physical coding sublayer (PCS) and media access control (MAC). In addition, NIC 202 can provide support for link layer retry (LLR). Incoming packets can be parsed by parser 228 and stored in buffer 229. Buffer 229 can be a PFC Buffer provisioned to buffer a threshold amount (e.g., one microsecond) of delay bandwidth. HNI 220 can also include control transmission unit 224 and control reception unit 226 for managing outgoing and incoming packets, respectively.


NIC 202 can include a Command Queue (CQ) unit 230. CQ unit 230 can be responsible for fetching and issuing host side commands. CQ unit 230 can include command queues 232 and schedulers 234. Command queues 232 can include two independent sets of queues for initiator commands (PUT, GET, etc.) and target commands (Append, Search, etc.), respectively. Command queues 232 can be implemented as circular buffers maintained in the memory of NIC 202. Applications running on the host can write to command queues 232 directly. Schedulers 234 can include two separate schedulers for initiator commands and target commands, respectively. The initiator commands are sorted into flow queues 236 based on a hash function. One of flow queues 236 can be allocated to a unique flow. CQ unit 230 can further include a triggered operations module (or logic block) 238, which is responsible for queuing and dispatching triggered commands.


Outbound transfer engine (OXE) 240 can pull commands from flow queues 236 in order to process them for dispatch. OXE 240 can include an address translation request unit (ATRU) 244 that can send address translation requests to address translation unit (ATU) 212. ATU 212 can provide virtual to physical address translation on behalf of different engines, such as OXE 240, inbound transfer engine (IXE) 250, and event engine (EE) 216. ATU 212 can maintain a large translation cache 214. ATU 212 can either perform translation itself or may use host-based address translation services (ATS). OXE 240 can also include message chopping unit (MCU) 246, which can fragment a large message into packets of sizes corresponding to a maximum transmission unit (MTU). MCU 246 can include a plurality of MCU modules. When an MCU module becomes available, the MCU module can obtain the next command from an assigned flow queue. The received data can be written into data buffer 242. The MCU module can then send the packet header, the corresponding traffic class, and the packet size to traffic shaper 248. Shaper 248 can determine which requests presented by MCU 246 can proceed to the network.
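
For illustration only, the following Python sketch models the chopping step performed by an MCU module: a large message is split into MTU-sized fragments, each of which would then be dispatched with its header to the traffic shaper. The 2,048-byte MTU and the 5,000-byte message are assumed values and are not parameters of MCU 246.

```python
# Illustrative sketch of MCU-style message chopping. The MTU and the message
# length below are assumptions, not chip parameters.

MTU = 2048  # assumed maximum transmission unit, in bytes

def chop_message(message: bytes, mtu: int = MTU):
    """Yield (offset, fragment) pairs covering the whole message."""
    for offset in range(0, len(message), mtu):
        yield offset, message[offset:offset + mtu]

if __name__ == "__main__":
    msg = bytes(5000)  # a 5,000-byte message, for illustration only
    print([len(fragment) for _, fragment in chop_message(msg)])  # [2048, 2048, 904]
```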


Subsequently, the selected packet can be sent to packet and connection tracking (PCT) 270. PCT 270 can store the packet in a queue 274. PCT 270 can also maintain state information for outbound commands and update the state information as responses are returned. PCT 270 can also maintain packet state information (e.g., allowing responses to be matched to requests), message state information (e.g., tracking the progress of multi-packet messages), initiator completion state information, and retry state information (e.g., maintaining the information required to retry a command if a request or response is lost). If a response is not returned within a threshold time, the corresponding command can be retrieved from retry buffer 272. PCT 270 can facilitate connection management for initiator and target commands based on source tables 276 and target tables 278, respectively. For example, PCT 270 can update its source tables 276 to track the necessary state for reliable delivery of the packet and message completion notification. PCT 270 can forward outgoing packets to HNI 220, which stores the packets in outbound queue 222.


NIC 202 can also include an IXE 250, which provides packet processing if NIC 202 is a target or a destination. IXE 250 can obtain the incoming packets from HNI 220. Parser 256 can parse the incoming packets and pass the corresponding packet information to a List Processing Engine (LPE) 264 or a Message State Table (MST) 266 for matching. LPE 264 can match incoming messages to buffers. LPE 264 can determine the buffer and start address to be used by each message. LPE 264 can also manage a pool of list entries 262 used to represent buffers and unexpected messages. MST 266 can store matching results and the information required to generate target side completion events. An event can be an internal control message for communication among the elements of NIC 202. MST 266 can be used by unrestricted operations, including multi-packet PUT commands, and single-packet and multi-packet GET commands.


Subsequently, parser 256 can store the packets in packet buffer 254. IXE 250 can obtain the results of the matching for conflict checking. DMA write and AMO module 252 can then issue updates to the memory generated by write and AMO operations. If a packet includes a command that generates target side memory read operations (e.g., a GET request), the packet can be passed to the OXE 240. NIC 202 can also include an EE 216, which can receive requests to generate event notifications from other modules or units in NIC 202. An event notification can specify that either a full event or a counting event is generated. EE 216 can manage event queues, located within host processor memory, to which it writes full events. EE 216 can forward counting events to CQ unit 230.


Event Management in NIC



FIG. 3A shows an exemplary efficient notification management process in a NIC. In this example, a host device 300 can be equipped with a NIC 330. Device 300 can include a processor 302, a memory device 304, and an interface system 306. An HI 332 of NIC 330 can be coupled to interface system 306 of device 300. In some embodiments, HI 332 can be a PCIe interface, and interface system 306 can be a PCIe system that provides a slot for HI 332. NIC 330 can also include an EE 334 for managing events, as described in conjunction with FIG. 2B.


Typically, an application 308 running on device 300 can issue a command for a data operation (e.g., an RDMA operation) to NIC 330. Consequently, device 300 can transfer the command to NIC 330. Upon completion of the operation, NIC 330 may notify device 300 that the operation has been completed. In addition to the event completion notifications, NIC 330 may also need to provide other event notifications, such as an error notification or telemetry data obtained via measurements, to device 300. Typically, memory device 304 can maintain one or more event queues associated with different events. For example, memory device 304 can include an event queue 312 for a type of event, such as notifications for write operations.


Upon completion of an event, EE 334 can generate an event notification 322 (e.g., a DMA descriptor) associated with the event. EE 334 can then issue a corresponding interface-based write operation for storing event notification 322 into event queue 312. The write operation can be associated with an enqueue operation into event queue 312. Upon receiving event notification 322 via interface system 306, processor 302 can write event notification 322 into event queue 312. Application 308 can obtain event notification 322 from event queue 312. In this way, application 308 can determine whether an operation issued from application 308 has been completed.


However, event notification 322 may be relatively short compared to width 360 of interface system 306. For example, interface system 306 can provide a PCIe interface that can support 64 bytes of data transfer per clock cycle. Hence, width 360 of interface system 306 can be 64 bytes. However, event notification 322 may be significantly shorter than width 360. Consequently, issuing a PCIe write operation (e.g., the enqueue operation into event queue 312) for event notification 322 can cause significant underutilization of the transfer capability offered by width 360.
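
To make the underutilization concrete, the short Python calculation below compares per-notification writes against an aggregated write. It uses the 64-byte width from the example above and assumes an illustrative 16-byte event notification size.

```python
# Illustrative utilization comparison. The 64-byte width follows the PCIe
# example above; the 16-byte notification size is an assumption.

INTERFACE_WIDTH = 64    # bytes transferred per write cycle
NOTIFICATION_SIZE = 16  # assumed size of one event notification

# One write per notification uses only a fraction of the interface width.
per_event_utilization = NOTIFICATION_SIZE / INTERFACE_WIDTH              # 0.25

# Aggregating notifications until the write fills the width uses it fully.
events_per_write = INTERFACE_WIDTH // NOTIFICATION_SIZE                  # 4
aggregated_utilization = events_per_write * NOTIFICATION_SIZE / INTERFACE_WIDTH  # 1.0

print(per_event_utilization, events_per_write, aggregated_utilization)
```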


To solve this problem, NIC 330 can aggregate a plurality of event notifications associated with event queue 312 into a single notification 320 and send aggregated event notification 320 via HI 332. During operation, upon retrieving an event, EE 334 can generate an event notification 322 and determine that event queue 312 is associated with the event. Instead of sending event notification 322 to host 300, EE 334 can identify a combining buffer 314 in NIC 330. Here, combining buffer 314 can be allocated for aggregating event notifications destined to event queue 312. If no combining buffer is allocated for event queue 312 when event notification 322 is generated, NIC 330 can allocate a new combining buffer (i.e., combining buffer 314 can be a newly allocated buffer). EE 334 can then store event notification 322 in combining buffer 314.


Similarly, upon generating another event notification 324 for event queue 312, EE 334 can determine whether a combining buffer has been allocated for event queue 312. EE 334 can identify combining buffer 314 and insert event notification 324 into combining buffer 314. At this point, combining buffer 314 can store event notifications 322 and 324. In this way, EE 334 can continue to aggregate notifications associated with event queue 312. EE 334 can then determine whether combining buffer 314 meets a release criterion. For example, if a subsequent event notification cannot be included in combining buffer 314 or event notification 324 fills combining buffer 314, EE 334 can determine that combining buffer 314 has met the release criterion and should be inserted into event queue 312. Accordingly, EE 334 can initiate data transfer to host 300.


EE 334 can aggregate the content of combining buffer 314, which can include event notifications 322 and 324, into a single aggregated notification 320. Aggregated notification 320 may have a length that can correspond to width 360. If the combined length of event notifications 322 and 324 is smaller than width 360, EE 334 may pad aggregated notification 320 with null events (e.g., null values) to adjust to width 360. For efficient padding, NIC 330 may pad combining buffer 314 with the null events upon inserting event notification 322. Upon insertion into combining buffer 314, event notification 324 can replace the null event subsequent to event notification 322 in combining buffer 314. EE 334 can then issue an interface-based write operation that inserts aggregated notification 320 to event queue 312. In this way, aggregated notification 320 can efficiently utilize width 360, thereby facilitating efficient event management for NIC 330.
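
The following Python sketch models how such an aggregated, interface-width notification might be assembled from buffered notifications, with null events padding the unused remainder. The 64-byte width matches the example above; the 16-byte null-event granularity and the zero-byte null encoding are assumptions for illustration, not the disclosed hardware format.

```python
# Minimal sketch of forming an aggregated notification that spans the full
# interface width. Assumptions: 64-byte width, 16-byte null events encoded as
# zero bytes.

WIDTH = 64
NULL_EVENT = b"\x00" * 16  # assumed encoding of a 16-byte null event

def aggregate(notifications):
    """Concatenate buffered notifications and pad the remainder with null events."""
    payload = b"".join(notifications)
    if len(payload) > WIDTH:
        raise ValueError("buffer contents exceed the interface width")
    while len(payload) < WIDTH:
        payload += NULL_EVENT[: WIDTH - len(payload)]
    return payload

# Example: a 32-byte and a 16-byte notification leave one 16-byte null slot.
write = aggregate([b"\xaa" * 32, b"\xbb" * 16])
assert len(write) == WIDTH
```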


In some embodiments, event queue 312 can be configured with a latency tolerance. Combining buffer 314 can store event notifications for a duration indicated by the latency tolerance. If the latency tolerance is set to zero, upon receiving event notification 322 and inserting it into combining buffer 314, EE 334 can insert the content of combining buffer 314 into event queue 312. Since the rest of combining buffer 314 can be padded with null events, EE 334 can readily issue the write operation via HI 332. However, if the latency tolerance is set to a non-zero value, EE 334 may wait for more event notifications until a timer corresponding to the latency tolerance expires. Suppose that device 300 maintains a plurality of event queues in memory device 304. Two different event queues can then have the same or distinct latency tolerances.



FIG. 3B shows an exemplary combining buffer for facilitating efficient event notification management in a NIC. Combining buffer 314 can be formed from a number of buffer modules 352, 354, 356, and 358. In some embodiments, a buffer module can be a random access memory (RAM) device. For example, if width 360 in FIG. 3A is 64 bytes, combining buffer 314 may also have the capacity to hold 64 bytes. Consequently, each of buffer modules 352, 354, 356, and 358 can store at least 16 bytes of data. In this way, four separate buffer modules can facilitate the overall width of 64 bytes. Furthermore, each of buffer modules 352, 354, 356, and 358 can include additional capacity to store error-correcting codes (e.g., a single error correction/double error detection (SECDED) code). The SECDED code stored in a buffer module can be used to protect that buffer module.


Combining multiple buffer modules to form combining buffer 314 can allow individual event notifications (e.g., individual 16-byte segments) to be written while leaving other segments unmodified. As a result, if buffer module 352 already stores event notification 322, inserting another event notification 324 into buffer modules 356 and 358 may not affect the content of buffer module 352. As a result, inserting an event notification into partially-filled combining buffer 314 may not need reading the content of combining buffer 314, modifying the content, and rewriting the modified content into combining buffer 314. Since a buffer module may support a single read and write per clock cycle, one of the buffer modules can be read from while another of the buffer modules can be written into in the same clock cycle.


In some embodiments, an event notification can be generated to align with the size of a buffer module. Therefore, the size of an event notification can be a multiple of the size of a buffer module. For example, if the size of a buffer module is 16 bytes, the size of an event notification can be 16, 32, or 64 bytes, depending on the content of the event notification. EE 334 may store event notifications of different sizes in the same combining buffer 314. An event notification can be stored at an address offset aligned to the size of the event notification. For example, a 16-byte event notification can be written at a 16-byte-aligned address. If an event notification does not align with the size of a buffer module, EE 334 may insert null events to provide the alignment.


If the size of a buffer module is 16 bytes, in this example, event notifications 322 and 324 can be 32 and 16 bytes long, respectively. Consequently, event notification 322 can be stored in buffer modules 352 and 354, and event notification 324 can be stored in buffer module 356. If EE 334 determines that the subsequent event notification does not fit in buffer module 358, EE 334 can enqueue the content of combining buffer 314. When EE 334 writes the first event notification in combining buffer 314, EE 334 can simultaneously write null events into the remainder of combining buffer 314. For example, upon writing event notification 322 in buffer modules 352 and 354, EE 334 can write null events into buffer modules 356 and 358. This allows EE 334 to initiate the enqueue operation without the need to write null events to any available capacity that may exist at that time.
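
A minimal software model of this segmented layout is sketched below in Python: four 16-byte modules form a 64-byte combining buffer, an insertion writes only the modules it covers, and the first insertion also writes null events into the remaining modules so the buffer can be released at any time without a further padding pass. The module count, sizes, zero-byte null encoding, and class name are assumptions; the per-module SECDED codes are not modeled.

```python
# Sketch of a combining buffer built from independently writable modules.
# Assumptions: four 16-byte modules; zero bytes encode a null event.

MODULE_SIZE = 16
NUM_MODULES = 4

class SegmentedBuffer:
    def __init__(self):
        self.modules = [None] * NUM_MODULES  # None = never written
        self.next_module = 0                 # next free, module-aligned slot

    def fits(self, notification: bytes) -> bool:
        return self.next_module + len(notification) // MODULE_SIZE <= NUM_MODULES

    def insert(self, notification: bytes) -> None:
        """Write a module-aligned notification without touching other modules."""
        assert len(notification) % MODULE_SIZE == 0 and self.fits(notification)
        needed = len(notification) // MODULE_SIZE
        first_insert = self.next_module == 0
        for i in range(needed):
            start = i * MODULE_SIZE
            self.modules[self.next_module + i] = notification[start:start + MODULE_SIZE]
        self.next_module += needed
        if first_insert:
            # Pad the remainder with null events at the time of the first write.
            for j in range(self.next_module, NUM_MODULES):
                self.modules[j] = b"\x00" * MODULE_SIZE

# Mirroring the example above: a 32-byte notification followed by a 16-byte one.
buf = SegmentedBuffer()
buf.insert(b"\xaa" * 32)  # occupies modules 0-1; modules 2-3 padded with nulls
buf.insert(b"\xbb" * 16)  # overwrites only the null event in module 2
```

Because each insertion touches only its own modules, no read-modify-write of the already-written modules is needed, which mirrors the property described above.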



FIG. 4A shows a flow chart of an event notification combination process in a NIC. During operation, an EE of the NIC can generate an event notification associated with an event queue (operation 402). The EE can determine whether a combining buffer is allocated to the event queue (operation 404). If a combining buffer is not allocated to the event queue, the EE can allocate a combining buffer for the event queue and insert the event notification into the combining buffer (operation 414). If a combining buffer is allocated to the event queue, the EE can determine whether the combining buffer has available capacity for the event notification (operation 406).


If the combining buffer does not have sufficient available capacity, the EE can enqueue the combining buffer, allocate a new combining buffer for the event queue, and insert the event notification into the new combining buffer (operation 416). On the other hand, if the combining buffer has available capacity, the EE can insert the event notification into the combining buffer (operation 408). Upon inserting the event notification (operation 408, 414, or 416), the EE can determine whether the combining buffer is full (e.g., due to the insertion) (operation 410). If the combining buffer is full, the EE can enqueue the combining buffer (operation 412). If the combining buffer is not full (operation 410) or upon performing the enqueue operation (operation 412), the EE can continue to generate another event notification associated with an event queue (operation 402).
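
The decision flow of FIG. 4A can be summarized in the following Python sketch. It is a behavioral model under assumed parameters: a 64-byte combining-buffer capacity, notifications represented only by their sizes, and an enqueue callback standing in for the host-interface write that moves the buffered notifications into the event queue.

```python
# Behavioral sketch of the combination process of FIG. 4A. `buffers` maps an
# event queue identifier to the number of bytes currently held in its combining
# buffer; `enqueue` stands in for the interface-based write into the event queue.

CAPACITY = 64  # assumed combining-buffer capacity, matching the width example

def handle_notification(buffers, event_queue, size, enqueue):
    """Process one event notification of `size` bytes destined for `event_queue`."""
    filled = buffers.get(event_queue)
    if filled is None:                   # operations 404/414: allocate a buffer
        filled = 0
    elif filled + size > CAPACITY:       # operations 406/416: release, then reallocate
        enqueue(event_queue)
        filled = 0
    filled += size                       # operations 408/414/416: insert the notification
    if filled == CAPACITY:               # operations 410/412: release a full buffer
        enqueue(event_queue)
        buffers.pop(event_queue, None)
    else:
        buffers[event_queue] = filled

# Example: four 16-byte notifications for the same queue trigger one release.
buffers = {}
for _ in range(4):
    handle_notification(buffers, "eq0", 16, lambda q: print("enqueue", q))
```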



FIG. 4B shows a flow chart of an insertion process for a combining buffer in a NIC. During operation, an EE of the NIC can identify a combining buffer for an event notification (operation 432). The EE can then determine a location for insertion in the combining buffer (operation 434) and insert the event notification into the combining buffer at the determined location (operation 436). Subsequently, the EE can determine whether the insertion is the initial insertion (operation 438). If the insertion is the initial insertion, the EE can insert null events into the rest of the combining buffer (operation 440).



FIG. 4C shows a flow chart of a timer management process for a combining buffer of a NIC. During operation, the EE of the NIC can determine state information associated with a respective combining buffer in parallel (operation 452). The NIC can maintain a combining buffer for a respective event queue. Since the host device of the NIC can include a plurality of event queues, the NIC can maintain a plurality of combining buffers. The EE can then calculate a current time based on a timer counter (operation 454).


Subsequently, the EE can compare the state of a respective combining buffer with the current time and a tolerance value to identify combining buffers that have expired (operation 456). The EE can set the state of a respective identified combining buffer as expired and enqueue the combining buffer (operation 458). The EE can then increment the timer counter (operation 460). In some embodiments, the EE may increment the timer counter at a predetermined interval. Upon each increment, the EE can check which combining buffers have expired.
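
A software sketch of this expiry scan is given below in Python. The per-queue latency tolerances, the tick values, and the buffer-state representation are assumptions chosen for illustration; a zero tolerance corresponds to releasing a buffer on the first scan after it is opened, consistent with the zero-latency-tolerance behavior described earlier.

```python
# Sketch of the timer management process of FIG. 4C: on each tick, every open
# combining buffer is compared against the latency tolerance of its event queue,
# and expired buffers are marked and released (enqueued).

from dataclasses import dataclass

@dataclass
class BufferState:
    first_insert_time: int   # timer-counter value when the buffer was opened
    expired: bool = False

def scan_for_expiry(states, tolerances, timer_counter, enqueue):
    """Release every combining buffer that has exceeded its queue's tolerance."""
    for event_queue, state in list(states.items()):
        if state.expired:
            continue
        if timer_counter - state.first_insert_time >= tolerances[event_queue]:
            state.expired = True
            enqueue(event_queue)   # insert the buffered notifications into the queue
            del states[event_queue]

# Example: "eq0" tolerates 0 ticks (released immediately); "eq1" tolerates 8.
tolerances = {"eq0": 0, "eq1": 8}
states = {"eq0": BufferState(first_insert_time=5), "eq1": BufferState(first_insert_time=5)}
timer_counter = 5
scan_for_expiry(states, tolerances, timer_counter, lambda q: print("enqueue", q))
timer_counter += 1  # the EE increments the counter at a predetermined interval
```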


Exemplary Computer System



FIG. 5 shows an exemplary computer system equipped with a NIC that facilitates efficient event notification management. Computer system 550 includes a processor 552, a memory device 554, and a storage device 556. Memory device 554 can include a volatile memory device (e.g., a dual in-line memory module (DIMM)). Furthermore, computer system 550 can be coupled to a keyboard 562, a pointing device 564, and a display device 566. Storage device 556 can store an operating system 570. An application 572 can operate on operating system 570.


Computer system 550 can be equipped with a host interface coupling a NIC 520 that facilitates efficient event management. NIC 520 can provide one or more HNIs to computer system 550. NIC 520 can be coupled to a switch 502 via one of the HNIs. NIC 520 can include an event logic block 530, as described in conjunction with FIGS. 2B and 3. Event logic block 530 can include a notification logic block 532 and an enqueue logic block 534. Event logic block 530 can maintain a combining buffer 536, which can be associated with an event queue 560 in memory device 554.


Notification logic block 532 can generate an event notification and store the event notification in combining buffer 536. Notification logic block 532 can determine a release criterion for combining buffer 536. The release criterion can indicate one or more of: whether combining buffer 536 has sufficient capacity for the next event notification; whether combining buffer 536 is full; and whether an event notification has been held in a partially full combining buffer 536 for a duration long enough to reach the latency tolerance of event queue 560. If combining buffer 536 has met the release criterion (e.g., has stored a sufficient number of event notifications), enqueue logic block 534 can enqueue the content of combining buffer 536 into event queue 560 (e.g., based on a write request via the HI).


In summary, the present disclosure describes a NIC that facilitates efficient event management. The NIC can be equipped with a host interface, a first memory device, and an event management logic block. During operation, the host interface can couple the NIC to a host device. The event management logic block can identify an event associated with an event queue stored in a second memory device of the host device. The event management logic block can insert, into a buffer, an event notification associated with the event. The buffer can be associated with the event queue and stored in the first memory device. If the buffer has met a release criterion, the event management logic block can insert, via the host interface, the aggregated event notifications into the event queue.


The methods and processes described above can be performed by hardware logic blocks, modules, or apparatus. The hardware logic blocks, modules, or apparatus can include, but are not limited to, application-specific integrated circuit (ASIC) chips, field-programmable gate arrays (FPGAs), dedicated or shared processors that execute a piece of code at a particular time, and other programmable-logic devices now known or later developed. When the hardware logic blocks, modules, or apparatus are activated, they perform the methods and processes included within them.


The methods and processes described herein can also be embodied as code or data, which can be stored in a storage device or computer-readable storage medium. When a processor reads and executes the stored code or data, the processor can perform these methods and processes.


The foregoing descriptions of embodiments of the present invention have been presented for purposes of illustration and description only. They are not intended to be exhaustive or to limit the present invention to the forms disclosed. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. Additionally, the above disclosure is not intended to limit the present invention. The scope of the present invention is defined by the appended claims.

Claims
  • 1. A network interface controller (NIC), comprising: a host interface coupling a host device; a first memory device; and an event management logic block to: identify an event associated with an event queue stored in a second memory device of the host device; insert, into a buffer stored in the first memory device, a first event notification associated with the event, wherein the buffer is associated with the event queue; insert null values into a remainder of the buffer in response to the first event notification being an initial notification; insert, into the buffer until the buffer has met a release criterion, zero or more other event notifications associated with other events to obtain aggregated event notifications, wherein the other events are associated with the event queue; and in response to determining that the buffer has met the release criterion, insert, via the host interface, the aggregated event notifications into the event queue.
  • 2. The network interface controller of claim 1, wherein determining that the buffer has met the release criterion further comprises determining that a timer for the buffer has reached a latency tolerance for the event queue.
  • 3. The network interface controller of claim 1, wherein the second memory device is to store a plurality of event queues, and wherein a respective event queue is associated with a distinct latency tolerance.
  • 4. The network interface controller of claim 1, wherein the buffer is distributed across a plurality of memory modules of the first memory device.
  • 5. The network interface controller of claim 1, wherein a granularity of the event notification corresponds to a multiple of a size of a memory module.
  • 6. The network interface controller of claim 1, wherein the event management logic block is further to, in response to a new buffer not being allocated for the event queue, allocate the new buffer for the event queue.
  • 7. The network interface controller of claim 1, wherein the release criterion is based on one or more of: the buffer not having capacity for a new event notification; and the buffer being filled by the insertion of the event notification.
  • 8. The network interface controller of claim 1, wherein the event management logic block is to store event notifications of different sizes in the same buffer.
  • 9. The network interface controller of claim 1, wherein the host interface is a peripheral component interconnect express (PCIe) interface; and wherein the event management logic block is further to insert the aggregated event notifications into the event queue based on a PCIe write.
  • 10. A method for facilitating efficient event management in a network interface controller (NIC), the method comprising: maintaining a buffer in a first memory device of the NIC; identifying an event associated with an event queue stored in a second memory device of a host device of the NIC; inserting, into the buffer, a first event notification associated with the event, wherein the buffer is associated with the event queue; inserting null values into a remainder of the buffer in response to the first event notification being an initial notification; inserting, into the buffer until the buffer has met a release criterion, zero or more other event notifications associated with other events to obtain aggregated event notifications, wherein the other events are associated with the event queue; and in response to determining that the buffer has met the release criterion, inserting, via a host interface coupling the host device, the aggregated event notifications into the event queue.
  • 11. The method of claim 10, wherein determining that the buffer has met the release criterion further comprises determining that a timer for the buffer has reached a latency tolerance for the event queue.
  • 12. The method of claim 10, wherein the second memory device is to store a plurality of event queues, and wherein a respective event queue is associated with a distinct latency tolerance.
  • 13. The method of claim 10, wherein the buffer is distributed across a plurality of memory modules of the first memory device.
  • 14. The method of claim 10, wherein a granularity of the event notification corresponds to a multiple of a size of a memory module.
  • 15. The method of claim 10, further comprising, in response to a new buffer not being allocated for the event queue, allocating the new buffer for the event queue.
  • 16. The method of claim 10, wherein the release criterion is based on one or more of: the buffer not having capacity for a new event notification; and the buffer being filled by the insertion of the event notification.
  • 17. The method of claim 10, further comprising storing event notifications of different sizes in the same buffer.
  • 18. The method of claim 10, wherein the host interface is a peripheral component interconnect express (PCIe) interface; and wherein the method further comprises inserting the aggregated event notifications into the event queue based on a PCIe write.
PCT Information
Filing Document Filing Date Country Kind
PCT/US2020/024248 3/23/2020 WO
Publishing Document Publishing Date Country Kind
WO2020/236274 11/26/2020 WO A
US Referenced Citations (432)
Number Name Date Kind
4807118 Lin et al. Feb 1989 A
5138615 Lamport et al. Aug 1992 A
5457687 Newman Oct 1995 A
5937436 Watkins Aug 1999 A
5960178 Cochinwala et al. Sep 1999 A
5970232 Passint et al. Oct 1999 A
5983332 Watkins Nov 1999 A
6112265 Harriman et al. Aug 2000 A
6230252 Passint et al. May 2001 B1
6246682 Roy et al. Jun 2001 B1
6493347 Sindhu et al. Dec 2002 B2
6545981 Garcia et al. Apr 2003 B1
6633580 Toerudbakken et al. Oct 2003 B1
6674720 Passint et al. Jan 2004 B1
6714553 Poole et al. Mar 2004 B1
6728211 Peris et al. Apr 2004 B1
6732212 Sugahara et al. May 2004 B2
6735173 Lenoski et al. May 2004 B1
6894974 Aweva et al. May 2005 B1
7023856 Washabaugh et al. Apr 2006 B1
7133940 Blightman et al. Nov 2006 B2
7218637 Best et al. May 2007 B1
7269180 Bly et al. Sep 2007 B2
7305487 Blumrich et al. Dec 2007 B2
7337285 Tanoue Feb 2008 B2
7397797 Alfieri et al. Jul 2008 B2
7430559 Lomet Sep 2008 B2
7441006 Biran et al. Oct 2008 B2
7464174 Ngai Dec 2008 B1
7483442 Torudbakken et al. Jan 2009 B1
7562366 Pope et al. Jul 2009 B2
7593329 Kwan et al. Sep 2009 B2
7596628 Aloni et al. Sep 2009 B2
7620791 Wentzlaff et al. Nov 2009 B1
7633869 Morris et al. Dec 2009 B1
7639616 Manula et al. Dec 2009 B1
7734894 Wentzlaff et al. Jun 2010 B1
7774461 Tanaka et al. Aug 2010 B2
7782869 Chitlur Srinivasa Aug 2010 B1
7796579 Bruss Sep 2010 B2
7856026 Finan et al. Dec 2010 B1
7933282 Gupta et al. Apr 2011 B1
7953002 Opsasnick May 2011 B2
7975120 Sabbatini, Jr. et al. Jul 2011 B2
8014278 Subramanian et al. Sep 2011 B1
8023521 Woo et al. Sep 2011 B2
8050180 Judd Nov 2011 B2
8077606 Litwack Dec 2011 B1
8103788 Miranda Jan 2012 B1
8160085 Voruganti et al. Apr 2012 B2
8175107 Yalagandula et al. May 2012 B1
8249072 Sugumar et al. Aug 2012 B2
8281013 Mundkur et al. Oct 2012 B2
8352727 Chen et al. Jan 2013 B2
8353003 Noehring et al. Jan 2013 B2
8443151 Tang et al. May 2013 B2
8473783 Andrade et al. Jun 2013 B2
8543534 Alves et al. Sep 2013 B2
8619793 Lavian et al. Dec 2013 B2
8626957 Blumrich et al. Jan 2014 B2
8650582 Archer et al. Feb 2014 B2
8706832 Blocksome Apr 2014 B2
8719543 Kaminski et al. May 2014 B2
8811183 Anand et al. Aug 2014 B1
8948175 Bly et al. Feb 2015 B2
8971345 Mccanne et al. Mar 2015 B1
9001663 Attar et al. Apr 2015 B2
9053012 Northcott et al. Jun 2015 B1
9088496 Vaidya et al. Jul 2015 B2
9094327 Jacobs et al. Jul 2015 B2
9178782 Matthews et al. Nov 2015 B2
9208071 Talagala et al. Dec 2015 B2
9218278 Talagala et al. Dec 2015 B2
9231876 Mir et al. Jan 2016 B2
9231888 Bogdanski et al. Jan 2016 B2
9239804 Kegel et al. Jan 2016 B2
9269438 Nachimuthu et al. Feb 2016 B2
9276864 Pradeep Mar 2016 B1
9436651 Underwood et al. Sep 2016 B2
9455915 Sinha et al. Sep 2016 B2
9460178 Bashyam et al. Oct 2016 B2
9479426 Munger et al. Oct 2016 B2
9496991 Plamondon et al. Nov 2016 B2
9544234 Markine Jan 2017 B1
9548924 Pettit et al. Jan 2017 B2
9594521 Blagodurov et al. Mar 2017 B2
9635121 Mathew et al. Apr 2017 B2
9742855 Shuler et al. Aug 2017 B2
9762488 Previdi et al. Sep 2017 B2
9762497 Kishore et al. Sep 2017 B2
9830273 Bk et al. Nov 2017 B2
9838500 Ilan et al. Dec 2017 B1
9853900 Mula et al. Dec 2017 B1
9887923 Chorafakis et al. Feb 2018 B2
10003544 Liu et al. Jun 2018 B2
10009270 Stark et al. Jun 2018 B1
10031857 Menachem et al. Jul 2018 B2
10050896 Yang et al. Aug 2018 B2
10061613 Brooker et al. Aug 2018 B1
10063481 Jiang et al. Aug 2018 B1
10089220 Mckelvie et al. Oct 2018 B1
10169060 Mncent et al. Jan 2019 B1
10178035 Dillon Jan 2019 B2
10200279 Aljaedi Feb 2019 B1
10218634 Aldebert et al. Feb 2019 B2
10270700 Burnette et al. Apr 2019 B2
10305772 Zur et al. May 2019 B2
10331590 Macnamara et al. Jun 2019 B2
10353833 Hagspiel et al. Jul 2019 B2
10454835 Contavalli et al. Oct 2019 B2
10498672 Graham et al. Dec 2019 B2
10567307 Fairhurst et al. Feb 2020 B2
10728173 Agrawal et al. Jul 2020 B1
10740243 Benisty Aug 2020 B1
10802828 Volpe et al. Oct 2020 B1
10817502 Talagala et al. Oct 2020 B2
11128561 Matthews et al. Sep 2021 B1
11271869 Agrawal et al. Mar 2022 B1
11416749 Bshara et al. Aug 2022 B2
11444886 Stawitzky et al. Sep 2022 B1
20010010692 Sindhu et al. Aug 2001 A1
20010047438 Forin Nov 2001 A1
20020152328 Kagan Oct 2002 A1
20020174279 Wynne et al. Nov 2002 A1
20030016808 Hu et al. Jan 2003 A1
20030041168 Musoll Feb 2003 A1
20030091055 Craddock May 2003 A1
20030110455 Baumgartner et al. Jun 2003 A1
20030174711 Shankar Sep 2003 A1
20030200363 Futral Oct 2003 A1
20030223420 Ferolito Dec 2003 A1
20040008716 Stiliadis Jan 2004 A1
20040049580 Boyd Mar 2004 A1
20040059828 Hooper et al. Mar 2004 A1
20040095882 Hamzah et al. May 2004 A1
20040133634 Luke et al. Jul 2004 A1
20040223452 Santos et al. Nov 2004 A1
20050021837 Haselhorst et al. Jan 2005 A1
20050047334 Paul et al. Mar 2005 A1
20050088969 Carlsen et al. Apr 2005 A1
20050091396 Nilakantan et al. Apr 2005 A1
20050108444 Flauaus et al. May 2005 A1
20050108518 Pandya May 2005 A1
20050152274 Simpson Jul 2005 A1
20050182854 Pinkerton et al. Aug 2005 A1
20050270974 Mayhew Dec 2005 A1
20050270976 Yang et al. Dec 2005 A1
20060023705 Zoranovic et al. Feb 2006 A1
20060067347 Naik et al. Mar 2006 A1
20060075480 Noehring et al. Apr 2006 A1
20060173970 Pope Aug 2006 A1
20060174251 Pope et al. Aug 2006 A1
20060203728 Kwan et al. Sep 2006 A1
20070061433 Reynolds et al. Mar 2007 A1
20070070901 Aloni et al. Mar 2007 A1
20070198804 Moyer Aug 2007 A1
20070211746 Oshikiri et al. Sep 2007 A1
20070242611 Archer et al. Oct 2007 A1
20070268825 Corwin et al. Nov 2007 A1
20080013453 Chiang et al. Jan 2008 A1
20080013549 Okagawa et al. Jan 2008 A1
20080071757 Ichiriu et al. Mar 2008 A1
20080084864 Archer et al. Apr 2008 A1
20080091915 Moertl et al. Apr 2008 A1
20080147881 Krishnamurthy et al. Jun 2008 A1
20080155154 Kenan Jun 2008 A1
20080159138 Shepherd et al. Jul 2008 A1
20080253289 Naven et al. Oct 2008 A1
20090003212 Kwan et al. Jan 2009 A1
20090010157 Holmes et al. Jan 2009 A1
20090013175 Elliott Jan 2009 A1
20090055496 Garg et al. Feb 2009 A1
20090092046 Naven et al. Apr 2009 A1
20090141621 Fan et al. Jun 2009 A1
20090198958 Arimilli et al. Aug 2009 A1
20090259713 Blumrich et al. Oct 2009 A1
20090285222 Hoover et al. Nov 2009 A1
20100061241 Sindhu et al. Mar 2010 A1
20100169608 Kuo et al. Jul 2010 A1
20100172260 Kwan et al. Jul 2010 A1
20100183024 Gupta Jul 2010 A1
20100220595 Petersen Sep 2010 A1
20100274876 Kagan et al. Oct 2010 A1
20100302942 Shankar et al. Dec 2010 A1
20100316053 Miyoshi et al. Dec 2010 A1
20110051724 Scott et al. Mar 2011 A1
20110066824 Bestler Mar 2011 A1
20110072179 Lacroute et al. Mar 2011 A1
20110099326 Jung et al. Apr 2011 A1
20110110383 Yang et al. May 2011 A1
20110128959 Bando et al. Jun 2011 A1
20110158096 Leung et al. Jun 2011 A1
20110158248 Vorunganti et al. Jun 2011 A1
20110164496 Loh et al. Jul 2011 A1
20110173370 Jacobs et al. Jul 2011 A1
20110264822 Ferguson et al. Oct 2011 A1
20110276699 Pedersen Nov 2011 A1
20110280125 Jayakumar Nov 2011 A1
20110320724 Mejdrich et al. Dec 2011 A1
20120093505 Yeap et al. Apr 2012 A1
20120102506 Hopmann et al. Apr 2012 A1
20120117423 Andrade et al. May 2012 A1
20120137075 Vorbach May 2012 A1
20120144064 Parker et al. Jun 2012 A1
20120144065 Parker et al. Jun 2012 A1
20120147752 Ashwood-Smith et al. Jun 2012 A1
20120170462 Sinha Jul 2012 A1
20120170575 Mehra Jul 2012 A1
20120213118 Lindsay et al. Aug 2012 A1
20120250512 Jagadeeswaran et al. Oct 2012 A1
20120287821 Godfrey et al. Nov 2012 A1
20120297083 Ferguson et al. Nov 2012 A1
20120300669 Zahavi Nov 2012 A1
20120307838 Manula Dec 2012 A1
20120314707 Epps et al. Dec 2012 A1
20130010636 Regula Jan 2013 A1
20130039169 Schlansker et al. Feb 2013 A1
20130060944 Archer et al. Mar 2013 A1
20130103777 Kagan et al. Apr 2013 A1
20130121178 Mainaud et al. May 2013 A1
20130136090 Liu et al. May 2013 A1
20130182704 Jacobs et al. Jul 2013 A1
20130194927 Yamaguchi et al. Aug 2013 A1
20130203422 Masputra et al. Aug 2013 A1
20130205002 Wang et al. Aug 2013 A1
20130208593 Nandagopal Aug 2013 A1
20130246552 Underwood et al. Sep 2013 A1
20130290673 Archer et al. Oct 2013 A1
20130301645 Bogdanski et al. Nov 2013 A1
20130304988 Totolos et al. Nov 2013 A1
20130311525 Neerincx et al. Nov 2013 A1
20130329577 Suzuki et al. Dec 2013 A1
20130336164 Yang et al. Dec 2013 A1
20140019661 Hormuth et al. Jan 2014 A1
20140032695 Michels et al. Jan 2014 A1
20140036680 Lih et al. Feb 2014 A1
20140064082 Yeung et al. Mar 2014 A1
20140095753 Crupnicoff et al. Apr 2014 A1
20140098675 Frost et al. Apr 2014 A1
20140119367 Han et al. May 2014 A1
20140122560 Ramey et al. May 2014 A1
20140129664 Mcdaniel et al. May 2014 A1
20140133292 Yamatsu et al. May 2014 A1
20140136646 Tamir et al. May 2014 A1
20140169173 Naouri et al. Jun 2014 A1
20140185621 Decusatis et al. Jul 2014 A1
20140189174 Ajanovic et al. Jul 2014 A1
20140207881 Nussle et al. Jul 2014 A1
20140211804 Makikeni et al. Jul 2014 A1
20140226488 Shamis et al. Aug 2014 A1
20140241164 Cociglio et al. Aug 2014 A1
20140258438 Ayoub Sep 2014 A1
20140301390 Scott et al. Oct 2014 A1
20140307554 Basso et al. Oct 2014 A1
20140325013 Tamir et al. Oct 2014 A1
20140328172 Kumar et al. Nov 2014 A1
20140347997 Bergamasco et al. Nov 2014 A1
20140362698 Arad Dec 2014 A1
20140369360 Carlstrom Dec 2014 A1
20140379847 Williams Dec 2014 A1
20150003247 Mejia et al. Jan 2015 A1
20150006849 Xu et al. Jan 2015 A1
20150009823 Ganga et al. Jan 2015 A1
20150026361 Matthews et al. Jan 2015 A1
20150029848 Jain Jan 2015 A1
20150055476 Decusatis et al. Feb 2015 A1
20150055661 Boucher et al. Feb 2015 A1
20150067095 Gopal et al. Mar 2015 A1
20150089495 Persson et al. Mar 2015 A1
20150103667 Elias et al. Apr 2015 A1
20150124826 Edsall et al. May 2015 A1
20150146527 Kishore et al. May 2015 A1
20150154004 Aggarwal Jun 2015 A1
20150161064 Pope Jun 2015 A1
20150180782 Rimmer et al. Jun 2015 A1
20150186318 Kim et al. Jul 2015 A1
20150193262 Archer et al. Jul 2015 A1
20150195388 Snyder et al. Jul 2015 A1
20150208145 Parker et al. Jul 2015 A1
20150220449 Stark et al. Aug 2015 A1
20150237180 Swartzentruber et al. Aug 2015 A1
20150244617 Nakil et al. Aug 2015 A1
20150244804 Warfield et al. Aug 2015 A1
20150261434 Kagan et al. Sep 2015 A1
20150263955 Talaski et al. Sep 2015 A1
20150263994 Haramaty et al. Sep 2015 A1
20150288626 Aybay Oct 2015 A1
20150365337 Pannell Dec 2015 A1
20150370586 Cooper et al. Dec 2015 A1
20160006664 Sabato et al. Jan 2016 A1
20160012002 Arimilli et al. Jan 2016 A1
20160028613 Haramaty et al. Jan 2016 A1
20160065455 Wang et al. Mar 2016 A1
20160094450 Ghanwani et al. Mar 2016 A1
20160134518 Callon et al. May 2016 A1
20160134535 Callon May 2016 A1
20160134559 Abel et al. May 2016 A1
20160134573 Gagliardi et al. May 2016 A1
20160142318 Beecroft May 2016 A1
20160154756 Dodson et al. Jun 2016 A1
20160182383 Pedersen Jun 2016 A1
20160205023 Janardhanan Jul 2016 A1
20160226797 Aravinthan et al. Aug 2016 A1
20160254991 Eckert et al. Sep 2016 A1
20160259394 Ragavan Sep 2016 A1
20160283422 Crupnicoff et al. Sep 2016 A1
20160285545 Schmidtke et al. Sep 2016 A1
20160285677 Kashyap et al. Sep 2016 A1
20160294694 Parker et al. Oct 2016 A1
20160294926 Zur et al. Oct 2016 A1
20160301610 Amit et al. Oct 2016 A1
20160342567 Tsirkin Nov 2016 A1
20160344620 G. Santos et al. Nov 2016 A1
20160381189 Caulfield et al. Dec 2016 A1
20170024263 Verplanken Jan 2017 A1
20170039063 Gopal et al. Feb 2017 A1
20170041239 Goldenberg et al. Feb 2017 A1
20170048144 Liu Feb 2017 A1
20170054633 Underwood et al. Feb 2017 A1
20170091108 Arellano et al. Mar 2017 A1
20170097840 Bridgers Apr 2017 A1
20170103108 Datta et al. Apr 2017 A1
20170118090 Pettit et al. Apr 2017 A1
20170118098 Littlejohn et al. Apr 2017 A1
20170153852 Ma et al. Jun 2017 A1
20170177541 Berman et al. Jun 2017 A1
20170220500 Tong Aug 2017 A1
20170237654 Turner et al. Aug 2017 A1
20170237671 Rimmer et al. Aug 2017 A1
20170242753 Sherlock et al. Aug 2017 A1
20170250914 Caulfield et al. Aug 2017 A1
20170251394 Johansson et al. Aug 2017 A1
20170270051 Chen et al. Sep 2017 A1
20170272331 Lissack Sep 2017 A1
20170272370 Ganga et al. Sep 2017 A1
20170286316 Doshi et al. Oct 2017 A1
20170289066 Haramaty et al. Oct 2017 A1
20170295098 Watkins et al. Oct 2017 A1
20170324664 Xu et al. Nov 2017 A1
20170371778 Mckelvie et al. Dec 2017 A1
20180004705 Menachem et al. Jan 2018 A1
20180019948 Patwardhan et al. Jan 2018 A1
20180026878 Zahavi et al. Jan 2018 A1
20180077064 Wang Mar 2018 A1
20180083868 Cheng Mar 2018 A1
20180097645 Rajagopalan et al. Apr 2018 A1
20180097912 Chumbalkar et al. Apr 2018 A1
20180113618 Chan et al. Apr 2018 A1
20180115469 Erickson et al. Apr 2018 A1
20180131602 Civanlar et al. May 2018 A1
20180131678 Agarwal et al. May 2018 A1
20180150374 Ratcliff May 2018 A1
20180152317 Chang et al. May 2018 A1
20180152357 Natham et al. May 2018 A1
20180173557 Nakil et al. Jun 2018 A1
20180183724 Callard et al. Jun 2018 A1
20180191609 Caulfield et al. Jul 2018 A1
20180198736 Labonte et al. Jul 2018 A1
20180212876 Bacthu et al. Jul 2018 A1
20180212902 Steinmacher-Burow Jul 2018 A1
20180219804 Graham et al. Aug 2018 A1
20180225238 Karguth et al. Aug 2018 A1
20180234343 Zdornov et al. Aug 2018 A1
20180254945 Bogdanski et al. Sep 2018 A1
20180260324 Marathe et al. Sep 2018 A1
20180278540 Shalev et al. Sep 2018 A1
20180287928 Levi et al. Oct 2018 A1
20180323898 Dods Nov 2018 A1
20180335974 Simionescu et al. Nov 2018 A1
20180341494 Sood et al. Nov 2018 A1
20190007349 Wang et al. Jan 2019 A1
20190018808 Beard et al. Jan 2019 A1
20190036771 Sharpless et al. Jan 2019 A1
20190042337 Dinan et al. Feb 2019 A1
20190042518 Marolia Feb 2019 A1
20190044809 Willis et al. Feb 2019 A1
20190044827 Ganapathi et al. Feb 2019 A1
20190044863 Mula et al. Feb 2019 A1
20190044872 Ganapathi et al. Feb 2019 A1
20190044875 Murty et al. Feb 2019 A1
20190052327 Motozuka et al. Feb 2019 A1
20190058663 Song Feb 2019 A1
20190068501 Schneider et al. Feb 2019 A1
20190081903 Kobayashi et al. Mar 2019 A1
20190095134 Li Mar 2019 A1
20190104057 Goel et al. Apr 2019 A1
20190104206 Goel et al. Apr 2019 A1
20190108106 Aggarwal et al. Apr 2019 A1
20190108332 Glew et al. Apr 2019 A1
20190109791 Mehra et al. Apr 2019 A1
20190121781 Kasichainula Apr 2019 A1
20190140979 Levi et al. May 2019 A1
20190146477 Cella et al. May 2019 A1
20190171612 Shahar et al. Jun 2019 A1
20190196982 Rozas et al. Jun 2019 A1
20190199646 Singh et al. Jun 2019 A1
20190253354 Caulfield et al. Aug 2019 A1
20190280978 Schmatz et al. Sep 2019 A1
20190294575 Dennison et al. Sep 2019 A1
20190306134 Shanbhogue et al. Oct 2019 A1
20190332314 Zhang et al. Oct 2019 A1
20190334624 Bernard Oct 2019 A1
20190356611 Das et al. Nov 2019 A1
20190361728 Kumar et al. Nov 2019 A1
20190379610 Srinivasan et al. Dec 2019 A1
20200036644 Belogolovy et al. Jan 2020 A1
20200084150 Burstein et al. Mar 2020 A1
20200145725 Eberle et al. May 2020 A1
20200177505 Li Jun 2020 A1
20200177521 Blumrich et al. Jun 2020 A1
20200259755 Wang et al. Aug 2020 A1
20200272579 Humphrey et al. Aug 2020 A1
20200274832 Humphrey et al. Aug 2020 A1
20200334195 Chen et al. Oct 2020 A1
20200349098 Caulfield et al. Nov 2020 A1
20210081410 Chavan et al. Mar 2021 A1
20210152494 Johnsen et al. May 2021 A1
20210263779 Haghighat et al. Aug 2021 A1
20210334206 Colgrove et al. Oct 2021 A1
20210377156 Michael et al. Dec 2021 A1
20210409351 Das et al. Dec 2021 A1
20220131768 Ganapathi et al. Apr 2022 A1
20220166705 Froese May 2022 A1
20220200900 Roweth Jun 2022 A1
20220210058 Bataineh et al. Jun 2022 A1
20220217078 Ford et al. Jul 2022 A1
20220217101 Yefet et al. Jul 2022 A1
20220245072 Roweth et al. Aug 2022 A1
20220278941 Shalev et al. Sep 2022 A1
20220309025 Chen et al. Sep 2022 A1
20230035420 Sankaran et al. Feb 2023 A1
20230046221 Pismenny et al. Feb 2023 A1
Foreign Referenced Citations (32)
Number Date Country
101729609 Jun 2010 CN
102932203 Feb 2013 CN
110324249 Oct 2019 CN
110601888 Dec 2019 CN
0275135 Jul 1988 EP
2187576 May 2010 EP
2219329 Aug 2010 EP
2947832 Nov 2015 EP
3445006 Feb 2019 EP
2003-244196 Aug 2003 JP
3459653 Oct 2003 JP
10-2012-0062864 Jun 2012 KR
10-2012-0082739 Jul 2012 KR
10-2014-0100529 Aug 2014 KR
10-2015-0026939 Mar 2015 KR
10-2015-0104056 Sep 2015 KR
10-2017-0110106 Oct 2017 KR
10-1850749 Apr 2018 KR
2001069851 Sep 2001 WO
0247329 Jun 2002 WO
2003019861 Mar 2003 WO
2004001615 Dec 2003 WO
2005094487 Oct 2005 WO
2007034184 Mar 2007 WO
2009010461 Jan 2009 WO
2009018232 Feb 2009 WO
2014092780 Jun 2014 WO
2014137382 Sep 2014 WO
2014141005 Sep 2014 WO
2018004977 Jan 2018 WO
2018046703 Mar 2018 WO
2019072072 Apr 2019 WO
Non-Patent Literature Citations (74)
Entry
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024248, dated Jul. 8, 2020, 11 pages.
Awerbuch, B., et al.; “An On-Demand Secure Routing Protocol Resilient to Byzantine Failures”; Sep. 2002; 10 pages.
Belayneh, L.W., et al.; "Method and Apparatus for Routing Data in an Inter-Nodal Communications Lattice of a Massively Parallel Computer System by Semi-Randomly Varying Routing Policies for Different Packets"; 2019; 3 pages.
Bhatele, A., et al.; “Analyzing Network Health and Congestion in Dragonfly-based Supercomputers”; May 23-27, 2016; 10 pages.
Blumrich, M.A., et al.; “Exploiting Idle Resources in a High-Radix Switch for Supplemental Storage”; Nov. 2018; 13 pages.
Chang, F., et al.; “PVW: Designing Vir PVW: Designing Virtual World Ser orld Server Infr er Infrastructur astructure”; 2010; 8 pages.
Chang, F., et al.; “PVW: Designing Virtual World Server Infrastructure”; 2010; 8 pages.
Chen, F., et al.; “Requirements for RoCEv3 Congestion Management”; Mar. 21, 2019; 8 pages.
Cisco Packet Tracer; “packet-tracer;—ping”; https://www.cisco.com/c/en/us/td/docs/security/asa/asa-command-reference/I-R/cmdref2/p1.html; 2017.
Cisco; “Understanding Rapid Spanning Tree Protocol (802.1w)”; Aug. 1, 2017; 13 pages.
Eardley, P., Ed.; "Pre-Congestion Notification (PCN) Architecture"; Jun. 2009; 54 pages.
Escudero-Sahuquillo, J., et al.; “Combining Congested-Flow Isolation and Injection Throttling in HPC Interconnection Networks”; Sep. 13-16, 2011; 3 pages.
Hong, Y.; “Mitigating the Cost, Performance, and Power Overheads Induced by Load Variations in Multicore Cloud Servers”; Fall 2013; 132 pages.
Huawei; “The Lossless Network For Data Centers”; Nov. 7, 2017; 15 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US20/024332, dated Jul. 8, 2020, 13 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US20/24243, dated Jul. 9, 2020, 10 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US20/24253, dated Jul. 6, 2020, 12 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US20/24256, dated Jul. 7, 2020, 11 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US20/24257, dated Jul. 7, 2020, 10 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US20/24258, dated Jul. 7, 2020, 9 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US20/24259, dated Jul. 9, 2020, 13 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US20/24260, dated Jul. 7, 2020, 11 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US20/24268, dated Jul. 9, 2020, 11 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US20/24269, dated Jul. 9, 2020, 11 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US20/24339, dated Jul. 8, 2020, 11 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024125, dated Jul. 10, 2020, 5 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024129, dated Jul. 10, 2020, 11 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024237, dated Jul. 14, 2020, 5 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024239, dated Jul. 14, 2020, 11 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024241, dated Jul. 14, 2020, 13 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024242, dated Jul. 6, 2020, 11 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024244, dated Jul. 13, 2020, 10 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024245, dated Jul. 14, 2020, 11 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024246, dated Jul. 14, 2020, 10 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024250, dated Jul. 14, 2020, 12 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024254, dated Jul. 13, 2020, 10 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024262, dated Jul. 13, 2020, 10 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024266, dated Jul. 9, 2020, 10 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024270, dated Jul. 10, 2020, 13 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024271, dated Jul. 9, 2020, 10 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024272, dated Jul. 9, 2020, 10 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024276, dated Jul. 13, 2020, 9 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024304, dated Jul. 15, 2020, 11 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024311, dated Jul. 17, 2020, 8 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024321, dated Jul. 9, 2020, 9 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024324, dated Jul. 14, 2020, 10 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/024327, dated Jul. 10, 2020, 15 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/24158, dated Jul. 6, 2020, 18 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/24251, dated Jul. 6, 2020, 11 pages.
International Search Report and Written Opinion received for PCT Application No. PCT/US2020/24267, dated Jul. 6, 2020, 9 pages.
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US20/24303, dated Oct. 21, 2020, 9 pages.
Extended European Search Report and Search Opinion received for EP Application No. 20809930.9, dated Mar. 2, 2023, 9 pages.
Extended European Search Report and Search Opinion received for EP Application No. 20810784.7, dated Mar. 9, 2023, 7 pages.
Ramakrishnan, K., et al.; "The Addition of Explicit Congestion Notification (ECN) to IP" (RFC 3168); Sep. 2001.
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US20/24340, dated Oct. 26, 2020, 9 pages.
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US20/24342, dated Oct. 27, 2020, 10 pages.
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2020/024192, dated Oct. 23, 2020, 9 pages.
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2020/024221, dated Oct. 26, 2020, 9 pages.
International Search Report received for PCT Application No. PCT/US2020/024170, dated Dec. 16, 2020, 3 pages.
Maabi, S., et al.; “ERFAN: Efficient reconfigurable fault-tolerant deflection routing algorithm for 3-D Network-on-Chip”; Sep. 6-9, 2016.
Maglione-Mathey, G., et al.; “Scalable Deadlock-Free Deterministic Minimal-Path Routing Engine for InfiniBand-Based Dragonfly Networks”; Aug. 21, 2017; 15 pages.
Mamidala, A.R., et al.; “Efficient Barrier and Allreduce on Infiniband clusters using multicast and adaptive algorithms”; Sep. 20-23, 2004; 10 pages.
Mammeri, Z.; "Reinforcement Learning Based Routing in Networks: Review and Classification of Approaches"; Apr. 29, 2019; 35 pages.
Mollah, M. A., et al.; "High Performance Computing Systems. Performance Modeling, Benchmarking, and Simulation: 8th International Workshop"; Nov. 13, 2017.
Open Networking Foundation; “OpenFlow Switch Specification”; Mar. 26, 2015; 283 pages.
Prakash, P., et al.; “The TCP Outcast Problem: Exposing Unfairness in Data Center Networks”; 2011; 15 pages.
Ramakrishnan, K., et al.; “The Addition of Explicit Congestion Notification (ECN) to IP”; Sep. 2001; 63 pages.
Roth, P. C., et al.; "MRNet: A Software-Based Multicast/Reduction Network for Scalable Tools"; Nov. 15-21, 2003; 16 pages.
Silveira, J., et al.; “Preprocessing of Scenarios for Fast and Efficient Routing Reconfiguration in Fault-Tolerant NoCs”; Mar. 4-6, 2015.
Tsunekawa, K.; "Fair bandwidth allocation among LSPs for AF class accommodating TCP and UDP traffic in a Diffserv-capable MPLS network"; Nov. 17, 2005; 9 pages.
Underwood, K.D., et al.; “A hardware acceleration unit for MPI queue processing”; Apr. 18, 2005; 10 pages.
Wu, J.; "Fault-tolerant adaptive and minimal routing in mesh-connected multicomputers using extended safety levels"; Feb. 2000; 11 pages.
Xiang, D., et al.; “Fault-Tolerant Adaptive Routing in Dragonfly Networks”; Apr. 12, 2017; 15 pages.
Xiang, D., et al.; “Deadlock-Free Broadcast Routing in Dragonfly Networks without Virtual Channels”, submission to IEEE transactions on Parallel and Distributed Systems, 2015, 15 pages.
Related Publications (1)
Number Date Country
20220197838 A1 Jun 2022 US
Provisional Applications (3)
Number Date Country
62852289 May 2019 US
62852203 May 2019 US
62852273 May 2019 US