System and method for facilitating self-managing reduction engines

Information

  • Patent Grant
  • Patent Number
    11,929,919
  • Date Filed
    Monday, March 23, 2020
  • Date Issued
    Tuesday, March 12, 2024
Abstract
A switch equipped with a self-managing reduction engine is provided. During operation, the reduction engine can use a timeout mechanism to manage itself in different latency-induced or error scenarios. As a result, the network can facilitate an efficient and scalable environment for high performance computing.
Description
BACKGROUND
Field

This is generally related to the technical field of networking. More specifically, this disclosure is related to systems and methods for facilitating self-managing reduction engines in a network.


Related Art

As network-enabled devices and applications become progressively more ubiquitous, various types of traffic as well as the ever-increasing network load continue to demand more performance from the underlying network architecture. For example, applications such as high-performance computing (HPC), media streaming, and Internet of Things (IoT) can generate different types of traffic with distinctive characteristics. As a result, in addition to conventional network performance metrics such as bandwidth and delay, network architects continue to face challenges such as scalability, versatility, and efficiency.


SUMMARY

A switch equipped with a self-managing reduction engine is provided. During operation, the reduction engine can use a timeout mechanism to manage itself in different latency-induced or error scenarios. As a result, the network can facilitate an efficient and scalable environment for high performance computing.





BRIEF DESCRIPTION OF THE FIGURES


FIG. 1 shows an exemplary network.



FIG. 2 shows an exemplary multicast tree for a reduction process.



FIG. 3A shows a flow chart of an exemplary reduction process.



FIG. 3B shows a flow chart of an exemplary reduction operation by a reduction engine.



FIG. 4 shows an example where one leaf endpoint is late joining the reduction process.



FIG. 5A shows an example where one leaf endpoint fails to supply a contribution because of an error.



FIG. 5B shows a flow chart of an exemplary timer-based reduction process.



FIG. 6 shows an example where a reduction engine on a leaf switch is unavailable.



FIG. 7 shows exemplary reduction operations.



FIG. 8 shows a set of MINMAXLOC operands that can be used in a reduction process.



FIG. 9 shows rounding modes that can be used in a reduction process.



FIG. 10 shows a Portals-formatted reduction frame.



FIG. 11 shows a reduction header.



FIG. 12 shows the endianness of operands that can be used for the MINMAXLOC and reproducible sum operators in a reduction process.



FIG. 13 shows exemplary reduction result codes.



FIG. 14 shows an example where a Portals packet can be prepended with an Ethernet header.



FIG. 15 shows an exemplary switching system that facilitates a reduction engine.





In the figures, like reference numerals refer to the same figure elements.


DETAILED DESCRIPTION

Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the present invention is not limited to the embodiments shown.


Embodiments of the present invention solve the problem of accommodating a large number of computing endpoints in a network by providing a self-managed reduction engine that can handle various latency-induced or error scenarios, which allows traffic resulting from large-scale computing to be reduced in a timely, flexible, and scalable manner. Allocation and management of any shared resource within the body of a network can be difficult, especially if errors occur while the resource is processing data. The systems and methods described herein can significantly simplify the allocation, deallocation, and error handling of a dynamically allocated switch/router resource.


In general, a network can support thousands of users. If a function is provided, and expected to be used by any user, it is important to manage the resource efficiently. One approach can be to use a system call to a network-based server that has been authorized to manage the function. Although this may appear to be a simple solution, in practice the management could become complicated, especially if the function provided is widely distributed throughout the network and thousands of users may be trying to gain access. The time it takes to set up an operation for a single use can run into many seconds, and a similar amount of time may be required to release the function after use. Any error condition may also require significant support in both software development and real time analysis during the error condition.


The type of function being used may only be active for a few microseconds or even a few nanoseconds. Even if the function is repeatedly used by the same application, the setup and teardown cost could dwarf the time during which the function is doing useful work. For computing features with such an enormous overhead, using software to gain and release access to such functions may not be able to accommodate a large number of users.


Embodiments of the present invention can provide reasonably fair access to all users of the network while reducing the setup cost and ensuring the resource can be released quickly on a successful completion of the operation. It can also ensure the resource is released reasonably quickly (e.g., a few milliseconds) when an error occurs, without the need for any management software intervention.


Specifically, a reduction engine can be provided within a switch. The reduction engine can take packets from a number of endpoints and combine them to generate a single packet that can be returned to a node. The reduction engine can also perform a synchronization function, often referred to as a barrier, or can perform some mathematical function that combines or sorts the values provided by the endpoints into a single value. By placing the reduction engine within the body of the network, the latency, i.e., the time it takes to complete the operation, can be reduced by an order of magnitude, because a single round of communication across the network is typically sufficient to complete the whole reduction.


The reduction process can use a multicast session, issued from a reduction root node's edge port and sent to all the edge leaf ports, to set up or arm each of the reduction engines within the network. Each port of the switches within the network can have an instance of the reduction engine. The arming multicast (i.e., the multicast setup packet) can start at the root node's ingress edge port. When the setup packet is received, it can arm the local reduction engine associated with the corresponding port. The packet can then be multicast to a number of output ports where it is forwarded to the output's link partner ingress port, which can reside on another switch. The downstream switch can further multicast the setup packet to a set of output ports while at the same time arming an instance of a reduction engine associated with the ingress port.


The above process can repeat, arming reduction engines along the multicast data path until the setup packet arrives at the egress edge ports of the leaf switches where it is passed to the compute node. At this point, all the reduction engines can be armed and ready to receive the reduction packets, which can travel upstream along the multicast tree and be reduced to a single packet that represents a reduced result.


After a computation operation, the leaf nodes can be ready to inject their result packets back into the network. They can do so in a way that causes each packet to retrace, in reverse, the path taken by the original multicast packet. This ensures that the result packets can each be intercepted by the now armed reduction engines, where the reduction function can take place.


The multicast tree can be traversed by the result packets in the reverse direction through the network. This reduction process does not require any software intervention, other than setting up the initial multicast tree. Modern switching devices typically can accommodate a large number of separate multicast trees, which allows many reduction configurations to be simultaneously configured in a network. As a result, the setup and teardown cost can be significantly amortized and parallelized, which allows the reduction function to be scaled to a large number of users.


It is also possible to add a count value to each reduction packet to represent the number of inputs used to construct the reduction result held within the packet. This allows the acceleration provided by a reduction engine to be skipped, if necessary, without affecting the reduction function. This is possible because the receiving node is then given enough information to complete the operation itself.


A timeout mechanism can also be added to the reduction engines to ensure that the reduction engine resource can eventually become free, even in the presence of errors. If an error occurs, or if the reduction is not able to complete because one of the inputs to the reduction function is not present or is delayed for some reason, the timeout can ensure that the resource is released, with the input information available up to that point being used for the reduction computation. The root node of the reduction can receive this partial result and recognize that the result is not complete. The root node can optionally wait for the missing result to arrive without blocking the shared reduction resource.



FIG. 1 shows an exemplary network. In this example, a network 100 of switches, which can also be referred to as a “switch fabric,” can include switches 102, 104, 106, 108, and 110. Each switch can have a unique address or ID within switch fabric 100. Various types of devices and networks can be coupled to a switch fabric. For example, a storage array 112 can be coupled to switch fabric 100 via switch 110; an InfiniBand (IB) based HPC network 114 can be coupled to switch fabric 100 via switch 108; a number of end hosts, such as host 116, can be coupled to switch fabric 100 via switch 104; and an IP/Ethernet network 118 can be coupled to switch fabric 100 via switch 102. In general, a switch can have edge ports and fabric ports. An edge port can couple to a device that is external to the fabric. A fabric port can couple to another switch within the fabric via a fabric link. Typically, traffic can be injected into switch fabric 100 via an ingress port of an edge switch, and leave switch fabric 100 via an egress port of another (or the same) edge switch. An ingress link can couple a network interface controller (NIC) of an edge device (for example, an HPC end host) to an ingress edge port of an edge switch. Switch fabric 100 can then transport the traffic to an egress edge switch, which in turn can deliver the traffic to a destination edge device via another NIC.


In one embodiment, each port of a switch can include a reduction engine that is used to accelerate reduction operations. Reductions can be performed using a multicast tree. Each reduction engine in the multicast tree can be armed by a reduction arm frame sent by a root switch through the multicast tree. After receiving the reduction arm frame, leaf nodes of the multicast tree can send reduction data frames containing their contributions up the multicast tree to the root node. Each reduction engine in the tree can intercept the reduction data frames and perform reduction on them. When a reduction engine receives the expected number of contributions or times out, it can forward the reduced result up the multicast tree. The root node may receive a single, fully reduced data frame, or, if any reduction engine times out, it may receive multiple, partially reduced data frames. In either case, the root node can complete the reduction, incorporating its own contribution. The final result of the reduction can then be sent down the multicast tree to leaf nodes. The result frame can carry another round of reduction arming instruction, which can then re-arm the reduction engines at the same time.


The reduction engine can reduce latency in critical network operations including reduce, all-reduce, and barrier. Reduction operations can be performed over a spanning tree embedded within the network. FIG. 2 shows an exemplary multicast tree for a reduction process. In this example, a multicast tree for the reduction process can include a root endpoint 202, a root switch 204, a number of leaf switches such as leaf switch 206, and a number of leaf endpoints such as endpoint 208. Root switch 204 is responsible for initiating the multicast tree for the reduction process. Each switch can include a reduction engine that can be armed when the multicast session is set up. The leaf endpoints can inject frames, which can be combined as they flow up the tree, with the result being delivered to a process running at the root of the tree. As described below, the root process may need to complete the reduction in software. This is the ready phase of a reduction. The result of a reduction can then be multicast back down the tree to processes at the leaf endpoints, and the reduction engines can be re-armed, ready for the next round of reduction. This is the multicast phase of a reduction process.


The multicast phase of a reduction process can provide synchronization for a barrier operation, during which no data is required and a null reduction operation is used. Each node can join the reduction tree and wait for the result. When the root node receives the result, it can then issue a multicast down the reduction tree. In one embodiment, no endpoint is allowed to leave the barrier before all endpoints have entered.


Reduction engines can be provided on the output side of each link. They can operate on data held in the reduction buffers. In one embodiment, each reduction engine can support eight active reduction trees. Other numbers of reduction trees can also be supported. The reduction engines can perform on-the-fly combining of data frames. The reduction engines are armed during the multicast phase. They can combine upstream frames for a given amount of time. The reduction engine can be disarmed either when the current operation has been completed, or after a timeout period. In the event of a reduction timing out, any partial results can be forwarded up the tree towards the root. The purpose of the timeout is to ensure that no reduction state remains in the event of error, device failure, or frame loss within the reduction tree.



FIG. 3A shows a flow chart of an exemplary reduction process. During operation, a root process first initializes the reduction tree (operation 302). In HPC programming models, initialization can be a collective operation involving a number of processes that are to participate in a reduction. One process, which in this case can be the root process, can communicate with the network management software to create a spanning tree (the multicast tree), which can be represented by a multicast address. The network can use a multicast protocol to establish the multicast tree topology, and store the forwarding information in a data structure, such as a multicast table. This data structure typically stores topological and forwarding information, such as, for a given multicast address, the output ports to which a multicast packet should be forwarded. The root process can then arm the reduction engines in the spanning tree by sending a frame to the multicast address (operation 304). Other processes can wait until they receive this frame. Once this frame has reached all the participating processes, the reduction tree is now ready for use.


Subsequently, the participating processes can perform the computation task that results in their contribution to the reduction operation. Processes other than the root process can each construct a reduction frame and send it to the multicast address of the reduction tree (operation 306). The reduction engines residing in the switches participating in the reduction tree can perform reduction on the received frames and each send a reduced frame upstream toward the root switch of the reduction tree (operation 308). The root process can consume the data reduction frames. It can receive the contributions from the leaf nodes in one or more data reduction frames and complete the ready phase by performing the reduction operation on these frames, including its own contribution (operation 310). Optionally, the root process can then determine whether the computation task is complete (operation 312). If it is complete, the root process can send the result to the multicast address and release the reduction engines (operation 316). If the computation task is not complete, the root process subsequently constructs a reduction frame containing the result and sends it to the multicast address, which in turn can re-arm all the reduction engines in the reduction tree (operation 314). This operation can prepare the reduction engines for the next round of reduction. A similar reduction process can then be repeated until the computation task is complete.


The root node, or more generally a process on the root node, can perform a special role. It first completes the reduction process. As described later, loops are usually not allowed in the multicast tree; hence the root typically does not send its own contribution to itself. Assuming that the reduction engine at the root node of the reduction tree is able to accumulate all of the contributions from the leaf nodes before timing out, the root node can receive a single data reduction frame from the leaf nodes. In this case, the root node can combine this result with its own contribution. On the other hand, if the reduction engine at the root node times out or cannot be allocated, the root node may receive a number of data reduction frames that are to be combined in software. Once the root node has computed the final reduction, it can multicast this result to the leaf nodes. The root process can also have additional responsibilities in terms of handling errors.
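As a minimal illustration of this software completion step, the following Python sketch models each frame as a hypothetical (value, count) pair, folds whatever partially reduced frames reach the root into the root's own contribution, and checks the accumulated count against the expected total:

    # Sketch: root-side software completion of a reduction.
    # Each frame is modeled as (value, count): a partial result and the
    # number of leaf contributions already folded into it.

    def complete_at_root(frames, own_value, expected, op=lambda a, b: a + b):
        result, count = own_value, 1            # the root contributes once
        for value, n in frames:
            result = op(result, value)          # fold in each partial result
            count += n                          # accumulate contribution counts
        if count != expected:
            raise RuntimeError("incomplete reduction: %d of %d" % (count, expected))
        return result

    # Fully reduced: one frame already carrying all 16 leaf contributions.
    print(complete_at_root([(120, 16)], own_value=5, expected=17))
    # Timed-out case: the same total arrives as two partial frames.
    print(complete_at_root([(90, 12), (30, 4)], own_value=5, expected=17))

Either call prints the same result; the count field is what lets the root detect whether contributions are still missing.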



FIG. 3B shows a flow chart of an exemplary reduction operation by a reduction engine. During operation, a reduction engine residing on a switch can first receive a barrier frame sent to the multicast address (operation 322). Note that the barrier frame is the initial frame that is used to arm the reduction engines for a reduction tree for the first time. After receiving the barrier frame, the reduction engine can record the parent node (i.e., the switch from which the barrier frame is received), and perform a lookup in the multicast table to determine the downstream node to which the barrier is to be forwarded (operation 324). In addition, the reduction engine also sets a wait count, which corresponds to the number of reduction contributions that are expected to be received from the endpoints based on the multicast table entry.


Subsequently, the reduction engine forwards the barrier frame to the child nodes (operation 326). As the barrier frame travels down the reduction tree, all the reduction engines participating in this reduction tree are armed. As a result, the reduction engine at the local switch begins to receive reduction frames returned from the child nodes or endpoints (operation 328). Next, the reduction engine can aggregate the contributions and forward a reduction frame to the parent node (operation 330). At this point, the reduction engine is ready for the reduction operation. Next, the reduction engine can receive a result frame from the root node, which re-arms the reduction engines for a reduction operation involving actual data.


The examples shown in this description assume that one process per node contributes to the reduction, which is not required. There can be multiple contributions per node. In some embodiments, a local shared memory reduction and a network reduction can both be performed. Furthermore, the reduction engine described herein can support multiple concurrent non-blocking reduction operations on the same reduction tree.


The computational operations supported by a reduction engine can include, but are not limited to:

    • Null (i.e., the barrier operation which does not involve any payload data);
    • MIN, MAX, and SUM operations on integer or floating point data types;
    • MINMAXLOC operation (which returns the locations of minimum and maximum values found in an array) on integer or floating point values and integer indices;
    • Bitwise AND, OR, and XOR operation on integer data types;
    • Reproducible sum operations on floating point data types.


The data types supported by a reduction engine can include, but are not limited to, 64-bit integer and 64-bit IEEE 754 floating point.


In one embodiment, the MINMAXLOC operator can follow Message Passing Interface (MPI) conventions for MINLOC and MAXLOC operators when the values being compared are equal. In one embodiment, the lower of the two index values is returned.
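The tie-breaking rule can be illustrated with a small sketch (the (value, index) tuple representation here is a hypothetical model, not the frame format):

    # MPI-style MINLOC tie rule: when the compared values are equal,
    # the lower of the two indices is returned.

    def minloc(a, b):
        (va, ia), (vb, ib) = a, b
        if va != vb:
            return a if va < vb else b
        return a if ia < ib else b          # equal values: lower index wins

    print(minloc((3.0, 7), (3.0, 2)))       # -> (3.0, 2)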


For compatibility with a commonly used modern instruction set, rounding modes and exception behavior can follow the definitions in the Advanced RISC Machine (ARM) Architecture Reference Manual, ARMv8. For example, if any operand of a floating point operation is not a number (NaN), the result can be a quiet NaN with its sign=0.


The reproducible sum and MINMAXLOC operators can use one operand per endpoint. Other reductions can be performed on four 64-bit operands at a time with the same operation being applied to each of the operands.


The sum of a set of IEEE floating point values may depend on the order in which the operands are added. This can be an important issue when a reduction includes operands of widely varying magnitudes. The publication “Efficient Reproducible Floating Point Reduction Operations on Large Scale Systems,” available at https://bebop.cs.berkeley.edu/reproblas/docs/talks/SIAM_AN13.pdf describes one technique that can be used to achieve the desired level of precision for a given number of elements.
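The order dependence is easy to demonstrate with any IEEE 754 double arithmetic; near 1e16 the spacing between adjacent float64 values is 2.0, so a contribution of 1.0 survives or vanishes depending on when it is added:

    # Two orderings of the same three float64 operands.
    print(1e16 + 1.0 + -1e16)    # 0.0: the 1.0 is absorbed into 1e16 first
    print(1e16 + -1e16 + 1.0)    # 1.0: the large values cancel first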


A deterministic reduction can be performed using a global maximum followed by a global sum using standard floating point arithmetic. A single global sum using integer arithmetic can also be used. With the second approach to reduction, the host software is to perform the same operation when multiple contributions are delivered to the root node.


In general, each reduction engine can support multiple, independent reduction trees, each identified by a globally unique multicast address. Each point in the tree can be initialized with a local wait count value denoted as rt_waitcount. This count value is normally equal to the number of endpoints beneath that stage of the tree (i.e., the number of children nodes of a given node in the tree).


Reduction trees can be initialized by creating an entry in a multicast table which specifies the wait count and the set of output ports. This static state, which varies between locations in the tree, can be initialized by the management software in the same way as a multicast tree.


A single multicast address can be used for each reduction tree. At a parent port, the multicast table entry can specify the set of child ports. At each of the child ports, the multicast table entry can specify the parent port, i.e., the reverse path pointing back towards the root node. In general, loops are not allowed within a reduction tree. Unlike typical multicast entries, where any member of the multicast group is able to multicast to all other members of the multicast group, the multicast entries set up for reduction are one-sided, and only the reduction root is able to multicast to all members of the multicast set. When any other member of the reduction tree sends a frame to the multicast address, this frame is only forwarded back to the root node of the reduction tree. In addition, the forwarding of this frame typically follows exactly the reverse of the downstream multicast path from the root node. This forwarding mechanism guarantees that reduction frames can be correctly intercepted by the reduction engines that have been set up for them.
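A sketch of what such one-sided entries might look like for the tree of FIG. 2 is shown below; the multicast address, port numbers, and table layout are illustrative assumptions, but the asymmetry is the point: the root-side entry fans out to the child ports, while each child-side entry points only back toward the root.

    # Hypothetical one-sided multicast table entries for one reduction tree,
    # keyed by (switch, ingress port, multicast address).
    MCAST_ADDR = 0x1F3                      # placeholder multicast address

    tables = {
        # at the root switch's ingress edge port: fan out to the child ports
        ("root_switch", "edge_in", MCAST_ADDR):     [1, 2, 3, 4],
        # at a leaf switch's fabric port: only the reverse path to the root
        ("leaf_switch_1", "fabric_in", MCAST_ADDR): [0],
    }
    print(tables[("root_switch", "edge_in", MCAST_ADDR)])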


In one embodiment, one or more fields in a frame's header can be used as a protection key to ensure that all contributors to a given reduction are from the same application or service. For example, a virtual network identifier (VNI) field from the frame header can be used as a protection key. In addition, a frame's reduction header can contain a 32-bit cookie. All frames in a reduction can be required to have the same protection key and cookie as the frame used to arm all the reduction engines in the same reduction tree.


As mentioned above, reduction trees are armed before they can be used. A multicast session can be used to arm the tree. In a global reduction process, the multicast phase that distributes the result of a reduction can re-arm the tree. In one embodiment, a reduction_arm request can include the state that is constant for all points in the reduction tree. Reduction operations on a given tree can be identified by their multicast address, a cookie rt_cookie, and a sequence number rt_seqno. All contributors can be required to provide the same protection key (which can be the VNI value), cookie value, and the correct sequence number. The reduction engine can confirm that these conditions are met. The cookie value can be used to help prevent accidental or malicious interference in the reduction. It may be a random value generated by the root process.


The process of arming a reduction tree can create a dynamic state in the reduction engines for a given tree. The wait count value can be copied from the multicast table, which indicates the number of output ports for a given multicast address. A timeout value can be determined by comparing the value of the rt_waitcount value with values programmed by the management software. The protection key, cookie value, and the sequence number can be copied from the multicast frame. A local counter rt_count, which tracks the number of received reduction frames, can be initialized to zero.
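A sketch of this dynamic state follows, using the field names from the text; the timeout thresholds are illustrative assumptions, since the text only says the timeout is chosen by comparing rt_waitcount against values programmed by the management software:

    # Sketch: per-tree dynamic state created when a reduction engine is armed.
    import dataclasses

    # four programmed (wait-count limit, timeout) pairs -- assumed values
    TIMEOUT_TABLE = [(4, 1e-3), (64, 4e-3), (1024, 16e-3), (1 << 20, 64e-3)]

    @dataclasses.dataclass
    class ReductionState:
        rt_waitcount: int     # copied from the multicast table entry
        protection_key: int   # e.g. the VNI, copied from the arming frame
        rt_cookie: int        # copied from the arming frame
        rt_seqno: int         # copied from the arming frame
        timeout_s: float      # chosen from rt_waitcount
        rt_count: int = 0     # received contributions, initialized to zero

    def arm(waitcount, vni, cookie, seqno):
        # pick the timeout programmed for the smallest covering wait count
        timeout = next(t for limit, t in TIMEOUT_TABLE if waitcount <= limit)
        return ReductionState(waitcount, vni, cookie, seqno, timeout)

    print(arm(waitcount=16, vni=0x42, cookie=0xDEADBEEF, seqno=1))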


All contributions to a given reduction can specify the same reduction operation, which can be identified by an rt_op value. The reduction hardware can generate an error if frames with the same sequence number specify different operations.


Partial result frames can include a count of the accumulated number of contributions. The leaf endpoints can inject frames with a count of one. The reduction engine can increment the local counter by the partial counts from each frame as it performs the reduction operation. The reduction operation is complete in a given reduction engine when the local count reaches the wait count. On completion of a reduction or expiry of the timeout, the reduction engine forwards the partial result and frees up the dynamic state for the reduction tree. The static state remains in the multicast table until the multicast table entry is deleted. The reduction tree must be rearmed before it can be used again.


The result of the reduction is completed by a process at the root node. In a global reduction the result can be distributed to the leaf nodes using a multicast down the reduction tree. This operation can also re-arm the tree. In one embodiment, the system can supply both the sequence number for the result being distributed, rt_resno, and the sequence number for the reduction being armed, rt_seqno. The management software can increment the sequence number from one reduction to the next. In normal operation, rt_seqno is typically one higher than rt_resno modulo the size of the counter, which may not be the case in the event of error. For upstream reduction data frames, rt_resno does not need to be set by the management software and hence can be ignored by the hardware. The reduction engine can set rt_resno equal to rt_seqno when sending frames upstream.


In the example shown in FIG. 2, the reduction tree has a branching ratio of four. Endpoints such as leaf endpoint 208 supply their contributions with a count of one. Each first stage switch in this example, such as switch 206, has a wait count of four, which corresponds to the four leaf endpoints connected to each leaf switch. These switches can combine frames from their four children. Partial result frames with a contribution count of four are forwarded up the tree to the second stage switch, which in this example is switch 204. Switch 204 can have a wait count of 16. This second stage switch then applies the reduction operator to four frames (one from each of its children) and forwards a result frame with count 16 to root endpoint 202. In practice, the multicast tree may not be completely balanced, where some leaf nodes can have different contribution counts than others, and some later-stage switches can have different contribution counts from other switches at the same level.


Note that the reduction engine in each switch can accelerate a component of the reduction, which may not always be necessary for proper functionality. A reduction arm command may be unable to allocate a descriptor because, for example, all of the descriptors are busy, or a reduction descriptor may have timed out before all of the results are received. In either case, data frames for this reduction may fail to find a matching descriptor and then be forwarded along the multicast path. A reduction engine in a switch higher in the multicast tree may reduce these frames, or they may reach the root where they can be reduced in software.


In general, a single barrier or reduction operation proceeds with each node other than the root node providing a contribution and then waiting for a result to be returned from the root node. The root node gathers contributions, completes the reduction, and multicasts the result. The sequence number is incremented from one such reduction to the next. It may be desirable to pipeline multiple reductions over the same set of nodes at the same time, for example, to offload progression of non-blocking reductions or to increase bandwidth on multi-element floating point reductions. Software can perform multiple concurrent reductions on the same tree by distinguishing such reductions using high bits of the multicast address. For example, bits 15-3 of the multicast format Destination Fabric Address (DFA), which in one embodiment is used as an inter-switch address for routing traffic within a switch fabric, can distinguish 8K multicast trees. Bits 20-18 of the same multicast address can then be used to distinguish up to 8 reductions being performed concurrently on the same tree. When a reduction engine forms part of more than one reduction tree, probability of contention can increase when multiple reductions are performed concurrently. As a result, more of the reduction operations may be performed higher in the tree.
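The bit carving described above can be expressed directly; only the two bit ranges come from the text, and nothing else about the DFA layout is assumed here:

    # bits 15-3 of the multicast DFA: 13 bits, up to 8K trees
    def mcast_tree_id(dfa):
        return (dfa >> 3) & 0x1FFF

    # bits 20-18: up to 8 concurrent reductions on the same tree
    def reduction_id(dfa):
        return (dfa >> 18) & 0x7

    dfa = (5 << 18) | (1234 << 3)                  # reduction 5 on tree 1234
    print(mcast_tree_id(dfa), reduction_id(dfa))   # -> 1234 5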


In one embodiment, a reduction engine can use a timer to limit the amount of time it spends waiting for contributions. As a result, reduction operations may time out. On expiry of the timer, the partial result of a reduction can be sent up the tree towards the root endpoint. Any further data frames associated with a reduction that has timed out can also be sent up the tree.



FIG. 4 shows an example where one leaf endpoint 402 (shown in gray) is late joining the reduction process. All the other leaf endpoints send packets with a count of one. One of the first stage switches, switch 404 (shown in gray), has received frames from three of its children when the timeout expires. It can then forward a result frame with a count of three up the tree. Sometime later, endpoint 402 supplies its contribution to the reduction. This frame is forwarded up the tree. In this example, second stage switch 406 accumulates five frames, three with counts of four, one with a count of three (from switch 404), and the late frame with a count of one from endpoint 402. Switch 406 subsequently sends its result to root endpoint 408 with a count of 16. In one embodiment, second stage switch 406 can have a longer timeout period to accommodate the late arrival. If all switches had the same timeout values, second stage switch 406 may forward two frames to root endpoint 408, one with a count of 15 and the late frame from endpoint 402 with a count of one. The root endpoint can then complete the reduction.


In one embodiment, the value for a reduction timeout can be set based on the expected time between reduction operations plus twice the expected variation in arrival time. High timeout values (seconds) do not alter the error free operation, but may delay the arrival of the partial results in the event of error. High timeout values can also cause reduction engine resources to be tied up for longer in the case of a dropped frame or the delayed arrival of a partial result. On the other hand, low timeout values may cause problems with scalability.
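As a sketch, this guideline translates into a one-line rule of thumb (the example numbers are purely illustrative):

    # timeout = expected time between reductions + 2 x expected arrival jitter
    def reduction_timeout(expected_interval_s, arrival_jitter_s):
        return expected_interval_s + 2.0 * arrival_jitter_s

    print(reduction_timeout(50e-6, 10e-6))   # -> 7e-05, i.e. 70 microseconds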


The purpose of the reduction engine is to accelerate latency-sensitive operations, e.g., those in which all processes arrive at a reduction or barrier at approximately the same time. Where there is significant load imbalance and one or more endpoints arrive late, the root node can receive multiple frames. The root node can complete the reduction quickly as the last frame arrives. If the timeout is set correctly (or conservatively), the time spent processing these frames can be small in comparison with the time spent waiting.


In some systems, such as Exascale systems, errors such as frame drops are usually expected to occur. The reduction mechanism can be designed to still function well in the presence of errors. Two classes of error can be of importance for reduction operations: link errors that cause frames to be corrupted; and device errors that cause frames to be dropped. Where available, link level retry can protect against common link errors; therefore, the dominant error case can be expected to be dropped frames arising from a switch or cable failure. Where forward error correction (FEC) is used without link level retry, a small proportion of link errors can result in frame drops. Most such frame drops can occur on bulk data transfer rather than reductions. However, the likelihood of an error causing reduction frames to be lost can increase with the job size.


The time required to detect errors in reduction operations arising from frame loss can generally be counted in seconds. Under normal operation, this period can be set to be longer than the expected spread of arrival times or the time that nodes might spend in computation while waiting for a non-blocking reduction to complete. After this period, the root process can reasonably be assumed to be blocked waiting for the reduction to complete. However, there are many examples where processes wait for long periods of time in reduction or barrier operations while other processes complete sequential work. A long time spent in a reduction does not always imply that an error has occurred.



FIG. 5A shows an example where one leaf endpoint fails to supply a contribution because of an error. In this example, leaf endpoint 502 experiences an error and does not supply its contribution to the reduction process. As a result, intermediate switch 504 times out and forwards a partial result with a count of three. Root switch 506 again times out (because a total count of 16 is not received before switch 506's timer expires) and forwards a frame with a count of 15, or perhaps a frame with a count of 12 and a second, subsequent frame with a count of three. In either case, root endpoint 508 then consumes the frame(s) and determines that there has been an error.



FIG. 5B shows a flow chart of an exemplary timer-based reduction process. In this example, a reduction engine first receives a result frame from the root node, which arms the reduction engine (operation 522). The reduction engine then becomes armed for the reduction tree identified in the result frame, and forwards the result frame to its child nodes in the reduction tree (operation 524). Next, the reduction engine determines whether a sufficient number of reduction contributions have been received (operation 526). This determination is based on the corresponding wait count, which has been set up during the ready phase when the reduction tree is initiated for the first time. If the wait count has been met, the reduction engine then performs the reduction operation on the received contributions, generates its own reduction frame, and forwards the reduction frame toward the root (operation 532). If the wait count is not satisfied, the reduction engine further determines whether the timer has expired (operation 528). If the timer has not expired, the reduction engine continues to wait for reduction frames to arrive from child nodes (operation 530). If the timer has expired, the reduction engine then performs the reduction operation on the received reduction frames, if any, and forwards its own reduction frame toward the root node (operation 532). If there are any late contribution frames that arrive after the timer expires, the reduction engine can also forward them toward the root node (operation 534).
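The decision loop of FIG. 5B can be sketched in a few lines of Python; an in-process queue stands in for frames arriving from child ports, and the (value, count) frame model and addition operator are illustrative assumptions:

    # Sketch of the FIG. 5B loop for one armed reduction engine.
    import queue, time

    def run_engine(frames_in, wait_count, timeout_s, op=lambda a, b: a + b):
        value, count = None, 0
        deadline = time.monotonic() + timeout_s
        while count < wait_count:                         # operation 526
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break                                     # timer expired (528)
            try:
                v, n = frames_in.get(timeout=remaining)   # wait for frames (530)
            except queue.Empty:
                break                                     # timer expired (528)
            value = v if value is None else op(value, v)  # reduce on the fly
            count += n                                    # accumulate counts
        return value, count                               # forward result (532)

    q = queue.Queue()
    for frame in [(10, 1), (20, 1), (30, 1)]:             # one child is late
        q.put(frame)
    print(run_engine(q, wait_count=4, timeout_s=0.01))    # -> (60, 3) on timeout

A late frame arriving after the timeout would find no armed state and simply be forwarded toward the root, as in operation 534.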


In one embodiment, reliable reductions can be implemented at the transport layer. The network hardware can be designed to accelerate the common case and to free all hardware resources in the case of error. Software can protect against device failure in the reduction tree by performing the same reduction operation over two independent trees. The probability of uncorrelated double error is small. When the root endpoint receives a result (or sufficient frames to construct the result) from one tree, the host software can multicast the result on both trees. The sequence number from the successful operation can be used as the result number.


In the case where the second tree is left with a partial result, potentially several steps back, or stranded on a congested link or a busy queue, delivery of the reduction multicast frame with the rt_arm bit set can advance the sequence number and clear the state. Frames with an old sequence number can be dropped. One or more frames may still be in flight up the tree.


Reductions are typically latency sensitive, and not bandwidth intensive. The additional network load created by performing two simultaneous reductions is usually negligible. An added benefit of performing reductions twice, on distinct trees, is that the first result can be used, potentially reducing the time to complete the operation if the network is congested. The drawback to this approach, however, is that twice as many reduction resources (e.g., multicast table entries and reduction engines/IDs) are used, potentially reducing the number of simultaneous reductions by a factor of two.


A correlated double error may arise as a result of chassis power failure, but such an error is likely to cause node failures as well. In a multi-slice network, software can create reduction trees on different slices. Where there is only a single network slice, trees on the same slice can share a common-chassis switch. The reduction is vulnerable to loss of this switch, but such an error can disconnect the nodes as well.


Allowing reductions to time out ensures that no reduction state is left behind in the event of error. There is no need to issue management requests to search for and release hardware resources after a fault. In designs where reduction state can be left in the network in the event of error, fault recovery can be complex.


In some embodiments, a flexible reduction protocol can be provided where a missing reduction engine, or one where all resources have been consumed, does not render the protocol dysfunctional. A missing early leaf reduction computation can be completed by a reduction engine higher in the tree, as shown in FIG. 6. In this example, the reduction engine on a leaf switch 604 is unavailable for the reduction process. As a result, leaf switch 604 forwards all four contribution frames generated by the four children leaf endpoints to a root switch 606. Meanwhile, the other three leaf switches 601, 602, and 603 each send their respective reduction frame (with a count of four) to root switch 606. Root switch 606 can process all seven of these frames and forward a single reduction frame with a count of 16 to root endpoint 608. In general, a reduction computation high in the tree can be completed by a process running on the root node. The acceleration offered by the reduction engines in those cases may be different, but the calculated result remains the same.


If a reduction tree is armed, but no contributions arrive before all of the active reduction engines time out, then all contributions can be forwarded, unreduced, to the root endpoint. The root process can subsequently perform the entire computation. The time required to perform this computation can be much shorter than the timeout period. This reduction scenario does not require acceleration.


The maximum number of reduction trees that can be supported on a given system can be determined by the product of the number of endpoints that can act as root and the number of active trees supported by a given reduction engine. There is usually limited value in accelerating reductions over small numbers of nodes, because each node can simply send its contribution to the root node. Suppose the goal is to accelerate reductions of sixteen nodes and above. A system might run a thousand or more such jobs, while large systems tend to run a mix of job sizes, which in turn may reduce the number of active reduction trees. A large application might use multiple reduction trees at the same time. In the canonical example, the processes can be arranged in a 2D mesh and reductions are performed over the rows, the columns, and over the entire mesh. For P processes, the number of active reductions can be twice the square root of P plus one. For implementation considerations, the multicast table is relatively expensive in terms of die area. In one embodiment, a switch chip can support 8192 multicast addresses, while other numbers are also possible. Note that non-intersecting trees may use the same multicast address.


In some embodiments, software can decide which reductions to offload. A request can be sent to the network management system to create a reduction tree. If this request fails, the network application programming interface (API) can perform the operation in software. However, running different instances of the same job with software-based reductions or accelerated reductions can potentially lead to performance variation, which is undesirable. The reduction offload strategy can be configured such that running out of reduction trees is unlikely.


In one embodiment, a 6-bit command field is used to indicate different reduction operations, as shown in FIG. 7. This 6-bit command field can provide scope for expansion if needed. Multiple concurrent reductions may operate on the same tree. Each reduction can use a different reduction ID in the DFA. The wait count can be set to be sufficient for one contribution per node in a maximally sized system. In one embodiment, the wait count can be a 20-bit value.


In addition, four configurable timeout values can be provided. These 24-bit values can be given in units of 1024 clock cycles. At 850 MHz, this provides a range from 1.20 us to 20.2 s. In addition, a 10-bit sequence number can be used, which can avoid being reused while an old reduction frame with the same multicast address and the same cookie value is still in the network. Note that this problem does not occur on the same reduction tree, because in-order delivery prevents an old reduction frame from being delivered after frames transmitted later on the same tree. Therefore, for a late-arriving frame to cause a problem, the process it belongs to needs to exit first so that a newly launched process could request a new multicast tree, which can be built using the same multicast address. The chance that the old reduction frame survives the reprogramming of the multicast table is low. If it did survive, it still needs to intersect the new tree after the new tree performs 1024 reductions. Note that the 32-bit cookie value can also provide an extra layer of protection.
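The quoted range follows directly from the stated units, as a quick check shows:

    # 24-bit timeout values in units of 1024 clock cycles at 850 MHz
    CLOCK_HZ = 850e6
    UNIT_S = 1024 / CLOCK_HZ
    print(UNIT_S)                 # ~1.20e-06 s: the 1.20 us minimum
    print((2 ** 24) * UNIT_S)     # ~20.2 s at the 24-bit maximum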



FIG. 7 shows an exemplary list of reduction operations, which are explained below. The FLT_REPSUM and MINMAXLOC operations support one reduction at a time per tree. For all other operations, four reductions per tree can be performed in parallel. The floating point sum operations can have four rounding modes and flush-to-zero (FTZ), which can be encoded in three bits as eight different commands. Floating point MIN and MAX have two modes for handling NaNs (mirroring the ARM MIN and MINNUM operations). FTZ only applies to denormalized results. If a result is to be denormalized, it is set to 0 instead and flt_inexact is raised. The operation sometimes known as DAZ may be supported in software at the leaf nodes.


BARRIER: The data returned by BARRIER operations is always 0.


MINMAXLOC: The MINMAXLOC operators are used to support the MINLOC and MAXLOC operations. Operands 0 and 1 compute MINLOC, and operands 2 and 3 compute MAXLOC. FIG. 8 shows a set of MINMAXLOC operands. Note that when more than one index contains the minimum/maximum value, the lowest such index is recorded in the MINLOC/MAXLOC field.


FLT_MIN and FLT_MAX: When all inputs of FLT_MIN or FLT_MAX are floating point numbers, the minimum or maximum value is returned. When any operand of FLT_MIN or FLT_MAX is a NaN, NaN is returned. In a given pairwise reduction, if one operand is a signaling NaN and one operand is a quiet NaN, the signaling NaN is selected to be returned. The NaN returned can be turned into a quiet NaN with its sign bit cleared.


FLT_MINNUM and FLT_MAXNUM: These operations are similar to FLT_MIN and FLT_MAX but handle operands that are NaNs differently. In the absence of a signaling NaN, FLT_MINNUM and FLT_MAXNUM can return the smallest/largest numbers in the reduction. A quiet NaN is only returned when all of the operands in one of these reductions are quiet NaNs.
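For quiet NaNs, the FLT_MINNUM rule reduces to a simple sketch (signaling-NaN handling depends on the configured mode and is described next):

    import math

    # FLT_MINNUM over quiet NaNs: a number beats a NaN; NaN is returned
    # only when both operands are NaNs.
    def minnum(a, b):
        if math.isnan(a):
            return b
        if math.isnan(b):
            return a
        return min(a, b)

    print(minnum(float("nan"), 3.0))            # -> 3.0
    print(minnum(float("nan"), float("nan")))   # -> nan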


The behavior of signaling NaNs with regard to these operators can be controlled by an R_TF_RED_CFG_MODE register. In standard IEEE mode, a signaling NaN (SNaN) as an operand of a pairwise reduction always produces a quiet NaN as a result. This produces indeterminate results for the complete reduction. In the recommended associative mode, when one operand is SNaN and the other is a number, the result is the number. Thus, in associative mode, if at least one operand in the reduction is a floating point number, the minimum or maximum floating point operand is returned. In either mode, if any operand is a signaling NaN, flt_invalid is returned.


FLT_MINMAXNUMLOC: This operation computes both FLT_MINNUM and FLT_MAXNUM.


FLT_SUM: This floating point sum operation has a flush-to-zero option and four rounding modes. When flush-to-zero is enabled, if the sum is denormalized, it is set to 0. The sign of the denormalized result is preserved. The four rounding modes match the ARM rounding modes and are shown in FIG. 9.
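The flush-to-zero behavior can be sketched as follows (a minimal model; the real hardware would also raise flt_inexact):

    import math, sys

    # FTZ: a denormalized result is replaced by zero with its sign preserved.
    def ftz(x):
        if x != 0.0 and abs(x) < sys.float_info.min:   # denormal result
            return math.copysign(0.0, x)
        return x

    print(ftz(5e-324), ftz(-5e-324), ftz(1.0))         # -> 0.0 -0.0 1.0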


FLT_REPSUM: The reproducible floating point sum is accomplished by splitting each floating point operand into up to four integer components, each of which has limited precision. The number of significant bits (W) in each component is selected such that integer overflow cannot occur. The value of W is selected by software and is not observable in the hardware. W is used to compute the integer values IX to load into the reduction operand rt_data. When the reduction is complete, W is used to construct the floating point result. The software may choose to set W to 40; in this case, up to 2^24 operands may be reduced. A floating point number can be represented by up to four W-bit integers as follows: Σ_{j=M}^{M+3} IX[j−M] × 2^(W×j).


For each floating point operand, the software chooses the largest value of M such that the least significant bit of the operand appears in IX[0]. Software is responsible for loading the four IX values into rt_data and the value of M in rt_repsum_m, which is an eight-bit signed integer. IX[0], the least significant operand, is loaded into rt_data[0].


When two operands in this format are added by the reduction engine, if one operand's M, M′, is larger than the other, the hardware can discard any IX[j] in the smaller operand where j<M′ because these values may not have significance in the final result. If this occurs during the course of the reduction and any nonzero operands are dropped, repsum_inexact can be returned.


When the reduction is complete, the root process can convert the resulting operands and rt_repsum_m into a floating point number. If there are more operands than are supported by the chosen W, int_overflow may be returned. In this case, the result is not valid. Note that int_overflow is only reported if the overflow occurs in one of the significant values returned in the result. The rt_repsum_oflow_id identifies the most significant operand to overflow. When a partial result with int_overflow is reduced with another partial result, if (M+rt_repsum_oflow_id) of the overflow partial result is less than M′ of the other partial result, the int_overflow result code is dropped.
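The mechanism can be sketched end to end in software. This is a minimal model under stated assumptions: W=40 as suggested in the text, helper names invented here, and no overflow or inexact bookkeeping. It shows why the integer representation makes the sum independent of operand order:

    import math
    from fractions import Fraction

    W = 40                                   # per-component precision (from the text)

    def split(f):
        """Decompose float64 f into (IX, M): f == sum(IX[i] * 2**(W*(M+i)))."""
        if f == 0.0:
            return [0, 0, 0, 0], 0
        mant, exp = math.frexp(f)            # f = mant * 2**exp, exactly
        k, exp = int(mant * 2 ** 53), exp - 53
        while k % 2 == 0:                    # strip trailing zeros so the
            k, exp = k // 2, exp + 1         # LSB of f lands in IX[0]
        M = exp // W                         # largest M with W*M <= exp
        v = k << (exp - W * M)               # exact signed integer f / 2**(W*M)
        s, v = (-1, -v) if v < 0 else (1, v)
        return [s * ((v >> (W * i)) & ((1 << W) - 1)) for i in range(4)], M

    def add(a, b):
        """Reduce two operands, aligning to the larger M (may drop low parts)."""
        (ixa, ma), (ixb, mb) = a, b
        m = max(ma, mb)
        va = sum(x << (W * i) for i, x in enumerate(ixa)) >> (W * (m - ma))
        vb = sum(x << (W * i) for i, x in enumerate(ixb)) >> (W * (m - mb))
        v = va + vb                          # integer addition: order-independent
        s, v = (-1, -v) if v < 0 else (1, v)
        return [s * ((v >> (W * i)) & ((1 << W) - 1)) for i in range(4)], m

    def value(a):
        """Exact value of an (IX, M) operand, for the root-side conversion."""
        ix, m = a
        return sum(Fraction(x) * Fraction(2) ** (W * (m + i)) for i, x in enumerate(ix))

    ops = [1e16, 1.0, -1e16]
    print(ops[0] + ops[1] + ops[2])          # 0.0: plain float64 loses the 1.0
    acc = split(0.0)
    for f in ops:                            # same result in any order
        acc = add(acc, split(f))
    print(float(value(acc)))                 # 1.0

The final prints show plain float64 addition losing the small operand in one ordering, while the integer-component sum yields the same answer regardless of the order in which contributions arrive.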


In some embodiments, static state for reduction operations can be programmed in the multicast table. This state can vary between devices in the same reduction tree. The state can be created by the management agent when a job starts or a new reduction tree is created. It can use the same mechanism as is used for setting up standard multicast entries. The multicast reduction trees are distinguished from multicast trees by a non-zero wait count. The timeout value for a reduction in a particular reduction engine can be determined by its wait count value and the configuration.


The reduction frame format is the same for reduction_arm and reduction_data frames. They can be distinguished by the rt_arm field which can be set on all frames descending the tree. The 84-byte Portals-formatted reduction frame is shown in FIG. 10.


The Portals header and command fields are stored once in the reduction state. The 12-byte reduction header is shown in FIG. 11. The endianness of the operands, which is used for the MINMAXLOC and reproducible sum operators, is shown in FIG. 12.


Errors or inexact results encountered during the course of a reduction are reported in rt_rc. In general, these events do not prevent the reduction from completing. However, in the event of an opcode mismatch, the reduction cannot be performed. In this case, the Source Fabric Addresses (SFA) of two operands with differing opcodes are returned in rt_data[0]. The reduction result codes are summarized in FIG. 13.


There is only one result code shared by the four parallel reductions. Result codes are defined in priority order. If a reduction encounters more than one exception condition, the largest is retained. For example, flt_invalid is the highest priority FLT_SUM result code.


In some embodiments, the reduction engine can be utilized from an Ethernet NIC which is generally not equipped to operate with the HPC Ethernet specification and utilize the Portals packet format. To facilitate reduction using reduction engines, the system can use the Soft Portals encapsulation, in which the Portals packet can be prepended with an Ethernet header constructed to be consistent with the configuration for Portals on the port that the NIC is connected to. This is illustrated in FIG. 14.


Within the Linux operating system, it is possible to construct these packets using a raw socket; this is suitable for functional testing since it requires the process to execute as root or with CAP_NET_RAW. The socket can be opened to receive only the specified ether type that a switch is configured to use in the Ethernet header it prepends to the Portals packets. It should be noted that the VNI field of the Portals packet is the protection mechanism; it can be inserted in a privileged domain, and for production usage this insertion can be performed in a kernel module.
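A minimal Linux sketch of this injection path is shown below; the ethertype, interface name, destination MAC, and payload bytes are all placeholders that must match the switch's actual configuration, and, as noted above, the process needs root or CAP_NET_RAW:

    # Sketch only: injecting a Soft Portals frame with a Linux raw socket.
    import socket, struct

    ETHERTYPE = 0x88B5            # assumption: whatever ether type the switch uses
    IFACE = "eth0"                # placeholder interface name

    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETHERTYPE))
    s.bind((IFACE, 0))

    dst = bytes.fromhex("ffffffffffff")   # placeholder destination MAC
    src = s.getsockname()[4]              # this interface's MAC address
    eth = dst + src + struct.pack("!H", ETHERTYPE)
    portals_packet = b"..."               # the Portals-formatted frame goes here
    s.send(eth + portals_packet)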



FIG. 15 shows an exemplary switching system that facilitates a reduction engine. In this example, a switch 1502 can include a number of communication ports, such as port 1520. Each port can include a transmitter and a receiver. Switch 1502 can also include a processor 1504, a storage device 1506, and a switching logic block 1508. Switching logic block 1508 can be coupled to all the communication ports and can further include a crossbar switch 1510 and a reduction engine logic block 1514.


Crossbar switch 1510 can include one or more crossbar switch chips, which can be configured to forward data packets and control packets among the communication ports. Reduction engine logic block 1514 can be configured to perform various dynamic reduction functions as described above. Also included in switching logic block 1508 is a multicast table 1516, which can store the reduction tree topology and state information to facilitate the reduction operations performed by reduction engine logic block 1514. Other types of data structure can also be used to store the topology and state information.


In summary, the present disclosure describes a switch capable of facilitating a self-managing reduction engine in a network. The reduction engine is capable of handling various latency-induced and error scenarios while performing on-the-fly reduction. As a result, the network can facilitate an efficient and scalable environment for high performance computing.


The methods and processes described above can be performed by hardware logic blocks, modules, logic blocks, or apparatus. The hardware logic blocks, modules, logic blocks, or apparatus can include, but are not limited to, application-specific integrated circuit (ASIC) chips, field-programmable gate arrays (FPGAs), dedicated or shared processors that execute a piece of code at a particular time, and other programmable-logic devices now known or later developed. When the hardware logic blocks, modules, or apparatus are activated, they perform the methods and processes included within them.


The methods and processes described herein can also be embodied as code or data, which can be stored in a storage device or computer-readable storage medium. When a processor reads and executes the stored code or data, the processor can perform these methods and processes.


The foregoing descriptions of embodiments of the present invention have been presented for purposes of illustration and description only. They are not intended to be exhaustive or to limit the present invention to the forms disclosed. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. Additionally, the above disclosure is not intended to limit the present invention. The scope of the present invention is defined by the appended claims.

Claims
  • 1. A switch, comprising: a number of ports; and a reduction engine coupled to at least one port and to: be armed for a reduction process based on a received frame, wherein the received frame identifies the reduction process; set a wait count for the reduction process associated with a number of reduction contributions to be received by the reduction engine; set a timer for the reduction process; send the frame to one or more output ports toward endpoints associated with the reduction process, thereby allowing additional reduction engines to be armed for the reduction process; receive a reduction contribution; determine whether the reduction engine is available and whether the timer has expired; and responsive to determining that the reduction engine is not available or that the timer has expired, forward the received contribution, without performing a reduction process on the received contribution, to a root node of the reduction process, wherein the received contribution is used by the root node to perform a reduction operation.
  • 2. The switch of claim 1, wherein while setting the wait count, the reduction engine is further to determine a number of endpoints expected to send reduction contributions to the reduction engine.
  • 3. The switch of claim 1, wherein the reduction engine is further to: determine that a number of received contributions is less than the wait count; determine that the timer has expired; and perform a reduction operation on one or more contributions received before expiry of the timer to generate a reduction frame to be forwarded to a root node of the reduction process.
  • 4. The switch of claim 1, wherein the reduction engine is further to: determine that a number of received contributions is less than the wait count;determine that the timer has not expired; andwait for one or more additional contributions to arrive.
  • 5. The switch of claim 1, wherein the reduction engine is further to store the wait count in an entry of a multicast table.
  • 6. The switch of claim 1, wherein the reduction engine is further to release its resources with respect to the reduction process.
  • 7. The switch of claim 1, wherein the reduction engine is further to: responsive to determining that the reduction engine is not available, forward the received contribution, without performing a reduction process on the received contribution, to a reduction engine of a node between the switch and the root node, wherein the received contribution is used by the node to perform a reduction operation.
  • 8. A method, comprising: arming a reduction engine for a reduction process based on a received frame, wherein the received frame identifies the reduction process; setting a wait count for the reduction process associated with a number of reduction contributions to be received by the reduction engine; setting a timer for the reduction process; sending the frame to one or more output ports toward endpoints associated with the reduction process, thereby allowing additional reduction engines to be armed for the reduction process; receiving a reduction contribution; determining whether the reduction engine is available and whether the timer has expired; and responsive to determining that the reduction engine is not available or that the timer has expired, forwarding the received contribution, without performing a reduction process on the received contribution, to a root node of the reduction process, wherein the received contribution is used by the root node to perform a reduction operation.
  • 9. The method of claim 8, wherein setting the wait count comprises determining a number of endpoints expected to send reduction contributions to the reduction engine.
  • 10. The method of claim 8, further comprising: determining that a number of received contributions is less than the wait count; determining that the timer has expired; and performing a reduction operation on one or more contributions received before expiry of the timer to generate a reduction frame to be forwarded to a root node of the reduction process.
  • 11. The method of claim 8, further comprising: determining that a number of received contributions is less than the wait count; determining that the timer has not expired; and waiting for one or more additional contributions to arrive.
  • 12. The method of claim 8, further comprising storing the wait count in an entry of a multicast table.
  • 13. The method of claim 8, further comprising releasing the reduction engine's resources with respect to the reduction process.
  • 14. The method of claim 8, further comprising: responsive to determining that the reduction engine is not available, forwarding the received contribution, without performing a reduction process on the received contribution, to a reduction engine of a node between the switch and the root node, wherein the received contribution is used by the node to perform a reduction operation.
  • 15. A network system, comprising: a number of interconnected switches, wherein a respective switch comprises: a number of ports; and a reduction engine coupled to at least one port and to: be armed for a reduction process based on a received frame, wherein the received frame identifies the reduction process; set a wait count for the reduction process associated with a number of reduction contributions to be received by the reduction engine; set a timer for the reduction process; send the frame to one or more output ports toward endpoints associated with the reduction process, thereby allowing additional reduction engines to be armed for the reduction process; receive a reduction contribution; determine whether the reduction engine is available and whether the timer has expired; and responsive to determining that the reduction engine is not available or that the timer has expired, forward the received contribution, without performing a reduction process on the received contribution, to a root node of the reduction process, wherein the received contribution is used by the root node to perform a reduction operation.
  • 16. The network system of claim 15, wherein while setting the wait count, the reduction engine is further to determine a number of endpoints expected to send reduction contributions to the reduction engine.
  • 17. The network system of claim 15, wherein the reduction engine is further to: determine that a number of received contributions is less than the wait count; determine that the timer has expired; and perform a reduction operation on one or more contributions received before expiry of the timer to generate a reduction frame to be forwarded to a root node of the reduction process.
  • 18. The network system of claim 15, wherein the reduction engine is further to: determine that a reduction frame is received after the timer has expired; and forward the received reduction frame to a root node of the reduction process.
  • 19. The network system of claim 15, wherein the reduction engine is further to: determine that a number of received contributions is less than the wait count; determine that the timer has not expired; and wait for one or more additional contributions to arrive.
  • 20. The network system of claim 15, wherein the reduction engine is further to store the wait count in an entry of a multicast table.
  • 21. The network system of claim 15, wherein the reduction engine is further to release its resources with respect to the reduction process.
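As a hedged illustration of the arming sequence recited in claims 1, 8, and 15, the following C sketch shows how a reduction engine might arm itself on a frame that identifies the reduction process, set the wait count and timer, and propagate the frame so downstream engines can arm for the same process. frame_t, count_expected_children(), and forward_frame() are hypothetical names introduced only for this sketch; mcast_entry_t is the assumed entry structure from the earlier sketches.

```c
/*
 * Hedged sketch of the arming sequence in claims 1, 8, and 15.
 * frame_t and the extern helpers are hypothetical.
 */
#include <stdint.h>

typedef struct {
    uint32_t reduction_id; /* identifies the reduction process */
    /* payload and routing fields omitted */
} frame_t;

extern uint64_t now(void);
extern uint16_t count_expected_children(const mcast_entry_t *e);
extern void forward_frame(uint64_t port_bitmap, const frame_t *f);

void arm_reduction(mcast_entry_t *e, const frame_t *f, uint64_t timeout)
{
    e->tree_id        = f->reduction_id;            /* frame names the process  */
    e->wait_count     = count_expected_children(e); /* derived from child_ports */
    e->rcvd_count     = 0;
    e->accumulator    = 0;
    e->timer_deadline = now() + timeout;            /* per-process timer        */
    e->armed          = true;

    /* Send the frame toward the endpoints so additional reduction
     * engines along the tree can be armed for this process. */
    forward_frame(e->child_ports, f);
}
```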
PCT Information
Filing Document Filing Date Country Kind
PCT/US2020/024243 3/23/2020 WO
Publishing Document Publishing Date Country Kind
WO2020/236270 11/26/2020 WO A
Related Publications (1)
Number Date Country
20220224628 A1 Jul 2022 US
Provisional Applications (3)
Number Date Country
62852273 May 2019 US
62852289 May 2019 US
62852203 May 2019 US