A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
1. Field of the Invention
This invention relates to transmission of digital information over data networks. More particularly, this invention relates to operations in the routing of packets in data switching networks.
2. Description of the Related Art
The meanings of certain acronyms and abbreviations used herein are given in Table 1.
Implementation of common architectures for network switches and routers involves modular switches. A modular data switch serves a large number of network ports and is managed as a single entity. The architecture is built of two components: 1) line card switches; and 2) fabric switches. Line card switch ports are connected to network ports, and fabric switches establish connectivity between the line card switches. One connection topology in which modular switches are used is the multistage Clos network. Such data switches constitute the core of common Ethernet switches, IP routers, multi-service platforms, various transmission network elements and legacy communication equipment. Buffering within the switches normally serves to resolve problems such as congestion and contention among switch ports.
For example, commonly assigned U.S. Patent Application Publication No. 2011/0058571 to Gil Bloch et al., entitled Data Switch with Shared Port Buffers, which is herein incorporated by reference, describes a system of switch ports, each switch port including one or more port buffers for buffering data that traverses the switch port. A switch fabric is coupled to transfer the data between the switch ports. A switch control unit is configured to reassign at least one port buffer of a given switch port to buffer a part of the data that does not enter or exit the apparatus via the given switch port, and to cause the switch fabric to forward the part of the data to a destination switch port via the at least one reassigned port buffer.
One of the main challenges in designing modular switches is providing high throughput. The throughput is dictated by collisions of traffic moving upstream and downstream. Sending traffic through the fabric in the upstream direction comprises selecting a fabric device for traffic that is received from a network port and targets a network port on another line card.
Hash-based forwarding ensures flow ordering. The drawback is that hashing does not spread the traffic ideally among the fabric switches, with the result that congestion occurs at the line card, known as head-of-line (HOL) congestion. Moreover, when the number of flows is low, hash performance is particularly weak.
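For concreteness, hash-based fabric selection can be sketched as follows. This is a minimal illustration, in which the CRC hash, the 5-tuple fields and the function name are assumptions rather than particulars of this disclosure:

```python
import zlib

def select_fabric_switch(src_ip, dst_ip, src_port, dst_port, proto,
                         num_fabric_switches):
    """Pick a fabric switch by hashing the flow 5-tuple. All packets of
    one flow map to the same switch, preserving flow ordering, but a
    small number of flows may load the fabric switches unevenly."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return zlib.crc32(key) % num_fabric_switches

# With few flows, several may collide on one switch, idling the others:
print(select_fabric_switch("10.0.0.1", "10.0.0.2", 1000, 80, 6, 4))
print(select_fabric_switch("10.0.0.3", "10.0.0.4", 1001, 80, 6, 4))
```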
Possible dynamic forwarding schemes include random selection of switches, and adaptive switch selection based on link load. Dynamic forwarding schemes result in packets from a flow arriving out of order, due, for example, to variable latencies in the fabric switches. This problem is handled by maintaining a reorder buffer at every egress line card.
The reorder buffer size can be statically allocated. The static buffer size is calculated as the product of the number of ordering domains and a tolerance for packet disorder. A series of ordering domains can be defined, for example, as tuples of a transmitting device and a receiving device in which all members of the series share a common identifier, such as an SSID or DSID. The tolerance can be expressed in terms of the switch latencies involved in a round trip, or as a maximum variation in packet arrival order, such as 20 packets. Static buffers of this sort require a large memory allocation.
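A back-of-envelope sketch of this static sizing shows why the memory allocation becomes large. The device, port and priority counts below are illustrative assumptions; only the 20-packet tolerance comes from the text above:

```python
# Static reorder-buffer sizing: entries = ordering domains x disorder tolerance.
num_source_devices = 32          # assumption for illustration
num_local_egress_ports = 64      # assumption for illustration
num_priorities = 8               # assumption for illustration
disorder_tolerance = 20          # packets, per the example in the text

ordering_domains = num_source_devices * num_local_egress_ports * num_priorities
static_entries = ordering_domains * disorder_tolerance
print(static_entries)            # 327680 descriptor slots -> a large memory
```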
In an alternative approach, dynamic reorder buffer sizes can be used, in which each ordering domain is maintained as a linked list. While this requires much less memory than static buffer allocation, it incurs the complexity of linked-list management, which makes this approach less attractive.
Embodiments of the invention provide an optimized packet buffer architecture that provides high throughput while maintaining a small buffer capacity in the ASICs of the network elements. A small reorder buffer can be economically supported by an embedded memory, which reduces the cost of the system for a given number of ports and bandwidth. Buffer memory requirements are kept small by implementing the reordering domains as hash tables rather than linked lists. Hash collisions can be practically avoided by a suitable choice of hashing function. Compared with known hash-based forwarding schemes, head-of-line congestion is lessened because at any point in time the hash table is populated mainly by packet descriptors having closely spaced incremental sequence numbers, which facilitates the design of a suitable hash function in which collisions are rare.
A similar result may be obtained using external packet buffers. However, this approach is less attractive because external memory accesses are slower than internal memory, which limits the bandwidth of switches connected to line cards. Hence, more resources are required to provide the same bandwidth than when internal reorder buffers are used.
There is provided according to embodiments of the invention a method of communication, which is carried out in a networked system of ingress nodes and egress nodes connected by a fabric by maintaining transmit queues of pending packets awaiting transmission through respective ports of the egress nodes. The transmit queues are associated with a hash table that stores packet descriptors including packet sequence numbers. The method is further carried out by receiving new packets in the ingress nodes, receiving credits from the egress nodes that reflect capacities of the transmit queues to accommodate the new packets, consuming the credits by transmitting at least a portion of the new packets from the ingress nodes to the egress nodes via the fabric, storing descriptors of the transmitted new packets in the hash table, determining by accessing the hash table that one of the descriptors in the hash table contains a desired packet sequence number, and thereafter forwarding the transmitted new packet described by the one descriptor from the egress nodes.
Yet another aspect of the method includes requesting transmission of the credits from the egress nodes to the ingress nodes.
Still another aspect of the method includes autonomously transmitting the credits from the egress nodes to the ingress nodes.
According to one aspect of the method, there is a single hash table that services all the transmit queues. Alternatively, there is a plurality of hash tables that service one or more of the transmit queues.
Yet another aspect of the method includes calculating a size of the hash table based on a round trip time between the ingress nodes and the egress nodes and a latency variation for passage through the fabric.
Still another aspect of the method includes calculating a size of the hash table according to a size of a packet memory in the egress nodes that is allocated for packet buffering.
Yet another aspect of the method includes storing the descriptors of the new packets in virtual output queues in order of packet sequence numbers of the new packets, and the credits are consumed by transmitting the new packets according to positions thereof in the virtual output queues.
According to a further aspect of the method, transmitting the new packets includes selecting fabric ports of the ingress nodes according to a load-balancing algorithm and enqueuing the new packets in transmit queues of the selected fabric ports of the ingress nodes.
One aspect of the method includes storing the descriptors of the new packets in virtual output queues and limiting sizes of the transmit queues according to bandwidths of the egress nodes and a latency measured by a time required to conclude a handshake between the transmit queues and the virtual output queues.
According to an additional aspect of the method, a key of the hash table is a combination of the respective packet sequence numbers and an ordering domain of the transmitted new packets.
According to another aspect of the method, requesting and receiving credits are performed by a bandwidth manager that is linked to the ingress nodes and the egress nodes.
There is further provided according to embodiments of the invention a system including a packet network having ingress nodes and egress nodes connected by a fabric, and a hash table stored in a memory of the egress nodes. The egress nodes are provided with ports and transmit queues of pending packets awaiting transmission through respective ports. The hash table stores packet descriptors of the pending packets including packet sequence numbers. The ingress nodes and the egress nodes are configured to intercommunicate via the fabric and are cooperative for receiving new packets in the ingress nodes, receiving credits from the egress nodes that reflect capacities of the transmit queues to accommodate the new packets, consuming the credits by transmitting at least a portion of the new packets from the ingress nodes to the egress nodes via the fabric, storing descriptors of the transmitted new packets in the hash table, determining by accessing the hash table that one of the descriptors in the hash table contains a desired packet sequence number, and thereafter forwarding the transmitted new packet described by the one descriptor from the egress nodes.
For a better understanding of the present invention, reference is made to the detailed description of the invention, by way of example, which is to be read in conjunction with the following drawings, wherein like elements are given like reference numerals, and wherein:
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the various principles of the present invention. It will be apparent to one skilled in the art, however, that not all these details are necessarily always needed for practicing the present invention. In this instance, well-known circuits, control logic, and the details of computer program instructions for conventional algorithms and processes have not been shown in detail in order not to obscure the general concepts unnecessarily.
Documents incorporated by reference herein are to be considered an integral part of the application except that, to the extent that any terms are defined in these incorporated documents in a manner that conflicts with definitions made explicitly or implicitly in the present specification, only the definitions in the present specification should be considered.
Definitions
A “switch fabric” or “fabric” refers to a network topology in which network nodes interconnect via one or more network switches (such as crossbar switches), typically through many ports. The interconnections are configurable such that data is transmitted from one node to another node via designated ports. A common application for a switch fabric is a high performance backplane. Typically the fabric is implemented by chassis-based modular switches and line cards.
In a system of ingress nodes, egress nodes, and a fabric therebetween, an “ingress node” or “ingress device” is a device that accepts traffic originating from outside the system and directs the traffic to a destination within the fabric.
An “egress node” or “egress device” is a device in the system that receives traffic from the fabric and directs the traffic to a destination outside the system.
A “credit” transferred by an egress node to an ingress node confers a right upon the ingress node to consume a specified portion of the memory of the egress node.
System Overview.
Turning now to the drawings, reference is now made to
In the pictured embodiment, decision logic 14 receives packet 16 containing a header 18 and payload data 20. A processing pipeline 22 in decision logic 14 extracts a classification key from each packet 16, typically (although not necessarily) including the contents of certain fields of header 18. For example, the key may comprise the source and destination addresses and ports and a protocol identifier. Pipeline 22 matches the key against a matching database 24 containing a set of rule entries, which is stored in an SRAM 26 in network element 10, as described in detail hereinbelow. SRAM 26 also contains a list of actions 28 to be performed when a key is found to match one of the rule entries. For this purpose, each rule entry typically contains a pointer to the particular action that logic 14 is to apply to packet 16 in case of a match.
In addition, network element 10 typically comprises a cache 30, which contains rules that have not been incorporated into the matching database 24 in SRAM 26. Cache 30 may contain, for example, rules that have recently been added to network element 10 and not yet incorporated into the data structure of matching database 24, and/or rules having rule patterns that occur with low frequency, so that their incorporation into the data structure of matching database 24 would be impractical. The entries in cache 30 likewise point to corresponding actions 28 in SRAM 26. Pipeline 22 may match the classification keys of all incoming packets 16 against both matching database 24 in SRAM 26 and cache 30. Alternatively, cache 30 may be addressed only if a given classification key does not match any of the rule entries in database 24 or if the matching rule entry indicates (based on the value of a designated flag, for example) that cache 30 should be checked, as well, for a possible match to a rule with higher priority.
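The lookup order described above can be sketched as follows. This is a minimal Python illustration in which the dictionary representation, the check_cache flag name and the classify function are assumptions, not details of the actual hardware pipeline:

```python
def classify(packet_key, matching_db, cache, actions):
    """Match a classification key first against the SRAM database and,
    when there is no match or the matching entry flags it, against the
    cache of rules not yet incorporated into the database."""
    entry = matching_db.get(packet_key)
    if entry is None or entry.get("check_cache"):
        cached = cache.get(packet_key)
        if cached is not None:
            entry = cached               # cache rule may take priority
    if entry is None:
        return None                      # no rule matched
    return actions[entry["action_ptr"]]  # each rule points to its action

# Hypothetical usage:
db = {"tcp:10.0.0.1:80": {"action_ptr": 0}}
actions = ["forward_to_port_7"]
print(classify("tcp:10.0.0.1:80", db, {}, actions))  # forward_to_port_7
```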
Pipeline 22 typically comprises dedicated or programmable hardware logic, which is configured to carry out the functions described herein.
Reference is now made to
Packet Forwarding by an Ingress Device.
Reference is now made to
In this section the requirements for packet forwarding from an ingress device are detailed. Each of the ingress devices 54 has any number of ingress ports and maintains three types of queues. Each of the ingress ports is associated with a receive queue 58 (RQ) that absorbs incoming traffic. The virtual output queues 52 hold descriptors of traffic committed for transmission to specific egress devices 56 according to a destination port and egress priority. Transmit queues 60 (TQs) accept descriptors of pending traffic from the virtual output queues 52 for transmission from the ingress devices 54 through fabric switch 62 to the respective egress devices 56. The transmit queues 60 are generally chosen by a load-balancing algorithm in the ingress devices 54.
The number of VOQs is the product:
(Number of target ports) × (Number of traffic priorities),
where “ports” is the number of network ports supported by a fabric, e.g., a modular fabric.
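The three queue types and the VOQ count above can be sketched as a data structure. This is a minimal illustration whose class and field names are assumptions for the purpose of the sketch:

```python
from collections import deque

class IngressDevice:
    """Hold the three ingress-side queue types described above: receive
    queues (RQs), virtual output queues (VOQs), and transmit queues (TQs)."""
    def __init__(self, num_ingress_ports, num_target_ports,
                 num_priorities, num_fabric_ports):
        # One RQ per ingress port absorbs incoming traffic.
        self.rq = [deque() for _ in range(num_ingress_ports)]
        # One VOQ per (target port, priority): the product given above.
        self.voq = {(port, prio): deque()
                    for port in range(num_target_ports)
                    for prio in range(num_priorities)}
        # One TQ per fabric port feeds traffic into the fabric switch.
        self.tq = [deque() for _ in range(num_fabric_ports)]

dev = IngressDevice(48, 256, 8, 16)
print(len(dev.voq))   # 2048 VOQs = 256 target ports x 8 priorities
```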
The ingress device is required to support a three-stage handshake: (1) requesting credits from the egress device; (2) receiving credits from the egress device, reflecting the capacity of its transmit queues; and (3) consuming the credits by transmitting packets through the fabric.
In some embodiments the credit may be transmitted automatically, e.g., periodically, from the egress device to the ingress device without an explicit request.
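A minimal sketch of this handshake from the ingress side follows; the message shape, byte-denominated credits and method names are assumptions:

```python
class IngressCredits:
    """Track the three-stage credit handshake for one ingress device."""
    def __init__(self):
        self.credits = {}                    # egress id -> bytes granted

    def make_request(self, egress_id, nbytes):
        # Stage 1: ask the egress device for buffer space.
        return {"type": "credit_request", "egress": egress_id, "bytes": nbytes}

    def on_grant(self, egress_id, nbytes):
        # Stage 2: record a grant; per the text, grants may also arrive
        # autonomously (e.g., periodically) without an explicit request.
        self.credits[egress_id] = self.credits.get(egress_id, 0) + nbytes

    def try_send(self, egress_id, packet_len):
        # Stage 3: consume credit by transmitting; otherwise the packet
        # remains queued in its virtual output queue.
        if self.credits.get(egress_id, 0) >= packet_len:
            self.credits[egress_id] -= packet_len
            return True
        return False
```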
In
The transmit queues 60 are maintained very short, according to a bandwidth-delay product: the product of the required bandwidth of the associated fabric port and a latency measured by the propagation delay involved in dequeuing the virtual output queues 52 and moving packets into the transmit queues 60. The latency is largely due to propagation delay within the ASIC of the ingress device. Any congestion is managed using the VOQs. Transmit queues 60 are available as destinations only if the lengths of the transmit queues 60 do not exceed a shallow threshold, whose value is system-dependent, as noted above.
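The bandwidth-delay sizing can be illustrated numerically; the port speed and latency below are assumptions, not values from the disclosure:

```python
# Transmit-queue depth as a bandwidth-delay product (illustrative figures).
fabric_port_bw = 100e9 / 8      # bytes/s, assuming a 100 Gb/s fabric port
voq_to_tq_latency = 200e-9      # s, assumed on-chip dequeue/move delay

tq_depth_bytes = fabric_port_bw * voq_to_tq_latency
print(tq_depth_bytes)           # 2500.0 bytes -> a very shallow queue

def tq_available(tq_len_bytes, threshold=tq_depth_bytes):
    """A TQ accepts descriptors only below its shallow threshold;
    congestion is held back in the VOQs."""
    return tq_len_bytes < threshold
```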
Virtual output queues 52 maintain packet descriptors in order of their packet sequence numbers. They provide packet descriptors only to available transmit queues 60, with the objective of load balancing among the transmit queues 60. Virtual output queues 52 are maintained for each of the possible egress devices 56, as shown in
Each of the virtual output queues 52 maintains a running counter of packet sequence numbers. Each transmitted packet is provided with its associated sequence number upon transmission. The packet also has to include its source device and its priority (for ordering-domain identification).
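Combining the two preceding paragraphs, VOQ-to-TQ dispatch might look as follows. This is a sketch only, in which the descriptor fields, the threshold and the least-occupancy policy are assumptions (the text requires only that some load-balancing algorithm choose the transmit queue):

```python
from collections import deque
from itertools import count

def dispatch(voq, seq_counter, tqs, threshold, src_device, priority):
    """Move one descriptor from a VOQ to an available TQ, tagging it
    with its sequence number, source device and priority so that the
    egress device can identify its ordering domain."""
    available = [tq for tq in tqs if len(tq) < threshold]  # shallow TQs only
    if not voq or not available:
        return None                      # nothing to send, or all TQs full
    desc = voq.popleft()                 # VOQs release in sequence order
    desc["seq"] = next(seq_counter)      # running per-VOQ counter
    desc["src"] = src_device
    desc["prio"] = priority
    min(available, key=len).append(desc) # least-occupancy load balancing
    return desc

voq = deque([{"pkt": "a"}, {"pkt": "b"}])
tqs = [deque(), deque()]
print(dispatch(voq, count(), tqs, threshold=4, src_device=7, priority=0))
```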
Packet Forwarding by an Egress Device.
Reference is now made to
The egress devices 56 are required to be aware of ordering domains. The number of ordering domains is:
Number_of_source_devices × Number_of_local_egress_ports × Number_of_priorities.
Descriptors having packet sequence numbers relatively close to the sequence number of a current packet in the TQ are stored in a hash table 76. While hash table 76 is shown within egress device 56, this is not necessarily the case. In a practical system, out-of-order packet descriptors are close enough to one another in their sequence numbers to be contained within a relatively small hash table and memory buffer, e.g., packets within 20 sequence numbers of one another. The key to the hash table 76 can be, for example:
Sequence number × Ordering domain, or
Source device identifier × Local destination egress port × Priority × Sequence number.
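As a sketch, the second key form can be realized as a tuple key into an associative table; the tuple layout and the use of a Python dictionary stand in for the hardware hash table and are assumptions:

```python
def reorder_key(src_device, egress_port, priority, seq):
    """Combine the ordering-domain fields with the packet sequence
    number to form the hash-table key (second key form above)."""
    return (src_device, egress_port, priority, seq)

# Descriptors of out-of-order packets are parked under their keys:
hash_table = {}
hash_table[reorder_key(3, 12, 0, 1042)] = {"buf": 0x1F00, "len": 1500}
print(reorder_key(3, 12, 0, 1042) in hash_table)   # True
```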
Upon receiving appropriate credit, the ingress devices 54 send the packet descriptors from virtual output queues 52 (
The egress devices 56 remove or pop the packet descriptors from the hash table 76 in order of their sequence numbers for insertion into transmit queues 72. This handles any necessary reordering for packets that have reached the egress devices 56 out of order. If a sequence number is missing, it is initially assumed that the packet has not yet arrived at the egress devices 56, and a delay results while arrival is awaited. However, after a predetermined period, if the sequence number is still missing, the associated packet is assumed to be lost, and removal from the hash table 76 continues with the next sequence number.
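The in-order removal with a lost-packet timeout might be sketched as a polled routine. The per-domain state record, the timeout value and the names are assumptions; the key form (ordering domain, sequence number) matches the first key form above, where the domain may itself be a (source, port, priority) tuple:

```python
import time

def drain_in_order(hash_table, domain, state, tq, timeout_s=0.001):
    """Pop descriptors of one ordering domain in sequence order. A
    missing sequence number is first awaited; after the timeout the
    packet is presumed lost and draining resumes at the next number.
    `state` holds {"next_seq": int, "missing_since": float or None}."""
    while True:
        desc = hash_table.pop((domain, state["next_seq"]), None)
        if desc is not None:
            tq.append(desc)                    # forward in order to the TQ
            state["next_seq"] += 1
            state["missing_since"] = None
            continue
        now = time.monotonic()
        if state["missing_since"] is None:
            state["missing_since"] = now       # start awaiting the packet
            return
        if now - state["missing_since"] > timeout_s:
            state["next_seq"] += 1             # presume the packet lost
            state["missing_since"] = None
            continue
        return
```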
Hash Table of Descriptors.
Continuing to refer to
Credits, i.e., indications of available memory space in an egress device, are sent to all of the ingress devices 54 based on the state of the transmit queues 72, which can be measured by packet memory availability. The credits may be measured in bytes of available memory.
Ingress devices 54 transmit traffic to the ingress ports 68 of the egress devices 56 based on credit availability for the particular egress device.
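One plausible credit-distribution sketch follows; the even split across ingress devices and the quantum size are assumptions, as the text specifies only that byte-denominated credits track packet-memory availability:

```python
def issue_credits(free_packet_memory_bytes, ingress_ids, quantum=4096):
    """Distribute byte-denominated credits to the ingress devices in
    proportion to the egress device's free packet memory."""
    share = free_packet_memory_bytes // max(len(ingress_ids), 1)
    return {iid: (share // quantum) * quantum   # grant whole quanta only
            for iid in ingress_ids}

print(issue_credits(1_000_000, ["ingress-0", "ingress-1", "ingress-2"]))
```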
In one approach, the hash table size is calculated as round trip time (RTT) + (out-of-order degree × number of traffic priorities). The term “out-of-order degree” refers to latency variation for passage through the fabric. It is one measure of disorder in the transmission of packets. In one embodiment the limiting sizes of the transmit queues are determined according to bandwidths of fabric ports of the ingress nodes and a latency measured by the time required to conclude a handshake between the transmit queues and the virtual output queues.
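Numerically, with all figures below being illustrative assumptions (the RTT expressed as packets in flight):

```python
# Hash-table sizing per the formula above (illustrative figures).
rtt_packets = 64             # packets in flight over one fabric round trip
out_of_order_degree = 20     # latency-variation bound, in packets
num_priorities = 8

table_entries = rtt_packets + out_of_order_degree * num_priorities
print(table_entries)         # 224 entries for this back-of-envelope sizing
```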
The descriptor credit mechanism described above guarantees that there is available storage for each packet descriptor when it is received at the ingress fabric ports 68. As noted above, a suitable hash table implementation assures that space availability approaches 100%. This can be achieved by cuckoo hashing implemented in hardware. One technique of cuckoo hashing is described in commonly assigned application Ser. No. 15/086,095, entitled Cuckoo Hashing with Single and Double Size Entries, which is herein incorporated by reference. Cuckoo hashing can achieve 99% space availability. If the hash table memory is, for example, 110% or even 120% of the number of credits, then cuckoo hashing requires only a few iterations. Since the traffic incoming from a single VOQ has incremental sequence numbers, a suitable hash function achieves better than random utilization of the hash table.
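The cited hardware technique is not reproduced here, but the flavor of cuckoo hashing can be conveyed with a toy two-way table; the hash functions, table layout and kick limit are assumptions:

```python
import zlib

class CuckooTable:
    """Toy two-choice cuckoo hash table for descriptor keys."""
    def __init__(self, size):
        self.size = size
        self.slots = [None] * size        # each slot: (key, value) or None

    def _h(self, key, which):
        return zlib.crc32(repr((which, key)).encode()) % self.size

    def insert(self, key, value, max_kicks=32):
        item, pos = (key, value), self._h(key, 0)
        for _ in range(max_kicks):        # with 10-20% headroom over the
            if self.slots[pos] is None:   # credit count, few kicks suffice
                self.slots[pos] = item
                return True
            self.slots[pos], item = item, self.slots[pos]  # evict occupant
            h0, h1 = self._h(item[0], 0), self._h(item[0], 1)
            pos = h1 if pos == h0 else h0  # send evictee to its other slot
        return False                       # in practice, rehash or stall

t = CuckooTable(256)
print(t.insert((3, 12, 0, 1042), {"buf": 0x1F00}))   # True
```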
In one embodiment the hash table 76 is shared, with some entries reserved for each of the transmit queues 72 and some shared by all transmit queues 72, depending on the organization of the memory of the egress devices 56 and the allocation algorithm. This can reduce the size of the memory, with the tradeoff that HOL blockage between queues can rarely occur (the probability depends on the relationship between reserved and shared space). The same approach can be used to share hash table memory among the transmit queues 72.
Packet Forwarding within the Fabric.
Forwarding is based on tags provided by the ingress devices 54. The ingress devices 54 and egress devices 56 are otherwise indifferent to the internal flow of data within the fabric switches 62.
First Alternate Embodiment.
Reference is now made to
Moreover, the bandwidth manager 78 may store all the information necessary to accept all the requests from the ingress devices and to allocate and distribute credits from the egress devices, thereby eliminating portions of the above-described three-stage handshake.
The bandwidth manager 78 may be implemented as a dedicated processor, with memory and suitable interfaces, for carrying out the functions that are described herein in a centralized fashion. This processor may reside in one (or more) of the nodes of the fabric 32, or it may reside in a dedicated management unit. In some embodiments, communication between the bandwidth manager 78 and the switches 48, 50 may be carried out through an out-of-band channel and does not significantly impact the bandwidth of the fabric or that of individual links.
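A centralized allocation loop of this kind might be sketched as follows; the request record, the first-come-first-served policy and the names are assumptions:

```python
class BandwidthManager:
    """Collect ingress credit requests and allocate egress credits
    centrally, shortening the per-pair handshake described earlier."""
    def __init__(self, egress_free_bytes):
        self.free = dict(egress_free_bytes)   # egress id -> spare memory
        self.requests = []                    # (ingress id, egress id, bytes)

    def on_request(self, ingress_id, egress_id, nbytes):
        self.requests.append((ingress_id, egress_id, nbytes))

    def allocate(self):
        grants = []                           # first come, first served
        for ingress_id, egress_id, nbytes in self.requests:
            granted = min(nbytes, self.free.get(egress_id, 0))
            self.free[egress_id] -= granted
            grants.append((ingress_id, egress_id, granted))
        self.requests.clear()
        return grants                         # sent back to the ingresses

bm = BandwidthManager({"egress-0": 8192})
bm.on_request("ingress-0", "egress-0", 4096)
print(bm.allocate())    # [('ingress-0', 'egress-0', 4096)]
```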
Alternatively or additionally, although bandwidth manager 78 is shown in
Second Alternate Embodiment.
Reference is now made to
It will be appreciated by persons skilled in the art that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and sub-combinations of the various features described hereinabove, as well as variations and modifications thereof that are not in the prior art, which would occur to persons skilled in the art upon reading the foregoing description.
Number | Name | Date | Kind |
---|---|---|---|
6108713 | Sambamurthy et al. | Aug 2000 | A |
6154446 | Kadambi et al. | Nov 2000 | A |
6178448 | Gray et al. | Jan 2001 | B1 |
6594263 | Martinsson et al. | Jul 2003 | B1 |
6678277 | Wils et al. | Jan 2004 | B1 |
6859435 | Lee et al. | Feb 2005 | B1 |
7321553 | Prasad et al. | Jan 2008 | B2 |
7346059 | Garner | Mar 2008 | B1 |
7738454 | Panwar | Jun 2010 | B1 |
7773621 | Jensen | Aug 2010 | B2 |
7778168 | Rodgers et al. | Aug 2010 | B1 |
7813348 | Gupta et al. | Oct 2010 | B1 |
7821939 | Decusatis et al. | Oct 2010 | B2 |
7872973 | Sterne et al. | Jan 2011 | B2 |
7894343 | Chao et al. | Feb 2011 | B2 |
8078743 | Sharp et al. | Dec 2011 | B2 |
8345548 | Gusat et al. | Jan 2013 | B2 |
8473693 | Muppalaneni et al. | Jun 2013 | B1 |
8565092 | Arumilli et al. | Oct 2013 | B2 |
8576715 | Bloch et al. | Nov 2013 | B2 |
8630294 | Keen | Jan 2014 | B1 |
8644140 | Bloch et al. | Feb 2014 | B2 |
8767561 | Gnanasekaran et al. | Jul 2014 | B2 |
8811183 | Anand et al. | Aug 2014 | B1 |
8879396 | Guay et al. | Nov 2014 | B2 |
8989017 | Naouri | Mar 2015 | B2 |
8995265 | Basso et al. | Mar 2015 | B2 |
9014006 | Haramaty et al. | Apr 2015 | B2 |
9325619 | Guay et al. | Apr 2016 | B2 |
9356868 | Tabatabaee et al. | May 2016 | B2 |
9385962 | Rimmer et al. | Jul 2016 | B2 |
9426085 | Anand et al. | Aug 2016 | B1 |
9648148 | Rimmer et al. | May 2017 | B2 |
9742683 | Vanini | Aug 2017 | B1 |
20020055993 | Shah | May 2002 | A1 |
20020191559 | Chen et al. | Dec 2002 | A1 |
20030108010 | Kim et al. | Jun 2003 | A1 |
20030223368 | Allen et al. | Dec 2003 | A1 |
20040008714 | Jones | Jan 2004 | A1 |
20050053077 | Blanc et al. | Mar 2005 | A1 |
20050169172 | Wang et al. | Aug 2005 | A1 |
20050204103 | Dennison | Sep 2005 | A1 |
20050216822 | Kyusojin et al. | Sep 2005 | A1 |
20050226156 | Keating et al. | Oct 2005 | A1 |
20050228900 | Stuart et al. | Oct 2005 | A1 |
20060008803 | Brunner et al. | Jan 2006 | A1 |
20060087989 | Gai et al. | Apr 2006 | A1 |
20060092837 | Kwan et al. | May 2006 | A1 |
20060092845 | Kwan et al. | May 2006 | A1 |
20070097257 | El-Maleh et al. | May 2007 | A1 |
20070104102 | Opsasnick | May 2007 | A1 |
20070104211 | Opsasnick | May 2007 | A1 |
20070201499 | Kapoor et al. | Aug 2007 | A1 |
20070291644 | Roberts et al. | Dec 2007 | A1 |
20080037420 | Tang et al. | Feb 2008 | A1 |
20080175146 | Van Leekwuck et al. | Jul 2008 | A1 |
20080192764 | Arefi et al. | Aug 2008 | A1 |
20090207848 | Kwan et al. | Aug 2009 | A1 |
20100220742 | Brewer et al. | Sep 2010 | A1 |
20130014118 | Jones | Jan 2013 | A1 |
20130039178 | Chen et al. | Feb 2013 | A1 |
20130250757 | Tabatabaee et al. | Sep 2013 | A1 |
20130250762 | Assarpour | Sep 2013 | A1 |
20130275631 | Magro et al. | Oct 2013 | A1 |
20130286834 | Lee | Oct 2013 | A1 |
20130305250 | Durant | Nov 2013 | A1 |
20140133314 | Mathews et al. | May 2014 | A1 |
20140269274 | Banavalikar | Sep 2014 | A1 |
20140269324 | Tietz et al. | Sep 2014 | A1 |
20140286349 | Kitada | Sep 2014 | A1 |
20150026361 | Matthews et al. | Jan 2015 | A1 |
20150124611 | Attar et al. | May 2015 | A1 |
20150127797 | Attar et al. | May 2015 | A1 |
20150180782 | Rimmer et al. | Jun 2015 | A1 |
20150200866 | Pope et al. | Jul 2015 | A1 |
20150381505 | Sundararaman et al. | Dec 2015 | A1 |
20160135076 | Grinshpun et al. | May 2016 | A1 |
20160191392 | Liu | Jun 2016 | A1 |
20160294696 | Gafni et al. | Oct 2016 | A1 |
20160337257 | Yifrach et al. | Nov 2016 | A1 |
20160344636 | Elias et al. | Nov 2016 | A1 |
20170118108 | Avci et al. | Apr 2017 | A1 |
20170142020 | Sundararaman et al. | May 2017 | A1 |
20170180261 | Ma et al. | Jun 2017 | A1 |
20170187641 | Lundqvist et al. | Jun 2017 | A1 |
20170295112 | Cheng et al. | Oct 2017 | A1 |
20170373989 | Gafni et al. | Dec 2017 | A1 |
20180205653 | Wang | Jul 2018 | A1 |
20180241677 | Srebro et al. | Aug 2018 | A1 |
Number | Date | Country |
---|---|---|
1720295 | Nov 2006 | EP |
2466476 | Jun 2012 | EP |
2009107089 | Sep 2009 | WO |
2013136355 | Sep 2013 | WO |
2013180691 | Dec 2013 | WO |
Entry |
---|
U.S. Appl. No. 14/967,403 office action dated Nov. 9, 2017. |
U.S. Appl. No. 15/081,969 office action dated Oct. 5, 2017. |
U.S. Appl. No. 15/063,527 office action dated Feb. 8, 2018. |
U.S. Appl. No. 15/161,316 office action dated Feb. 7, 2018. |
U.S. Appl. No. 14/994,164 office action dated Jul. 5, 2017. |
U.S. Appl. No. 15/075,158 office action dated Aug. 24, 2017. |
U.S. Appl. No. 15/081,969 office action dated May 17, 2018. |
U.S. Appl. No. 15/432,962 office action dated Apr. 26, 2018. |
U.S. Appl. No. 15/161,316 Office Action dated Jul. 20, 2018. |
European Application # 17172494.1 search report dated Oct. 13, 2017. |
European Application # 17178355 search report dated Nov. 13, 2017. |
Gran et al., “Congestion Management in Lossless Interconnection Networks”, Submitted to the Faculty of Mathematics and Natural Sciences at the University of Oslo in partial fulfillment of the requirements for the degree Philosophiae Doctor, 156 pages, Sep. 2013. |
Pfister et al., “Hot Spot Contention and Combining in Multistage Interconnect Networks”, IEEE Trans on Computers, vol. C-34, pp. 943-948, Oct. 1985. |
Zhu et al.,“Congestion control for large-scale RDMA deployments”, SIGCOMM, ACM, pp. 523-536, Aug. 17-21, 2015. |
Hahne et al., “Dynamic Queue Length Thresholds for Multiple Loss Priorities”, IEEE/ACM Transactions on Networking, vol. 10, No. 3, pp. 368-380, Jun. 2002. |
Choudhury et al., “Dynamic Queue Length Thresholds for Shared-Memory Packet Switches”, IEEE/ACM Transactions Networking, vol. 6, Issue 2 , pp. 130-140, Apr. 1998. |
CISCO Systems, Inc.,“Advantage Series White Paper Smart Buffering”, 10 pages, 2016. |
Ramakrishnan et al., “The Addition of Explicit Congestion Notification (ECN) to IP”, Request for Comments 3168, Network Working Group, 63 pages, Sep. 2001. |
IEEE Standard 802.1Q™—2005, “IEEE Standard for Local and metropolitan area networks Virtual Bridged Local Area Networks”, 303 pages, May 19, 2006. |
InfiniBand™ Architecture Specification, vol. 1, Release 1.2.1, Chapter 12, pp. 657-716, Nov. 2007. |
IEEE Std 802.3, Standard for Information Technology—Telecommunications and information exchange between systems—Local and metropolitan area networks—Specific requirements; Part 3: Carrier Sense Multiple Access with Collision Detection (CSMA/CD) Access Method and Physical Layer Specifications Corrigendum 1: Timing Considerations for PAUSE Operation, Annex 31B (MAC Control PAUSE operation), pp. 763-772, year 2005. |
IEEE Std 802.1Qbb., IEEE Standard for Local and metropolitan area networks—“Media Access Control (MAC) Bridges and Virtual Bridged Local Area Networks—Amendment 17: Priority-based Flow Control”, 40 pages, Sep. 30, 2011. |
Hoeiland-Joergensen et al., “The FlowQueue-CoDel Packet Scheduler and Active Queue Management Algorithm”, Internet Engineering Task Force (IETF) as draft-ietf-aqm-fq-codel-06 , 23 pages, Mar. 18, 2016. |
Gafni et al., U.S. Appl. No. 15/075,158, filed Mar. 20, 2016. |
Shpiner et al., U.S. Appl. No. 14/967,403, filed Dec. 14, 2015. |
Elias et al., U.S. Appl. No. 14/994,164, filed Jan. 13, 2016. |
Aibester et al., U.S. Appl. No. 15/063,527, filed Mar. 8, 2016. |
Kriss et al., U.S. Appl. No. 15/161,316, filed May 23, 2016. |
Roitshtein et al., U.S. Appl. No. 14/961,923, filed Dec. 8, 2015. |
CISCO Systems, Inc., “Priority Flow Control: Build Reliable Layer 2 Infrastructure”, 8 pages, 2015. |
Elias et al., U.S. Appl. No. 15/081,969, filed Mar. 28, 2016. |
Gafni et al., U.S. Appl. No. 15/194,585, filed Jun. 28, 2016. |
Zdornov et al., U.S. Appl. No. 15/432,962, filed Feb. 15, 2017. |
Levy et al., U.S. Appl. No. 15/086,095, filed Mar. 31, 2016. |
U.S. Appl. No. 15/432,962 office action dated Nov. 2, 2018. |
U.S. Appl. No. 15/161,316 Office Action dated Dec. 11, 2018. |
U.S. Appl. No. 15/963,118 Office Action dated Aug. 21, 2019. |
Number | Date | Country | |
---|---|---|---|
20180278550 A1 | Sep 2018 | US |