The present invention relates to credit-based apparatus for controlling data communications, and is particularly concerned with credit recovery.
A known system for controlling data transmission employs a credit-based control approach that provides lossless transmission of data cells. Credits are generated starting at a destination node to reflect its ability to receive data. In an end-to-end implementation, this credit is transmitted back to the next upstream node where the credit is interpreted and modified based on that node's ability to receive data. The process continues through each intermediate node back to the source, where the credit at the source reflects all intermediate credits as well as the one from the destination. Typically, the credits reflect the unused buffer space at each node. The source then interprets the credit as an indication of the amount of data that it can transmit into the network without any data loss due to congestion or buffer overflow.
A variation on the end-to-end credit-based approach is a link-to-link implementation in which adjacent nodes in a switch network, for example, interact to control the flow of data from one node, a sender, to another node, the receiver. The sender supplies data segments for forwarding to the receiver, and the receiver has a finite data receive buffer into which received data segments from the sender are placed. The emptying of the data receive buffer is controlled by a buffer read signal from a downstream entity. In an ideal, uncongested communication fabric, each segment of data could be read from the data receive buffer the cycle after it is written therein from the sender. In such a case, the data receive buffer would never contain more than one data segment. When congestion causes the downstream entity to slow its rate of buffer reads below one per cycle, data segments accumulate in the receive buffer. This reduces the space available for storing future data segments from the sender.
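To make the accumulation behaviour described above concrete, the short Python sketch below models a receive buffer that is written once per cycle and drained at a configurable read rate. The function name and the deterministic read pacing are assumptions made only for illustration and are not part of the known system.

```python
def peak_occupancy(cycles, read_rate):
    """One segment is written per cycle; the downstream entity reads
    'read_rate' segments per cycle (1.0 = uncongested, <1.0 = congested)."""
    occupancy, peak, budget = 0, 0, 0.0
    for _ in range(cycles):
        occupancy += 1            # write from the sender
        budget += read_rate       # downstream read budget accrues
        while budget >= 1.0 and occupancy > 0:
            occupancy -= 1        # buffer read by the downstream entity
            budget -= 1.0
        peak = max(peak, occupancy)
    return peak

# With read_rate=1.0 the buffer never holds more than one segment;
# with read_rate=0.5 it grows by roughly one segment every two cycles.
print(peak_occupancy(100, 1.0), peak_occupancy(100, 0.5))
```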
Barley et al. disclose, in U.S. Pat. No. 6,044,406 issued Mar. 28, 2000, a credit-based flow control checking scheme for controlling data communications in a closed loop system comprising a sender, a receiver, and a link coupling the sender and receiver. Their credit-based scheme includes automatically and periodically transmitting a credit query from the receiver to the sender and, upon return receipt of a credit acknowledge containing the available credit count maintained by the sender, determining whether credit gain or credit loss has occurred subsequent to initialization of the closed loop system. Along with automatically determining whether credit gain or credit loss has occurred, a method/system is presented for automatically correcting the loss or gain without requiring resetting of the closed loop system.
An object of the present invention is to provide an improved credit-based method of controlling data communications.
In accordance with an aspect of the present invention there is provided a credit-based method of controlling the flow of data communications between a sender and a receiver coupled by a link, said method comprising the steps of:
(a) allocating a specified initial number of credits to said sender in an available credit count, a credit representing a predetermined amount of memory space in a credit-managed queue in the receiver reserved to store a data segment received from the sender;
(b) transmitting a data segment across the link from the sender to the receiver and decrementing the available credit count at the sender for the transmitted data segment;
(c) returning a credit from the receiver to the available credit count of the sender with each data segment received and transferred from the credit-managed queue within a multi-purpose physical queue at the receiver; and
(d) checking the number of credits in the closed loop system to ascertain whether credit loss or credit gain has occurred, said credit loss or credit gain potentially affecting control of data communications within the closed loop system.
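The closed-loop bookkeeping implied by steps (a) through (d) can be sketched as follows. This is a minimal Python illustration, not the claimed apparatus: the class and method names are invented for the example, and credit transport delays over the link are ignored.

```python
from collections import deque

class CreditReceiver:
    """Receiver side: a credit-managed queue plus a credit-return path (step (c))."""
    def __init__(self):
        self.credit_queue = deque()
        self.returned_credits = 0          # credits waiting to travel back to the sender

    def receive(self, segment):
        self.credit_queue.append(segment)  # step (b), receive side

    def consume(self):
        """Transfer one segment out of the credit-managed queue and return a credit."""
        if self.credit_queue:
            self.credit_queue.popleft()
            self.returned_credits += 1     # step (c)

class CreditSender:
    """Sender side: available credit count initialised to a fixed allocation (step (a))."""
    def __init__(self, initial_credits):
        self.initial_credits = initial_credits
        self.available_credits = initial_credits   # step (a)

    def send(self, segment, receiver):
        if self.available_credits == 0:
            return False                   # no credit: hold the segment back
        self.available_credits -= 1        # step (b), send side
        receiver.receive(segment)
        return True

    def collect_credits(self, receiver):
        self.available_credits += receiver.returned_credits
        receiver.returned_credits = 0

    def check_credits(self, receiver):
        """Step (d): total credits in the loop should equal the initial allocation."""
        in_flight = len(receiver.credit_queue)
        total = self.available_credits + receiver.returned_credits + in_flight
        return total == self.initial_credits
```

In this sketch, credit loss or gain would show up as check_credits() returning False, which is the condition step (d) is intended to detect.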
In accordance with an aspect of the present invention there is provided a credit-based apparatus for controlling data communications between a sender and a receiver coupled by a link, the receiver comprising: a credit-managed queue for receiving data units from the sender; a credit return module for returning credit in response to transferring a data unit from the credit-managed queue; and a pipe-clean module for assisting the sender in resetting the credits in the closed loop system in response to a message from the sender.
In accordance with an aspect of the present invention there is provided a credit-based apparatus for controlling data communications between a sender and a receiver coupled by a link, the receiver comprising: a credit-managed queue for receiving data units from the sender; a credit return module for returning credit in response to transferring a data unit out of the credit-managed queue; and a pipe-clean module for assisting the sender in resetting the credits in the closed loop system in response to a message from the sender and sending a response to the sender via a second link.
The present invention and embodiments thereof have several advantages over the state of the art.
Firstly, the intelligence as to the number of credits in the system is located at the sender end of the link, making it easy to couple the credit mechanisms to the behavior of the Upstream Datapath Scheduler 12.
The CQE (credit queue end) circuit can share a physical data queue with other CQE circuits without knowledge of the system-level partitioning of the physical data queue (the intelligence is in the CHE, the credit head end), leading to an economical, efficient, and scalable system.
The ability to manage only a portion of the physical data queue with the credit system, while still leaving another portion of the physical data queue for destination-dependent tuning, allows low-jitter, high-efficiency tuning of the overall system behavior.
A major advantage is that the pipe-clean (PC) message does not have to flow through the queue before being returned. In a real system the queue could be blocked from de-queuing (for instance, if someone pulled the fiber), and the PC can still be returned. The flow-thru method fails in that case because the PC gets stuck in the queue and is never returned; the CHE would periodically retry the pipe-clean operation and add unnecessary traffic destined for a stalled queue. Similarly, if the queue is large and/or slow-moving, there can be a large latency due to how long it takes the PC to flow through the queue. This latency can be critical if the CHE is servicing many queue ends.
The present invention will be further understood from the following detailed description with reference to the drawings.
The CQE has a finite data receive FIFO buffer 20 into which data segments received across data link 18 from CHE 14 are placed. The emptying, or consuming, of data segments from receive data FIFO buffer 20 is controlled by control logic in response to a FIFO read signal received from a downstream entity (not shown). In an ideal, i.e. uncongested, communication fabric, each segment of data (D) would be read from receive data FIFO buffer 20 the cycle after the data is written therein; thus, buffer 20 ideally would contain no more than one data segment. However, when congestion causes the downstream entity to slow its rate of FIFO reads below one per cycle, data segments can accumulate in the receive data FIFO buffer, which in turn reduces the available space for storing future data segments from CHE 14. The goal of credit-based flow control is to ensure that data segments (D) are sent to the CQE 16 at a rate that does not cause receive data FIFO buffer 20 to overflow, while at the same time maximizing utilization of the physical link coupling the sender and the receiver.
At the time the communication link is established or initialized, the CHE is allocated a number of credits, n, which is stored in max credits 32. Each credit represents permission to transmit one segment of data over data link 18. The credit link 22 is used by the CQE 16, as described herein below, to provide the CHE 14 with returned credits. These returned credits flow through credit count 24 to an upstream flow control 26. Because the credit link 22 is separate from the data link 18, the transfer of credits from receiver to sender has no effect on data bandwidth. The CHE 14 increments the credit count 24 upon receipt of a credit from CQE 16 and decrements the credit count 24 when a data segment (D) is placed on data link 18 for transmission to the receiver.
The CQE 16 maintains a rolling count 40 and a delta count 42, both of which are incremented when a data segment is de-queued from queue 20. The rolling count 40 is representative of credits associated with data segments that have traversed the receive data FIFO buffer 20.
In operation, the Credit Head End (CHE) uses a predetermined credit unit, for example a credit unit of 4 data bytes; the 4-byte segment header is also counted. The credit count 24 is initialized to a predetermined value, max credits, based upon the size of the queue 20. The credit count 24 is decremented by the number of credit units transmitted on the data path 18 of the communications link 10, and is incremented by the credits returned in each credit return message (CRM) from the credit return 44, determined as equal to the current rolling count minus the previous rolling count.
The upstream flow control 26 sends the upstream datapath scheduler 12 an xoff when the credit count 24 is less than the xoff threshold 30, and an xon when the credit count 24 is greater than or equal to the xoff threshold 30. The previous rolling count 28 is initialized to zero.
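A hedged Python sketch of this CHE-side bookkeeping follows. It reflects the operation as interpreted above (available credits decremented on transmit, replenished from each CRM, and compared against the xoff threshold); the identifiers are illustrative rather than taken from the drawings, and a real implementation would also handle wrap-around of the rolling count.

```python
CREDIT_UNIT_BYTES = 4   # one credit unit = 4 data bytes; header bytes are counted too

class CreditHeadEnd:
    def __init__(self, max_credits, xoff_threshold):
        self.max_credits = max_credits
        self.credit_count = max_credits      # initialised from the size of the queue
        self.xoff_threshold = xoff_threshold
        self.previous_rolling_count = 0      # initialised to zero

    def on_transmit(self, segment_bytes, header_bytes=4):
        """Decrement the credit count by the credit units placed on the data link."""
        units = (segment_bytes + header_bytes) // CREDIT_UNIT_BYTES
        self.credit_count -= units

    def on_credit_return_message(self, rolling_count):
        """Credits returned = current rolling count - previous rolling count
        (a real counter would subtract modulo its width)."""
        returned = rolling_count - self.previous_rolling_count
        self.credit_count += returned
        self.previous_rolling_count = rolling_count

    def flow_control_state(self):
        """xoff (hold traffic) while credits are scarce, xon otherwise."""
        return "xoff" if self.credit_count < self.xoff_threshold else "xon"
```

With a 64-byte segment and a 4-byte header, on_transmit charges 17 credit units, which is consistent with the update threshold example given below.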
Basic credit operation of the credit queue end (CQE) 16 includes initializing the rolling count 40 to zero and then incrementing it by the number of credit units de-queued. The delta count 42 is also initialized to zero and then incremented by the number of credit units de-queued. An update threshold is, for example, set to 17 credits (one 64 B data segment plus its 4-byte header).
The credit return 44 uses the following logic: when the delta count 42 is greater than or equal to the update threshold, clear the delta count 42 and send the rolling count 40 in a credit return message (CRM) via credit return link 22.
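The CQE-side counters and the CRM trigger can be sketched in Python as follows; the class name and the send_crm callback are illustrative assumptions.

```python
class CreditQueueEnd:
    """CQE-side bookkeeping: rolling count, delta count, and the update threshold."""
    def __init__(self, update_threshold=17):
        self.rolling_count = 0       # credit units de-queued since start-up
        self.delta_count = 0         # credit units de-queued since the last CRM
        self.update_threshold = update_threshold

    def on_dequeue(self, credit_units, send_crm):
        self.rolling_count += credit_units
        self.delta_count += credit_units
        if self.delta_count >= self.update_threshold:
            self.delta_count = 0
            send_crm(self.rolling_count)   # credit return message over the credit link
```

In combination with the CHE sketch above, send_crm would simply deliver the rolling count to the CHE's on_credit_return_message; because the rolling count is cumulative, a lost CRM is made up for by the next one.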
For a multiple-channel CQE 16, one could place CRM-ready channel IDs in a FIFO queue, then read the rolling count 40 and clear the delta count 42 on de-queuing each channel ID from the CRM-ready queue (not shown).
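One way to realise this is sketched below in Python, under the added assumption that a channel is queued at most once at a time; the de-duplication flag and the class name are not from the specification.

```python
from collections import deque

class MultiChannelCreditQueueEnd:
    """Per-channel rolling/delta counts with a FIFO of CRM-ready channel IDs."""
    def __init__(self, channels, update_threshold=17):
        self.rolling = [0] * channels
        self.delta = [0] * channels
        self.update_threshold = update_threshold
        self.crm_ready = deque()          # channel IDs whose delta crossed the threshold
        self.queued = [False] * channels  # assumed: avoid queuing the same channel twice

    def on_dequeue(self, channel, credit_units):
        self.rolling[channel] += credit_units
        self.delta[channel] += credit_units
        if self.delta[channel] >= self.update_threshold and not self.queued[channel]:
            self.crm_ready.append(channel)
            self.queued[channel] = True

    def next_crm(self):
        """Pop the next CRM-ready channel; read its rolling count and clear its delta."""
        if not self.crm_ready:
            return None
        channel = self.crm_ready.popleft()
        self.queued[channel] = False
        self.delta[channel] = 0
        return channel, self.rolling[channel]
```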
The design of the rolling count as a counter of credits de-queued since system start-up, instead of a count of credits de-queued since the last CRM, adds an important resiliency feature that allows CRM message loss without system error. However, because the communications link 18 and the credit return link 22 are not error free in any real implementation, errors in the credit count still occur. It is therefore necessary to reset the number of credits in the credit system to a known value periodically. An example of how this is done is described below.
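Table A itself is not reproduced here, but the recovery step at the CHE can be sketched as follows. The sketch assumes that the pipe-clean response carries the CQE's cumulative rolling count and is generated only after all data sent ahead of the pipe-clean message has been accounted for; under those assumptions the credit count can be reset to a known value. All names are illustrative.

```python
class PipeCleanHeadEnd:
    """Illustrative (assumed) recovery step: after a pipe-clean response, the CHE
    recomputes its available credits from the CQE's cumulative rolling count and
    its own cumulative count of credit units transmitted."""
    def __init__(self, max_credits):
        self.max_credits = max_credits
        self.credit_count = max_credits
        self.units_sent = 0                 # cumulative credit units placed on the data link
        self.previous_rolling_count = 0

    def on_transmit(self, credit_units):
        self.units_sent += credit_units
        self.credit_count -= credit_units

    def on_pipe_clean_response(self, rolling_count):
        """rolling_count is the CQE's cumulative de-queue count carried in the response.
        Units sent but not yet de-queued are still held in the credit-managed queue,
        so the credit count is reset to max_credits minus that backlog."""
        backlog = self.units_sent - rolling_count
        self.credit_count = self.max_credits - backlog
        self.previous_rolling_count = rolling_count
```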
Referring to the drawings, in the split-queue embodiment the physical data queue at the CQE is divided into a credit-managed queue 52 and a locally managed queue 54. The local fill 58 tracks the fill of the locally managed queue 54, which is not visible to the CHE 50. The credit fill 56 tracks the fill of the credit-managed queue 52.
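This fill tracking can be sketched as two counters, one per part of the physical queue. How a given segment is assigned to one part or the other, and the exact update expressions, are assumptions made only for illustration, since the corresponding expressions are not reproduced above.

```python
class SplitQueueEnd:
    """One physical queue holding a credit-managed part (52) and a locally managed
    part (54). Credit fill 56 and local fill 58 are kept as separate counters; only
    de-queues from the credit-managed part return credits toward the CHE."""
    def __init__(self):
        self.credit_fill = 0   # occupancy of the credit-managed part, in credit units
        self.local_fill = 0    # occupancy of the local part, invisible to the CHE

    def on_enqueue(self, units, credit_managed):
        if credit_managed:
            self.credit_fill += units
        else:
            self.local_fill += units

    def on_dequeue(self, units, credit_managed, return_credits):
        if credit_managed:
            self.credit_fill -= units
            return_credits(units)    # credits flow back only for the credit-managed part
        else:
            self.local_fill -= units
```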
The rest of the pipe-clean operation is as described above with regard to the known system.
Referring to the drawings, multiple split queues 70 are credit-managed by a single CHE 50′. These can be used, for example, for priority queuing at the CQE when there are not enough channels between CHE 14 and CQE 50′ to carry each flow priority on a different channel. Credits are returned as data is de-queued from the credit part 52 of each split queue. Each split queue must be able to absorb max credits, so that a satisfied queue does not block access to an unsatisfied queue.
Each split queue operates the same as the single split queue described above.
The credit recovery (pipe-clean) operation for the multiple-split-queue CQE 50′ replaces step 3 of Table A with the following: 3. The pipe-clean message is returned when it is logically de-queued from the credit part of the split queue, as if there were a single credit queue.
Note that the withheld credits 62 are zeroed when all local fills 58 are below the satisfied level. The rest of the pipe-clean operation is as described above with regard to the known system.
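The interaction between the withheld credits 62 and the per-queue local fills 58 can be sketched as follows. The trigger for withholding credits is not spelled out above, so the rule used here (hold returned credits while any local fill is at or above the satisfied level, and release them once all are below it) is purely an assumption for illustration.

```python
class MultiSplitQueueEnd:
    """Several split queues (70) credit-managed by one CHE. Credits are returned as
    data leaves the credit part of any split queue; a single withheld-credits
    counter (62) is cleared once every local fill (58) is below the satisfied level."""
    def __init__(self, num_queues, satisfied_level):
        self.credit_fill = [0] * num_queues
        self.local_fill = [0] * num_queues
        self.satisfied_level = satisfied_level
        self.withheld_credits = 0

    def all_local_fills_satisfied(self):
        return all(fill < self.satisfied_level for fill in self.local_fill)

    def on_credit_dequeue(self, queue_id, units, return_credits):
        """De-queue from the credit part of one split queue."""
        self.credit_fill[queue_id] -= units
        if self.all_local_fills_satisfied():
            return_credits(units + self.withheld_credits)
            self.withheld_credits = 0          # zeroed when all local fills are satisfied
        else:
            self.withheld_credits += units     # assumed: hold credits back until satisfied
```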
Referring to the drawings, the basic credit operation for the credit head end (CHE) 14 and the credit queue end (CQE) 80 is as described above with regard to the known system.
The pipe-clean operation of this communication link is illustrated in the accompanying drawings.
Fast Response Compatible Flow Thru Credit Recovery
Another embodiment of interest provides the ability to share a single queue between multiple CQEs. The same mechanisms that allow a CQE to manage only a portion of the queue for split queues allow a CQE to manage only a portion of a shared queue. In this particular embodiment the queues tend not to be of the split-queue variety, because the queuing system must sort the enqueues and dequeues from the credit-managed queue to determine which CQE must account for the segments.
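The sorting of enqueues and de-queues among the sharing CQEs can be sketched as a dispatcher that tags each segment with its owning credit context. This is only an illustrative Python sketch, reusing the hypothetical on_dequeue interface from the CQE sketch above; tagging by an explicit cqe_id is an assumption.

```python
from collections import deque

class SharedQueueAccounting:
    """One physical queue shared by several CQE credit contexts. Each enqueue is
    tagged with the owning context so that the matching de-queue is accounted,
    and its credit returned, against the correct CQE."""
    def __init__(self, cqe_contexts):
        self.cqe_contexts = cqe_contexts   # e.g. CreditQueueEnd-like objects, one per CQE
        self.queue = deque()               # entries are (cqe_id, credit_units) pairs

    def enqueue(self, cqe_id, credit_units):
        self.queue.append((cqe_id, credit_units))

    def dequeue(self, send_crm):
        cqe_id, credit_units = self.queue.popleft()
        # Sort the de-queue to the context that owns this segment (the per-CQE CRM
        # path is simplified here to a single callback).
        self.cqe_contexts[cqe_id].on_dequeue(credit_units, send_crm)
        return cqe_id, credit_units
```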
Numerous modifications, variations and adaptations may be made to the particular embodiments of the present invention described above without departing from the scope of the invention as defined in the claims.
This application claims benefit and priority from U.S. Provisional Application No. 60/607,177, filed Sep. 9, 2004, which is incorporated herein by reference.