Modern communications and data networks comprise nodes that transport data through the network. The nodes may include routers, switches, bridges, or combinations thereof that transport the individual data packets or frames through the network. A node can forward a plurality of packets that correspond to different sessions or flows in parallel. The packets of the different sessions or flows can be received over a plurality of ingress ports and forwarded over a plurality of egress ports of the node. Additionally, the packets of the different flows can be queued or buffered in corresponding queues or buffers for some time before being sent from the node. The packets in the different queues can be forwarded over the same egress link and as such share the bandwidth available or assigned to that link. A scheduler at the node is typically used to schedule and coordinate the forwarding of the buffered packets in the different queues on the same egress link, such as by selecting packets from the different queues at different time slots designated by the scheduler.
In one embodiment, the disclosure includes an apparatus comprising a plurality of queues configured to cache a plurality of packets that correspond to a plurality of sessions, a scheduler configured to schedule the packets from the different queues for forwarding based on a finish time for each packet at the egress of each corresponding queue, and an egress link coupled to the scheduler and configured to forward the scheduled packets from all the queues at a total bandwidth that is shared among the queues, wherein the finish time is calculated dynamically based on the amount of bandwidth allocated for the corresponding queue, and wherein the queues are assigned corresponding weights for sharing the total bandwidth.
In another embodiment, the disclosure includes a network component comprising a receiver configured to receive a plurality of packets that correspond to a plurality of sessions, one or more memory units for storing a plurality of queues configured to buffer the packets of the corresponding sessions, a logic unit configured to calculate a finish time for each detected packet at the head of a corresponding queue and assign the detected packet to a time slot of a calendar queue for forwarding the packet in ascending order of finish time, and a transmitter configured to send a plurality of packets assigned to the time slots in the order of time slots over an output link.

In yet another embodiment, the disclosure includes a network apparatus implemented method comprising scanning a plurality of queues for a plurality of packet sessions to detect any backlogged packets in the queues, assigning to a plurality of time slots in a calendar table a plurality of packets detected at the head of the queues in ascending order of a plurality of finish times calculated for the packets in terms of bandwidth allocated for the packet sessions, scanning the time slots in the calendar table in sequence to detect the assigned packets, and forwarding the detected assigned packets in order on a shared egress link.
These and other features will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings and claims.
For a more complete understanding of this disclosure, reference is now made to the following brief description, taken in connection with the accompanying drawings and detailed description, wherein like reference numerals represent like parts.
It should be understood at the outset that although an illustrative implementation of one or more embodiments is provided below, the disclosed systems and/or methods may be implemented using any number of techniques, whether currently known or in existence. The disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary designs and implementations illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.
Disclosed herein is a system and method for improved packet scheduling and forwarding, e.g., at a network node. A scheduler may be configured to efficiently schedule the forwarding of a plurality of packets that correspond to a plurality of sessions and that are buffered in a plurality of corresponding queues at the network node. A packet scheduling policy or algorithm may be implemented to address the scalability issue of skipping idle queues, for instance when a substantial number of sessions are handled and where a substantial portion of the sessions are idle. The packet scheduling policy or algorithm may be used to skip a bounded number of idle queues for servicing or forwarding a packet in the queues. The algorithm may have an O(1) time complexity for all or a plurality of packet arrangements or conditions. The algorithm may also have a fairness property for handling the packets of different sessions, similar to other commonly used algorithms, e.g., the Weighted Fair Queuing (WFQ) algorithm. The policy or algorithm may be effectively implemented using software only, using software with limited hardware support, or using hardware.
Typically, the scheduling algorithm may guarantee fairness in allocating portions of the total bandwidth of the egress link 130, such as the WFQ scheduling algorithm. Based on the WFQ, n sessions (n is an integer) may share one output link with a bandwidth R, such that each session i has a weight $w_i$. Each session may have a guaranteed rate

$$r_i = \frac{w_i}{\sum_{j=1}^{n} w_j}\, R,$$

where $\sum_{j=1}^{n} w_j$ is the sum of the weights of all n sessions.
The WFQ may mimic a fluid model of Generalized Processor Sharing (GPS) and define a virtual time, where $V(t) = 0$ if no packet backlog exists in any session. Otherwise,

$$V(t) = V(t_0) + (t - t_0)\,\frac{R}{\sum_{j \in B(t_0,t)} w_j},$$

where $B(t_0,t)$ is the set of backlogged sessions in the interval $[t_0,t]$. The WFQ may also define a virtual start time $S_i^k = \max\{F_i^{k-1}, V(a_i^k)\}$ and a corresponding virtual finish time

$$F_i^k = S_i^k + \frac{L_i^k}{w_i}$$

for the k-th packet on session i, where $a_i^k$ is the arrival time of the k-th packet on session i, and where $L_i^k$ is the length of the k-th packet on session i. The virtual finish time is determined when queuing a packet, e.g., when adding the packet to the queue. The packets in the queues are serviced in ascending order of the virtual finish time for backlogged packets. Different schemes may also be used to reduce the complexity of the WFQ, such as using a calendar queue or using Self-Clocked Fair Queuing (SCFQ). The SCFQ may reduce the virtual time overhead of the WFQ by using the virtual finish time of the packet being transferred as the current virtual time.
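For reference, the WFQ bookkeeping above may be summarized in the following minimal sketch. The class and function names (WfqState, enqueue) are illustrative assumptions rather than part of the disclosure, and the advancement of the virtual time V(t) between packets is omitted for brevity.

```python
# Minimal sketch (not the disclosure's implementation) of classic WFQ stamping:
# each packet receives its virtual start/finish times at queuing time.

class WfqState:
    def __init__(self, rate_r, weights):
        self.R = rate_r                                   # total egress bandwidth R
        self.w = weights                                  # weight w_i per session i
        self.last_finish = {i: 0.0 for i in weights}      # F_i^{k-1} per session

    def enqueue(self, session, length, arrival_virtual_time):
        """Stamp the k-th packet of a session when it is added to its queue."""
        start = max(self.last_finish[session], arrival_virtual_time)   # S_i^k
        finish = start + length / self.w[session]                      # F_i^k = S_i^k + L_i^k / w_i
        self.last_finish[session] = finish
        return finish   # packets are then serviced in ascending order of finish time
```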
Each time slot may be assigned one or more backlogged packets (e.g., P1, P2, P3, P4, etc.) using WFQ. A backlogged packet may be scheduled to be serviced (or forwarded) at an entry or time slot of the calendar queue 200 that is determined by a corresponding quantized finish time. The calendar queue 200 may be traversed repeatedly, where packets assigned to time slots may be serviced if found. At each current time, one of the time slots may be scanned for an assigned packet. If a packet is found, then the packet may be transmitted from its corresponding queue on the shared output link before moving to the next time slot in the calendar queue 200. Otherwise, the next time slot may be scanned for an assigned packet. This process may be repeated in a loop sequence, where all the time slots in the calendar queue 200 may be traversed in order multiple times. Using the calendar queue 200 may reduce on-the-fly (e.g., real-time) computational complexity of WFQ. However, the calendar queue 200 may be substantially sparse, e.g., comprise a substantially large number of unassigned or empty time slots for empty queues, which may still be scanned to arrive at an assigned time slot. This may cause poor scalability and reduce the overall performance.
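The traversal of the calendar queue 200 described above may be sketched roughly as follows; the data layout (a list of per-slot packet lists) and the function name are assumptions for illustration only. The scan is cheap per slot, but a sparse calendar forces many empty slots to be visited before an assigned one is found.

```python
# Illustrative sketch of one pass over a calendar queue: each slot holds the
# packets whose quantized finish times map to that slot.

def scan_calendar_once(calendar_slots, transmit):
    """calendar_slots: list of lists of packets; transmit: callable that sends a packet."""
    for slot in calendar_slots:
        while slot:                      # service any packets assigned to this time slot
            transmit(slot.pop(0))        # forward on the shared output link
        # empty slot: nothing found, move on to the next time slot
```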
To improve the scalability and performance for scheduling and forwarding packets for different sessions, an improved scheduling algorithm is needed that may assign a substantial portion of the time slots in the calendar queue and hence obtain a more dense calendar queue. This may lead to better use of the total bandwidth by scanning substantially assigned time slots in the calendar queue for non-empty queues that comprise backlogged packets.
Specifically, a new virtual arrival time for the k-th packet on session i may be defined as $V_i^k = 0$ if no packet backlog exists at the queues. Otherwise, $V_i^k$ is set equal to the finish time of the packet being serviced. The start time and finish time of a packet may be calculated as late as when the packet is moved to the head or egress of its queue, instead of at queuing time as in the case of WFQ. The improved algorithm may define a virtual start time $S_i^k = \max\{F_i^{k-1}, V_i^k\}$ and a corresponding virtual finish time

$$F_i^k = S_i^k + \frac{L_i^k}{R}\,\frac{\sum_{j \in B} w_j}{w_i},$$

where B is the set of all active sessions when the packet is moved to the head of the queue. The remaining parameters are similar to the corresponding parameters described above.
According to the proposed algorithm above, only the packet at the head of its queue is assigned to an entry in the calendar queue or table that is determined by the finish time. Thus, most of the time slots in the calendar queue may be assigned a packet and hence the time slots may be traversed more quickly to find a packet to be serviced. The calendar queue may in some cases still include unassigned time slots for empty queues, but the number of such time slots may be substantially reduced, and thus the performance and scalability for packet scheduling and forwarding may be substantially improved. This may also substantially improve bandwidth utilization, where most or a substantial portion of the shared output link bandwidth may be used for transmitting the packets at the different time slots.
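A minimal sketch of this head-of-queue scheduling is given below, assuming hypothetical names (HeadOfQueueScheduler, on_head_of_queue) and a simple quantization of finish times into calendar slots; it illustrates the technique under those assumptions and is not the disclosure's implementation.

```python
# Illustrative sketch: only the packet at the head of a backlogged queue is
# stamped with a finish time and placed in the calendar table.

class HeadOfQueueScheduler:
    def __init__(self, rate_r, weights, num_slots, granularity):
        self.R = rate_r
        self.w = weights                                  # weight per session
        self.last_finish = {i: 0.0 for i in weights}      # F_i^{k-1} per session
        self.current_finish = 0.0                         # finish time of packet in service
        self.calendar = [[] for _ in range(num_slots)]
        self.granularity = granularity                    # finish-time units per slot

    def on_transmit_start(self, finish_time):
        # Called when a packet begins service; its finish time becomes V_i^k for
        # packets that reach the head of their queues while it is being sent.
        self.current_finish = finish_time

    def on_head_of_queue(self, session, length, backlogged_sessions):
        """Called when a packet is moved to the head (egress) of its queue.
        backlogged_sessions is the set B of currently active sessions."""
        # V_i^k: zero when no backlog exists, else finish time of the packet in service
        v = 0.0 if not backlogged_sessions else self.current_finish
        start = max(self.last_finish[session], v)                          # S_i^k
        coeff = sum(self.w[j] for j in backlogged_sessions) / (self.w[session] * self.R)
        finish = start + length * coeff                                    # F_i^k
        self.last_finish[session] = finish
        slot = int(finish / self.granularity) % len(self.calendar)         # quantized finish time
        self.calendar[slot].append((session, length, finish))
        return finish
```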
The improved algorithm may provide a work-conserving policy, as described above. The work-conserving policy may correspond to an O(1) work-conserving schedule. Further, the packets may be assigned to the time slots of the calendar queue or table without using a physical timer. The finish time may also be calculated dynamically, since the coefficient

$$\frac{\sum_{j \in B} w_j}{w_i\,R}$$

for calculating the finish time may change in terms of the amount of bandwidth allocated for a session, instead of using a fixed value as in WFQ. The dynamic change in the coefficient may reflect sessions switching between active (comprising backlogged packets) and idle (comprising no backlogged packets). This may lead to a denser calendar table regardless of the number of sessions that are considered. On average, about S time slots may be scanned to service about S packets (S is an integer). The algorithm may also support Quality of Service (QoS) requirements, where different packet sessions may have different weights or priorities in sharing the egress link bandwidth.
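As an illustrative numerical example (the values are assumed for illustration and are not taken from the disclosure), consider R = 1 Gb/s and three sessions with weights $w_1 = 1$, $w_2 = 2$, $w_3 = 2$. For session 1, the coefficient changes with the active set B:

$$\frac{\sum_{j\in B} w_j}{w_1\,R} = \frac{1+2+2}{1\cdot R} = \frac{5}{R}\ \ \text{(all three sessions active)}, \qquad \frac{1+2}{1\cdot R} = \frac{3}{R}\ \ \text{(session 3 idle)}.$$

A packet of length L at the head of session 1's queue is thus stamped with a finish-time increment of 5L/R in the first case and 3L/R in the second, reflecting that session 1 is allocated R/5 or R/3 of the link bandwidth, respectively.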
The above algorithm may also be implemented in a multi-level queuing hierarchy, where a queue at one level may be coupled to a plurality of queues at a lower level. As such, the packets from the lower level queues may be scheduled and forwarded to the higher level queue using the improved scheme above. A scheduler at each level may implement the improved scheduling algorithm to forward packets from different queues via a shared output link.
The multi-level scheduler hierarchy 500 may comprise at least two levels of queues and corresponding schedulers, where each level may be based on the scheduler architecture 100. As such, a first level scheduler architecture 501 may comprise a plurality of first level queues or buffers 510 (queues 4, 5, and 6), a first level scheduling unit or scheduler 520 coupled to all the first level queues 510, and a first level output or egress link 530 coupled to the first level scheduler 520. Additionally, a second level scheduler architecture 502 coupled to the first level scheduler architecture 501 may comprise a plurality of second level queues or buffers 512 (queues 1, 2, and 3), a second level scheduling unit or scheduler 522 coupled to all the second level queues 512, and a second level output or egress link 532 coupled to the second level scheduler 522. The second level scheduler architecture 502 may be coupled to the first level scheduler architecture 501 by one of the second level queues 512 (queue 3) that may be coupled to the first level egress link 530. The components of the first level scheduler architecture 501 and similarly the components of the second level scheduler architecture 502 may be configured similar to the corresponding components of the scheduler architecture 100.
Both the first level scheduler architecture 501 and the second level scheduler architecture 502 may be implemented in the same network node. Alternatively, the first level scheduler architecture 501 may be implemented in a first node in a tree, and the second level scheduler architecture 502 may be implemented in a second node coupled to the first node at a next higher level in the tree. The first level scheduler 520 may implement the improved scheduling algorithm above using a first level calendar queue 540 to schedule and forward packets efficiently from the first level queues 510 (queues 4, 5, and 6) to the second level queue 512 (queue 3) via the first level egress link 530. Similarly, the second level scheduler 522 may implement the improved scheduling algorithm above using a second level calendar queue 542 to schedule and forward packets efficiently from the second level queues 512 (queues 1, 2, and 3) on the second level egress link 532. As such, both the first level calendar queue 540 and the second level calendar queue 542 may be substantially dense, e.g., similar to the calendar queue 400.
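The coupling of the two levels may be illustrated by reusing the HeadOfQueueScheduler sketch shown earlier (an assumed, illustrative class, not part of the disclosure); the first level schedules packets from queues 4, 5, and 6 onto the link feeding queue 3, which is in turn one of the inputs to the second-level scheduler.

```python
# Illustrative two-level coupling; link rates and weights are assumed values.
# Requires the HeadOfQueueScheduler sketch defined above.

R1, R2 = 1_000_000_000, 10_000_000_000   # assumed first- and second-level link rates (bits/s)

level1 = HeadOfQueueScheduler(R1, {4: 1, 5: 1, 6: 2}, num_slots=1024, granularity=1e-6)
level2 = HeadOfQueueScheduler(R2, {1: 1, 2: 1, 3: 2}, num_slots=1024, granularity=1e-6)

def forward_from_level1(session, length, active_level1, active_level2):
    # First level: stamp the packet and place it in the level-1 calendar.
    level1.on_head_of_queue(session, length, active_level1)
    # Second level: for simplicity this sketch stamps the packet for queue 3
    # right away; an actual scheduler would do so only when the packet reaches
    # the head of queue 3 after leaving the first-level egress link 530.
    level2.on_head_of_queue(3, length, active_level2)
```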
The network components described above may be implemented on any general-purpose network component, such as a computer or network component with sufficient processing power, memory resources, and network throughput capability to handle the necessary workload placed upon it.
The secondary storage 1204 typically comprises one or more disk drives or tape drives and is used for non-volatile storage of data and as an overflow data storage device if RAM 1208 is not large enough to hold all working data. Secondary storage 1204 may be used to store programs that are loaded into RAM 1208 when such programs are selected for execution. The ROM 1206 is used to store instructions and perhaps data that are read during program execution. ROM 1206 is a non-volatile memory device that typically has a small memory capacity relative to the larger memory capacity of secondary storage 1204. The RAM 1208 is used to store volatile data and perhaps to store instructions. Access to both ROM 1206 and RAM 1208 is typically faster than to secondary storage 1204.
At least one embodiment is disclosed and variations, combinations, and/or modifications of the embodiment(s) and/or features of the embodiment(s) made by a person having ordinary skill in the art are within the scope of the disclosure. Alternative embodiments that result from combining, integrating, and/or omitting features of the embodiment(s) are also within the scope of the disclosure. Where numerical ranges or limitations are expressly stated, such express ranges or limitations should be understood to include iterative ranges or limitations of like magnitude falling within the expressly stated ranges or limitations (e.g., from about 1 to about 10 includes 2, 3, 4, etc.; greater than 0.10 includes 0.11, 0.12, 0.13, etc.). For example, whenever a numerical range with a lower limit, R1, and an upper limit, Ru, is disclosed, any number falling within the range is specifically disclosed. In particular, the following numbers within the range are specifically disclosed: R=R1+k*(Ru−R1), wherein k is a variable ranging from 1 percent to 100 percent with a 1 percent increment, i.e., k is 1 percent, 2 percent, 3 percent, 4 percent, 5 percent, . . . , 70 percent, 71 percent, 72 percent, . . . , 95 percent, 96 percent, 97 percent, 98 percent, 99 percent, or 100 percent. Moreover, any numerical range defined by two R numbers as defined in the above is also specifically disclosed. Use of the term “optionally” with respect to any element of a claim means that the element is required, or alternatively, the element is not required, both alternatives being within the scope of the claim. Use of broader terms such as comprises, includes, and having should be understood to provide support for narrower terms such as consisting of, consisting essentially of, and comprised substantially of. Accordingly, the scope of protection is not limited by the description set out above but is defined by the claims that follow, that scope including all equivalents of the subject matter of the claims. Each and every claim is incorporated as further disclosure into the specification and the claims are embodiment(s) of the present disclosure. The discussion of a reference in the disclosure is not an admission that it is prior art, especially any reference that has a publication date after the priority date of this application. The disclosures of all patents, patent applications, and publications cited in the disclosure are hereby incorporated by reference, to the extent that they provide exemplary, procedural, or other details supplementary to the disclosure.
While several embodiments have been provided in the present disclosure, it should be understood that the disclosed systems and methods might be embodied in many other specific forms without departing from the spirit or scope of the present disclosure. The present examples are to be considered as illustrative and not restrictive, and the intention is not to be limited to the details given herein. For example, the various elements or components may be combined or integrated in another system or certain features may be omitted, or not implemented.
In addition, techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present disclosure. Other items shown or discussed as coupled or directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component whether electrically, mechanically, or otherwise. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and could be made without departing from the spirit and scope disclosed herein.