In operation, as data packets are received at input ports 102 via input links 104, processor 112 determines the appropriate output link 108 for each data packet and controls switch module 110 so that the packet is sent out on the corresponding output port 106 and output link 108. However, data packets may arrive at network node 100 at a rate faster than network node 100 can output them. Therefore, at least a portion of memory 114 is configured as a buffer, so that received data packets may be stored until they are ready to be output. It is nevertheless possible for the rate of receipt to be high enough that the buffer fills up, in which case some data packets will be lost. The present invention provides a technique for managing a data packet buffer in a network node 100 for efficient use of the allocated buffer memory.
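Purely for illustration, the following sketch shows the buffering problem just described: packets that arrive faster than they can be transmitted are held in a bounded buffer and are lost once it fills. The function names and the capacity value are assumptions and are not part of the disclosed apparatus.

```python
from collections import deque

BUFFER_CAPACITY = 8      # hypothetical amount of memory 114 set aside as a buffer

buffer = deque()         # received packets awaiting output
dropped = 0              # packets lost because the buffer was full

def on_receive(packet):
    """Invoked when a data packet arrives on an input link."""
    global dropped
    if len(buffer) < BUFFER_CAPACITY:
        buffer.append(packet)    # hold the packet until it can be output
    else:
        dropped += 1             # buffer full: the packet is lost

def on_output_ready():
    """Invoked when the output link can accept the next packet."""
    return buffer.popleft() if buffer else None
```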
The spare-buffer method operates as follows. As packets 212 arrive, they are assigned to queue classes based on their type (or application type), and the queues are serviced by a scheduler 214 according to a scheduling scheme. For example, each queue can be assigned a relative weight (e.g., 35% for the real-time queue [class-1], 15% for the interactive queue [class-2], and 50% for the network control traffic queue [class-3]). The scheduler can then service the queues in round-robin fashion in proportion to their assigned weights.
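A minimal sketch of such a weighted round-robin scheduler is given below, using the example weights of 35%, 15%, and 50%. The class, method, and variable names, and the simple quota-per-round scheme, are illustrative assumptions rather than the scheduler of the disclosure.

```python
from collections import deque

class WeightedRoundRobinScheduler:
    """Services each queue class in proportion to its assigned weight."""

    def __init__(self, weights):
        # weights: mapping of queue class name -> relative weight
        self.weights = weights
        self.queues = {cls: deque() for cls in weights}

    def enqueue(self, cls, packet):
        self.queues[cls].append(packet)

    def service_round(self):
        """One round of service: a class with weight w may send up to w
        packets, so classes are served in proportion to their weights."""
        serviced = []
        for cls, weight in self.weights.items():
            queue = self.queues[cls]
            for _ in range(weight):
                if not queue:
                    break
                serviced.append(queue.popleft())
        return serviced

# Example weights from the text: class-1 35%, class-2 15%, class-3 50%.
scheduler = WeightedRoundRobinScheduler({"class-1": 35, "class-2": 15, "class-3": 50})
```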
In the normal mode of operation, when no queue class is congested, the spare buffer 210 is empty. However, if a queue class becomes congested, the overflow packets, represented at 216, are tagged with their associated class and assigned to the spare buffer. In effect, these overflow packets are linked to the tail of the congested queue, which dynamically increases the size of that queue in real time by the amount of the overflow packets. As packets in the congested queue class are serviced and space becomes available in the queue, the spare buffer 210 pushes the overflow packets out into the tail of the congested queue.
If the spare buffer is full and overflow packets are still arriving, the arriving overflow packets are discarded.
In particular, at step 306 the routine determines whether the queue class to which the packet belongs is congested (i.e., full). If that queue class is not congested, the packet is placed in the queue class at step 310, and the routine returns to step 302. If the associated queue class is congested, the routine proceeds to step 312, where it determines whether the spare buffer is full. If the spare buffer is not full, then at steps 314 and 316 the overflow packet is tagged with the associated queue class and placed in the spare buffer, and the routine returns to step 302. However, if the spare buffer is full, the overflow packet is discarded at step 320, and the routine then returns to step 302.
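Purely as a sketch, the arrival-side handling just described (overflow tagging when a class is congested, discard when the spare buffer is full) might look as follows; the data structures, capacities, and names are assumptions, and the step numbers appear only as comments for orientation.

```python
from collections import deque, namedtuple

QUEUE_CAPACITY = 100      # hypothetical per-class queue depth
SPARE_CAPACITY = 50       # hypothetical depth of spare buffer 210

queues = {"class-1": deque(), "class-2": deque(), "class-3": deque()}
spare_buffer = deque()    # shared by all classes

TaggedPacket = namedtuple("TaggedPacket", ["cls", "packet"])

def on_packet_arrival(packet, cls):
    """Arrival-side handling, roughly following steps 306-320 above."""
    queue = queues[cls]
    if len(queue) < QUEUE_CAPACITY:           # step 306: class congested?
        queue.append(packet)                  # step 310: place packet in its queue
    elif len(spare_buffer) < SPARE_CAPACITY:  # step 312: spare buffer full?
        # steps 314/316: tag the overflow packet with its class and
        # place it in the shared spare buffer
        spare_buffer.append(TaggedPacket(cls, packet))
    else:
        pass                                  # step 320: discard the overflow packet
```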
If the spare buffer is empty, the routine returns to step 402. If the spare buffer is not empty, then at step 406 the routine checks whether the spare buffer contains a tagged packet indicating the same class as the departed packet. If there is no such packet, the routine returns to step 402. However, if there is such a tagged packet, then at step 410 that packet is pushed out from the spare buffer into the tail of the queue class from which the packet departed. (Note that the spare buffer operates in a FIFO manner for each packet class in order to preserve packet order among packets belonging to the same class. A selector logic, represented at 230, selects the appropriate tagged packet from the spare buffer for this purpose.)
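A corresponding departure-side sketch, under the same assumptions as the arrival sketch above, is shown below. Scanning the spare buffer from its head preserves per-class FIFO order, and the scan is a simple stand-in for the selector logic represented at 230.

```python
from collections import deque, namedtuple

TaggedPacket = namedtuple("TaggedPacket", ["cls", "packet"])
queues = {"class-1": deque(), "class-2": deque(), "class-3": deque()}
spare_buffer = deque()    # tagged overflow packets, FIFO within each class

def on_packet_departure(cls):
    """Departure-side handling, roughly following steps 402-410 above."""
    if not spare_buffer:                       # spare buffer empty: return (step 402)
        return
    # Step 406: look for the earliest tagged packet of the same class as the
    # departed packet; per-class FIFO order is preserved because the spare
    # buffer is scanned from its head.
    for tagged in spare_buffer:
        if tagged.cls == cls:
            spare_buffer.remove(tagged)        # stand-in for selector logic 230
            queues[cls].append(tagged.packet)  # step 410: push into the queue tail
            return
    # No tagged packet of that class: return (to step 402) with nothing to do.
```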
The packet arrival and departure operations are parallel processes that execute independently of one another.
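The two routines can therefore be driven by separate execution contexts that share the same buffer state. The sketch below is one assumed arrangement using Python threads and a lock, offered only as an illustration; the disclosed node may realize this in hardware, software, or a combination of the two.

```python
import threading
import time
from collections import deque

lock = threading.Lock()            # guards the shared queue/spare-buffer state
shared_log = deque()               # stand-in for the shared buffer structures
running = True

def arrival_loop():
    """Independent arrival process (would call the arrival routine above)."""
    while running:
        with lock:
            shared_log.append("arrival handled")
        time.sleep(0.001)

def departure_loop():
    """Independent departure process (would call the departure routine above)."""
    while running:
        with lock:
            if shared_log:
                shared_log.popleft()
        time.sleep(0.001)

threads = [threading.Thread(target=arrival_loop),
           threading.Thread(target=departure_loop)]
for t in threads:
    t.start()
time.sleep(0.05)                   # let the two processes run concurrently
running = False
for t in threads:
    t.join()
```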
As will be readily apparent to those skilled in the art, the present invention can be realized in hardware, software, or a combination of hardware and software. Any kind of computer/server system(s)—or other apparatus adapted for carrying out the methods described herein—is suited. A typical combination of hardware and software could be a general-purpose computer system with a computer program that, when loaded and executed, carries out the respective methods described herein. Alternatively, a specific use computer, containing specialized hardware for carrying out one or more of the functional tasks of the invention, could be utilized.
The present invention, or aspects thereof, can also be embodied in a computer program product, which comprises all the respective features enabling the implementation of the methods described herein, and which—when loaded in a computer system—is able to carry out these methods. Computer program, software program, program, or software, in the present context mean any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: (a) conversion to another language, code or notation; and/or (b) reproduction in a different material form.
While it is apparent that the invention herein disclosed is well calculated to fulfill the objects stated above, it will be appreciated that numerous modifications and embodiments may be devised by those skilled in the art, and it is intended that the appended claims cover all such modifications and embodiments as fall within the true spirit and scope of the present invention.