The present application is related to the following U.S. Patent Applications, each of which is hereby incorporated by reference herein in its entirety:
The present invention is concerned with data and storage communication systems and is more particularly concerned with a scheduler component of a network processor.
Data and storage communication networks are in widespread use. In many data and storage communication networks, data packet switching is employed to route data packets or frames from point to point between source and destination, and network processors are employed to handle transmission of data into and out of data switches.
The network processor 10 includes data flow chips 12 and 14. The first data flow chip 12 is connected to a data switch 15 (shown in phantom) via first switch ports 16, and is connected to a data network 17 (shown in phantom) via first network ports 18. The first data flow chip 12 is positioned on the ingress side of the switch 15 and handles data frames that are inbound to the switch 15.
The second data flow chip 14 is connected to the switch 15 via second switch ports 20 and is connected to the data network 17 via second network ports 22. The second data flow chip 14 is positioned on the egress side of the switch 15 and handles data frames that are outbound from the switch 15.
As shown in
The network processor 10 also includes a first processor chip 28 coupled to the first data flow chip 12. The first processor chip 28 supervises operation of the first data flow chip 12 and may include multiple processors. A second processor chip 30 is coupled to the second data flow chip 14, supervises operation of the second data flow chip 14 and may include multiple processors.
A control signal path 32 couples an output terminal of second data flow chip 14 to an input terminal of first data flow chip 12 (e.g., to allow transmission of data frames therebetween).
The network processor 10 further includes a first scheduler chip 34 coupled to the first data flow chip 12. The first scheduler chip 34 manages the sequence in which inbound data frames are transmitted to the switch 15 via first switch ports 16. A first memory 36 such as a fast SRAM is coupled to the first scheduler chip 34 (e.g., for storing data frame pointers and flow control information as described further below). The first memory 36 may be, for example, a QDR (quad data rate) SRAM.
A second scheduler chip 38 is coupled to the second data flow chip 14. The second scheduler chip 38 manages the sequence in which data frames are output from the second network ports 22 of the second data flow chip 14. Coupled to the second scheduler chip 38 are at least one and possibly two memories (e.g., fast SRAMs 40) for storing data frame pointers and flow control information. The memories 40 may, like the first memory 36, be QDRs. The additional memory 40 on the egress side of the network processor 10 may be needed because of a larger number of flows output through the second network ports 22 than through the first switch ports 16.
Flows with which the incoming data frames are associated are enqueued in a scheduling queue 42 maintained in the first scheduler chip 34. The scheduling queue 42 defines a sequence in which the flows enqueued therein are to be serviced. The particular scheduling queue 42 of interest in connection with the present invention is a weighted fair queue which arbitrates among flows entitled to a “best effort” or “available bandwidth” Quality of Service (QoS).
As shown in
Although not indicated in
The memory 36 associated with the first scheduler chip 34 holds pointers (“frame pointers”) to locations in the first data buffer 24 corresponding to data frames associated with the flows enqueued in the scheduling queue 42. The memory 36 also stores flow control information, such as information indicative of the QoS to which flows are entitled.
When the scheduling queue 42 indicates that a particular flow enqueued therein is the next to be serviced, reference is made to the frame pointer in the memory 36 corresponding to the first pending data frame for the flow in question and the corresponding frame data is transferred from the first data buffer 24 to an output queue 46 associated with the output port 44.
A more detailed representation of the scheduling queue 42 is shown in
More specifically, the queue slot in which a flow is placed upon enqueuing is calculated according to the formula CP+((WF×FS)/SF), where CP is a pointer (“current pointer”) that indicates a current position (the slot currently being serviced) in the scheduling queue 42; WF is a weighting factor associated with the flow to be enqueued, the weighting factor having been determined on the basis of the QoS to which the flow is entitled; FS is the size of the current frame associated with the flow to be enqueued; and SF is a scaling factor chosen to scale the product (WF×FS) so that the resulting quotient falls within the range defined by the scheduling queue 42. (In accordance with conventional practice, the scaling factor SF is conveniently defined as an integral power of 2 (i.e., SF=2^n, with n being a positive integer), so that scaling the product (WF×FS) is performed by right shifting.) With this known weighted fair queuing technique, the weighting factors assigned to the various flows in accordance with the QoS assigned to each flow govern how close to the current pointer of the queue each flow is enqueued. In addition, flows which exhibit larger frame sizes are enqueued farther from the current pointer of the queue, to prevent such flows from appropriating an undue proportion of the available bandwidth of the queue. Upon enqueuement, data that identifies a flow (the “Flow ID”) is stored in the appropriate queue slot 48.
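By way of illustration only, the slot calculation can be sketched in a few lines of C. The queue size, function name and parameter names below are assumptions made for the example and are not part of the described scheduler; the sketch simply assumes SF=2^n, so that the division reduces to a right shift, and assumes the scheduling queue is a ring whose slot index wraps.

```c
#include <stdint.h>

#define QUEUE_SLOTS 512u   /* assumed ring size; the actual number of slots is implementation-specific */

/* Sketch of the enqueue slot calculation CP + ((WF x FS) / SF), with SF = 2^n. */
static uint32_t enqueue_slot(uint32_t cp,  /* CP: slot currently being serviced */
                             uint32_t wf,  /* WF: weighting factor derived from the flow's QoS */
                             uint32_t fs,  /* FS: size of the flow's current frame */
                             uint32_t n)   /* SF = 2^n, so dividing by SF is a right shift */
{
    uint32_t distance = (wf * fs) >> n;      /* (WF x FS) / SF */
    return (cp + distance) % QUEUE_SLOTS;    /* wrap around the ring of queue slots */
}
```

For example, with CP=10, WF=4, FS=256 and n=7 (SF=128), the product (WF×FS)=1024 right-shifted by 7 gives 8, so the flow would be placed eight slots past the current pointer, at slot 18.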
As noted above, each scheduler may include a plurality of scheduling queues. For example, 64 scheduling queues may be supported in each scheduler. Each scheduling queue services a respective output port, or a group of output ports as taught in the above-referenced co-pending patent application Ser. No. 10/015,994.
The scheduling queues may be accessed one after another in accordance with a round robin process, to search the scheduling queues for respective flows to be dequeued. One scheduling queue may be searched during each operating cycle of the scheduler. However, if the scheduling queue that is searched during a given cycle turns out to be empty, then the cycle may be wasted.
It is known to provide a counter for each scheduling queue to keep track of whether or not the scheduling queue is empty. However, operation of each counter may entail two increment operations (one for attachment of a new flow, and one for reattachment of a previously attached flow) and one decrement operation (reflecting detachment of a winning flow) during each cycle. Thus, using a counter to track whether or not a scheduling queue is empty may adversely affect the performance of the scheduler. Moreover, providing a counter for each queue adds to the complexity and space requirements of the scheduler design.
An improved technique for determining whether or not a scheduling queue is empty would therefore be desirable.
According to an aspect of the invention, a scheduler for a network processor is provided. The scheduler includes one or more scheduling queues, each scheduling queue adapted to define a respective sequence in which flows are to be serviced. The scheduler further includes one or more empty indicators, with each empty indicator being associated with a respective scheduling queue to indicate whether the respective scheduling queue is empty. Each empty indicator may be a bit in a register.
According to another aspect of the invention, a method of dequeuing a flow from a scheduling queue is provided. The method includes examining an empty indicator associated with the scheduling queue, and refraining from searching the scheduling queue if the empty indicator indicates that the scheduling queue is empty. The method further includes searching the scheduling queue if the empty indicator indicates that the scheduling queue is not empty, and detaching from the scheduling queue a winning flow found in the searching step. The examining step may include checking a bit in a register.
According to still another aspect of the invention, a method of enqueuing a flow to a scheduling queue includes attaching a flow to the scheduling queue, and placing an empty indicator associated with the scheduling queue in a condition to indicate that the scheduling queue is not empty. The placing step may include setting or resetting a bit in a register.
According to still a further aspect of the invention, a method of dequeuing a flow from a scheduling queue is provided. The method includes examining an empty indicator associated with the scheduling queue, and refraining from searching the scheduling queue if the empty indicator indicates that the scheduling queue is empty. The method further includes searching the scheduling queue if the empty indicator indicates that the scheduling queue is not empty. According to a further step of the method, if a winning flow is found in the searching step, the winning flow is detached from the scheduling queue. According to still a further step, if no flow is found in the searching step, the empty indicator is placed in a condition to indicate that the scheduling queue is empty. The examining step may include checking a bit in a register.
With the present invention, the empty status of scheduling queues is tracked while minimizing the expenditure of processing and hardware resources.
Other objects, features and advantages of the present invention will become more fully apparent from the following detailed description of exemplary embodiments, the appended claims and the accompanying drawings.
Exemplary embodiments of the invention will now be described with reference to
It will be appreciated that the drawing of
During initialization of the scheduler 49, all of the empty indicators 50 may be initially placed in a condition to indicate that the respective scheduling queues 42 are empty. Thenceforward, each time a flow is attached or reattached to a scheduling queue 42, the corresponding empty indicator 50 is forced to a condition which indicates that the corresponding scheduling queue 42 is not empty.
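A minimal sketch of how such empty indicators might be represented follows, assuming the 64-scheduling-queue example given earlier and assuming the convention that a set bit means "empty"; both choices, and all names below, are illustrative assumptions rather than requirements of the invention.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_QUEUES 64u           /* matches the 64-scheduling-queue example given earlier */

static uint64_t empty_bits;      /* one empty indicator bit per scheduling queue */

/* Initialization: place every empty indicator in the "empty" condition. */
static void empty_indicators_init(void)
{
    empty_bits = ~UINT64_C(0);   /* assumed convention: a set bit means the queue is empty */
}

/* Called whenever a flow is attached or reattached to scheduling queue q. */
static void mark_not_empty(unsigned q)
{
    empty_bits &= ~(UINT64_C(1) << q);
}

/* Called when a search (or a detach of the last flow) shows scheduling queue q to be empty. */
static void mark_empty(unsigned q)
{
    empty_bits |= UINT64_C(1) << q;
}

/* Examined before a scheduling queue is searched. */
static bool queue_marked_empty(unsigned q)
{
    return (empty_bits >> q) & 1u;
}
```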
Placement of the empty indicators 50 into a condition which indicates that the respective scheduling queue 42 is empty, and dequeuing of flows from the scheduling queues 42, will now be described, initially with reference to
A scheduling queue 42 is considered to be “active” if it is not empty, and if at least one output port assigned to the scheduling queue 42 is not in a backpressure condition. (The concept of backpressure is well known to those who are skilled in the art, and need not be explained herein.) Thus, for the scheduling queue 42 which follows the most recently searched scheduling queue 42, the corresponding empty indicator 50 is examined to determine whether the empty indicator 50 indicates that the associated scheduling queue 42 is empty. If the empty indicator 50 indicates that the scheduling queue 42 is empty, then the scheduling queue 42 is not searched, and the empty indicator 50 of the following scheduling queue 42 is examined. However, if the empty indicator 50 indicates that the scheduling queue 42 is not empty (and assuming that at least one output port assigned to the scheduling queue 42 is not in a backpressure condition), then the scheduling queue 42 is selected for searching. Searching of the scheduling queue (ring) 42 is indicated at block 62 in
Following block 62 is decision block 64. In decision block 64, it is determined whether the search of the scheduling queue 42 (selected in block 60 and searched in block 62) has indicated that the scheduling queue 42 is empty. If so, then block 66 follows decision block 64. At block 66, the empty indicator 50 associated with the scheduling queue 42 is placed in a condition to indicate that the scheduling queue 42 is empty. This may be done, for example, by setting or resetting an appropriate bit in a register. Following block 66, the procedure of
If at decision block 64 it is determined that the scheduling queue 42 (selected in block 60 and searched in block 62) was not found to be empty, then decision block 68 follows decision block 64. At decision block 68 it is determined whether a flow that is entitled to scheduled service, or another higher priority flow, is to be serviced from the output port corresponding to the winning flow found at block 62. In other words, it is determined whether a higher priority flow preempts servicing of the winning flow from the scheduling queue 42 searched at block 62. If such is not the case, then block 70 follows decision block 68. At block 70 the winning flow from the scheduling queue 42 is detached from the scheduling queue and serviced in accordance with conventional practice. The procedure of
However, if at decision block 68 it is found that the winning flow from the scheduling queue 42 searched at block 62 is to lose out to a higher priority flow, then the procedure of
In accordance with the procedure of
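Taken together, the steps just described can be sketched as one operating cycle of a round-robin dequeue loop. The helper functions below are hypothetical stand-ins for the scheduler's search, backpressure and service logic; only the control flow is intended to mirror the procedure described above.

```c
#include <stdbool.h>

/* Hypothetical stand-ins for scheduler internals; names and signatures are assumptions. */
extern bool queue_marked_empty(unsigned q);       /* examines the queue's empty indicator */
extern void mark_empty(unsigned q);               /* places the indicator in the "empty" condition */
extern bool all_ports_backpressured(unsigned q);  /* true if no assigned output port can accept data */
extern int  search_queue(unsigned q);             /* returns the winning flow ID, or -1 if the queue is empty */
extern bool preempted_by_higher_priority(int flow);
extern void detach_and_service(unsigned q, int flow);

/* One operating cycle devoted to scheduling queue q (blocks 60 through 70). */
static void dequeue_cycle(unsigned q)
{
    /* Block 60: skip the queue, without searching it, if it is marked empty or fully backpressured. */
    if (queue_marked_empty(q) || all_ports_backpressured(q))
        return;

    /* Block 62: search the queue for a winning flow. */
    int winner = search_queue(q);

    /* Blocks 64 and 66: the search itself revealed an empty queue, so record that fact. */
    if (winner < 0) {
        mark_empty(q);
        return;
    }

    /* Blocks 68 and 70: service the winning flow unless a higher priority flow preempts it. */
    if (!preempted_by_higher_priority(winner))
        detach_and_service(q, winner);
    /* If preempted, the winning flow simply remains enqueued for a later cycle. */
}
```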
An alternative procedure is provided in accordance with another aspect of the invention for situations in which a flow detached from a scheduling queue is the only flow that was enqueued in the scheduling queue. This alternative procedure is illustrated by the flow chart of
Following block 72 is a decision block 74. If it is determined at decision block 74 that the scheduling queue 42 was empty but for the flow that was just detached, then block 66 follows block 74. As noted before in conjunction with
However, if it is determined at decision block 74 that the flow detached at block 70 was not the only flow enqueued in the scheduling queue 42, then the procedure of
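As an illustrative variation on the sketch above, the detach step itself can set the empty indicator when the detached flow was the only flow enqueued (decision block 74), so that the indicator is already correct the next time the round robin reaches the queue. The helper queue_has_no_flows below is a hypothetical check, not a function defined by the invention.

```c
#include <stdbool.h>

/* Hypothetical stand-ins; names and signatures are assumptions. */
extern void detach_and_service(unsigned q, int flow);
extern bool queue_has_no_flows(unsigned q);   /* true if nothing remains enqueued in queue q */
extern void mark_empty(unsigned q);

/* Block 70 followed by decision block 74 and, where appropriate, block 66. */
static void detach_winner(unsigned q, int flow)
{
    detach_and_service(q, flow);
    if (queue_has_no_flows(q))   /* the detached flow was the only flow in the queue */
        mark_empty(q);
}
```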
The procedure of
In at least one embodiment of the invention, the processes of
The empty indicator arrangement of the present invention provides an efficient and cost effective way of identifying empty scheduling queues before they are searched. Consequently, operating cycles of a scheduler employing the inventive empty indicators are less likely to be wasted in searching a scheduling queue that is empty.
The foregoing description discloses only exemplary embodiments of the invention; modifications of the above disclosed apparatus and methods which fall within the scope of the invention will be readily apparent to those of ordinary skill in the art. For example, in the embodiments described above, scheduling queues are maintained in a separate scheduler chip associated with a network processor. However, it is also contemplated that scheduling queues may be maintained in a scheduler circuit that is implemented as part of a data flow chip or as part of a processor chip in a network processor.
Accordingly, while the present invention has been disclosed in connection with exemplary embodiments thereof, it should be understood that other embodiments may fall within the spirit and scope of the invention as defined by the following claims.