Claims
- 1. A scheduling method for use with a switch element operating to switch at least one input data channel of an input port to an output port, said at least one input data channel including data bursts of a plurality of packets each, comprising the steps:
for each time slot associated with said switch element, determining whether a buffer structure provided with said switch element contains previously received data packets that can be transmitted from said output port on an output data channel;
if so, forwarding said previously received data packets for transmission on said output data channel;
determining whether a currently received data packet on said at least one input data channel is part of a data burst previously accepted by said switch element for transmission;
if so, processing said currently received data packet on said at least one input data channel;
determining whether a currently received initial data packet of a new data burst on said at least one input data channel can be processed for scheduling by said switch element; and
if so, processing said currently received initial data packet for scheduling by said switch element.
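The per-time-slot decision sequence recited in claim 1 can be sketched in code. This is purely illustrative and not part of the claims; every name here (`Switch`, `Packet`, `buffer`, `accepted_bursts`, the admission test `can_schedule`) is a hypothetical modeling choice, since the claim recites only the decision sequence, not any particular data structure:

```python
# Illustrative sketch of the per-time-slot scheduling decisions of claim 1.
# All identifiers are hypothetical; the claim language does not prescribe them.
from collections import deque

class Packet:
    def __init__(self, burst_id, is_initial=False):
        self.burst_id = burst_id
        self.is_initial = is_initial

class Switch:
    def __init__(self):
        self.buffer = deque()          # buffer structure provided with the switch
        self.accepted_bursts = set()   # bursts previously accepted for transmission
        self.output = []               # packets forwarded on the output data channel

    def time_slot(self, arriving):
        # First: forward previously received packets held in the buffer structure.
        while self.buffer:
            self.output.append(self.buffer.popleft())
        if arriving is None:
            return
        if arriving.burst_id in self.accepted_bursts:
            # Part of a previously accepted burst: process (here, buffer) it.
            self.buffer.append(arriving)
        elif arriving.is_initial and self.can_schedule(arriving):
            # Initial packet of a new burst that can be scheduled: accept the burst.
            self.accepted_bursts.add(arriving.burst_id)
            self.buffer.append(arriving)
        # Otherwise a packet drop policy would apply (claim 2), omitted here.

    def can_schedule(self, packet):
        return True  # placeholder admission test
```

Buffering an accepted packet for one slot before forwarding, as above, is one possible processing choice; claims 3 and 4 cover both immediate forwarding and storage in the buffer structure.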
- 2. The scheduling method for use with a switch element as set forth in claim 1, further comprising the step of applying a packet drop policy with respect to at least one of said currently received data packet and said currently received initial data packet if said switch element is unable to process said data packets.
- 3. The scheduling method for use with a switch element as set forth in claim 1, wherein said step of processing said currently received data packet on said at least one input data channel comprises forwarding said currently received data packet for transmission on an output data channel associated with said output port.
- 4. The scheduling method for use with a switch element as set forth in claim 1, wherein said step of processing said currently received data packet on said at least one input data channel comprises storing said currently received data packet in said buffer structure.
- 5. The scheduling method for use with a switch element as set forth in claim 4, wherein said buffer structure is associated with said input port.
- 6. The scheduling method for use with a switch element as set forth in claim 4, wherein said buffer structure is associated with said output port.
- 7. The scheduling method for use with a switch element as set forth in claim 1, wherein said step of processing said currently received initial data packet of a new data burst on said at least one input data channel comprises forwarding said currently received initial data packet for transmission on an output data channel associated with said output port.
- 8. The scheduling method for use with a switch element as set forth in claim 1, wherein said step of processing said currently received initial data packet of a new data burst on said at least one input data channel comprises storing said currently received initial data packet in said buffer structure.
- 9. The scheduling method for use with a switch element as set forth in claim 8, wherein said buffer structure is associated with at least one of said input port and said output port.
- 10. The scheduling method for use with a switch element as set forth in claim 1, wherein said previously received data packets contained in said buffer structure are forwarded based on a numerical sequential order established for a plurality of buffer elements that form said buffer structure.
- 11. A scheduling system for use with a switch element operating to switch at least one input data channel of an input port to an output port, said at least one input data channel including data bursts of a plurality of packets each, said system comprising:
means for forwarding data packets that are stored in a buffer structure provided with said switch element upon determining that there exists an output data channel available on said output port for transmitting said data packets;
means for processing a currently received data packet on said at least one input data channel upon determining that said currently received data packet is part of a data burst previously accepted by said switch element for processing; and
means for processing a currently received initial data packet of a new data burst upon determining that said currently received initial data packet of a new data burst can be processed for scheduling by said switch element.
- 12. The scheduling system for use with a switch element as set forth in claim 11, further including means for applying a packet drop policy with respect to at least one of said currently received data packet and said currently received initial data packet if said switch element is unable to process said data packets.
- 13. The scheduling system for use with a switch element as set forth in claim 11, wherein said means for processing said currently received data packet on said at least one input data channel comprises means for forwarding said currently received data packet for transmission on an output data channel associated with said output port.
- 14. The scheduling system for use with a switch element as set forth in claim 11, wherein said means for processing said currently received data packet on said at least one input data channel comprises means for determining whether said currently received data packet needs to be stored in said buffer structure.
- 15. The scheduling system for use with a switch element as set forth in claim 14, wherein said buffer structure is associated with said input port.
- 16. The scheduling system for use with a switch element as set forth in claim 14, wherein said buffer structure is associated with said output port.
- 17. The scheduling system for use with a switch element as set forth in claim 11, wherein said means for processing said currently received initial data packet of a new data burst comprises means for forwarding said currently received initial data packet for transmission on an output data channel associated with said output port.
- 18. The scheduling system for use with a switch element as set forth in claim 11, wherein said means for processing said currently received initial data packet of a new data burst comprises means for determining whether said currently received initial data packet needs to be stored in said buffer structure.
- 19. The scheduling system for use with a switch element as set forth in claim 18, wherein said buffer structure is associated with at least one of said input port and said output port.
- 20. The scheduling system for use with a switch element as set forth in claim 11, wherein said data packets stored in said buffer structure are forwarded based on a numerical sequential order established for a plurality of buffer elements that form said buffer structure.
- 21. A method of scheduling data packets for transmission on a plurality of output data channels, wherein said data packets are preceded by at least one Burst Header having control information relating to said data packets, comprising the steps:
mapping said output data channels to a plurality of memory locations, each output data channel comprising a portion of said memory locations, wherein said portion is organized into a number of sections that correspond to a plurality of future time slots;
provisioning a plurality of arbiters, each arbiter corresponding to a future time slot;
determining, with respect to each future time slot, which future packets can be sent on said output data channels based on said control information in a Burst Header, wherein each arbiter determines assignment of packets with respect to a particular future time slot for a set of data channels associated with a single output port;
storing packet indications relating to said future packets in appropriate memory locations on a per-channel and per-slot basis to provide a slot-by-slot channel assignment map; and
for each current time slot, forwarding a corresponding channel assignment map to a switching station for transmitting data packets on output data channels in a manner identified by said channel assignment map.
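The mapped-memory structure and per-slot arbitration of claim 21 can be sketched as follows. This is an illustrative example only: `NUM_CHANNELS`, `HORIZON`, and the round-robin pointer arbiter are assumptions made for the sketch; the claims equally cover Binary Tree Arbiters and counters (claims 22 through 24):

```python
# Illustrative sketch of the slot-by-slot channel assignment map of claim 21.
# Constants and the round-robin arbiter are assumptions, not claim limitations.
NUM_CHANNELS = 4   # output data channels of a single output port
HORIZON = 8        # number of future time slots covered by the map

# Memory locations organized per channel and per future slot:
# assignment_map[slot][channel] holds a packet indication, or None if free.
assignment_map = [[None] * NUM_CHANNELS for _ in range(HORIZON)]

# One arbiter per future time slot; here each is a round-robin pointer.
arbiter_ptr = [0] * HORIZON

def schedule(slot, packet_id):
    """Try to assign packet_id to a free channel in the given future slot."""
    start = arbiter_ptr[slot]
    for i in range(NUM_CHANNELS):
        ch = (start + i) % NUM_CHANNELS
        if assignment_map[slot][ch] is None:
            assignment_map[slot][ch] = packet_id
            arbiter_ptr[slot] = (ch + 1) % NUM_CHANNELS
            return ch
    return None  # no channel free in that future slot

def pop_current_slot():
    """Forward the current slot's channel assignment map to the switching station."""
    current = assignment_map.pop(0)
    assignment_map.append([None] * NUM_CHANNELS)
    arbiter_ptr.pop(0)
    arbiter_ptr.append(0)
    return current
```

Because each future time slot has its own arbiter and its own row of memory locations, assignments for different future slots can proceed independently, and forwarding the map for the current slot is a constant-time pop.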
- 22. The method of scheduling data packets for transmission on a plurality of output data channels as set forth in claim 21, wherein said arbiters comprise Round Robin Arbiters.
- 23. The method of scheduling data packets for transmission on a plurality of output data channels as set forth in claim 21, wherein said arbiters comprise Binary Tree Arbiters.
- 24. The method of scheduling data packets for transmission on a plurality of output data channels as set forth in claim 21, wherein said arbiters comprise counters.
- 25. The method of scheduling data packets for transmission on a plurality of output data channels as set forth in claim 21, further comprising the steps of:
determining if a Burst Header includes an indication of a future data burst that is longer than an offset parameter, wherein said offset parameter equals the delay between said Burst Header and said future data burst; and
if so, ignoring said future data burst by not assigning its packets to said output data channels.
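The condition of claim 25 reduces to a single comparison. The sketch below is illustrative; the reading that a burst longer than its offset cannot be fully scheduled within the look-ahead window is an assumption of this example, not claim language:

```python
# Illustrative check for claim 25: a future data burst announced by a Burst
# Header is ignored when its length exceeds the offset parameter (the delay
# between the Burst Header and the burst itself). Units are time slots here,
# which is an assumption made for the sketch.
def ignore_burst(burst_length, offset):
    """Return True if the announced burst should be ignored per claim 25."""
    return burst_length > offset
```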
- 26. The method of scheduling data packets for transmission on a plurality of output data channels as set forth in claim 21, further comprising the steps of:
provisioning a plurality of delay buffers, wherein each delay buffer is mapped to a portion of memory locations on a per-slot basis;
determining if a future data packet can be buffered in a delay buffer with respect to a particular future time slot; and
if so, storing, in a memory location corresponding to said particular future time slot, an indication relating to said future data packet operable to be buffered.
- 27. The method of scheduling data packets for transmission on a plurality of output data channels as set forth in claim 26, further comprising the steps of:
determining if said future data packet operable to be buffered can be kicked back to a next future time slot; and
if so, re-assigning said indication relating to said future data packet to a memory location corresponding to said next future time slot.
- 28. The method of scheduling data packets for transmission on a plurality of output data channels as set forth in claim 26, further comprising the steps of:
determining if a future data packet cannot be assigned to an output data channel;
if so, determining whether said future data packet can be buffered in a delay buffer; and
discarding said future data packet upon determining that it cannot be buffered.
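The fallback chain of claims 26 through 28 (assign to a channel, otherwise buffer, with kick-back to the next future slot, otherwise discard) can be sketched as one decision function. This is an illustrative simplification assuming one channel place and one delay-buffer place per future slot; none of these names appear in the claims:

```python
# Illustrative sketch of the claim 26-28 fallback chain.
# One output-channel place and one delay-buffer place per future slot is a
# simplifying assumption; real switches would have many of each.
HORIZON = 4

channel_free = [True] * HORIZON       # per-slot output channel availability
delay_buffer_free = [True] * HORIZON  # per-slot delay-buffer availability

def place_packet(slot):
    """Return how the future packet aimed at `slot` is handled."""
    if channel_free[slot]:
        channel_free[slot] = False
        return ("channel", slot)
    # Claim 26: try to buffer the packet for its intended slot.
    # Claim 27: a buffered packet may be kicked back to a next future slot.
    for s in range(slot, HORIZON):
        if delay_buffer_free[s]:
            delay_buffer_free[s] = False
            return ("buffered", s)
    # Claim 28: neither assignable nor bufferable, so the packet is discarded.
    return ("dropped", None)
```

Per claims 29 and 30, a discard of one packet (in particular the initial packet of a burst) would propagate to the related packets of that burst; that cascade is omitted from the sketch.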
- 29. The method of scheduling data packets for transmission on a plurality of output data channels as set forth in claim 28, further comprising the step of discarding future data packets related to said future data packet in a data burst.
- 30. The method of scheduling data packets for transmission on a plurality of output data channels as set forth in claim 28, wherein said future data packet determined to be discarded comprises an initial data packet of a future data burst.
- 31. The method of scheduling data packets for transmission on a plurality of output data channels as set forth in claim 21, wherein said output data channels are mapped to a non-volatile memory block.
- 32. The method of scheduling data packets for transmission on a plurality of output data channels as set forth in claim 21, wherein said output data channels are mapped to a Flash memory block.
- 33. A system for scheduling data packets for transmission on a plurality of output data channels, wherein said data packets are preceded by at least one Burst Header having control information relating to said data packets, comprising:
a mapped-memory structure for mapping said output data channels to a plurality of memory locations, each output data channel comprising a portion of said memory locations, wherein said portion is organized into a number of sections that correspond to a plurality of future time slots;
a plurality of arbiters, each corresponding to a future time slot, wherein each arbiter is operable to determine, with respect to its corresponding future time slot, which future packets can be sent on said output data channels based on said control information in a Burst Header, whereby packet indications relating to said future packets are stored in appropriate memory locations on a per-channel and per-slot basis to provide a slot-by-slot channel assignment map; and
means for forwarding, for each current time slot, a corresponding channel assignment map to a switching station for transmitting data packets on output data channels in a manner identified by said channel assignment map.
- 34. The system for scheduling data packets for transmission on a plurality of output data channels as set forth in claim 33, wherein said arbiters comprise Round Robin Arbiters.
- 35. The system for scheduling data packets for transmission on a plurality of output data channels as set forth in claim 33, wherein said arbiters comprise Binary Tree Arbiters.
- 36. The system for scheduling data packets for transmission on a plurality of output data channels as set forth in claim 33, wherein said arbiters comprise counters.
- 37. The system for scheduling data packets for transmission on a plurality of output data channels as set forth in claim 33, further comprising means for determining if a Burst Header includes indication of a future data burst that is longer than an offset parameter, wherein said offset parameter equals a delay between said Burst Header and said future data burst.
- 38. The system for scheduling data packets for transmission on a plurality of output data channels as set forth in claim 33, further comprising a plurality of delay buffers wherein each delay buffer is mapped to a portion of memory locations on a per-slot basis, said delay buffers operating to buffer future data packets with respect to a particular future time slot.
- 39. The system for scheduling data packets for transmission on a plurality of output data channels as set forth in claim 33, wherein said mapped-memory structure comprises a non-volatile memory block.
- 40. The system for scheduling data packets for transmission on a plurality of output data channels as set forth in claim 33, wherein said mapped-memory structure comprises a Flash memory block.
CROSS-REFERENCE TO RELATED APPLICATION(S)
[0001] This application discloses subject matter related to the subject matter disclosed in the following commonly owned co-pending patent application(s): (i) “Multiserver Scheduling System And Method For A Fast Switching Element,” application Ser. No. 10/059,641, filed Jan. 28, 2002, in the names of: Prasad Golla, Gerard Damm, John Blanton, Mei Yang, Dominique Verchere, Hakki Candan Cankaya, and Yijun Xiong; (ii) “Look-Up Table Arbitration System And Method For A Fast Switching Element,” application Ser. No. 10/075,176, filed Feb. 14, 2002, in the names of: Prasad Golla, Gerard Damm, John Blanton, and Dominique Verchere; (iii) “Binary Tree Arbitration System And Method,” application Ser. No. 10/109,423, filed Mar. 28, 2002, in the names of: Prasad Golla, Gerard Damm, Timucin Ozugur, John Blanton, and Dominique Verchere; and (iv) “Look-Ahead Contention Resolution Method For A Burst Switching Network,” Application No.: ______ (Atty. Docket No. 139069), filed ______, in the name(s) of: Farid Farahmand, John Blanton, and Dominique Verchere.