Claims
- 1. A method of scheduling protocol data units stored in a plurality of queues, where said plurality of queues are organized into subdivisions, each of said subdivisions comprising a subset of said plurality of queues storing protocol data units having a per hop behavior in common, said method comprising:
further subdividing at least one of said subsets of said queues into (i) a group of queues storing protocol data units having a common destination and (ii) at least one further queue storing protocol data units having a differing destination; scheduling said protocol data units from said group of queues to produce an initial scheduling output; and scheduling said protocol data units from said initial scheduling output along with said protocol data units from said at least one further queue.
- 2. The method of claim 1 wherein said protocol data unit conforms to an Open Systems Interconnection layer 2 protocol.
- 3. The method of claim 2 wherein said layer 2 protocol is Asynchronous Transfer Mode.
- 4. The method of claim 2 wherein said layer 2 protocol is Ethernet.
- 5. The method of claim 1 wherein said protocol data unit conforms to an Open Systems Interconnection layer 3 protocol.
- 6. The method of claim 5 wherein said layer 3 protocol is the Internet protocol.
- 7. The method of claim 1 wherein said protocol data units having said common destination share a label switched path in a multi-protocol label switching network.
- 8. The method of claim 7 wherein said label switched path is a traffic engineering label switched path.
- 9. The method of claim 1 wherein said protocol data units having said common destination share a virtual circuit in an asynchronous transfer mode network.
- 10. The method of claim 9 wherein said protocol data units having a per hop behavior in common share a virtual path in said asynchronous transfer mode network.
- 11. The method of claim 1 wherein said protocol data units having said common destination have an asynchronous transfer mode permanent virtual circuit in common.
- 12. The method of claim 1 wherein a given one of said plurality of queues is subject to active queue management.
- 13. The method of claim 1 wherein said subdivisions into which said plurality of queues are organized are based on service type.
- 14. The method of claim 1 wherein said subdivisions into which said plurality of queues are organized are based on transport type.
- 15. The method of claim 1 wherein said subdivisions into which said plurality of queues are organized are based on application type.
- 16. The method of claim 1 wherein a given queue provides per hop behavior traffic management.
- 17. The method of claim 12 wherein said active queue management comprises discarding protocol data units with a first marking as long as said given one of said plurality of queues stores more than a first threshold of protocol data units.
- 18. The method of claim 17 wherein said active queue management comprises discarding protocol data units with a second marking as long as said given one of said plurality of queues stores more than a second threshold of protocol data units.
- 19. The method of claim 18 wherein said active queue management comprises discarding protocol data units with a third marking as long as said given one of said plurality of queues stores more than a third threshold of protocol data units.
- 20. The method of claim 19 wherein said active queue management comprises discarding all protocol data units as long as said given one of said plurality of queues stores more than a fourth threshold of protocol data units.
- 21. The method of claim 20 wherein said first threshold, second threshold, third threshold and fourth threshold are defined in a drop profile.
- 22. The method of claim 21 wherein said drop profile is associated with a particular service type.
- 23. The method of claim 22 wherein said drop profile is a first drop profile and a second drop profile defines a further set of thresholds.
- 24. The method of claim 23 wherein said second drop profile is associated with a particular transport type.
- 25. An egress interface including a plurality of queues storing protocol data units, where said plurality of queues are organized into subdivisions, each of said subdivisions comprising a subset of said plurality of queues having a per hop behavior in common, said egress interface comprising:
a first scheduler adapted to produce an initial scheduling output including protocol data units having a common destination, where said protocol data units having said common destination are stored in a subdivision of said plurality of queues; and a second scheduler adapted to schedule said protocol data units from said initial scheduling output along with protocol data units from at least one further queue, where said protocol data units from said at least one further queue have a destination different from said common destination and said protocol data units from said at least one further queue share per hop behavior with said protocol data units from said initial scheduling output.
- 26. An egress interface including a plurality of queues storing protocol data units, where said plurality of queues are organized into subdivisions, each of said subdivisions comprising a subset of said plurality of queues having a per hop behavior in common, said egress interface comprising:
a first scheduler adapted to produce an initial scheduling output including protocol data units having a common destination, where said protocol data units having said common destination are stored in a subdivision of said plurality of queues; and a second scheduler adapted to schedule said protocol data units from said initial scheduling output along with protocol data units from at least one further queue, where said protocol data units from said at least one further queue have a destination different from said common destination and said protocol data units from said at least one further queue are predetermined to share a given partition of bandwidth available on a channel with said protocol data units from said initial scheduling output.
- 27. A computer readable medium containing computer-executable instructions which, when executed by a processor in an egress interface storing protocol data units in a plurality of queues, where said plurality of queues are organized into subdivisions, each of said subdivisions comprising a subset of said plurality of queues having a per hop behavior in common, cause the processor to:
subdivide at least one of said subsets of said queues into a group of queues storing protocol data units having a common destination and at least one further queue storing protocol data units having a differing destination; schedule said protocol data units from said group of queues to produce an initial scheduling output; and schedule said protocol data units from said initial scheduling output along with said protocol data units from said at least one further queue.
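The two-stage scheduling recited in claim 1 (and mirrored in claims 25-27) can be sketched in code. This is a minimal illustrative sketch, not the patented implementation: the round-robin discipline, queue contents, and all function names are assumptions chosen for clarity, and the claims themselves do not mandate any particular scheduling discipline at either stage.

```python
from collections import deque

# Hypothetical sketch of claim 1's two-stage scheduling: within one
# per-hop-behavior subset, queues sharing a common destination (e.g.
# one label switched path) are scheduled first, and that initial
# output is then scheduled alongside the remaining queue(s) holding
# PDUs for differing destinations. All names are illustrative.

def round_robin(queues):
    """Yield one PDU per non-empty queue, cycling until all are empty."""
    while any(queues):
        for q in queues:
            if q:
                yield q.popleft()

def schedule_phb_subset(group_queues, other_queues):
    # Stage 1: schedule the common-destination group of queues to
    # produce an initial scheduling output.
    initial_output = deque(round_robin(group_queues))
    # Stage 2: schedule that output along with the at least one
    # further queue storing PDUs having a differing destination.
    return list(round_robin([initial_output] + other_queues))

group = [deque(["a1", "a2"]), deque(["b1"])]   # common destination
other = [deque(["x1", "x2"])]                  # differing destination
print(schedule_phb_subset(group, other))       # ['a1', 'x1', 'b1', 'x2', 'a2']
```

Note how the common-destination traffic is aggregated into a single input of the second stage, so it competes for the channel as one unit against the differing-destination queue, which is the structural point of the hierarchy.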
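The threshold-based active queue management of claims 17-23 can likewise be sketched: a drop profile maps each marking to a queue-depth threshold, and a final threshold triggers discarding of all PDUs. The marking names and threshold values below are assumptions for illustration only; the claims leave them to the drop profile associated with a service or transport type.

```python
# Illustrative sketch of the active queue management in claims 17-23.
# A drop profile defines per-marking thresholds (first through third
# thresholds) plus a fourth threshold above which all PDUs are
# discarded. Marking names and values are hypothetical.

DROP_PROFILE = {"green": 90, "yellow": 60, "red": 30}  # first..third thresholds
DROP_ALL_THRESHOLD = 100                               # fourth threshold

def admit(queue_depth, marking, profile=DROP_PROFILE,
          drop_all=DROP_ALL_THRESHOLD):
    """Return True if a PDU with this marking may be enqueued at this depth."""
    if queue_depth > drop_all:
        return False                       # discard all PDUs (claim 20)
    threshold = profile.get(marking, drop_all)
    # Discard PDUs with a given marking as long as the queue stores
    # more than that marking's threshold of PDUs (claims 17-19).
    return queue_depth <= threshold
```

A second drop profile with a different set of thresholds, as in claims 23-24, would simply be another dictionary passed via the `profile` parameter, selected per service type or transport type.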
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] The present application claims the benefit of prior provisional application Ser. No. 60/465,265, filed Apr. 25, 2003.
Provisional Applications (1)
| Number | Date | Country |
| --- | --- | --- |
| 60465265 | Apr 2003 | US |