Claims
- 1. A buffering system in a communication switch, comprising:
a multiprocessor control block that includes:
a plurality of distributed processors that include ingress and egress queuing points corresponding to data units communicated within the communication switch, wherein when a congestion condition exists at selected queuing points within a distributed processor, a congestion indication is generated; and
a resource routing processor operably coupled to the plurality of distributed processors, wherein the resource routing processor controls routing functionality within the communication switch, wherein the resource routing processor receives congestion indications and preferentially selects uncongested routes for subsequent connections within the communication switch based on the congestion indications.
- 2. The buffering system of claim 1, wherein the resource routing processor performs resource allocation amongst connections supported by the switch.
- 3. The buffering system of claim 2, wherein the plurality of distributed processors includes:
a plurality of intermediate processors operably coupled to the resource routing processor, wherein each intermediate processor of the plurality of intermediate processors performs call processing for a corresponding portion of the connections supported by the switch, wherein call processing includes issuing resource allocation requests to the resource routing processor, wherein each intermediate processor of the plurality of intermediate processors performs functions associated with a signaling layer portion of a protocol stack; and
a link layer processor operably coupled to the plurality of intermediate processors, wherein the link layer processor is operable to couple to a switching fabric of the communication switch, wherein the link layer processor receives ingress data units from the switching fabric and selectively forwards each ingress data unit received to at least one of the plurality of intermediate processors, wherein the link layer processor receives egress data units from the plurality of intermediate processors and forwards each of the egress data units to the switching fabric.
- 4. The buffering system of claim 3, wherein the link layer processor includes a windowing function such that the link layer processor controls the rate of receipt of ingress data units.
- 5. The buffering system of claim 4, wherein each intermediate processor of the plurality of intermediate processors includes an ingress buffer that buffers ingress data units received from the link layer processor, wherein when a threshold level of the ingress buffer is exceeded, a threshold violation indication is generated that is provided to the link layer processor such that the link layer processor reduces flow of ingress data units to the ingress buffer whose threshold has been exceeded.
- 6. The buffering system of claim 5, wherein each intermediate processor of the plurality of intermediate processors receives egress data units from the resource routing processor, wherein each intermediate processor of the plurality of intermediate processors preferentially processes egress data units with respect to ingress data units such that congestion in the intermediate processor is isolated to the ingress buffer.
- 7. The buffering system of claim 6, wherein the link layer processor receives egress data units from the plurality of intermediate processors, wherein the link layer preferentially processes egress data units with respect to ingress data units.
- 8. The buffering system of claim 7, wherein the link layer processor includes a transmit queue that buffers egress data units prior to transmission, wherein the transmit queue is a selected queuing point such that when the transmit queue becomes congested, the link layer processor generates a congestion indication that is provided to the resource routing processor.
- 9. The buffering system of claim 8, wherein when capacity of the transmit queue is exceeded, the link layer processor selectively discards egress data units based on a predetermined discard scheme.
- 10. The buffering system of claim 8, wherein when capacity of the transmit queue is exceeded, at least one intermediate processor of the plurality of intermediate processors selectively discards egress data units based on a predetermined discard scheme.
- 11. The buffering system of claim 8, wherein when capacity of the transmit queue is exceeded, the resource routing processor performs at least one of: rejecting a call attempt and selecting an alternate route for a call.
- 12. The buffering system of claim 2 further comprises:
a plurality of line cards operably coupled to the multiprocessor control block, wherein the plurality of line cards include ingress and egress queuing points for line card data units, wherein when a congestion condition exists at a queuing point within a line card, a line card congestion indication is generated and provided to the resource routing processor such that the resource routing processor selects routes at least partially based on line card congestion indications received.
- 13. The buffering system of claim 12 further comprises:
a message processor operably coupled to the multiprocessor control block and the plurality of line cards, wherein the message processor supports messaging between the plurality of intermediate processors and the plurality of line cards.
- 14. The buffering system of claim 13, wherein the message processor includes an egress buffer that buffers egress line card data units received from the plurality of intermediate processors, wherein when a threshold level of the egress buffer is exceeded, a messaging threshold violation is generated.
- 15. The buffering system of claim 14, wherein the message processor includes a plurality of line card transmission queues, wherein each line card transmission queue of the plurality of line card transmission queues corresponds to one line card of the plurality of line cards, wherein each line card transmission queue buffers egress line card data units directed to a corresponding line card, wherein when a line card transmission queue becomes congested, a line card congestion indication is generated and provided to the resource routing processor.
- 16. The buffering system of claim 15, wherein each line card of the plurality of line cards includes a windowing function such that the line card controls the rate of receipt of egress line card data units from a corresponding line card transmission queue in the message processor.
- 17. A communication switch, comprising:
a routing control block that performs call processing operations within the communication switch; and
a plurality of line cards operably coupled to the routing control block, wherein each of the line cards includes at least one transmit queue, wherein when congestion is detected on a transmit queue, a congestion indication is provided to the routing control block such that calls are routed away from the congestion.
- 18. The communication switch of claim 17, wherein the routing control block includes a plurality of processors, wherein each processor of the plurality of processors is responsible for a portion of the protocol stack used in call processing operations, wherein each processor includes queuing points, wherein a first set of queuing points of the queuing points in the communication switch are rate controlled in a manner that ensures that congestion at the first set of queuing points does not occur, wherein when congestion occurs and is detected at queuing points included in a second set of queuing points, notification is provided to a routing processor of the plurality of processors, wherein the routing processor performs subsequent routing operations based on congestion notifications.
- 19. A method for call processing in a communication switch, comprising:
detecting congestion in a transmit queue corresponding to a line card of the communication switch; and
providing an indication of the congestion to a central control block that performs call processing and routing for a plurality of line cards included in the communication switch, wherein the central control block performs subsequent routing operations in a manner that avoids the congestion corresponding to the line card.
- 20. The method of claim 19, wherein the central control block includes a resource routing processor, a plurality of intermediate processors, and a link layer processor, wherein the resource routing processor performs the subsequent routing operations.
- 21. The method of claim 19, wherein performing subsequent routing operations includes maintaining status of a plurality of transmit queues corresponding to a plurality of line cards in the switch, wherein the status is used to determine a non-congested compatible transmit queue for the subsequent routing operations.
- 22. The method of claim 21 further comprises prioritizing data flow in the switch such that congestion is concentrated at the plurality of transmit queues.
- 23. The method of claim 19, wherein the congestion in the transmit queue is a result of a buildup of messages corresponding to programming commands that are directed towards the line card.
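The sketches below illustrate, in simplified form, the mechanisms recited in the claims above. First, the congestion-aware route selection of claims 1, 17-19, and 21 can be modeled as a routing processor that tracks which queuing points have reported congestion and prefers candidate routes that avoid them. This is a minimal sketch; all class, method, and route names are hypothetical, not taken from the specification.

```python
"""Sketch of congestion-aware route selection (claims 1, 17-19, 21)."""


class ResourceRoutingProcessor:
    def __init__(self, routes):
        # routes: mapping of route id -> the queuing point (e.g. a line card) it traverses
        self.routes = routes
        self.congested_points = set()  # queuing points currently reporting congestion

    def congestion_indication(self, queuing_point, congested):
        # Called by a distributed processor or line card when a selected
        # queuing point crosses (or clears) its congestion threshold.
        if congested:
            self.congested_points.add(queuing_point)
        else:
            self.congested_points.discard(queuing_point)

    def select_route(self, candidate_routes):
        # Preferentially select an uncongested route for a new connection;
        # fall back to a congested route only if no other choice exists.
        for route in candidate_routes:
            if self.routes[route] not in self.congested_points:
                return route
        return candidate_routes[0] if candidate_routes else None


if __name__ == "__main__":
    rrp = ResourceRoutingProcessor({"A": "linecard-1", "B": "linecard-2"})
    rrp.congestion_indication("linecard-1", congested=True)
    print(rrp.select_route(["A", "B"]))  # -> "B": the call is routed away from the congestion
```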
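The ingress-side backpressure of claims 4 and 5 pairs a threshold check on an intermediate processor's ingress buffer with a windowing function in the link layer processor. The sketch below assumes a simple window-halving policy; the names and that policy are illustrative only.

```python
"""Sketch of ingress threshold violation and link-layer windowing (claims 4-5)."""

from collections import deque


class IntermediateProcessor:
    def __init__(self, threshold, link_layer):
        self.ingress_buffer = deque()
        self.threshold = threshold
        self.link_layer = link_layer

    def receive_ingress(self, data_unit):
        self.ingress_buffer.append(data_unit)
        # Threshold violation indication back to the link layer processor.
        if len(self.ingress_buffer) > self.threshold:
            self.link_layer.threshold_violation(self)


class LinkLayerProcessor:
    def __init__(self, window=8):
        self.window = window  # max ingress data units forwarded per cycle

    def threshold_violation(self, intermediate):
        # Windowing function: reduce the flow of ingress data units toward the
        # buffer whose threshold was exceeded (takes effect on the next cycle).
        self.window = max(1, self.window // 2)

    def forward_ingress(self, data_units, intermediate):
        for du in data_units[: self.window]:
            intermediate.receive_ingress(du)


if __name__ == "__main__":
    llp = LinkLayerProcessor(window=8)
    ip = IntermediateProcessor(threshold=4, link_layer=llp)
    llp.forward_ingress(list(range(8)), ip)  # overruns the ingress threshold
    print(llp.window)  # -> 1: the window was repeatedly halved after violations
```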
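Claims 6, 7, and 22 recite preferential processing of egress data units over ingress data units so that any backlog is isolated at a known queuing point. A minimal way to picture this is a two-queue service discipline in which ingress is served only when egress is empty; the names below are hypothetical.

```python
"""Sketch of egress-over-ingress prioritization (claims 6, 7, 22)."""

from collections import deque


class PriorityQueuingPoint:
    def __init__(self):
        self.egress = deque()
        self.ingress = deque()

    def enqueue_egress(self, du):
        self.egress.append(du)

    def enqueue_ingress(self, du):
        self.ingress.append(du)

    def service_one(self):
        # Egress data units are processed preferentially with respect to
        # ingress data units; ingress is served only when egress is empty,
        # so congestion builds on the ingress side.
        if self.egress:
            return ("egress", self.egress.popleft())
        if self.ingress:
            return ("ingress", self.ingress.popleft())
        return None


if __name__ == "__main__":
    qp = PriorityQueuingPoint()
    qp.enqueue_ingress("i1")
    qp.enqueue_egress("e1")
    qp.enqueue_egress("e2")
    print([qp.service_one() for _ in range(3)])  # egress units come out first
```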
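Claims 8 through 11 describe a transmit queue that reports congestion when it crosses a threshold and applies a predetermined discard scheme once capacity is exceeded, with the routing processor free to reject the call or pick an alternate route in response. The sketch below uses simple tail drop as the discard scheme; the names, thresholds, and tail-drop choice are assumptions for illustration.

```python
"""Sketch of transmit-queue congestion indication and discard (claims 8-11)."""

from collections import deque


class TransmitQueue:
    def __init__(self, congestion_threshold, capacity, routing_processor):
        self.queue = deque()
        self.congestion_threshold = congestion_threshold
        self.capacity = capacity
        self.rrp = routing_processor

    def enqueue(self, egress_data_unit):
        if len(self.queue) >= self.capacity:
            # Capacity exceeded: discard per the predetermined scheme (tail drop here).
            return False
        self.queue.append(egress_data_unit)
        if len(self.queue) > self.congestion_threshold:
            # Congestion indication to the resource routing processor, which may
            # reject a call attempt or select an alternate route (claim 11).
            self.rrp.congestion_indication(self, congested=True)
        return True


class StubRoutingProcessor:
    def congestion_indication(self, queue, congested):
        print("congestion reported:", congested)


if __name__ == "__main__":
    tq = TransmitQueue(congestion_threshold=2, capacity=4,
                       routing_processor=StubRoutingProcessor())
    results = [tq.enqueue(f"du{i}") for i in range(6)]
    print(results)  # the last two enqueues are discarded once capacity is reached
```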
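Finally, claims 13-16 and 23 describe a message processor that keeps one transmission queue per line card, reports a line card congestion indication when a per-card queue backs up (for example, with programming commands), and lets the line card itself pace its receipt through a windowing function. The sketch below assumes a fixed drain window; all identifiers are hypothetical.

```python
"""Sketch of per-line-card transmission queues in the message processor (claims 13-16, 23)."""

from collections import deque


class MessageProcessor:
    def __init__(self, line_card_ids, congestion_threshold, routing_processor):
        # One line card transmission queue per line card (claim 15).
        self.tx_queues = {cid: deque() for cid in line_card_ids}
        self.congestion_threshold = congestion_threshold
        self.rrp = routing_processor

    def send_to_line_card(self, card_id, egress_line_card_data_unit):
        q = self.tx_queues[card_id]
        q.append(egress_line_card_data_unit)
        if len(q) > self.congestion_threshold:
            # Line card congestion indication toward the resource routing processor.
            self.rrp.congestion_indication(card_id, congested=True)

    def drain(self, card_id, window):
        # Windowing function on the line card side (claim 16): the line card
        # pulls at most `window` data units from its queue per cycle.
        q = self.tx_queues[card_id]
        return [q.popleft() for _ in range(min(window, len(q)))]


class StubRoutingProcessor:
    def congestion_indication(self, card_id, congested):
        print(f"line card {card_id} congested: {congested}")


if __name__ == "__main__":
    mp = MessageProcessor(["lc1", "lc2"], congestion_threshold=3,
                          routing_processor=StubRoutingProcessor())
    for i in range(5):
        mp.send_to_line_card("lc1", f"cmd{i}")  # e.g. programming commands (claim 23)
    print(mp.drain("lc1", window=2))            # the line card accepts two per window
```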
RELATED APPLICATIONS
[0001] This application claims priority to provisional application No. 60/224,441, filed Aug. 10, 2000, having the same title as the present application. The present application is related to a co-pending application entitled “MULTIPROCESSOR CONTROL BLOCK FOR USE IN A COMMUNICATION SWITCH AND METHOD THEREFORE”, which has an attorney docket number of 1400.4100220 and which was filed on the same date as the present application.
Provisional Applications (1)
| Number | Date | Country |
| --- | --- | --- |
| 60224441 | Aug 2000 | US |