Claims
- 1. A network processor for processing information elements, wherein each information element is associated with a flow and comprises at least one information element segment, the network processor comprising:
a policy controller for storing an information element into at least one information segment storage unit within a memory, and determining whether an information element segment conforms to a predetermined quality of service (“QoS”); a traffic processor for selecting the information element segment for forwarding based on at least one QoS parameter; and a forwarding processor for forwarding the selected information element segment to an egress port.
- 2. The network processor of claim 1, wherein the QoS relates to a peak cell rate or a committed rate.
- 3. The network processor of claim 1, wherein the policy controller determines a storage occupancy level within the memory of a class to which the flow associated with the information element belongs.
- 4. The network processor of claim 3, wherein the class includes only one flow.
- 5. The network processor of claim 1, wherein the policy controller further determines whether to discard the information element based upon its QoS conformance.
- 6. The network processor of claim 3, wherein the policy controller further determines whether to discard the information element based upon its storage occupancy level.
- 7. The network processor of claim 6, wherein the policy controller employs a weighted random early discard algorithm to determine whether to discard the information element based upon its storage occupancy level.
- 8. The network processor of claim 5, wherein the policy controller determines a storage occupancy level within the memory of a class to which the flow associated with the information element belongs.
- 9. The network processor of claim 8, wherein the class includes only one flow.
- 10. The network processor of claim 8, wherein the policy controller further determines whether to discard the information element based upon its storage occupancy level.
- 11. The network processor of claim 1, wherein the policy controller marks the information element based upon its QoS conformance.
- 12. The network processor of claim 11, wherein the QoS relates to a peak cell rate and a committed rate, the policy controller for marking the information element a first type if it does not conform to the peak cell rate, a second type if it does not conform to the committed rate, and a third type if it conforms to both the peak cell rate and the committed rate.
- 13. The network processor of claim 3, wherein the policy controller marks the information element based upon its storage occupancy level.
- 14. The network processor of claim 13, wherein the information element is categorized into at least high, medium and low ranges of storage occupancy, the policy controller for marking the information element a first type if its storage occupancy level is in the high range, a second type if its storage occupancy level is in the medium range, and a third type if its storage occupancy level is in the low range.
- 15. The network processor of claim 1, the traffic processor including:
at least one shaper, each shaper associated with an egress port and at least one flow, wherein the shaper is governed by at least one QoS parameter; at least one group, wherein each group includes at least one shaper; a group arbiter for arbitrating among the at least one group to select a group; a shaper arbiter for arbitrating among the at least one shaper within the selected group to select a shaper; and a scheduler for scheduling for forwarding an information element segment associated with the selected shaper.
- 16. The network processor of claim 15, wherein the at least one QoS parameter is priority.
- 17. The network processor of claim 15, wherein the at least one QoS parameter includes priority and rate.
- 18. The network processor of claim 15, wherein the scheduling is based upon service category.
- 19. The network processor of claim 18, wherein the service categories include variable bit rate and constant bit rate.
- 20. The network processor of claim 15, wherein an information element segment is not scheduled for forwarding from the corresponding port if the port is congested.
- 21. The network processor of claim 20, wherein the port is a physical egress port, the port being congested if the number of information element segments already scheduled for forwarding from the port exceeds an occupancy threshold for the port.
- 22. The network processor of claim 20, wherein the port is a logical egress port, the logical port assignment to a flow being based upon the corresponding physical egress port and a priority.
- 23. The network processor of claim 15, wherein the shaper arbitration employs one of the following algorithms: strict priority; round robin; weighted round robin; weighted fair queuing; or an arbitration algorithm based on one or more of the foregoing.
- 24. The network processor of claim 15, wherein each port is associated with a subgroup of shapers within a group, and the shapers within the subgroup are arbitrated together during shaper arbitration.
- 25. The network processor of claim 15, wherein a shaper can service flows of different service categories during an arbitration cycle.
- 26. The network processor of claim 15, wherein an information element segment is not scheduled for forwarding if, for the corresponding flow, a credit value associated with the number of information element segments scheduled for forwarding from the corresponding port does not satisfy a burst tolerance for the flow.
- 27. The network processor of claim 26, wherein, if the burst tolerance is not satisfied, the credit value is adjusted to make it more likely than otherwise that the burst tolerance for the flow will be satisfied during a next scheduling cycle.
- 28. The network processor of claim 15, wherein a credit value is used to ensure that scheduling of an information element segment for forwarding meets a sustained cell rate (“SCR”) constraint, and, if the information element segment is not scheduled for forwarding during a current scheduling cycle, the information element segment is awarded a credit to make it more likely than otherwise that the SCR constraint will be satisfied during a next scheduling cycle.
- 29. The network processor of claim 28, wherein, if the information element segment is scheduled for forwarding during a current scheduling cycle, the information element segment is discredited for purposes of the next scheduling cycle.
- 30. The network processor of claim 17, further comprising a shaper counter associated with each shaper, wherein a shaper joins shaper arbitration if it is valid, a shaper being valid based in part on its shaper counter elapsing.
- 31. The network processor of claim 30, wherein the validity of the shaper is also based upon at least one flow associated with the shaper being in a first service category (even if no such flow is active), or all flows associated with the shaper being in a second service category and at least one such flow being active.
- 32. The network processor of claim 31, wherein the first service category is variable bit rate, and the second service category is constant bit rate.
- 33. The network processor of claim 31, wherein the valid shaper having a highest priority will win the arbitration.
- 34. The network processor of claim 33, wherein, if no flow in the first service category is active, then a credit is awarded to all such flows associated with the valid shaper, so that when one of such flows subsequently becomes active and valid, it will be more favored than otherwise to be scheduled for forwarding.
- 35. The network processor of claim 1, further comprising a packet parser, coupled to the policy controller, to determine the flow to which an information element belongs.
- 36. The network processor of claim 1, wherein each information segment storage unit is of a fixed size, wherein the fixed size is configurable.
- 37. The network processor of claim 1, wherein the information element segment is the entire information element.
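
The marking scheme recited in claims 11 and 12 above can be pictured as a two-rate conformance check. The sketch below is only an illustration under assumed token-bucket semantics; the bucket parameters, names, and the use of token buckets at all are assumptions, not details drawn from the claims.

```python
from dataclasses import dataclass

@dataclass
class TokenBucket:
    rate: float      # tokens (e.g., bytes) replenished per second
    burst: float     # bucket depth
    tokens: float    # current fill level
    last: float = 0.0

    def conforms(self, now: float, size: float) -> bool:
        # Refill since the last arrival, capped at the burst depth,
        # then try to draw `size` tokens for this segment.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

def mark(size: float, now: float, peak: TokenBucket, committed: TokenBucket) -> str:
    """Mark per claims 11-12: first type fails the peak rate, second fails the committed rate."""
    if not peak.conforms(now, size):
        return "first type"       # does not conform to the peak cell rate
    if not committed.conforms(now, size):
        return "second type"      # does not conform to the committed rate
    return "third type"           # conforms to both

pcr = TokenBucket(rate=1_000_000.0, burst=1500.0, tokens=1500.0)
cir = TokenBucket(rate=500_000.0, burst=1500.0, tokens=1500.0)
for t in (0.0, 0.001, 0.002, 0.003):
    print(t, mark(1500.0, t, pcr, cir))
```
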
- 38. A method for processing information elements in a network processor, wherein each information element is associated with a flow and comprises at least one information element segment, the method comprising:
storing an information element into at least one information segment storage unit within a memory; determining whether an information element segment conforms to a predetermined quality of service (“QoS”); selecting the information element segment for forwarding based on at least one QoS parameter; and forwarding the selected information element segment to an egress port.
- 39. The method of claim 38, wherein the QoS relates to a peak cell rate or a committed rate.
- 40. The method of claim 38, further comprising determining a storage occupancy level within the memory of a class to which the flow associated with the information element belongs.
- 41. The method of claim 40, wherein the class includes only one flow.
- 42. The method of claim 38, further comprising determining whether to discard the information element based upon its QoS conformance.
- 43. The method of claim 40, further comprising determining whether to discard the information element based upon its storage occupancy level.
- 44. The method of claim 43, further comprising employing a weighted random early discard algorithm to determine whether to discard the information element based upon its storage occupancy level.
- 45. The method of claim 42, further comprising determining a storage occupancy level within the memory of a class to which the flow associated with the information element belongs.
- 46. The method of claim 45, wherein the class includes only one flow.
- 47. The method of claim 45, further comprising determining whether to discard the information element based upon its storage occupancy level.
- 48. The method of claim 38, further comprising marking the information element based upon its QoS conformance.
- 49. The method of claim 48, wherein the QoS relates to a peak cell rate and a committed rate, the method further comprising marking the information element a first type if it does not conform to the peak cell rate, a second type if it does not conform to the committed rate, and a third type if it conforms to both the peak cell rate and the committed rate.
- 50. The method of claim 40, further comprising marking the information element based upon its storage occupancy level.
- 51. The method of claim 50, wherein the information element is categorized into at least high, medium and low ranges of storage occupancy, the method further comprising marking the information element a first type if its storage occupancy level is in the high range, a second type if its storage occupancy level is in the medium range, and a third type if its storage occupancy level is in the low range.
- 52. The method of claim 38, the network processor including at least one shaper, each shaper associated with an egress port and at least one flow, and at least one group, each group including at least one shaper, wherein the shaper is governed by at least one QoS parameter, the method further comprising:
arbitrating among the at least one group to select a group; arbitrating among the at least one shaper within the selected group to select a shaper; and scheduling for forwarding an information element segment associated with the selected shaper.
- 53. The method of claim 52, wherein the at least one QoS parameter is priority.
- 54. The method of claim 52, wherein the at least one QoS parameter includes priority and rate.
- 55. The method of claim 52, wherein the scheduling is based upon service category.
- 56. The method of claim 55, wherein the service categories include variable bit rate and constant bit rate.
- 57. The method of claim 52, wherein an information element segment is not scheduled for forwarding from the corresponding port if the port is congested.
- 58. The method of claim 57, wherein the port is a physical egress port, the port being congested if the number of information element segments already scheduled for forwarding from the port exceeds an occupancy threshold for the port.
- 59. The method of claim 57, wherein the port is a logical egress port, the logical port assignment to a flow being based upon the corresponding physical egress port and a priority.
- 60. The method of claim 52, wherein the shaper arbitration employs one of the following algorithms: strict priority; round robin; weighted round robin; weighted fair queuing; or an arbitration algorithm based on one or more of the foregoing.
- 61. The method of claim 52, wherein each port is associated with a subgroup of shapers within a group, the method further comprising arbitrating the shapers within the subgroup together during shaper arbitration.
- 62. The method of claim 52, further comprising servicing flows of different service categories during an arbitration cycle.
- 63. The method of claim 52, wherein an information element segment is not scheduled for forwarding if, for the corresponding flow, a credit value associated with the number of information element segments scheduled for forwarding from the corresponding port does not satisfy a burst tolerance for the flow.
- 64. The method of claim 63, wherein, if the burst tolerance is not satisfied, the credit value is adjusted to make it more likely than otherwise that the burst tolerance for the flow will be satisfied during a next scheduling cycle.
- 65. The method of claim 52, wherein a credit value is used to ensure that scheduling of an information element segment for forwarding meets a sustained cell rate (“SCR”) constraint, and, if the information element segment is not scheduled for forwarding during a current scheduling cycle, the information element segment is awarded a credit to make it more likely than otherwise that the SCR constraint will be satisfied during a next scheduling cycle.
- 66. The method of claim 65, wherein, if the information element segment is scheduled for forwarding during a current scheduling cycle, the information element segment is discredited for purposes of the next scheduling cycle.
- 67. The method of claim 54, the network processor further comprising a shaper counter associated with each shaper, the method further comprising a shaper joining shaper arbitration if it is valid, a shaper being valid based in part on its shaper counter elapsing.
- 68. The method of claim 67, wherein the validity of the shaper is also based upon at least one flow associated with the shaper being in a first service category (even if no such flow is active), or all flows associated with the shaper being in a second service category and at least one such flow being active.
- 69. The method of claim 68, wherein the first service category is variable bit rate, and the second service category is constant bit rate.
- 70. The method of claim 68, further comprising a valid shaper winning the arbitration if it has a highest priority.
- 71. The method of claim 70, wherein, if no flow in the first service category is active, then a credit is awarded to all such flows associated with the valid shaper, so that when one of such flows subsequently becomes active and valid, it will be more favored than otherwise to be scheduled for forwarding.
- 72. The method of claim 38, further comprising determining the flow to which an information element belongs.
- 73. The method of claim 38, wherein each information segment storage unit is of a fixed size, wherein the fixed size is configurable.
- 74. The method of claim 38, wherein the information element segment is the entire information element.
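
Claims 7 and 44 invoke a weighted random early discard decision driven by the class's storage occupancy level. The following is a minimal, generic WRED sketch; the thresholds, the averaging weight, and the linear drop profile are textbook assumptions rather than details taken from the claims.

```python
import random

def update_average(avg: float, instantaneous: float, weight: float = 0.002) -> float:
    # Exponentially weighted moving average of a class's storage occupancy.
    return (1.0 - weight) * avg + weight * instantaneous

def wred_discard(avg_occupancy: float, min_th: float, max_th: float, max_p: float) -> bool:
    """Return True if the arriving information element should be discarded."""
    if avg_occupancy < min_th:
        return False              # lightly occupied class: always admit
    if avg_occupancy >= max_th:
        return True               # heavily occupied class: always discard
    # Between the thresholds, discard with probability rising linearly to max_p.
    p = max_p * (avg_occupancy - min_th) / (max_th - min_th)
    return random.random() < p

# Example: a class whose average occupancy sits between the thresholds.
avg = update_average(5990.0, 11000.0)
print(wred_discard(avg, min_th=4000.0, max_th=8000.0, max_p=0.1))
```
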
- 75. A traffic processor for scheduling information elements for forwarding, wherein each information element is associated with a flow and comprises at least one information element segment, the traffic processor comprising:
at least one shaper, each shaper associated with an egress port and at least one flow of information elements, wherein each shaper is governed by at least one quality of service (“QoS”) parameter; at least one group, wherein each group includes at least one shaper; a group arbiter for arbitrating among the at least one group to select a group; a shaper arbiter for arbitrating among the at least one shaper within the selected group to select a shaper; and a scheduler for scheduling for forwarding an information element segment associated with the selected shaper.
- 76. The traffic processor of claim 75, wherein the at least one QoS parameter is priority.
- 77. The traffic processor of claim 75, wherein the at least one QoS parameter includes priority and rate.
- 78. The traffic processor of claim 75, wherein the scheduling is based upon service category.
- 79. The traffic processor of claim 78, wherein the service categories include variable bit rate and constant bit rate.
- 80. The traffic processor of claim 75, wherein an information element segment is not scheduled for forwarding from the corresponding port if the port is congested.
- 81. The traffic processor of claim 80, wherein the port is a physical egress port, the port being congested if the number of information element segments already scheduled for forwarding from the port exceeds an occupancy threshold for the port.
- 82. The traffic processor of claim 80, wherein the port is a logical egress port, the logical port assignment to a flow being based upon the corresponding physical egress port and a priority.
- 83. The traffic processor of claim 75, wherein the shaper arbitration employs one of the following algorithms: strict priority; round robin; weighted round robin; weighted fair queuing; or an arbitration algorithm based on one or more of the foregoing.
- 84. The traffic processor of claim 75, wherein each port is associated with a subgroup of shapers within a group, and the shapers within the subgroup are arbitrated together during shaper arbitration.
- 85. The traffic processor of claim 75, wherein a shaper can service flows of different service categories during an arbitration cycle.
- 86. The traffic processor of claim 75, wherein an information element segment is not scheduled for forwarding if, for the corresponding flow, a credit value associated with the number of information element segments scheduled for forwarding from the corresponding port does not satisfy a burst tolerance for the flow.
- 87. The traffic processor of claim 86, wherein, if the burst tolerance is not satisfied, the credit value is adjusted to make it more likely than otherwise that the burst tolerance for the flow will be satisfied during a next scheduling cycle.
- 88. The traffic processor of claim 75, wherein a credit value is used to ensure that scheduling of an information element segment for forwarding meets a sustained cell rate (“SCR”) constraint, and, if the information element segment is not scheduled for forwarding during a current scheduling cycle, the information element segment is awarded a credit to make it more likely than otherwise that the SCR constraint will be satisfied during a next scheduling cycle.
- 89. The traffic processor of claim 88, wherein, if the information element segment is scheduled for forwarding during a current scheduling cycle, the information element segment is discredited for purposes of the next scheduling cycle.
- 90. The traffic processor of claim 77, further comprising a shaper counter associated with each shaper, wherein a shaper joins shaper arbitration if it is valid, a shaper being valid based in part on its shaper counter elapsing.
- 91. The traffic processor of claim 90, wherein the validity of the shaper is also based upon at least one flow associated with the shaper being in a first service category (even if no such flow is active), or all flows associated with the shaper being in a second service category and at least one such flow being active.
- 92. The traffic processor of claim 91, wherein the first service category is variable bit rate, and the second service category is constant bit rate.
- 93. The traffic processor of claim 91, wherein the valid shaper having a highest priority will win the arbitration.
- 94. The traffic processor of claim 93, wherein, if no flow in the first service category is active, then a credit is awarded to all such flows associated with the valid shaper, so that when one of such flows subsequently becomes active and valid, it will be more favored than otherwise to be scheduled for forwarding.
- 95. The traffic processor of claim 90, further comprising a group arbitration counter associated with each group, wherein a group joins group arbitration if it is valid, a group being valid based in part on one of the shapers within the group being valid and the group arbitration counter having elapsed.
- 96. The traffic processor of claim 95, wherein the group arbitration counter includes:
a group fraction counter; and a group counter for counting in response to at least one enable digit of the fraction counter being set to a first count enable value, wherein at least one shaper counter counts in response to at least one enable digit of the group counter being set to a second count enable value.
- 97. The traffic processor of claim 96, wherein the fraction counter is an incrementing counter, the at least one enable digit of the fraction counter is a most significant bit, and the first count enable value is a binary one.
- 98. The traffic processor of claim 97, wherein the group counter and the at least one shaper counter are decrementing counters, the at least one enable digit of the group counter are all digits of the group counter, and the second count enable value is binary zero.
- 99. The traffic processor of claim 98, wherein initial values, increment values and decrement values of the counters are set so that when the at least one shaper counter elapses, a peak cell rate period has elapsed.
- 100. The traffic processor of claim 75, wherein the information element segment is the entire information element.
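
Claims 75 through 99 recite a group arbiter, a shaper arbiter, and a scheduler operating in sequence. The sketch below shows one way such a hierarchy could be traversed, assuming first-valid group selection and strict-priority shaper selection; the data structures and policies are illustrative assumptions only.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Shaper:
    priority: int             # lower number = higher priority (assumption)
    counter: int              # the shaper is rate-eligible when this has elapsed (== 0)
    flows: List[List[str]]    # each flow is a FIFO of information element segments

    def valid(self) -> bool:
        return self.counter == 0 and any(self.flows)

@dataclass
class Group:
    shapers: List[Shaper]

    def valid(self) -> bool:
        return any(s.valid() for s in self.shapers)

def select_segment(groups: List[Group]) -> Optional[str]:
    # Group arbitration: take the first valid group (round-robin rotation elided).
    group = next((g for g in groups if g.valid()), None)
    if group is None:
        return None
    # Shaper arbitration: strict priority among the valid shapers of that group.
    shaper = min((s for s in group.shapers if s.valid()), key=lambda s: s.priority)
    # Scheduling: dequeue one segment from the shaper's first non-empty flow.
    queue = next(q for q in shaper.flows if q)
    return queue.pop(0)

groups = [Group(shapers=[Shaper(priority=1, counter=0, flows=[["seg-a"], []]),
                         Shaper(priority=0, counter=0, flows=[["seg-b"]])])]
print(select_segment(groups))   # seg-b wins on priority
```
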
- 101. A method for scheduling information elements for forwarding, wherein each information element is associated with a flow and comprises at least one information element segment, the method comprising:
arbitrating among the at least one group of shapers to select a group, wherein each shaper is associated with an egress port and at least one flow of information elements, and each shaper is governed by at least one quality of service (“QoS”) parameter; arbitrating among the at least one shaper within the selected group to select a shaper; and scheduling for forwarding an information element segment associated with the selected shaper.
- 102. The method of claim 101, wherein the at least one QoS parameter is priority.
- 103. The method of claim 101, wherein the at least one QoS parameter includes priority and rate.
- 104. The method of claim 101, wherein the scheduling is based upon service category.
- 105. The method of claim 104, wherein the service categories include variable bit rate and constant bit rate.
- 106. The method of claim 101, wherein an information element segment is not scheduled for forwarding from the corresponding port if the port is congested.
- 107. The method of claim 106, wherein the port is a physical egress port, the port being congested if the number of information element segments already scheduled for forwarding from the port exceeds an occupancy threshold for the port.
- 108. The method of claim 106, wherein the port is a logical egress port, the logical port assignment to a flow being based upon the corresponding physical egress port and a priority.
- 109. The method of claim 101, wherein the shaper arbitration employs one of the following algorithms: strict priority; round robin; weighted round robin; weighted fair queuing; or an arbitration algorithm based on one or more of the foregoing.
- 110. The method of claim 101, wherein each port is associated with a subgroup of shapers within a group, the method further comprising arbitrating the shapers within the subgroup together during shaper arbitration.
- 111. The method of claim 101, further comprising servicing flows of different service categories during an arbitration cycle.
- 112. The method of claim 101, wherein an information element segment is not scheduled for forwarding if, for the corresponding flow, a credit value associated with the number of information element segments scheduled for forwarding from the corresponding port does not satisfy a burst tolerance for the flow.
- 113. The method of claim 112, wherein, if the burst tolerance is not satisfied, the credit value is adjusted to make it more likely than otherwise that the burst tolerance for the flow will be satisfied during a next scheduling cycle.
- 114. The method of claim 101, wherein a credit value is used to ensure that scheduling of an information element segment for forwarding meets a sustained cell rate (“SCR”) constraint, and, if the information element segment is not scheduled for forwarding during a current scheduling cycle, the information element segment is awarded a credit to make it more likely than otherwise that the SCR constraint will be satisfied during a next scheduling cycle.
- 115. The method of claim 114, wherein, if the information element segment is scheduled for forwarding during a current scheduling cycle, the information element segment is discredited for purposes of the next scheduling cycle.
- 116. The method of claim 103, further comprising a shaper joining shaper arbitration if it is valid, a shaper being valid based in part on an associated shaper counter elapsing.
- 117. The method of claim 116, wherein the validity of the shaper is also based upon at least one flow associated with the shaper being in a first service category (even if no such flow is active), or all flows associated with the shaper being in a second service category and at least one such flow being active.
- 118. The method of claim 117, wherein the first service category is variable bit rate, and the second service category is constant bit rate.
- 119. The method of claim 117, further comprising a valid shaper winning the arbitration if it has a highest priority.
- 120. The method of claim 119, wherein, if no flow in the first service category is active, then a credit is awarded to all such flows associated with the valid shaper, so that when one of such flows subsequently becomes active and valid, it will be more favored than otherwise to be scheduled for forwarding.
- 121. The method of claim 116, further comprising a group joining group arbitration if it is valid, a group being valid based in part on one of the shapers within the group being valid and an associated group arbitration counter having elapsed.
- 122. The method of claim 121, further comprising
a group counter counting in response to at least one enable digit of a group fraction counter being set to a first count enable value; and at least one shaper counter counting in response to at least one enable digit of the group counter being set to a second count enable value.
- 123. The method of claim 122, wherein the fraction counter is an incrementing counter, the at least one enable digit of the fraction counter is a most significant bit, and the first count enable value is a binary one.
- 124. The method of claim 123, wherein the group counter and the at least one shaper counter are decrementing counters, the at least one enable digit of the group counter are all digits of the group counter, and the second count enable value is binary zero.
- 125. The method of claim 124, wherein initial values, increment values and decrement values of the counters are set so that when the at least one shaper counter elapses, a peak cell rate period has elapsed.
- 126. The method of claim 101, wherein the information element segment is the entire information element.
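
Claims 63-66, 86-89, and 112-115 describe credit values that bias scheduling toward flows that were skipped, so that burst tolerance and sustained cell rate (SCR) constraints are met over time. A minimal sketch of that bookkeeping follows; the integer credit arithmetic and the eligibility threshold are assumptions.

```python
def next_credit(credit: int, scheduled: bool, award: int = 1, cost: int = 1) -> int:
    """Per-cycle credit update: a skipped flow gains credit, a served flow gives it back."""
    return credit - cost if scheduled else credit + award

def scr_eligible(credit: int, threshold: int = 0) -> bool:
    # A flow may use an SCR-bound opportunity once its credit reaches the threshold,
    # so a previously skipped flow is more favored in the next scheduling cycle.
    return credit >= threshold

credit = 0
for cycle, served in enumerate([False, False, True, False]):
    credit = next_credit(credit, served)
    print(cycle, credit, scr_eligible(credit))
```
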
- 127. A hierarchical counter comprising:
a first subcounter; a second subcounter for counting in response to at least one enable digit of the first subcounter being set to a first count enable value; and at least one third subcounter for counting in response to at least one enable digit of the second subcounter being set to a second count enable value.
- 128. The counter of claim 127, wherein the at least one enable digit is a most significant bit.
- 129. The counter of claim 128, wherein the first count enable value is a binary one.
- 130. The counter of claim 127, wherein the at least one enable digit of the second subcounter are all digits of the second subcounter.
- 131. The counter of claim 130, wherein the second count enable value is binary zero.
- 132. The counter of claim 127, wherein the first subcounter is an incrementing counter, and the second and third subcounters are decrementing counters.
- 133. The counter of claim 129, wherein the at least one enable digit of the second subcounter are all digits of the second subcounter, the second count enable value is binary zero, the first subcounter is an incrementing counter, and the second and third subcounters are decrementing counters.
- 134. A method for hierarchical counting comprising:
a second subcounter counting in response to at least one enable digit of a first subcounter being set to a first count enable value; and at least one third subcounter counting in response to at least one enable digit of the second subcounter being set to a second count enable value.
- 135. The method of claim 134, wherein the at least one enable digit is a most significant bit.
- 136. The method of claim 135, wherein the first count enable value is a binary one.
- 137. The method of claim 134, wherein the at least one enable digit of the second subcounter are all digits of the second subcounter.
- 138. The method of claim 137, wherein the second count enable value is binary zero.
- 139. The method of claim 134, wherein the first subcounter is an incrementing counter, and the second and third subcounters are decrementing counters.
- 140. The method of claim 136, wherein the at least one enable digit of the second subcounter are all digits of the second subcounter, the second count enable value is binary zero, the first subcounter is an incrementing counter, and the second and third subcounters are decrementing counters.
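
Claims 127-140 define a three-level counter in which an incrementing first subcounter (fraction counter) enables a decrementing second subcounter through its most significant bit, and the second subcounter, on reaching all zeros, enables the decrementing third subcounters. The sketch below models that cascade; the bit widths, reload values, and the carry/reload behavior are assumptions layered on top of what the claims state.

```python
class HierarchicalCounter:
    def __init__(self, frac_increment, frac_bits, group_reload, shaper_reloads):
        self.frac = 0
        self.frac_increment = frac_increment
        self.frac_msb = 1 << (frac_bits - 1)   # enable digit of the first subcounter
        self.group_reload = group_reload
        self.group = group_reload              # decrementing second subcounter
        self.reloads = list(shaper_reloads)
        self.shapers = list(shaper_reloads)    # decrementing third subcounters

    def tick(self):
        """One clock tick; returns indices of shaper counters that elapsed (one PCR period)."""
        elapsed = []
        self.frac += self.frac_increment
        if self.frac < self.frac_msb:
            return elapsed                     # MSB still 0: second subcounter not enabled
        self.frac -= self.frac_msb             # consume the enable bit (assumed carry behavior)
        self.group -= 1
        if self.group > 0:
            return elapsed                     # not all-zero yet: third subcounters hold
        self.group = self.group_reload         # assumed reload on elapse
        for i in range(len(self.shapers)):
            self.shapers[i] -= 1
            if self.shapers[i] == 0:
                elapsed.append(i)
                self.shapers[i] = self.reloads[i]
        return elapsed

hc = HierarchicalCounter(frac_increment=3, frac_bits=4, group_reload=2, shaper_reloads=[2, 3])
for t in range(40):
    done = hc.tick()
    if done:
        print("tick", t, "shaper(s) elapsed:", done)
```
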
- 141. A system to manage congestion of a plurality of ports, comprising:
a first network processor including a traffic processor; a second network processor for informing the first network processor whether an egress port is available, wherein the traffic processor does not schedule a flow for forwarding from the egress port if the first network processor has been informed that the egress port is not available.
- 142. The system of claim 141, wherein the egress port address is a logical egress port address based on a physical egress port address and a priority.
- 143. The system of claim 142, wherein the first network processor and the second network processor operate in simplex mode in opposite directions.
- 144. The system of claim 143, wherein the first network processor is an ingress network processor, and the second network processor is an egress network processor.
- 145. The system of claim 142, further comprising a backpressure memory, wherein the second network processor indicates egress port availability by setting in the backpressure memory a backpressure indicator corresponding to the egress port.
- 146. The system of claim 145, further comprising a per-flow traffic descriptor including designations of the physical egress port and the priority corresponding to the flow, wherein the traffic processor addresses the backpressure memory with a logical egress port address formed from the physical egress port address and the priority to retrieve the backpressure indicator.
- 147. A method for managing congestion of a plurality of ports, comprising:
a second network processor informing a first network processor whether an egress port is available, wherein
the first network processor does not schedule a flow for forwarding from the egress port if the first network processor has been informed that the egress port is not available.
- 148. The method of claim 147, wherein the egress port address is a logical egress port address based on a physical egress port address and a priority.
- 149. The method of claim 148, wherein the first network processor and the second network processor operate in simplex mode in opposite directions.
- 150. The method of claim 149, wherein the first network processor is an ingress network processor, and the second network processor is an egress network processor.
- 151. The method of claim 148, further comprising the second network processor indicating egress port availability by setting in a backpressure memory a backpressure indicator corresponding to the egress port.
- 152. The method of claim 151, further comprising designating, in a per-flow traffic descriptor, the physical egress port and the priority corresponding to the flow, and addressing the backpressure memory with a logical egress port address formed from the physical egress port address and the priority to retrieve the backpressure indicator.
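
Claims 145-146 and 151-152 describe a backpressure memory indexed by a logical egress port address formed from the physical egress port and a priority. The sketch below assumes a simple bit-field encoding and a set-based bitmap; the field widths and the Python representation are illustrative, not from the specification.

```python
PRIORITY_BITS = 3                      # assumed: up to 8 priorities per physical port

def logical_port(physical_port: int, priority: int) -> int:
    """Form a logical egress port address from the physical port address and the priority."""
    return (physical_port << PRIORITY_BITS) | priority

class BackpressureMemory:
    def __init__(self):
        self.congested = set()         # logical ports currently marked unavailable

    def set_indicator(self, physical_port: int, priority: int, unavailable: bool) -> None:
        # Written by the egress (second) network processor.
        lp = logical_port(physical_port, priority)
        (self.congested.add if unavailable else self.congested.discard)(lp)

    def may_schedule(self, physical_port: int, priority: int) -> bool:
        # Consulted by the ingress traffic processor before scheduling a flow.
        return logical_port(physical_port, priority) not in self.congested

bp = BackpressureMemory()
bp.set_indicator(physical_port=4, priority=2, unavailable=True)
print(bp.may_schedule(4, 2), bp.may_schedule(4, 3))   # False True
```
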
- 153. Within a network processor, an input/output unit, comprising:
at least one input/output port; an input/output memory that includes a plurality of buffers; and an input/output scheduler to configurably assign at least one of the buffers to at least one of the input/output ports.
- 154. The input/output unit of claim 153, wherein the at least one input/output port receives an information element, and the input/output scheduler stores the information element in the at least one of the buffers configurably assigned to the at least one input/output port that is to transmit the information element.
- 155. The input/output unit of claim 154 further comprising an input/output error checker, coupled to the input/output scheduler, for checking for errors in the information element before storing the at least one information element in the at least one buffer configurably assigned to the at least one input/output port that received the information element.
- 156. The input/output unit of claim 154, wherein the number of buffers assigned to an input/output port depends on the number of input/output ports configured to transmit information elements.
- 157. The input/output unit of claim 153, wherein the input/output scheduler configurably assigns 192 buffers to a single input/output port, the single input/output port being configured to operate in an optical carrier level 192 (“OC-192”) mode.
- 158. The input/output unit of claim 153, wherein the at least one buffer is a FIFO buffer.
- 159. Within a network processor, a method for configuring at least one input/output port, the method comprising:
providing an input/output memory that includes a plurality of buffers; and configurably assigning at least one input/output buffer to the at least one input/output port.
- 160. The method of claim 159, further comprising receiving an information element at the at least one input/output port, and storing the information element in the at least one buffer configurably assigned to the at least one input/output port that is to transmit the information element.
- 161. The method of claim 160 further comprising
checking for errors in the information element before storing the at least one information element in the at least one buffer configurably assigned to the at least one input/output port that received the information element.
- 162. The method of claim 160, wherein the number of buffers assigned to an input/output port depends on the number of input/output ports configured to transmit information elements.
- 163. The method of claim 159, further comprising configurably assigning 192 buffers to a single input/output port, the single input/output port being configured to operate in an optical carrier level 192 (“OC-192”) mode.
- 164. The method of claim 159, wherein the at least one buffer is a FIFO buffer.
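
Claims 153-163 cover configurable assignment of a pool of FIFO buffers to input/output ports, with all 192 buffers going to a single port in OC-192 mode. The even split shown below for multi-port configurations is an assumption used only to illustrate the idea of configurable assignment.

```python
from collections import deque

def assign_buffers(total_buffers: int, active_ports: list) -> dict:
    """Divide the buffer pool evenly among the ports configured to transmit."""
    per_port = total_buffers // len(active_ports)
    return {port: [deque() for _ in range(per_port)] for port in active_ports}

oc192 = assign_buffers(192, ["port0"])                        # one OC-192 port: all 192 buffers
multi = assign_buffers(192, [f"port{i}" for i in range(16)])  # hypothetical 16-port split: 12 each
print(len(oc192["port0"]), len(multi["port3"]))
```
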
- 165. A system for identifying the flow to which an information element belongs, wherein the information element is received by an input port of a network processor, the system comprising:
a direct key generator for forming a first key from selected fields of the information element, wherein the fields are selected as a function of the corresponding input port, and the direct key generator is selected to form the first key based upon a configuration of the input port; and at least one content addressable memory (“CAM”) for providing a flow identifier in response to the first key hitting in the at least one CAM.
- 166. The system of claim 165, wherein the direct key generator selects an instruction as a function of the input port, and the instruction selects fields of the information element to form the first key.
- 167. The system of claim 165, wherein the flow identifier and the information element are provided to a policy controller, the policy controller for determining whether an information element conforms to a predetermined quality of service.
- 168. The system of claim 165, wherein the at least one CAM provides a default flow identifier in response to the first key not hitting in the at least one CAM.
- 169. The system of claim 165, wherein the at least one CAM provides an exception in response to the first key not hitting in the at least one CAM.
- 170. The system of claim 165, wherein a second key is formed from selected fields of the information element, the at least one CAM, in response to the first key hitting in the at least one CAM, for alternatively providing an instruction and for providing the flow identifier in response to the second key.
- 171. The system of claim 170, wherein the at least one CAM responsive to the first key is a first CAM, and the at least one CAM responsive to the second key is a second CAM.
- 172. The system of claim 170, wherein a second instruction selects fields of the information element to form the second key.
- 173. The system of claim 170, wherein the at least one CAM provides a default flow identifier in response to the first key not hitting in the at least one CAM.
- 174. The system of claim 170, wherein the at least one CAM provides an exception in response to the first key not hitting in the at least one CAM.
- 175. The system of claim 165, wherein the direct key generator forms the first key from fields of the information element and data relating to the corresponding input port.
- 176. The system of claim 165, further comprising:
an indirect internal key generator for forming an internal key from fields of the information element selected as a function of the corresponding input port; and an indirect first key generator for forming the first key from selected fields of the information element, wherein the fields are selected as a function of the internal key, and, based upon a configuration of the input port, the indirect internal key generator is selected to form the internal key and the indirect first key generator is selected to form the first key instead of the direct key generator.
- 177. The system of claim 176, wherein the indirect first key generator comprises an internal CAM for selecting an instruction in response to the internal key, and the instruction selects fields of the information element to form the first key.
- 178. The system of claim 171, further comprising a first pipeline stage including the first and second CAMs, and a second pipeline stage including third and fourth CAMs, respectively responsive to third and fourth keys, for providing a flow identifier or an instruction, wherein if the second CAM provides an instruction in response to the second key, the first pipeline stage generates the third key.
- 179. The system of claim 178, wherein, if the third CAM provides an instruction in response to the third key, the fourth key is formed to address the fourth CAM.
- 180. A method for identifying the flow to which an information element belongs, the method comprising:
receiving the information element at an input port; forming a first key from selected fields of the information element, wherein the fields are selected as a function of the corresponding input port; and providing a flow identifier in response to the first key hitting in at least one CAM.
- 181. The method of claim 180, further comprising selecting an instruction as a function of the input port, wherein the instruction selects fields of the information element to form the first key.
- 182. The method of claim 180, further comprising determining whether an information element conforms to a predetermined quality of service.
- 183. The method of claim 180, further comprising providing a default flow identifier in response to the first key not hitting in the at least one CAM.
- 184. The method of claim 180, further comprising providing an exception in response to the first key not hitting in the at least one CAM.
- 185. The method of claim 180, further comprising, in response to the first key hitting in the at least one CAM, providing an instruction in response to the first key, forming a second key from selected fields of the information element, and providing the flow identifier in response to the second key.
- 186. The method of claim 185, wherein the at least one CAM responsive to the first key is a first CAM, and the at least one CAM responsive to the second key is a second CAM.
- 187. The method of claim 185, wherein a second instruction selects fields of the information element to form the second key.
- 188. The method of claim 185, further comprising providing a default flow identifier in response to the first key not hitting in the at least one CAM.
- 189. The method of claim 185, further comprising providing an exception in response to the first key not hitting in the at least one CAM.
- 190. The method of claim 180, wherein the first key is formed from fields of the information element and data relating to the corresponding input port.
- 191. The method of claim 180, further comprising:
forming an internal key from fields of the information element selected as a function of the corresponding input port; and forming the first key from selected fields of the information element, wherein the fields are selected as a function of the internal key, wherein, based upon a configuration of the input port, either the first key is formed directly or the internal key is formed and used to form the first key.
- 192. The method of claim 191, wherein an internal CAM selects an instruction in response to the internal key, and the instruction selects fields of the information element to form the first key.
- 193. The method of claim 186, wherein a first pipeline stage includes the first and second CAMs, and a second pipeline stage includes third and fourth CAMs, respectively responsive to third and fourth keys, for providing a flow identifier or an instruction, the method further comprising, if the second CAM provides an instruction in response to the second key, the first pipeline stage generating the third key.
- 194. The method of claim 193, further comprising, if the third CAM provides an instruction in response to the third key, the fourth key is formed to address the fourth CAM.
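
Claims 165-169 and 180-184 describe building a key from per-port-selected fields of the information element and presenting it to a CAM, which returns a flow identifier, a default flow, or an exception on a miss. The dictionary-based sketch below stands in for the CAM; the field names, port configuration, and CAM contents are invented for illustration.

```python
def build_key(port_config: dict, port: int, fields: dict) -> tuple:
    # The per-port instruction selects which header fields enter the first key.
    return tuple(fields[name] for name in port_config[port])

def lookup(cam: dict, key: tuple, default_flow=None):
    """Return a flow identifier on a CAM hit; otherwise a default flow or an exception."""
    if key in cam:
        return cam[key]
    if default_flow is not None:
        return default_flow
    raise LookupError("CAM miss: hand the information element off as an exception")

port_config = {0: ("vpi", "vci"), 1: ("src_ip", "dst_ip", "proto")}
cam = {(5, 42): "flow-7", ("10.0.0.1", "10.0.0.2", 6): "flow-9"}
key = build_key(port_config, 0, {"vpi": 5, "vci": 42})
print(lookup(cam, key))                           # flow-7
print(lookup(cam, ("unknown",), "flow-default"))  # default flow on a miss
```
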
- 195. An exception processing system, wherein an information element belongs to a flow of information elements and comprises at least one information element segment, the exception processing system comprising:
a policy controller for detecting an exception related to the information element; a memory for storing each information element in at least one information segment storage unit; a processor for receiving the information element from the policy controller, wherein the processor handles the exception and sends the information element to the policy controller after handling the exception; and a traffic processor, wherein, after exception handling of the information element, the policy controller sends the information element to the memory and notifies the traffic processor that the flow to which the information element belongs is available for scheduling.
- 196. The exception processing system of claim 195, further comprising
a forwarding processor that, after the traffic processor selects for forwarding an information element segment belonging to the flow, fetches from the memory the selected information element segment and forwards it to an egress port.
- 197. The exception processing system of claim 196, wherein the information element segment is the entire information element.
- 198. An exception processing system, comprising:
a policy controller for detecting an exception related to an information element, wherein the information element belongs to a flow of information elements; a memory for storing the information elements; a processor for receiving from the policy controller the information element related to the exception, wherein the processor handles the exception and sends the information element to the memory after handling the exception; and a traffic processor, wherein, after exception handling of the information element, the processor notifies the traffic processor that the flow to which the information element belongs is available for scheduling.
- 199. The exception processing system of claim 198, further comprising
a forwarding processor that, after the traffic processor selects for forwarding an information element segment belonging to the flow, fetches from the memory the selected information element segment and forwards it to an egress port.
- 200. The exception processing system of claim 199, wherein the information element segment is the entire information element.
- 201. An exception processing system comprising:
a policy controller for detecting an exception related to an information element, wherein the information element belongs to a flow of information elements; a memory for storing the information elements, wherein the policy controller stores the information element related to the exception in at least one information segment storage unit in the memory; a processor for receiving from the policy controller notification of the exception, wherein the processor fetches the information element related to the exception from the memory, handles the exception, and stores the information element in the memory after handling the exception; and a traffic processor, wherein, after exception handling of the information element, the processor notifies the traffic processor that the flow to which the information element belongs is available for scheduling.
- 202. The exception processing system of claim 201, further comprising
a forwarding processor that, after the traffic processor selects for forwarding an information element segment belonging to the flow, fetches from the memory the selected information element segment and forwards it to an egress port.
- 203. The exception processing system of claim 202, wherein the information element segment is the entire information element.
- 204. A method for processing exceptions related to an information element, wherein the information element belongs to a flow of information elements and comprises at least one information element segment, the method comprising:
a policy controller detecting an exception related to the information element; a processor receiving from the policy controller the information element related to the exception; the processor handling the exception; the processor sending the information element to the policy controller after handling the exception; and after the exception is handled, the policy controller sending the information element to a memory and notifying a traffic processor that the flow to which the information element belongs is available for scheduling.
- 205. The method of claim 204, further comprising
selecting for forwarding an information element segment belonging to the flow; fetching from the memory the selected information element segment; and forwarding the information element segment to an egress port.
- 206. The method of claim 205, wherein the information element segment is the entire information element.
- 207. A method for processing exceptions related to an information element, wherein the information element belongs to a flow of information elements and comprises at least one information element segment, the method comprising:
a policy controller detecting an exception related to the information element; a processor receiving from the policy controller the information element related to the exception; the processor handling the exception; the processor sending the information element to a memory after handling the exception; and after the exception is handled, the processor notifying a traffic processor that the flow to which the information element belongs is available for scheduling.
- 208. The method of claim 207, further comprising
selecting for forwarding an information element segment belonging to the flow; fetching from the memory the selected information element segment; and forwarding the information element segment to an egress port.
- 209. The method of claim 208, wherein the information element segment is the entire information element.
- 210. A method for processing exceptions related to an information element, wherein the information element belongs to a flow of information elements and comprises at least one information element segment, the method comprising:
a policy controller detecting an exception related to an information element; a processor receiving from the policy controller notification of the exception; the processor fetching the information element related to the exception from a memory; the processor handling the exception; the processor storing the information element in the memory after handling the exception; and after the exception is handled, the processor notifying a traffic processor that the flow to which the information element belongs is available for scheduling.
- 211. The method of claim 210, further comprising
selecting for forwarding an information element segment belonging to the flow; fetching from the memory the selected information element segment; and forwarding the information element segment to an egress port.
- 212. The method of claim 211, wherein the information element segment is the entire information element.
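
Claims 204-206 recite one of the exception paths: the policy controller hands the offending information element to a processor, receives it back once the exception is handled, stores it in memory, and notifies the traffic processor that the flow is again available for scheduling. The class below sketches that hand-off under assumed interfaces; none of these method names come from the claims.

```python
class ExceptionPath:
    def __init__(self):
        self.memory = {}              # flow id -> stored information elements
        self.schedulable = set()      # flows the traffic processor may select

    def handle(self, flow_id, element, handler):
        # Processor handles the exception and returns the element to the policy controller.
        repaired = handler(element)
        # Policy controller stores the element and re-enables the flow for scheduling.
        self.memory.setdefault(flow_id, []).append(repaired)
        self.schedulable.add(flow_id)
        return repaired

path = ExceptionPath()
path.handle("flow-3", b"\x00bad-header", lambda e: e.lstrip(b"\x00"))
print(path.memory["flow-3"], "flow-3" in path.schedulable)
```
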
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Application Ser. No. 60/382,217, filed May 20, 2002, and U.S. Provisional Application Ser. No. 60/372,656, filed Apr. 14, 2002, both entitled “Network Processor Architecture,” and both incorporated by reference herein in their entirety.
[0002] This application is a continuation-in-part of U.S. patent application Ser. No. 10/251,946, filed Sep. 19, 2002, entitled “Vertical Instruction and Data Processing in a Network Processor Architecture,” which claims the benefit of U.S. Provisional Application Ser. No. 60/382,437, filed May 20, 2002, entitled “Vertical Instruction and Data Processing in a Network Processor Architecture,” U.S. Provisional Application Ser. No. 60/372,507, filed Apr. 14, 2002, entitled “Differentiated Services for a Network Processor,” and U.S. Provisional Application Ser. No. 60/323,627, filed Sep. 19, 2001, entitled “System and Method for Vertical Instruction and Data Processing in a Network Processor Architecture,” all of which are incorporated by reference herein in their entirety.
Provisional Applications (5)

| Number | Date | Country |
| --- | --- | --- |
| 60/382,217 | May 2002 | US |
| 60/372,656 | Apr 2002 | US |
| 60/382,437 | May 2002 | US |
| 60/372,507 | Apr 2002 | US |
| 60/323,627 | Sep 2001 | US |
Continuation in Parts (1)

| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 10/251,946 | Sep 2002 | US |
| Child | 10/413,776 | Apr 2003 | US |