Claims
- 1. A method of processing access requests to a storage unit from multiple processors, the method comprising:
adding tags to access requests issued by multiple processors, wherein a tag includes:
a processor identification that identifies the processor that issued the access request; retrieving from the storage unit data corresponding to the access requests; adding tags to the retrieved data, wherein a tag includes:
the processor identification that identifies the processor that issued the access request corresponding to the retrieved data.
- 2. The method of claim 1 further comprising:
distributing the retrieved data to the processors based on the processor identifications.
- 3. The method of claim 1, wherein the tag includes:
a processor access sequence number relating to the order that the processors issued the access requests.
- 4. The method of claim 3 further comprising:
assembling the retrieved data based on the processor access sequence numbers.
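The tag-and-reassemble mechanism of claims 1-4 can be illustrated with a minimal sketch, assuming an in-memory model in which tags carry a processor identification and an access sequence number; every name below (`Tag`, `TaggedRequest`, `distribute`) is hypothetical, not taken from the specification:

```python
from dataclasses import dataclass
from itertools import count

@dataclass
class Tag:
    processor_id: int   # identifies the issuing processor (claim 1)
    access_seq: int     # order in which requests were issued (claim 3)

@dataclass
class TaggedRequest:
    tag: Tag
    address: int

_seq = count()  # global issue order across all processors

def tag_request(processor_id: int, address: int) -> TaggedRequest:
    """Add a tag to an access request issued by a processor."""
    return TaggedRequest(Tag(processor_id, next(_seq)), address)

def tag_data(request: TaggedRequest, data: bytes):
    """Add the request's tag to the data retrieved for that request."""
    return (request.tag, data)

def distribute(tagged_data, processor_id: int) -> bytes:
    """Distribute retrieved data by processor identification (claim 2) and
    assemble it by processor access sequence number (claim 4)."""
    mine = [(t, d) for t, d in tagged_data if t.processor_id == processor_id]
    mine.sort(key=lambda td: td[0].access_seq)
    return b"".join(d for _, d in mine)
```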
- 5. The method of claim 1 further comprising:
removing the tags from the access requests before retrieving the data from the storage unit, wherein the tags added to the retrieved data correspond to the tags removed from the access requests.
- 6. The method of claim 5 further comprising:
tracking the correspondence of the removed tags and the data retrieved based on the access requests.
- 7. The method of claim 6, wherein the data is retrieved from a first storage unit with a first latency period, wherein the data is retrieved from a second storage unit with a second latency period, and wherein the first and second latency periods are different.
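Claims 5-7 describe stripping tags before the storage access and restoring them afterward. One way to track that correspondence, sketched here under stated assumptions and reusing the hypothetical `TaggedRequest` from the previous sketch, is a scoreboard keyed by transaction id; out-of-order completion from storage units with different latencies then resolves naturally:

```python
class TagScoreboard:
    """Tracks the correspondence of removed tags and in-flight accesses
    (claim 6); this transaction-id mapping is an illustrative assumption."""
    def __init__(self):
        self._pending = {}   # transaction id -> removed tag
        self._next_id = 0

    def remove_tag(self, request: TaggedRequest):
        """Strip the tag before the storage access (claim 5)."""
        txn_id = self._next_id
        self._next_id += 1
        self._pending[txn_id] = request.tag
        return txn_id, request.address

    def restore_tag(self, txn_id: int, data: bytes):
        """Re-attach the corresponding tag when data returns; storage units
        with different latencies (claim 7) may complete out of order."""
        return self._pending.pop(txn_id), data
```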
- 8. The method of claim 1, wherein the tag includes a priority for the access request, and further comprising:
before retrieving data from the storage unit, sorting the access requests based on the priority indicated in the tags of the access requests.
- 9. The method of claim 8, wherein the access requests include write access requests and read access requests, and further comprising:
arbitrating between write access requests and read access requests using the priority for the access requests.
- 10. The method of claim 8, wherein sorting includes:
when access requests have the same priority, sorting the access requests based on an arrival time.
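The priority-based sorting of claims 8-10 amounts to ordering by priority with arrival time as the tie-breaker. A minimal sketch, with hypothetical field names:

```python
from dataclasses import dataclass

@dataclass
class PrioritizedRequest:
    priority: int      # carried in the tag (claim 8); higher wins here
    arrival_time: int  # tie-breaker for equal priorities (claim 10)
    is_write: bool     # reads and writes share one arbitration (claim 9)
    address: int

def arbitrate(requests):
    """Sort access requests by tag priority, then by arrival time."""
    return sorted(requests, key=lambda r: (-r.priority, r.arrival_time))
```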
- 11. The method of claim 1, wherein the tag includes a data sequence number that is used to reassemble multiple data units that satisfy a single access request.
- 12. The method of claim 11, wherein the storage unit includes a plurality of banks, and wherein retrieving from the storage unit data corresponding to the access requests comprises:
accessing the banks of the storage unit in a non-fixed sequence; retrieving data units from the banks corresponding to the access requests; and wherein adding tags to the retrieved data comprises:
tagging the data units using data sequence numbers.
- 13. The method of claim 12, wherein the storage unit includes a wait time, wherein a bank that has been accessed cannot be accessed again during the wait time, and wherein accessing the banks of the storage unit in a non-fixed sequence comprises:
accessing a first set of banks within the storage unit to satisfy a first request; and during the wait time, accessing a second set of banks within the storage unit to satisfy a second request, wherein the banks in the first and second sets are different.
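Claims 12-13 describe serving a second request from different banks while the first request's banks sit out their wait time. A sketch of the bank-availability bookkeeping, under the assumption that time is counted in cycles and each access locks its bank for `wait_time` cycles (retrieved data units would additionally carry the data sequence numbers of claim 11 so they can be reassembled out of order):

```python
class BankScheduler:
    """Models a storage unit whose banks cannot be re-accessed during a
    wait time (claim 13)."""
    def __init__(self, num_banks: int, wait_time: int):
        self.ready_at = [0] * num_banks  # cycle at which each bank is next usable
        self.wait_time = wait_time

    def try_access(self, bank: int, cycle: int) -> bool:
        """Access `bank` at `cycle` unless it is still within its wait time."""
        if cycle < self.ready_at[bank]:
            return False                 # bank busy: serve another request's banks
        self.ready_at[bank] = cycle + self.wait_time
        return True
```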
- 14. The method of claim 1, wherein the multiple processors are coupled to the storage unit through multiple channels, and wherein the tag added to the retrieved data includes a channel number corresponding to the channel used to retrieve the data from the storage unit.
- 15. A method of processing access requests to a storage unit from multiple processors, the method comprising:
adding a first tag to a first access request issued by a first processor, the first tag having:
a processor identification that identifies the first processor and a processor access sequence number; adding a second tag to a second access request issued by a second processor, the second tag having:
a processor identification that identifies the second processor and a processor access sequence number, wherein the first access request is issued before the second access request, and the processor access sequence numbers of the first and second tags indicate a higher order for the first access request than the second access request; removing the first tag from the first access request before retrieving data corresponding to the first access request from the storage unit; removing the second tag from the second access request before retrieving data corresponding to the second access request from the storage unit; retrieving from the storage unit a first data corresponding to the first access request; retrieving a second data from the storage unit corresponding to the second access request; adding the first tag to the first data retrieved from the storage unit; adding the second tag to the second data retrieved from the storage unit; distributing the first data to the first processor; distributing the second data to the second processor; and assembling the first data and the second data based on the processor access sequence numbers of the first and second tags.
- 16. A system of processing access requests to a storage unit from multiple processors, the system comprising:
a plurality of processors that issue access requests having tags, wherein a tag includes:
a processor identification that identifies the processor that issued the access request; a controller that retrieves from the storage unit data corresponding to the access requests issued by the processors; and a tag marking unit that adds tags to the retrieved data from the storage unit, wherein a tag includes:
the processor identification that identifies the processor that issued the access request corresponding to the retrieved data.
- 17. The system of claim 16 further comprising:
a data dispatcher that distributes the retrieved data with the tags to the processors based on the processor identifications of the tags, wherein the processors reassemble the retrieved data based on the processor access sequence numbers of the tags.
- 18. The system of claim 16, wherein the tag includes:
a processor access sequence number relating to the order that the processors issued the access requests.
- 19. The system of claim 18, wherein the retrieved data is reassembled based on the processor access sequence numbers.
- 20. The system of claim 16, wherein the tag marking unit removes the tags from the access requests before the controller retrieves the data from the storage unit, and wherein the tags added to the retrieved data correspond to the tags removed from the access requests.
- 21. The system of claim 20, wherein the tag marking unit tracks the correspondence of the removed tags and the data retrieved based on the access requests.
- 22. The system of claim 21, wherein the storage unit comprises:
a first storage unit; and a second storage unit, wherein data is retrieved from the second storage unit with a different latency period in comparison to data retrieved from the first storage unit.
- 23. The system of claim 16, wherein the tag includes a priority for the access requests, and further comprising:
an arbiter coupled between the processors and the controller that sorts access requests based on the priority indicated in the tags of the access requests before retrieving the data from the storage unit.
- 24. The system of claim 23, wherein the access requests include write access requests and read access requests, and wherein the controller arbitrates between write access requests and read access requests using the priority for the access requests.
- 25. The system of claim 23, wherein the arbiter sorts access requests having the same priority based on an arrival time.
- 26. The system of claim 16, wherein the tag includes a data sequence number, and wherein the processors reassemble multiple data units that satisfy a single access request using the data sequence number.
- 27. The system of claim 26, wherein the storage unit includes a plurality of banks, wherein the controller retrieves data units from the banks of the storage unit in a non-fixed sequence, and wherein the data units are tagged using data sequence numbers.
- 28. The system of claim 27, wherein the storage unit includes a wait time, wherein a bank that has been accessed cannot be accessed again during the wait time, and wherein the banks of the storage unit are accessed in a non-fixed sequence by:
accessing a first set of banks within the storage unit to satisfy a first request; and during the wait time, accessing a second set of banks within the storage unit to satisfy a second request, wherein the banks in the first and second sets are different.
- 29. The system of claim 16 further comprising:
a plurality of channels that connect the processors to the storage unit, wherein the tag added to the retrieved data includes a channel number corresponding to the channel used to retrieve the data from the storage unit.
- 30. A system of storing data transmitted through multiple data channels in multiple memory units, the system comprising:
a plurality of data channels; a plurality of memory units that store data transmitted by a plurality of data sources that send data through the data channels; a plurality of data buffers,
wherein a data buffer is coupled to a data channel, a data buffer includes a plurality of entries, and the number of entries in the data buffer is equal to or greater than the number of data sources that can simultaneously transmit data to the data buffer; and a plurality of arbiters,
wherein an arbiter is coupled to a memory unit and each of the plurality of data buffers.
- 31. The system of claim 30, wherein the plurality of data sources includes channel memory controllers.
- 32. The system of claim 30, wherein an entry of a data buffer includes a register.
- 33. The system of claim 30, wherein the memory units store data for a plurality of processors, wherein a portion of a memory unit is dedicated to store data for each of the plurality of processors.
- 34. The system of claim 33, wherein data to be stored in a memory unit includes a tag having a processor identification, and wherein an arbiter uses the processor identification to determine the portion of a memory unit to store the data.
- 35. The system of claim 34, wherein a portion of a memory unit comprises a plurality of portions of the memory unit that store data for a processor.
- 36. The system of claim 35, wherein data to be stored in a memory unit includes a tag having a processor access sequence number, and wherein an arbiter uses the processor access sequence number to determine which one of the plurality of portions of the memory unit that store data for a processor to store the data.
- 37. The system of claim 36, wherein data to be stored in a memory unit includes a tag having a data sequence number, and wherein an arbiter uses the data sequence number to determine which one of the memory units to store the data.
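Claims 33-37 have the arbiter steer data by three tag fields. One hypothetical mapping consistent with those claims, where the modulo arithmetic and all field names are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class DataTag:
    processor_id: int  # claim 34: selects the per-processor portion
    access_seq: int    # claim 36: selects among that processor's portions
    data_seq: int      # claim 37: selects the memory unit

def select_location(tag: DataTag, num_memory_units: int,
                    portions_per_processor: int):
    """Return (memory unit, portion, sub-portion) for a tagged datum."""
    memory_unit = tag.data_seq % num_memory_units
    portion = tag.processor_id
    sub_portion = tag.access_seq % portions_per_processor
    return memory_unit, portion, sub_portion
```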
- 38. A method of storing data transmitted through multiple data channels in multiple memory units, the method comprising:
receiving data on at least one of a plurality of data channels sent by at least one of a plurality of data sources; storing data in at least one of a plurality of data buffers,
wherein a data buffer is coupled to a data channel, a data buffer includes a plurality of entries, and the number of entries in the data buffer is equal to or greater than the number of data sources that can simultaneously transmit data to the data buffer; retrieving data from a data buffer using at least one of a plurality of arbiters,
wherein an arbiter is coupled to at least one of a plurality of memory units and each of the plurality of data buffers; and storing data in the memory unit using the data buffer.
- 39. The method of claim 38, wherein the plurality of data sources includes channel memory controllers.
- 40. The method of claim 38, wherein an entry of a data buffer includes a register.
- 41. The method of claim 38, wherein the memory unit stores data for a plurality of processors, wherein a portion of the memory unit is dedicated to store data for each of the plurality of processors.
- 42. The method of claim 41, wherein data to be stored in a memory unit includes a tag having a processor identification, and wherein storing data in the memory unit comprises:
examining the processor identification to determine the portion of the memory unit that is dedicated to store data for the processor corresponding to the processor identification.
- 43. The method of claim 42, wherein a portion of the memory unit comprises a plurality of portions of the memory unit that store data for a processor.
- 44. The method of claim 41, wherein data to be stored in a memory unit includes a tag having a processor access sequence number, and wherein storing data in the memory unit comprises:
examining the processor access sequence number to determine which one of the plurality of portions of the memory unit that store data for the processor to store the data.
- 45. The method of claim 41, wherein data to be stored in a memory unit includes a tag having a data sequence number, and wherein storing data in the memory unit comprises:
examining the data sequence number to determine which one of the plurality of memory units to store the data.
- 46. A system of grouping multiple processors, the system comprising:
an input scheduler; and a plurality of processors coupled to the input scheduler to receive commands from the input scheduler, wherein the plurality of processors are grouped into a plurality of groups, a group of processors having:
a first processor, at least a second processor, and a communication channel connecting the first and the second processors, wherein the first processor can receive a command from the input scheduler, execute the command, then forward the command to the second processor through the communication channel.
- 47. The system of claim 46, wherein the processors within a group of processors execute commands in a circular sequence, wherein the first processor in the group executes a first command, the second processor in the group executes a second command, then the first processor in the group executes a third command.
- 48. The system of claim 46, wherein the input scheduler assigns a command to the first processor, then waits for the first processor to report that a current buffer is an end of packet (EOP) buffer before assigning a command to the second processor.
- 49. The system of claim 48, wherein when the current buffer is not an EOP buffer, the first processor forwards the command to the second processor through the communication channel and the input scheduler waits for the second processor to report that the current buffer is an EOP buffer before assigning a command to a third processor in the same group of processors as the first and second processors.
- 50. The system of claim 48, wherein when the current buffer is the EOP buffer, the input scheduler assigns a command to the second processor.
- 51. The system of claim 46, wherein the processors execute commands corresponding to flows of information elements, and wherein the input scheduler tracks which processor is executing a command belonging to a particular flow of information elements.
- 52. The system of claim 51, wherein when the input scheduler receives a new command, the input scheduler determines if any of the processors are processing a command belonging to the same flow of information elements as the new command.
- 53. The system of claim 52, wherein when a processor is determined to be processing a command belonging to the same flow of information elements as the new command, the scheduler assigns the new command to the group of processors having the processor that was determined to be processing the command belonging to the same flow of information elements as the new command.
- 54. The system of claim 52, wherein when none of the processors is determined to be processing a command belonging to the same flow of information elements as the new command, the scheduler assigns the new command to any of the groups of processors ready to process a command.
- 55. The system of claim 46, wherein each of the processors is assigned to an output port to output data through the output ports, and further comprising:
an output scheduler coupled to each of the plurality of processors to schedule the output of data.
- 56. The system of claim 55, wherein processors within a group of processors transfer data to the output scheduler on a first-in-first-out basis.
- 57. The system of claim 56, wherein the group of processors transfers data corresponding to a particular flow of information elements in sequence to the output scheduler.
- 58. The system of claim 56, wherein the groups of processors transfer data to the output scheduler on a first-ready-first-out basis.
- 59. The system of claim 58, wherein the groups of processors transfer data corresponding to different flows of information elements to the output scheduler.
- 60. The system of claim 59, wherein a group of processors is selected to transfer data to the output scheduler, and wherein the group of processors remains selected until an end of packet (EOP) indication is received by the output scheduler.
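The circular, EOP-driven hand-off of claims 46-50 can be sketched as follows, modeling each processor in a group as a per-buffer handler (an assumption; the claims do not prescribe this interface). The last buffer of a packet is its EOP buffer, and the command is forwarded around the group until that buffer is reached:

```python
def run_command(group, buffers):
    """group: processor handlers in circular order; buffers: the packet's
    buffer chain, whose last buffer is the EOP buffer."""
    i = 0
    for n, buf in enumerate(buffers):
        group[i](buf)                # the currently assigned processor executes
        if n == len(buffers) - 1:    # current buffer is the EOP buffer (claim 50)
            return i                 # scheduler may now assign the next command
        i = (i + 1) % len(group)     # claim 49: forward over the channel

# A three-buffer packet on a two-processor group: processor 0 handles
# buffers 0 and 2, processor 1 handles buffer 1 (claim 47's circular order).
run_command([print, print], ["buf0", "buf1", "buf2"])
```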
- 61. A method of grouping multiple processors, the method comprising:
grouping a plurality of processors into a plurality of groups, wherein a group of processors includes: a first processor, at least a second processor, and a communication channel connecting the first and second processors; and assigning a command from an input scheduler to the first processor, wherein the command can be forwarded by the first processor to the second processor through the communication channel.
- 62. The method of claim 61 further comprising:
executing commands within a group of processors in a circular sequence, wherein:
a first command is executed by the first processor in the group, a second command is executed by the second processor in the group after the first command has been executed by the first processor, and a third command is executed by the first processor after the second command has been executed by the second processor.
- 63. The method of claim 61, wherein assigning a command comprises:
assigning a command to the first processor from the input scheduler; and waiting to receive a report from the first processor that a current buffer is an end of packet (EOP) buffer before assigning a command to the second processor from the input scheduler.
- 64. The method of claim 63, wherein when the current buffer is not an EOP buffer, forwarding the command from the first processor to the second processor through the communication channel; and
waiting to receive a report from the second processor that the current buffer is an EOP buffer before assigning a command to a third processor in the same group of processors as the first and second processors.
- 65. The method of claim 63, wherein when the current buffer is an EOP buffer, assigning a command to the second processor from the input scheduler.
- 66. The method of claim 61, wherein the processors execute commands corresponding to flows of information elements, and further comprising:
tracking which processor is executing a command belonging to a particular flow of information elements.
- 67. The method of claim 66 further comprising:
when a new command is received by the input scheduler, determining if any of the processors are processing a command belonging to the same flow of information elements as the new command.
- 68. The method of claim 67, wherein when a processor is determined to be processing a command belonging to the same flow of information elements as the new command,
assigning the new command to the group of processors having the processor that was determined to be processing the command belonging to the same flow of information elements as the new command.
- 69. The method of claim 67, wherein when none of the processors is determined to be processing a command belonging to the same flow of information elements as the new command,
assigning the new command to any of the groups of processors ready to process a command.
- 70. The method of claim 61, wherein each of the processors is assigned to an output port to output data through the output ports, and further comprising:
scheduling the output of data from the processors using an output scheduler coupled to the processors.
- 71. The method of claim 70, wherein processors within a group of processors transfer data to the output scheduler on a first-in-first-out basis.
- 72. The method of claim 70, wherein the data transferred by the group of processors corresponds to a particular flow of information elements.
- 73. The method of claim 71, wherein the groups of processors transfer data to the output scheduler on a first-ready-first-out basis.
- 74. The method of claim 73, wherein the groups of processors transfer data corresponding to different flows of information elements to the output scheduler.
- 75. The method of claim 74, wherein a group of processors is selected to transfer data to the output scheduler, and wherein the group of processors remains selected until an end of packet (EOP) indication is received by the output scheduler.
- 76. A system of forwarding data to input/output (I/O) ports, the system comprising:
at least one I/O port having a data queue; a forwarding scheduler coupled to the I/O port that sends data to the I/O port; a pipeline engine coupled between the forwarding scheduler and the I/O port that routes data from the forwarding scheduler to the I/O port; and a blocking prevention unit that determines whether there is adequate space in the data queue of the I/O port before sending data from the forwarding scheduler to the pipeline engine.
- 77. The system of claim 76, wherein the blocking prevention unit sends data from the forwarding scheduler to the pipeline engine to be sent to the I/O port when the amount of data to be processed in the pipeline engine for the I/O port is not greater than the amount of available capacity in the data queue of the I/O port.
- 78. The system of claim 77, wherein the blocking prevention unit does not send data from the forwarding scheduler to the pipeline engine when the amount of data to be processed in the pipeline engine for the I/O port is equal to or greater than the amount of available capacity in the data queue of the I/O port.
- 79. The system of claim 78, wherein the amount of data in the pipeline engine to be processed for the I/O port is determined by the number of entries in the pipeline engine, and wherein the amount of available capacity in the data queue of the I/O port is determined by the number of available entries in the data queue.
- 80. The system of claim 79, wherein the pipeline engine includes:
a first pipe used to perform protocol conversion; and a second pipe used to route data to the I/O port, wherein the amount of data in the pipeline engine is based on the number of entries corresponding to the first pipe and the second pipe.
- 81. The system of claim 76, wherein at least one I/O port comprises:
a first I/O port having a data queue; and a second I/O port having a data queue, wherein the pipeline engine routes data to the data queue of the first I/O port or the second I/O port, and wherein the blocking prevention unit determines whether there is adequate space in the data queue of the first I/O port or the second I/O port before sending data from the forwarding scheduler to the pipeline engine.
- 82. The system of claim 76, wherein the data queue of the I/O port is a first-in-first-out (FIFO) queue.
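The blocking-prevention test of claims 77-80 compares the data already in flight for a port against the free space in that port's queue. A sketch using entry counts as the measure (per claim 79), with all names hypothetical:

```python
def may_send(pipe1_entries: int, pipe2_entries: int,
             queue_capacity: int, queue_used: int) -> bool:
    """True when the forwarding scheduler may push data for this I/O port
    into the pipeline engine."""
    in_flight = pipe1_entries + pipe2_entries  # claim 80: both pipes count
    available = queue_capacity - queue_used    # free entries in the port's queue
    return in_flight < available               # claim 78: hold off when >= available
```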
- 83. A method of forwarding data to input/output (I/O) ports, the method comprising:
sending data from a forwarding scheduler to at least one I/O port having a data queue; routing data through a pipeline engine coupled between the forwarding scheduler and the I/O port, wherein the pipeline engine includes a data queue; and determining whether there is adequate space in the data queue of the I/O port before sending data from the forwarding scheduler to the pipeline engine.
- 84. The method of claim 83, wherein sending data comprises:
when the amount of data to be processed in the pipeline engine for the I/O port is not greater than the amount of available capacity in the data queue of the I/O port, sending data from the forwarding scheduler to the pipeline engine.
- 85. The method of claim 84, wherein sending data further comprises:
when the amount of data to be processed in the pipeline engine for the I/O port is equal to or greater than the amount of available capacity in the data queue of the I/O port, not sending data from the forwarding scheduler to the pipeline engine.
- 86. The method of claim 85, wherein determining comprises:
determining the number of entries in the pipeline engine, and wherein the amount of available capacity in the data queue of the I/O port is determined by the number of available entries in the data queue.
- 87. The method of claim 86 further comprising:
converting the data using a first pipe in the pipeline engine, wherein the converted data is routed through a second pipe in the pipeline engine, and wherein the amount of data in the pipeline engine is based on the number of entries corresponding to the first pipe and the second pipe.
- 88. The method of claim 83, wherein at least one I/O port comprises:
a first I/O port having a data queue; and a second I/O port having a data queue, wherein the pipeline engine routes data to the data queue of the first I/O port or the second I/O port, and wherein determining comprises determining whether there is adequate space in the data queue of the first I/O port or the second I/O port before sending data from the forwarding scheduler to the pipeline engine.
- 89. The method of claim 83, wherein the data queue of the I/O port is a first-in-first-out (FIFO) queue.
- 90. A system of multicasting data to multiple subscribers, the system comprising:
a multicast table that includes:
flow identifications of subscribers, wherein a flow identification identifies a flow of information elements the subscribers should receive, and an input/output (I/O) port assignment for the flow of information elements; a multicast unit that accesses the multicast table to identify subscribers in the multicast table that should receive the flow of information elements; and a unicast unit coupled to the multicast unit that sends the flow of information elements to each of the identified subscribers through the I/O port specified in the multicast table for the flow of information elements.
- 91. The system of claim 90, wherein the flow of information elements is stored in a memory location, and wherein the unicast unit accesses the memory location to send the flow of information elements to the identified subscribers.
- 92. The system of claim 90 further comprising:
a multicast command queue, wherein the multicast unit retrieves a multicast command from the multicast command queue for a flow of information elements to be transmitted to subscribers.
- 93. The system of claim 92, wherein the multicast unit uses the multicast command to retrieve a forward processing instruction to determine the location of the multicast table corresponding to the flow of information elements to be transmitted to subscribers.
- 94. The system of claim 93, wherein the forward processing instruction includes a pointer indicating the location of the multicast table.
- 95. The system of claim 90, wherein the flow of information elements includes data cells, and wherein the multicast table includes virtual channel identifiers (VCI) and virtual path identifiers (VPI) for the flow of information elements.
- 96. The system of claim 90, wherein the flow of information elements includes data packets, and wherein the multicast table includes packet headers for the flow of information elements.
- 97. The system of claim 90, wherein the flow of information elements is transmitted in accordance with a common switch interface (CSIX) specification, and wherein the multicast table includes CSIX multicast extension headers for the flow of information elements.
- 98. The system of claim 90, wherein the multicast table comprises:
a first multicast page; and a second multicast page, wherein the first multicast page includes a pointer that links the first multicast page to the second multicast page.
- 99. The system of claim 98, wherein the unicast unit sends the flow of information elements to subscribers identified in the first multicast page, then sends the flow of information elements to subscribers identified in the second multicast page.
- 100. The system of claim 98, wherein the first multicast page includes a list of subscribers, and wherein the second multicast page is added when the list of subscribers in the first multicast page exceeds a size limit.
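The paged multicast walk of claims 90, 98, and 99 reduces to iterating linked pages of (flow identification, I/O port) entries and unicasting the stored flow to each subscriber in page order. A sketch with hypothetical structures:

```python
from dataclasses import dataclass

@dataclass
class MulticastPage:
    entries: list                              # (flow_id, io_port) per subscriber
    next_page: "MulticastPage | None" = None   # claim 98: link to the next page

def multicast(first_page: MulticastPage, flow_data: bytes, unicast_send):
    """Walk the pages first-to-second (claim 99), unicasting the flow of
    information elements out the assigned port for each subscriber."""
    page = first_page
    while page is not None:
        for flow_id, io_port in page.entries:
            unicast_send(flow_id, io_port, flow_data)  # claim 90: unicast unit
        page = page.next_page
```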
- 101. A method of multicasting data to multiple subscribers, the method comprising:
accessing a multicast table using a multicast unit to identify subscribers in the multicast table that should receive a flow of information elements, the multicast table including:
a list of subscribers to receive the flow of information elements, and an output port assignment for the flow of information elements; and sending a flow of information elements to identified subscribers through an input/output (I/O) port assigned to the flow of information elements by the multicast table using a unicast unit.
- 102. The method of claim 101 further comprising:
storing the flow of information elements in a memory location; wherein sending a flow of information elements comprises:
accessing the flow of information elements from the memory location to send the flow of information elements to the identified subscribers.
- 103. The method of claim 101 further comprising:
before accessing the multicast table, retrieving a multicast command from a multicast command queue for a flow of information elements to be transmitted to subscribers.
- 104. The method of claim 103, further comprising:
retrieving a forward processing instruction using the multicast command to determine the location of the multicast table corresponding to the flow of information elements to be transmitted to subscribers.
- 105. The method of claim 104, wherein the forward processing instruction includes a pointer indicating the location of the multicast table.
- 106. The method of claim 101, wherein the flow of information elements includes data cells, and wherein the multicast table includes virtual channel identifiers (VCI) and virtual path identifiers (VPI) for the flow of information elements.
- 107. The method of claim 101, wherein the flow of information elements includes data packets, and wherein the multicast table includes packet headers for the flow of information elements.
- 108. The method of claim 101, wherein the flow of information elements is transmitted in accordance with a common switch interface (CSIX) specification, and wherein the multicast table includes CSIX multicast extension headers for the flow of information elements.
- 109. The method of claim 101, wherein the multicast table comprises:
a first multicast page; and a second multicast page, wherein the first multicast page includes a pointer that links the first multicast page to the second multicast page.
- 110. The method of claim 109, wherein the flow of information elements is sent to subscribers identified in the first multicast page, then sent to subscribers identified in the second multicast page.
- 111. The method of claim 110, wherein the second multicast page is added when the list of subscribers in the first multicast page exceeds a size limit.
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims priority to earlier filed provisional applications: U.S. Provisional Application Serial No. 60/372,746, titled FORWARD PROCESSING UNIT, filed Apr. 14, 2002, and U.S. Provisional Application Serial No. 60/382,268, titled DATA FORWARDING ENGINE, filed May 20, 2002, both of which are incorporated herein by reference in their entirety.
Provisional Applications (2)
| Number | Date | Country |
| --- | --- | --- |
| 60372746 | Apr 2002 | US |
| 60382268 | May 2002 | US |