Claims
- 1. An apparatus capable of queuing and de-queuing data stored in a plurality of queues, the apparatus comprising:
a status storage device to track status for each of the plurality of queues;
a status cache to track status for a subset of the plurality of queues that are undergoing processing; and
a queuing engine to queue incoming data and de-queue outgoing data, wherein said queuing engine receives and updates the status for the subset of the plurality of queues from said status cache and receives and updates the status for remaining ones of the plurality of queues from said status storage device.
- 2. The apparatus of claim 1, wherein said status cache writes the status for each of the subset of the plurality of queues back to said status storage device upon completion of the processing for each respective queue.
- 3. The apparatus of claim 1, further comprising status pipelines to provide throughput between said status storage device and said queuing engine.
- 4. The apparatus of claim 3, wherein said status pipelines are part of said status storage device.
- 5. The apparatus of claim 1, wherein reading and writing of queue addresses to said status storage device are performed on alternate clock cycles.
- 6. The apparatus of claim 5, wherein the reading and the writing to said status storage device alternate between queuing and de-queuing.
- 7. The apparatus of claim 1, further comprising a data storage device to store data associated with each of the plurality of queues.
- 8. The apparatus of claim 7, further comprising data pipelines to provide throughput between said data storage device and said queuing engine.
- 9. The apparatus of claim 8, wherein said data pipelines are part of said data storage device.
- 10. The apparatus of claim 7, further comprising a data cache to maintain a copy of data being written or planned to be written to said data storage device, wherein the data from the cache will be used if the data is required before the data is written to the data storage device.
- 11. The apparatus of claim 7, wherein said data storage device is divided into a plurality of blocks and each queue is associated with at least one block.
- 12. The apparatus of claim 11, further comprising a link storage device to link blocks within the data storage device for queues having data in more than one block.
- 13. The apparatus of claim 12, wherein said status cache tracks the status for a subset of the plurality of queues that are undergoing at least some subset of reading a status associated with a selected queue, performing an action on the selected queue, and updating the status of the selected queue.
- 14. The apparatus of claim 13, wherein the action is at least one of adding data to said data storage device, adding a link to said link storage device, removing data from said data storage device, and removing a link from said link storage device.
- 15. The apparatus of claim 1, wherein said status storage device tracks at least some subset of head, tail and count for each queue.
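Claims 1-15 above recite a status storage device holding per-queue status (head, tail and count), a status cache that holds status only for queues currently undergoing processing, and a queuing engine that takes status from the cache when present and from the storage device otherwise. The C sketch below is one minimal software model of those structures; the sizes, the names (queue_status_t, read_status, write_back) and the linear cache lookup are illustrative assumptions, not limitations of the claims.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Per-queue status record tracking head, tail and count (claim 15). */
typedef struct {
    uint32_t head;   /* first block/entry of the queue */
    uint32_t tail;   /* last block/entry of the queue  */
    uint32_t count;  /* number of entries in the queue */
} queue_status_t;

#define NUM_QUEUES  1024u  /* number of queues tracked (assumed) */
#define CACHE_SLOTS 8u     /* queues that may be in flight at once (assumed) */

/* Status storage device: one status record per queue. */
static queue_status_t status_store[NUM_QUEUES];

/* Status cache: status only for queues currently undergoing processing. */
typedef struct {
    bool           valid;
    uint32_t       queue_id;
    queue_status_t status;
} status_cache_slot_t;

static status_cache_slot_t status_cache[CACHE_SLOTS];

/* The queuing engine takes status from the cache when the queue is being
 * processed and from the status storage device otherwise (claim 1). */
static queue_status_t read_status(uint32_t queue_id)
{
    for (size_t i = 0; i < CACHE_SLOTS; i++) {
        if (status_cache[i].valid && status_cache[i].queue_id == queue_id)
            return status_cache[i].status;
    }
    return status_store[queue_id];
}

/* When processing completes, the cached status is written back to the
 * status storage device and the slot is released (claim 2). */
static void write_back(size_t slot)
{
    if (status_cache[slot].valid) {
        status_store[status_cache[slot].queue_id] = status_cache[slot].status;
        status_cache[slot].valid = false;
    }
}
```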
- 16. In a store and forward device, a method for queuing and de-queuing data stored in a plurality of queues, the method comprising:
receiving a queue address for processing;
checking a status cache for a status of a queue associated with the queue address, wherein the status will be in the status cache if the queue is undergoing processing;
reading the status for the associated queue, wherein the status will be read from a status storage device if the status is not in the status cache;
processing the queue; and
writing an updated status for the queue in the status storage device.
- 17. The method of claim 16, wherein said checking includes checking the status cache for at least some subset of head, tail and count for each queue.
- 18. The method of claim 16, wherein said processing includes at least some subset of adding data to a data storage device, adding a link to a link storage device, removing data from the data storage device, and removing a link from the link storage device.
- 19. The method of claim 16, wherein, for each queue, said reading the status occurs in a first phase, said processing the queue occurs in a second phase, and said writing an updated status occurs in a third phase.
- 20. The method of claim 16, wherein said reading and said writing are performed on alternate clock cycles.
- 21. The method of claim 16, wherein said reading and said writing alternate between queuing and de-queuing operations.
- 22. The method of claim 16, further comprising writing all data being written or planned to be written to a data storage device to a data cache, wherein the data from the data cache will be used if the data is required before the data is written to the data storage device.
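Claims 16-22 above recite checking the status cache, falling back to the status storage device when the status is not cached, processing the queue, and writing the updated status back, in the three phases of claim 19. The sketch below walks one enqueue/de-queue pass through that flow; the single-slot cache, the queue count and the head/tail arithmetic are assumptions made only for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

/* Toy in-memory model of the method of claims 16-22; the single-slot
 * cache, queue count and head/tail arithmetic are assumptions. */
typedef struct { uint32_t head, tail, count; } qstatus_t;

enum { NUM_QUEUES = 64 };
static qstatus_t status_store[NUM_QUEUES];  /* status storage device */
static qstatus_t cached_status;             /* one-slot status cache */
static int       cached_qid = -1;

static void process_queue(uint32_t qid, bool is_enqueue)
{
    /* Check the status cache; the status is there if the queue is
     * already undergoing processing, otherwise read the storage device. */
    qstatus_t st = (cached_qid == (int)qid) ? cached_status
                                            : status_store[qid];

    /* Process the queue: adjust head/tail/count as an enqueue or a
     * de-queue would (adding/removing data and links is elided). */
    if (is_enqueue)    { st.tail++; st.count++; }
    else if (st.count) { st.head++; st.count--; }

    /* Write the updated status; it stays visible through the cache until
     * the write to the status storage device has completed. */
    cached_status = st;
    cached_qid    = (int)qid;
    status_store[qid] = st;
}

int main(void)
{
    process_queue(3, true);   /* enqueue on queue 3  */
    process_queue(3, false);  /* de-queue on queue 3 */
    return 0;
}
```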
- 23. A store and forward device for queuing and de-queuing of data stored in a plurality of queues, the device comprising:
a plurality of receivers to receive packets of data;
a storage medium to store the packets of data in a plurality of queues;
a plurality of transmitters to transmit the packets of data from the queues;
a status storage device to track status for each of the plurality of queues;
a status cache to track status for a subset of the plurality of queues that are undergoing processing; and
a queuing engine to queue incoming data and de-queue outgoing data, wherein said queuing engine receives and updates the status for the subset of the plurality of queues from said status cache and receives and updates the status for remaining ones of the plurality of queues from said status storage device.
- 24. The device of claim 23, wherein said status cache writes the status for each of the subset of the plurality of queues back to said status storage device upon completion of the processing for each respective queue.
- 25. The device of claim 23, further comprising a data cache to maintain a copy of data being written or planned to be written to said storage medium, wherein the data from the cache will be used if the data is required before the data is written to the storage medium.
- 26. The device of claim 23, wherein said storage medium is divided into a plurality of blocks and each queue is associated with at least one block.
- 27. The device of claim 26, further comprising a link storage device to link blocks within the storage medium for queues having data in more than one block.
- 28. The device of claim 23, wherein said receivers are Ethernet cards.
- 29. The device of claim 23, further comprising an optical backplane.
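Claims 26-27 above (like claims 11-14) divide the storage medium into blocks and use a link storage device to chain the blocks of a queue that occupies more than one block. The sketch below shows one way such a link table could be modeled in software; the block count, block size and the NO_LINK sentinel are assumptions made for the example.

```c
#include <stdint.h>

/* Storage medium divided into fixed-size blocks, with a separate link
 * table chaining the blocks of a queue that spans more than one block
 * (claims 26-27).  Block count/size and NO_LINK are assumptions. */
enum { NUM_BLOCKS = 4096, BLOCK_BYTES = 256 };
#define NO_LINK 0xFFFFFFFFu

static uint8_t  storage_medium[NUM_BLOCKS][BLOCK_BYTES]; /* packet data    */
static uint32_t link_store[NUM_BLOCKS];                  /* next-block ids */

/* Chain a newly filled block onto the end of a queue. */
static void link_block(uint32_t tail_block, uint32_t new_block)
{
    link_store[tail_block] = new_block;
    link_store[new_block]  = NO_LINK;
}

/* Walk a queue from its head block to its tail block via the link table. */
static uint32_t count_blocks(uint32_t head_block, uint32_t tail_block)
{
    uint32_t n = 0;
    for (uint32_t b = head_block; ; b = link_store[b]) {
        n++;
        if (b == tail_block || link_store[b] == NO_LINK)
            break;
    }
    return n;
}
```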
- 30. An apparatus capable of performing multiple simultaneous operations on a single queue, the apparatus comprising:
a queuing engine to schedule queue operations; and
a status cache to track operations being performed on queues, wherein a single queue may be associated with multiple simultaneous operations.
- 31. The apparatus of claim 30, wherein said queuing engine schedules read, process and write operations.
- 32. The apparatus of claim 30, wherein said queuing engine alternates read status and write status operations every clock cycle.
- 33. The apparatus of claim 30, wherein said queuing engine alternates between queuing and de-queuing operations.
- 34. The apparatus of claim 30, wherein said queuing engine reads status data in a first pipeline, modifies status data in a second pipeline, and writes updated status data in a third pipeline.
- 35. The apparatus of claim 30, wherein said status cache has a status bit associated with each possible operation.
- 36. The apparatus of claim 35, wherein each status bit is independent of the other status bits.
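Claims 30, 35 and 36 above recite a status cache whose entries carry one independent status bit per possible operation, so that several operations can be in flight on the same queue at once. The sketch below models such an entry as a bit mask; the particular operation set and the helper names are illustrative assumptions, not part of the claims.

```c
#include <stdbool.h>
#include <stdint.h>

/* Status-cache entry with one independent status bit per possible
 * operation (claims 35-36), letting several operations be in flight on
 * the same queue (claim 30).  The operation set is an assumption. */
enum {
    OP_READ_STATUS  = 1u << 0,  /* status read issued        */
    OP_ENQUEUE      = 1u << 1,  /* enqueue being processed   */
    OP_DEQUEUE      = 1u << 2,  /* de-queue being processed  */
    OP_WRITE_STATUS = 1u << 3   /* status write-back pending */
};

typedef struct {
    uint32_t queue_id;
    uint32_t op_bits;  /* each bit is set and cleared independently */
} cache_entry_t;

static inline void op_start(cache_entry_t *e, uint32_t op) { e->op_bits |= op; }
static inline void op_done(cache_entry_t *e, uint32_t op)  { e->op_bits &= ~op; }
static inline bool op_active(const cache_entry_t *e, uint32_t op)
{
    return (e->op_bits & op) != 0;
}
/* A single queue may show OP_ENQUEUE and OP_DEQUEUE at the same time;
 * neither bit interferes with the other. */
```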
- 37. A store and forward device for queuing and de-queuing of data stored in a plurality of queues, the device comprising:
a plurality of transceivers to receive and transmit data;
a storage medium to store the data in a plurality of queues;
a queuing engine to schedule operations on the queues; and
a status cache to track operations being performed on the queues, wherein a single queue may be associated with multiple simultaneous operations.
- 38. The device of claim 37, wherein said status cache has a status bit associated with each possible operation.
- 39. The device of claim 37, wherein said transceivers are SONET cards.
- 40. The device of claim 37, further comprising an optical backplane.
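Claims 5-6, 20-21 and 32-33 above recite reading and writing the status storage device on alternate clock cycles while alternating between queuing and de-queuing operations. The sketch below shows one possible fixed schedule with those properties; the four-cycle pattern and the names are assumptions made only for illustration.

```c
#include <stdint.h>

/* One possible fixed schedule with the properties of claims 5-6, 20-21
 * and 32-33: the status storage device is read on one cycle and written
 * on the next, and successive accesses alternate between queuing and
 * de-queuing work.  The 4-cycle repeating pattern is an assumption. */
typedef enum { ACCESS_READ, ACCESS_WRITE } access_t;
typedef enum { DIR_ENQUEUE, DIR_DEQUEUE } direction_t;

typedef struct {
    access_t    access;     /* read status or write status this cycle */
    direction_t direction;  /* serving the queuing or de-queuing side */
} slot_t;

/* Cycle 0: enqueue read, cycle 1: enqueue write,
 * cycle 2: de-queue read, cycle 3: de-queue write, then repeat. */
static slot_t schedule(uint64_t cycle)
{
    slot_t s;
    s.access    = (cycle & 1u) ? ACCESS_WRITE : ACCESS_READ;
    s.direction = (cycle & 2u) ? DIR_DEQUEUE  : DIR_ENQUEUE;
    return s;
}
```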
Parent Case Info
[0001] This application claims priority under 35 U.S.C. §119(e) of U.S. Provisional Application No. 60/367,523, entitled “Method and Apparatus for High-Speed Queuing and Dequeuing of data in a Switch or Router Using Caching of Queue State,” filed on Mar. 25, 2002, which is herein incorporated by reference but is not admitted to be prior art.
Provisional Applications (1)
| Number | Date | Country |
| --- | --- | --- |
| 60367523 | Mar 2002 | US |