Claims
- 1. A network system comprising:
a plurality of network processor interfaces for transmitting and receiving data cell sequences; a switch fabric interface; an ingress path providing a plurality of ingress queues between the plurality of network processor interfaces and the switch fabric interface, combining the transmitted data cells of the network processors into a single data cell sequence; and an egress path providing a plurality of egress queues and a memory controller between the switch fabric interface and the plurality of network processor interfaces for distributing data cells from a received data cell sequence to the respective network processor interfaces.
- 2. System according to claim 1, wherein the egress path comprises a first egress path handling control signals and a second egress path handling data signals.
- 3. System according to claim 1, wherein each network processor interface comprises a receiving interface and a transmitting interface.
- 4. System according to claim 3, wherein the ingress queues each have an input and an output, each ingress queue input being coupled with a respective transmitting network processor interface, and the ingress path further comprises a multiplexer coupled with the outputs of the plurality of ingress queues and the switch fabric interface.
- 5. System according to claim 4, further comprising an ingress output queue coupled between the multiplexer and the switch fabric interface.
- 6. System according to claim 1, wherein the egress path comprises a de-multiplexer coupled with the switch fabric interface and the plurality of egress queues.
- 7. System according to claim 1, wherein said memory controller comprises a memory interface and an egress path routing switch routing the received cells through a memory coupled with the memory controller, or directly to the network processor interfaces if no memory is coupled with the memory controller.
- 8. System according to claim 7, further comprising a first set of egress queues coupled between the de-multiplexer and a memory multiplexer coupled with a memory controller input, a memory de-multiplexer coupled with a memory controller output, a second set of egress queues coupled between the memory de-multiplexer and the network processor interfaces.
- 9. System according to claim 8, wherein the egress path comprises a first egress path handling control signals and a second egress path handling data signals, wherein the first egress path comprises a third set of egress queues coupled between the de-multiplexer and the network processors and the second egress path comprises the first and second sets of egress queues, and wherein a plurality of output multiplexers is coupled between the network processors and the first and second egress paths.
- 10. System according to claim 8, wherein the first and second sets of egress queues each comprise two queues associated with each network processor interface.
- 11. System according to claim 7, wherein the memory interface is configured to couple with an error correcting memory.
- 12. System according to claim 7, wherein the memory interface is configured to couple with a DDR SRAM.
- 13. System according to claim 11, wherein the memory interface is configured to couple with a QDR ECC SRAM.
- 14. System according to claim 11, wherein the error correcting memory is an in-band memory.
- 15. System according to claim 1, wherein each queue comprises an associated watermark register.
- 16. System according to claim 15, further comprising a control unit for controlling the ingress and egress queues.
- 17. System according to claim 15, further comprising a host-subsystem interface coupled with the control unit.
- 18. System according to claim 1, wherein the network processor interface is provided on a line card having five network processor ports.
- 19. System according to claim 18, comprising a plurality of line cards each having five network processor ports.
- 20. System according to claim 19, wherein the switch fabric interface has a higher bandwidth than one of the plurality of network processor interfaces, and the number of network processor interfaces is adapted to approximately match the bandwidth of the switch fabric interface.
- 21. A method of controlling the ingress and egress data paths of a network processor interface system, said method comprising the steps of:
providing a plurality of network processor interfaces for transmitting and receiving data cell sequences; providing a switch fabric interface; providing an ingress path having a plurality of ingress queues between the plurality of network processor interfaces and the switch fabric interface, combining the transmitted data cells of the network processors into a single data cell sequence; and providing an egress path having a plurality of egress queues and a memory controller between the switch fabric interface and the plurality of network processor interfaces for distributing data cells from a received data cell sequence to the respective network processor interfaces.
- 22. Method according to claim 21, further comprising the steps of:
buffering transmitted data cells in the ingress queues, combining the content of the ingress queues, and buffering the combined data cells in an ingress output queue.
- 23. Method according to claim 21, further comprising the step of:
splitting the egress path into a first path handling control data cells and a second path handling data cells.
- 24. Method according to claim 21, further comprising the step of:
if a memory is coupled to the memory interface, storing received data cells in the memory, otherwise moving the received data cells directly to the respective network processor interface.
- 25. Method according to claim 23, further comprising the steps of:
providing at least two egress queues for each network processor interface, and selecting which queue is coupled with the associated network processor interface.
- 26. Method according to claim 24, further comprising the steps of:
generating a control data cell by the memory controller, and routing the generated control cell through the first egress path.
- 27. Method according to claim 21, further comprising the steps of:
monitoring the filling level of the queues, and generating control signals according to the filling level.
- 28. Method according to claim 27, further comprising the step of:
discarding data cells according to their status when a predetermined filling level is reached within a queue.
- 29. Method according to claim 21, further comprising the step of:
distributing data cells according to a priority scheme included in the data cells.
- 30. Method according to claim 21, further comprising:
distributing data cells according to a Quality of Service scheme included in the data cells.
- 31. Method according to claim 21, wherein storage area network and networking protocols are processed.
- 32. Method according to claim 21, wherein the switch fabric interface has a higher bandwidth than one of the plurality of network processor interfaces, and the method further comprises the step of providing a number of network processor interfaces adapted for combining the bandwidth of the network processors to approximately match the bandwidth of the switch fabric interface.
- 33. Method according to claim 32, wherein the bandwidth of the switch fabric interface is lower than the combined bandwidth of the network processor interfaces.
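To make the ingress path of claims 1, 4, 5 and 22 concrete, the following is a minimal Python sketch of per-interface ingress queues combined by a multiplexer into a single ingress output queue. The claims do not specify a scheduling discipline, so round-robin is assumed here; class and method names are illustrative, not taken from the specification.

```python
from collections import deque


class IngressPath:
    """Illustrative model: one FIFO per network processor interface,
    combined by a round-robin multiplexer into a single output queue
    feeding the switch fabric interface."""

    def __init__(self, num_interfaces):
        self.ingress_queues = [deque() for _ in range(num_interfaces)]
        self.output_queue = deque()  # buffers the combined cell sequence

    def receive(self, interface_id, cell):
        # Buffer a transmitted data cell in its interface's ingress queue.
        self.ingress_queues[interface_id].append(cell)

    def multiplex(self):
        # One round-robin pass: move one cell from each non-empty
        # ingress queue into the single ingress output queue.
        for q in self.ingress_queues:
            if q:
                self.output_queue.append(q.popleft())

    def to_switch_fabric(self):
        # Drain the combined sequence toward the switch fabric interface.
        while self.output_queue:
            yield self.output_queue.popleft()
```

In this sketch the ingress output queue of claim 5 decouples the multiplexer from the switch fabric interface, so cells already combined can be drained at the fabric's own rate.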
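The watermark mechanism of claims 15, 27 and 28 can likewise be sketched: each queue carries an associated watermark register, the filling level is monitored against it to generate a control signal, and cells are discarded according to their status once the watermark is reached. The two-state congestion flag and the `discardable` status bit are assumptions for illustration; the claims leave the exact status encoding open.

```python
from collections import deque


class WatermarkQueue:
    """Illustrative queue with an associated watermark register:
    crossing the watermark raises a control signal, after which
    cells marked discardable are dropped rather than enqueued."""

    def __init__(self, watermark, capacity):
        self.cells = deque()
        self.watermark = watermark
        self.capacity = capacity
        self.congested = False  # control signal derived from filling level

    def enqueue(self, cell, discardable=False):
        # Monitor the filling level and generate the control signal.
        self.congested = len(self.cells) >= self.watermark
        if self.congested and discardable:
            return False  # discard according to the cell's status
        if len(self.cells) >= self.capacity:
            return False  # hard limit: queue cannot accept the cell
        self.cells.append(cell)
        return True
```

A control unit as in claim 16 would read the `congested` signal of every ingress and egress queue and, for example, throttle the upstream network processors before the hard capacity limit is hit.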
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application is related to U.S. patent application Ser. No. ______, titled “Caching System and Method for a Network Storage System” by Lin-Sheng Chiou, Mike Witkowski, Hawkins Yao, Cheh-Suei Yang, and Sompong Paul Olarig, which was filed on Dec. 14, 2000 and which is incorporated herein by reference in its entirety for all purposes; U.S. patent application Ser. No. ______, [attorney docket number 069099.0102/B2], titled “System, Apparatus and Method for Address Forwarding for a Computer Network” by Hawkins Yao, Cheh-Suei Yang, Richard Gunlock, Michael L. Witkowski, and Sompong Paul Olarig, which was filed on Oct. 26, 2001 and which is incorporated herein by reference in its entirety for all purposes; U.S. patent application Ser. No. ______, titled “Network Processor to Switch Fabric Bridge Implementation” by Sompong Paul Olarig, Mark Lyndon Oelke, and John E. Jenne, which is incorporated herein by reference in its entirety for all purposes; U.S. patent application Ser. No. ______, [attorney docket number 069099.0106/B6-A], titled “XON/XOFF Flow Control for Computer Network” by Hawkins Yao, John E. Jenne, and Mark Lyndon Oelke, which is being filed concurrently on Dec. 31, 2001, and which is incorporated herein by reference in its entirety for all purposes; and U.S. patent application Ser. No. ______, [attorney docket number 069099.0107/B6-B], titled “Buffer to Buffer Credit Flow Control for Computer Network” by John E. Jenne, Mark Lyndon Oelke and Sompong Paul Olarig, which is being filed concurrently on Dec. 31, 2001, and which is incorporated herein by reference in its entirety for all purposes.