Claims
- 1. A network content delivery system, comprising:
at least one storage processor;
at least one application processor performing endpoint functionality processing;
a system interface connection configured to be coupled to a network;
at least one network processor, the network processor coupled to the system interface connection to receive data from the network; and
an interconnection between the storage processor, the application processor and the network processor so that the network processor may analyze data provided from the network, process the data at least in part, and then forward the data to the interconnection so that other processing may be performed on the data within the system,
wherein the interconnection enables a data content delivery path between the storage processor and the network processor for providing data from a storage system to the network, the data content delivery path by-passing the application processor.
- 2. The system of claim 1, wherein the system further comprises an additional processor, the additional processor provided in the data content delivery path between the storage processor and the network processor.
- 3. The system of claim 1, wherein the application processor is in a content request path between the network processor and the storage processor.
- 4. The system of claim 1, wherein the storage processor, application processor and the network processor are configured as an asymmetric multi-processor staged pipelined system.
- 5. The system of claim 1, wherein the storage processor, application processor and the network processor communicate in a peer to peer environment.
- 6. The system of claim 5, wherein the application processor is in a content request path between the network processor and the storage processor.
- 7. The system of claim 6, wherein the system further comprises an additional processor, the additional processor provided in the data content delivery path between the storage processor and the network processor.
- 8. The system of claim 7, wherein the additional processor is also in the content request path between the network processor and the storage processor.
- 9. The system of claim 8, wherein the interconnection comprises a distributed interconnection.
- 10. The system of claim 9, wherein the distributed interconnection comprises a switch fabric.
- 11. The system of claim 5, wherein the interconnection comprises a distributed interconnection.
- 12. The system of claim 11, wherein the distributed interconnection comprises a switch fabric.
- 13. The system of claim 1, wherein the interconnection comprises a switch fabric.
- 14. A method of operating a content delivery system, the method comprising:
providing a network processor within the content delivery system, the network processor being configured to be coupled to an interface which couples the content delivery system to a network;
processing data passing through the interface with the network processor;
forwarding data from the network processor to a distributed interconnection;
coupling a storage processor to the distributed interconnection, the storage processor configured to also be coupled to a storage system;
coupling an application processor to the distributed interconnection, the application processor performing at least some processing upon incoming network data; and
providing outgoing content through an outgoing content data path from the storage processor to the network processor, the outgoing content data path by-passing the application processor.
- 15. The method of claim 14, wherein the network processor analyzes headers of data packets transmitted to the content delivery system from the network.
- 16. The method of claim 15, the method further comprising configuring the network processor, the storage processor and the application processor in a peer to peer computing environment.
- 17. The method of claim 16, wherein the network processor, the storage processor and the application processor are configured as an asymmetric multi-processor staged pipeline system.
- 18. The method of claim 17, the method further comprising operating the content delivery system in a staged pipeline processing manner.
- 19. The method of claim 14, wherein the method further comprises providing an additional processor in the outgoing content data path between the storage processor and the network processor.
- 20. The method of claim 19, wherein the method further comprises providing the application processor in a content request path between the network processor and the storage processor.
- 21. The method of claim 14, wherein the method further comprises providing the application processor in a content request path between the network processor and the storage processor.
- 22. The method of claim 14, the method further comprising configuring the network processor, the storage processor and the application processor in a peer to peer computing environment.
- 23. The method of claim 14, wherein the content delivery system comprises a plurality of system processors, the method further comprising configuring the network processor and the plurality of system processors in a peer to peer computing environment.
- 24. The method of claim 23, wherein the method further comprises providing the application processor in a content request path between the network processor and the storage processor.
- 25. The method of claim 24, wherein the method further comprises providing an additional processor in the outgoing content data path between the storage processor and the network processor.
- 26. The method of claim 25, wherein the method further comprises providing the additional processor in the content request path between the storage processor and the network processor.
- 27. The method of claim 14, wherein the data forwarded by the network processor is forwarded through a switch fabric.
- 28. A network connectable content delivery system, comprising:
a first processor engine;
a second processor engine, the second processor engine being assigned types of tasks different from the types of tasks assigned to the first processor engine;
a storage processor engine, the storage processor engine being assigned types of tasks that are different from the types of tasks assigned to the first and second processor engines, the storage processor engine being configured to be coupled to a content storage system;
a distributed interconnection coupled to the first, second and storage processor engines, the tasks of the first, second and storage processor engines being assigned such that the system operates in a staged pipeline manner through the distributed interconnection;
a content request path, the content request path including the first, second and storage processor engines; and
a content delivery path, the content delivery path by-passing the second processor engine.
- 29. The system of claim 28, wherein the system further comprises a third processor engine, the third processor engine provided in the content delivery path between the storage processor engine and the first processor engine.
- 30. The system of claim 29, wherein the third processor engine is also provided in the content request path.
- 31. The system of claim 28, wherein the system further comprises a third processor engine, the third processor engine provided in the content request path.
- 32. The system of claim 31, wherein the third processor engine is a transport/protocol processor engine.
- 33. The system of claim 28, wherein the system is a network endpoint system.
- 34. The system of claim 28, wherein the first processor engine is a network interface engine comprising a network processor.
- 35. The system of claim 34, wherein the second processor engine is an application processor engine.
- 36. The system of claim 35, wherein the system further comprises a third processor engine, the third processor engine provided in the content delivery path between the storage processor engine and the network interface engine.
- 37. The system of claim 36, wherein the third processor engine is also provided in the content request path.
- 38. The system of claim 35, wherein the system further comprises a third processor engine, the third processor engine provided in the content request path.
- 39. The system of claim 35, wherein at least one of the network interface engine, the application processor engine or the storage processor engine comprises multiple processor modules operating in parallel.
- 40. The system of claim 39, wherein the application processor engine comprises multiple processor modules operating in parallel and the storage processor engine comprises multiple processor modules operating in parallel.
- 41. The system of claim 40, wherein the network interface engine, the application processor engine, and the storage processor engine communicate in a peer to peer fashion.
- 42. The system of claim 41, wherein the distributed interconnection is a switch fabric.
- 43. The system of claim 42, wherein the system further comprises a third processor engine, the third processor engine provided in the content delivery path between the storage processor engine and the network interface engine.
- 44. The system of claim 43, wherein the third processor engine is also provided in the content request path.
- 45. The system of claim 42, wherein the system further comprises a third processor engine, the third processor engine provided in the content request path.
- 46. The system of claim 28, wherein the distributed interconnection is a switch fabric.
- 47. The system of claim 46, wherein at least one of the network interface processor engine, the application processor engine, or the storage processor engine comprises multiple processor modules operating in parallel.
- 48. The system of claim 47, wherein the application processor engine comprises multiple processor modules operating in parallel and the storage processor engine comprises multiple processor modules operating in parallel.
- 49. The system of claim 47, wherein the network interface processor engine comprises a network processor.
- 50. The system of claim 49, wherein the network interface processor engine, the application processor engine, and the storage processor engine communicate in a peer to peer fashion.
- 51. A method of providing content from a content delivery system to a network, comprising:
providing a first processor engine;
providing a second processor engine;
providing a storage processor engine, the storage processor engine being configured to be coupled to a content storage system;
providing a distributed interconnection coupled to the first, second and storage processor engines, the tasks of the first, second and storage processor engines being assigned such that the system operates in a staged pipeline manner through the distributed interconnection;
routing content requests through a request path that includes the first, second and storage processor engines; and
routing content to be delivered to the network through a delivery path that by-passes the second processor engine.
- 52. The method of claim 51, wherein the method further comprises providing a third processor engine in the delivery path.
- 53. The method of claim 52, wherein the third processor engine performs at least some protocol processing.
- 54. The method of claim 52, wherein the method further comprises providing the third processor engine in the request path.
- 55. The method of claim 51, wherein the method further comprises providing a third processor engine in the request path.
- 56. The method of claim 51, wherein the first processor engine is a network interface processor engine and the second processor engine is an application processor engine.
- 57. The method of claim 56, wherein the network interface processor engine comprises a network processor.
- 58. The method of claim 57, wherein the method further comprises providing a third processor engine in the delivery path.
- 59. The method of claim 58, wherein the third processor engine performs at least some protocol processing.
- 60. The method of claim 58, wherein the method further comprises providing the third processor engine in the request path.
- 61. The method of claim 57, wherein the method further comprises providing a third processor engine in the request path.
- 62. The method of claim 57, wherein the distributed interconnection is a switch fabric.
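For readers mapping the claimed architecture to software, the following is a minimal sketch of the split request/delivery paths recited in independent claims 1, 14, 28 and 51: a network processor engine, an application processor engine, and a storage processor engine exchange messages peer to peer over a distributed interconnection (modeled here as Go channels standing in for a switch fabric), with content requests flowing network → application → storage and content data flowing storage → network, by-passing the application engine. All type and function names are illustrative assumptions, not terms from the claims or specification.

```go
package main

import "fmt"

// Messages carried over the modeled distributed interconnection.
type request struct {
	id  int
	url string
}
type content struct {
	id   int
	data string
}

// fabric models the distributed interconnection (e.g. a switch fabric)
// as one queue per destination engine: each engine writes to a peer's
// queue and reads only its own, i.e. peer-to-peer message passing.
type fabric struct {
	toApp     chan request
	toStorage chan request
	toNet     chan content
}

// networkEngine: forwards incoming requests up the request path and,
// independently, drains the delivery path back out to the network.
// (Claim 1: the content delivery path by-passes the application processor.)
func networkEngine(f *fabric, in <-chan request, done chan<- struct{}) {
	go func() {
		for req := range in {
			f.toApp <- req // request path: network -> application
		}
		close(f.toApp)
	}()
	for c := range f.toNet { // delivery path terminates here
		fmt.Printf("net: deliver %d (%s)\n", c.id, c.data)
	}
	close(done)
}

// applicationEngine: endpoint/application processing on the request path
// only; it never handles outgoing content.
func applicationEngine(f *fabric) {
	for req := range f.toApp {
		f.toStorage <- req // request path: application -> storage
	}
	close(f.toStorage)
}

// storageEngine: fetches content and pushes it directly to the network
// engine, by-passing the application engine (the claimed delivery path).
func storageEngine(f *fabric) {
	for req := range f.toStorage {
		f.toNet <- content{id: req.id, data: "bytes of " + req.url}
	}
	close(f.toNet)
}

func main() {
	f := &fabric{
		toApp:     make(chan request, 8),
		toStorage: make(chan request, 8),
		toNet:     make(chan content, 8),
	}
	in := make(chan request, 8)
	done := make(chan struct{})

	// Each engine runs as its own stage; together they form the
	// asymmetric, staged pipeline recited in claims 4, 17 and 28.
	go applicationEngine(f)
	go storageEngine(f)
	go networkEngine(f, in, done)

	for i, u := range []string{"/a.html", "/b.jpg"} {
		in <- request{id: i, url: u}
	}
	close(in)
	<-done
}
```

The third, transport/protocol processor engine of claims 2, 29, 32 and 52-53 would appear in this sketch as one more stage inserted between storageEngine and networkEngine on the delivery path, performing protocol processing on outgoing content before transmission.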
Parent Case Info
[0001] This application claims priority from Provisional Application Serial No. 60/187,211, which was filed Mar. 30, 2000 and is entitled “SYSTEM AND APPARATUS FOR INCREASING FILE SERVER BANDWIDTH,” from Provisional Application Serial No. 60/246,343, which was filed Nov. 7, 2000 and is entitled “NETWORK CONTENT DELIVERY SYSTEM WITH PEER TO PEER PROCESSING COMPONENTS,” from Provisional Application Serial No. 60/246,335, which was filed Nov. 7, 2000 and is entitled “NETWORK SECURITY ACCELERATOR,” from Provisional Application Serial No. 60/246,443, which was filed Nov. 7, 2000 and is entitled “METHODS AND SYSTEMS FOR THE ORDER SERIALIZATION OF INFORMATION IN A NETWORK PROCESSING ENVIRONMENT,” from Provisional Application Serial No. 60/246,373, which was filed Nov. 7, 2000 and is entitled “INTERPROCESS COMMUNICATIONS WITHIN A NETWORK NODE USING SWITCH FABRIC,” from Provisional Application Serial No. 60/246,444, which was filed Nov. 7, 2000 and is entitled “NETWORK TRANSPORT ACCELERATOR,” and from Provisional Application Serial No. 60/246,372, which was filed Nov. 7, 2000 and is entitled “SINGLE CHASSIS NETWORK ENDPOINT SYSTEM WITH NETWORK PROCESSOR FOR LOAD BALANCING,” the disclosures of each being incorporated herein by reference.
Provisional Applications (7)
| Number   | Date     | Country |
| -------- | -------- | ------- |
| 60187211 | Mar 2000 | US      |
| 60246343 | Nov 2000 | US      |
| 60246335 | Nov 2000 | US      |
| 60246443 | Nov 2000 | US      |
| 60246373 | Nov 2000 | US      |
| 60246444 | Nov 2000 | US      |
| 60246372 | Nov 2000 | US      |