Claims
- 1. A network endpoint system for providing network content via a network, the content being stored in a data storage system, comprising:
a network unit programmed to receive incoming requests for data; an application unit programmed to respond to the requests; a storage unit programmed to receive the requests from the application unit, to forward the data requests to the data storage system, to format the data received from the data storage system into a protocol suitable for transport on the network, and to communicate the data directly to the network unit; and an interconnection medium for directly connecting the network unit, the application unit, and the storage unit.
- 2. The system of claim 1, wherein the interconnection medium is a bus.
- 3. The system of claim 1, wherein the interconnection medium is a switch fabric.
- 4. The system of claim 1, wherein the network is the Internet and the network unit is further programmed to perform at least part of the transport protocol processing.
- 5. The system of claim 4, wherein the network unit is further programmed to perform all protocol layer processing such that it passes data to the application unit at the socket level.
- 6. The system of claim 1, wherein the network unit has a network processor for executing its programming.
- 7. The system of claim 1, further comprising a management unit programmed to write data to the data storage system.
- 8. The system of claim 1, wherein incoming data requests and outgoing data follow separate paths within the endpoint system.
- 9. The system of claim 1, wherein the network unit is further programmed to perform security tasks.
- 10. The system of claim 1, wherein the network unit is further programmed to detect failures within the endpoint system.
- 11. The system of claim 1, wherein the network unit performs load balancing tasks.
- 12. A method for processing network data at a network endpoint that provides network content via a network, comprising the steps of:
using a network unit to receive incoming requests for data; using an application unit to respond to the requests; using a storage unit to send data requests to a data storage system, to format the data received from the data storage system into a protocol suitable for transport on the network, and to communicate the data directly to the network unit; and directly connecting the network unit, the application unit, and the storage unit via an interconnection medium.
- 13. The method of claim 12, wherein the network unit has a network processor for executing its programming.
- 14. The method of claim 12, wherein the interconnection medium is a bus.
- 15. The method of claim 12, wherein the interconnection medium is a switch fabric.
- 16. The method of claim 12, wherein the network is the Internet and the network unit performs at least part of transport protocol processing.
- 17. The method of claim 12, wherein the network unit performs all protocol layer processing such that it passes data to the application unit at the socket level.
- 18. The method of claim 12, further comprising the step of using the network unit to detect failures of the processing units.
- 19. The method of claim 12, wherein incoming data requests and outgoing data follow separate paths within the endpoint system.
- 20. The method of claim 12, further comprising the step of using the network unit to perform security tasks.
- 21. The method of claim 12, wherein the network unit performs load balancing tasks.
- 22. A network endpoint system, comprising:
a plurality of system processors; a system interface connection configured to be coupled to a network; at least one network processor, the network processor coupled to the system interface connection to receive data from the network; and an interconnection between the plurality of system processors and the network processor so that the network processor may analyze data provided from the network and process the data at least in part and then forward the data to the interconnection so that other processing may be performed on the data within the system, wherein the plurality of system processors and the network processor are configured to communicate across the interconnection as peers in a peer to peer environment.
- 23. The network endpoint system of claim 22, wherein the plurality of system processors comprise a storage processor.
- 24. The network endpoint system of claim 22, wherein the plurality of system processors comprise an application processor.
- 25. The network endpoint system of claim 22, wherein the system is configured as an asymmetric multi-processor system.
- 26. The network endpoint system of claim 22, wherein the plurality of system processors comprise a storage processor and an application processor.
- 27. The network endpoint system of claim 26, wherein the interconnection comprises a distributed interconnection.
- 28. The network endpoint system of claim 27, wherein the distributed interconnection comprises a switch fabric.
- 29. The network endpoint system of claim 22, wherein the plurality of system processors comprises a plurality of storage processors.
- 30. The network endpoint system of claim 22, wherein the plurality of system processors comprises a plurality of application processors.
- 31. The network endpoint system of claim 30, wherein the plurality of system processors comprises a plurality of storage processors.
- 32. The network endpoint system of claim 31, wherein the interconnection comprises a distributed interconnection.
- 33. The network endpoint system of claim 32, wherein the distributed interconnection comprises a switch fabric.
- 34. The network endpoint system of claim 22, wherein the interconnection comprises a switch fabric.
- 35. The network endpoint system of claim 22, wherein the network processor filters data incoming to the network endpoint system from the network.
- 36. The network endpoint system of claim 22, the network processor enabling accelerated system performance.
- 37. The network endpoint system of claim 22, the network endpoint system being a content delivery system.
- 38. The network endpoint system of claim 37, the network endpoint system providing accelerated content delivery.
- 39. A method of operating a network endpoint system, the method comprising:
providing a network processor within the network endpoint system, the network processor being configured to be coupled to an interface which couples the network endpoint system to a network; processing data passing through the interface with the network processor; forwarding data from the network processor to a distributed interconnection; coupling a plurality of system processors to the distributed interconnection; processing data forwarded by the network processor with the plurality of system processors; and communicating between the network processor and the plurality of system processors as peers in a peer to peer environment.
- 40. The method of claim 39, the network processor and the plurality of system processors being configured in an asymmetric multi-processor manner.
- 41. The method of claim 40, the method further comprising operating the network endpoint system in a staged pipeline processing manner.
- 42. The method of claim 41, the plurality of system processors comprising a storage processor and an application processor.
- 43. The method of claim 41, wherein the endpoint functionality is content delivery.
- 44. The method of claim 43, further comprising accelerating the content delivery of the network endpoint system.
- 45. The method of claim 39, the network processor performing filter functions upon the data passing through the interface.
- 46. The method of claim 39, the distributed interconnection being a switch fabric.
- 47. The method of claim 41, wherein the system is a content delivery system, the method further comprising accelerating the content delivery of the network endpoint system.
- 48. The method of claim 39, the plurality of system processors comprising a plurality of storage processors.
- 49. The method of claim 48, the plurality of system processors comprising a plurality of application processors.
- 50. The method of claim 49, the distributed interconnection being a switch fabric.
- 51. The method of claim 39, the plurality of system processors comprising a plurality of application processors.
- 52. A network connectable computing system, comprising:
a first processor engine; a second processor engine, the second processor engine being assigned types of tasks different from the types of tasks assigned to the first processor engine; a third processor engine, the third processor engine being assigned types of tasks that are different from the types of tasks assigned to the first and second processor engines; and a distributed interconnection coupled to the first, second and third processor engines, the tasks of the first, second and third processor engines being assigned such that the system operates in a staged pipeline manner through the distributed interconnection, wherein peer to peer communication between the first, second, and third processor engines is enabled through the distributed interconnection.
- 53. The system of claim 52, wherein the system is a network endpoint system.
- 54. The system of claim 52, wherein the first processor engine is a network interface engine comprising a network processor.
- 55. The system of claim 54, wherein the second processor engine is an application processor engine and the third processor engine is a storage processor engine.
- 56. The system of claim 55, wherein at least one of the first, second or third processor engines comprises multiple processor modules operating in parallel.
- 57. The system of claim 56, wherein the multiple processor modules also communicate in a peer to peer environment.
- 58. The system of claim 56, wherein the application processor engine comprises multiple processor modules operating in parallel and the storage processor engine comprises multiple processor modules operating in parallel.
- 59. The system of claim 58, wherein the multiple processor modules also communicate in a peer to peer environment.
- 60. The system of claim 58, wherein the network interface processor engine, the application processor engine, and the storage processor engine communicate in a peer to peer fashion.
- 61. The system of claim 60, wherein the distributed interconnect is a switch fabric.
- 62. The system of claim 52, wherein the distributed interconnect is a switch fabric.
- 63. The system of claim 62, wherein the second processor engine is an application processor engine and the third processor engine is a storage processor engine.
- 64. The system of claim 63, wherein at least one of the first, second or third processor engines comprises multiple processor modules operating in parallel.
- 65. The system of claim 64, wherein the multiple processor modules also communicate in a peer to peer environment.
- 66. The system of claim 64, wherein the application processor engine comprises multiple processor modules operating in parallel and the storage processor engine comprises multiple processor modules operating in parallel.
- 67. The system of claim 66, wherein the multiple processor modules also communicate in a peer to peer environment.
- 68. The system of claim 64, wherein the first processor engine is a network interface processor engine comprising a network processor.
- 69. The system of claim 68, wherein the network interface processor engine, the application processor engine, and the storage processor engine communicate in a peer to peer fashion.
- 70. A network connectable content delivery system, comprising:
a first processor engine; a second processor engine, the second processor engine being assigned types of tasks different from the types of tasks assigned to the first processor engine; a storage processor engine, the storage processor engine being assigned types of tasks that are different from the types of tasks assigned to the first and second processor engines, the storage processor engine being configured to be coupled to a content storage system; and a distributed interconnection coupled to the first, second and storage processor engines, wherein peer to peer communication between the first, second, and storage processor engines is enabled through the distributed interconnection.
- 71. The system of claim 70, wherein the system is a network endpoint system.
- 72. The system of claim 70, wherein the first processor engine is a network interface engine comprising a network processor.
- 73. The system of claim 72, wherein the second processor engine is an application processor engine.
- 74. The system of claim 73, wherein at least one of the first, second or storage processor engines comprises multiple processor modules operating in parallel.
- 75. The system of claim 74, wherein the multiple processor modules also communicate in a peer to peer environment.
- 76. The system of claim 74, wherein the application processor engine comprises multiple processor modules operating in parallel and the storage processor engine comprises multiple processor modules operating in parallel.
- 77. The system of claim 76, wherein the multiple processor modules also communicate in a peer to peer environment.
- 78. The system of claim 76, wherein the distributed interconnect is a switch fabric.
- 79. The system of claim 70, wherein the distributed interconnect is a switch fabric.
- 80. The system of claim 79, wherein the second processor engine is an application processor engine.
- 81. The system of claim 80, wherein at least one of the first, second or storage processor engines comprises multiple processor modules operating in parallel.
- 82. The system of claim 81, wherein the multiple processor modules also communicate in a peer to peer environment.
- 83. The system of claim 81, wherein the application processor engine comprises multiple processor modules operating in parallel and the storage processor engine comprises multiple processor modules operating in parallel.
- 84. The system of claim 83, wherein the multiple processor modules also communicate in a peer to peer environment.
- 85. The system of claim 81, wherein the first processor engine is a network interface processor engine comprising a network processor.
- 86. A network connectable content delivery system, comprising:
a first processor engine; a second processor engine, the second processor engine being assigned types of tasks different from the types of tasks assigned to the first processor engine; a storage processor engine, the storage processor engine being assigned types of tasks that are different from the types of tasks assigned to the first and second processor engines, the storage processor engine being configured to be coupled to a content storage system; and a distributed interconnection coupled to the first, second and storage processor engines, the tasks of the first, second and storage processor engines being assigned such that the system operates in a staged pipeline manner through the distributed interconnection, wherein at least one of the first or second processor engines performs system management functions so as to off-load management functions from the other processor engines, and wherein peer to peer communication between the first, second, and storage processor engines is enabled through the distributed interconnection.
- 87. The system of claim 86, wherein the first processor engine is a storage management processor engine that performs at least some of the off-loaded management functions.
- 88. The system of claim 86, wherein the first processor engine is a network interface processor engine that performs at least some of the off-loaded management functions.
- 89. The system of claim 88, wherein the network interface processor engine comprises a network processor.
- 90. The system of claim 86, wherein the second processor engine is an application processor engine, wherein at least some system management functions are off-loaded from both the storage processor engine and the application processor engine.
- 91. The system of claim 86, wherein the system management functions comprise prioritizing data flow between the peers of the peer to peer environment.
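The staged pipeline recited in the independent claims (a network unit receives requests, an application unit responds to them, and a storage unit formats content and returns it directly to the network unit over an interconnection medium) can be illustrated with a minimal sketch. All names below are hypothetical; the claims define no concrete API, and plain function calls stand in for the distributed interconnection (e.g. a bus or switch fabric).

```python
# Hypothetical stand-in for the data storage system of claim 1.
CONTENT_STORE = {"/index.html": b"<html>hello</html>"}


def storage_unit(path: str) -> bytes:
    """Fetch content and format it into a transport-ready reply; per claim 1,
    this formatted data is communicated directly to the network unit."""
    content = CONTENT_STORE.get(path, b"not found")
    return b"HTTP/1.0 200 OK\r\n\r\n" + content


def application_unit(path: str) -> bytes:
    """Respond to a request by forwarding the data request to the storage unit."""
    return storage_unit(path)


def network_unit(raw_request: str) -> bytes:
    """Receive an incoming request for data and hand it to the application
    unit; the reply travels on the storage-to-network path, modeled here
    simply as the return value."""
    path = raw_request.split()[1]  # e.g. "GET /index.html HTTP/1.0"
    return application_unit(path)


reply = network_unit("GET /index.html HTTP/1.0")
```

Note how the request flows network → application → storage, while the formatted reply bypasses the application unit on the way back, which is one way to read claim 8's separate paths for incoming requests and outgoing data.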
Parent Case Info
[0001] This application claims priority from Provisional Application Ser. No. 60/246,343, filed on Nov. 7, 2000, entitled “NETWORK CONTENT DELIVERY SYSTEM WITH PEER TO PEER PROCESSING COMPONENTS,” and from Provisional Application Ser. No. 60/187,211, filed on Mar. 3, 2000, entitled “SYSTEM AND APPARATUS FOR INCREASING FILE SERVER BANDWIDTH,” the disclosures of each of which are incorporated herein by reference.
Provisional Applications (2)
| Number | Date | Country |
| --- | --- | --- |
| 60246343 | Nov 2000 | US |
| 60187211 | Mar 2000 | US |