Claims
- 1. A method of using a switch fabric to communicate control data between different processes at a network node, comprising the steps of:
connecting a first processor to a switch fabric, using a first switch fabric interface; connecting a second processor to the switch fabric, using a second switch fabric interface; and using the switch fabric interfaces to recognize data units as containing data messages or control messages.
- 2. The method of claim 1, wherein the first processor executes security tools.
- 3. The method of claim 1, wherein the first processor performs network transport processing.
- 4. The method of claim 1, wherein the first processor performs load balancing tasks.
- 5. The method of claim 1, wherein the network node is an endpoint node.
- 6. The method of claim 1, wherein the network node is a content provider node.
- 7. The method of claim 1, further comprising the step of using a segmentation and reassembly unit within the switch fabric interface to change the data unit size from a size appropriate for the switch fabric to a size appropriate for one of the processors.
- 8. The method of claim 1, further comprising the step of using a bus interface within the switch fabric interface to connect to a bus associated with at least one of the processors.
- 9. The method of claim 1, further comprising the step of using logic within the switch fabric interface to prioritize messages and memory within the switch fabric interface to queue messages.
- 10. A switch fabric interface for connecting a processor to a switch fabric at a network node, comprising:
a physical interface for connecting the switch fabric interface to the switch fabric via a communications medium; a bus interface for connecting the switch fabric interface to the processor; and a logic unit for differentiating data units containing data messages from data units containing control messages.
- 11. The interface of claim 10, further comprising memory for storing message queues.
- 12. The interface of claim 10, wherein the logic unit is further operable to perform segmentation and reassembly tasks.
- 13. The interface of claim 10, wherein the logic unit is further operable to prioritize messages.
- 14. The interface of claim 10, wherein the logic unit is further operable to permit the processor to directly read or write another processor's memory.
- 15. A network node system for processing network data transmitted and received via a network, comprising:
a first processor programmed to receive and transmit network data on the network; a second processor programmed to communicate network data to and from the first processor; a switch fabric interface associated with each processor, the switch fabric interface having a bus interface at the processor side and a physical interface at the switch fabric side; and a switch fabric for directly connecting the first processor to the second processor.
- 16. The system of claim 15, wherein the switch fabric interface has a logic unit operable to differentiate data units containing data messages from data units containing control messages.
- 17. The system of claim 15, wherein the switch fabric interface has memory for storing message queues.
- 18. The system of claim 15, wherein the switch fabric interface has a logic unit operable to perform segmentation and reassembly tasks.
- 19. The system of claim 15, wherein the switch fabric interface has a logic unit operable to prioritize messages.
- 20. The system of claim 15, wherein the switch fabric interface has a logic unit operable to permit one processor to directly read or write the other processor's memory.
- 21. A network endpoint system for processing network data transmitted and received via a network, comprising:
a network processor programmed to receive and transmit network data on the network; at least one processing unit programmed to communicate network data to and from the network processor; a switch fabric interface associated with each processing unit, the switch fabric interface having a bus interface at the processing unit side and a physical interface at the switch fabric side; and a switch fabric for directly connecting the network processor to the processing unit.
- 22. The system of claim 21, wherein the system is contained within a single chassis.
- 23. The system of claim 21, wherein the switch fabric interface has a logic unit operable to differentiate data units containing data messages from data units containing control messages.
- 24. The system of claim 21, wherein the switch fabric interface has memory for storing message queues.
- 25. The system of claim 21, wherein the switch fabric interface has a logic unit operable to perform segmentation and reassembly tasks.
- 26. The system of claim 21, wherein the switch fabric interface has a logic unit operable to prioritize messages.
- 27. The system of claim 21, wherein the switch fabric interface has a logic unit operable to permit one processor to directly read or write the other processor's memory.
- 28. A method for processing network data at a network endpoint system, comprising the steps of:
using a network processor at the front end of the system to receive network data; using one or more processing units to receive data from the network processor and to execute network applications programming; and communicating the data from the network processor to the processing units with a switch fabric.
- 29. The method of claim 28, wherein the communicating step is performed by queuing data units containing the network data on the basis of a message classification scheme.
- 30. The method of claim 28, wherein the communicating step is performed by segmenting and reassembling data units at an interface between the processing units and the switch fabric.
- 31. The method of claim 28, wherein the system has more than one processing unit and further comprising the step of communicating messages from memory of one processing unit to memory of another processing unit via the switch fabric.
- 32. A network endpoint system, comprising:
at least one system processor performing endpoint functionality processing; a system interface connection configured to be coupled to a network; at least one network processor, the network processor coupled to the system interface connection to receive data from the network; and a switch fabric coupled between the system processor and the network processor so that the network processor may analyze data provided from the network, process the data at least in part, and then forward the data to the switch fabric so that other processing may be performed on the data within the system.
- 33. The network endpoint system of claim 32, wherein the system processor comprises a storage processor.
- 34. The network endpoint system of claim 32, wherein the system processor comprises an application processor.
- 35. The network endpoint system of claim 32, wherein the system comprises a plurality of system processors configured as an asymmetric multi-processor system.
- 36. The network endpoint system of claim 32, wherein the system comprises a plurality of processors communicating in a peer to peer environment.
- 37. The network endpoint system of claim 36, wherein the plurality of processors comprises the network processor and the system processor.
- 38. The network endpoint system of claim 37, wherein the plurality of processors comprises the network processor and multiple system processors.
- 39. The network endpoint system of claim 38, wherein the multiple system processors comprises a storage processor and an application processor.
- 40. The network endpoint system of claim 32, wherein the network processor filters data incoming to the network endpoint system from the network.
- 41. The network endpoint system of claim 32, the network processor enabling accelerated system performance.
- 42. The network endpoint system of claim 32, the network endpoint system being a content delivery system.
- 43. The network endpoint system of claim 42, the network endpoint system providing accelerated content delivery.
- 44. A method of operating a network endpoint system, the method comprising:
providing a network processor within the network endpoint system, the network processor being configured to be coupled to an interface which couples the network endpoint system to a network; processing data passing through the interface with the network processor; forwarding data from the network processor to a system processor through a switch fabric; and performing at least some endpoint functionality upon the data within the system processor.
- 45. The method of claim 44, wherein the network processor analyzes headers of data packets transmitted to the network endpoint system from the network.
- 46. The method of claim 45, the method further comprising configuring the network processor and the system processor in a peer to peer computing environment.
- 47. The method of claim 45, wherein the network endpoint system comprises a plurality of system processors, the method further comprising configuring the network processor and the plurality of system processors in a peer to peer computing environment.
- 48. The method of claim 47, the network processor and the plurality of system processors configured in an asymmetric multi-processor manner.
- 49. The method of claim 48, the method further comprising operating the network endpoint system in a staged pipeline processing manner.
- 50. The method of claim 49, the plurality of system processors comprising a storage processor and an application processor.
- 51. The method of claim 49, wherein the endpoint functionality is content delivery.
- 52. The method of claim 51, further comprising accelerating the content delivery of the network endpoint system.
- 53. The method of claim 44, the method further comprising configuring the network processor and the system processor in a peer to peer computing environment.
- 54. The method of claim 44, wherein the network endpoint system comprises a plurality of system processors, the method further comprising configuring the network processor and the plurality of system processors in a peer to peer computing environment.
- 55. The method of claim 44, the network endpoint system configured as an asymmetric multi-processor system.
- 56. The method of claim 44, the network processor performing filter functions upon the data passing through the interface.
- 57. The method of claim 49, wherein the endpoint functionality is content delivery, the method further comprising accelerating the content delivery of the network endpoint system.
- 58. A network endpoint system, comprising:
a first processor engine, the first processor engine configured to receive data from a network; a second processor engine, the second processor engine performing at least some endpoint functionality, the first processor engine performing tasks different from the endpoint functionality tasks performed by the second processor engine; and an interconnect coupling the first and second processor engines, wherein the network endpoint system is configured in at least one manner to provide accelerated performance.
- 59. The network endpoint system of claim 58, the first processor engine performing processing upon at least a portion of the data packets of the received data so as to off-load processing from the second processor engine.
- 60. The network endpoint system of claim 58, wherein the first and second processor engines are configured in a peer to peer environment.
- 61. The network endpoint system of claim 60, wherein the interconnect is a switch fabric.
- 62. The network endpoint system of claim 58, wherein the interconnect is a switch fabric.
- 63. The network endpoint system of claim 58, further comprising a third processor engine, the third processor engine performing tasks different from the tasks performed by the first and second processor engines.
- 64. The network endpoint system of claim 63, wherein at least two of the first, second or third processor engines each comprises a plurality of processor modules.
- 65. The network endpoint system of claim 64, wherein one or more processor modules of one processor engine may be reassigned to perform the tasks of another processor engine.
- 66. The network endpoint system of claim 63, further comprising a system management processor engine.
- 67. The network endpoint system of claim 58, further comprising a system management processor engine.
- 68. The network endpoint system of claim 67, wherein the first processor engine is a network interface processor engine and the second processor engine is a storage processor engine or an application processor engine.
- 69. The network endpoint system of claim 58, wherein the first processor engine is a network interface processor engine and the second processor engine is a storage processor engine or an application processor engine.
- 70. The network endpoint system of claim 69, wherein the second processor engine is an application processor engine, the network endpoint system further comprising a storage processor engine.
- 71. The network endpoint system of claim 70, wherein the network interface processor engine, the storage processor engine and the application processor engine are configured in a peer to peer environment.
- 72. The network endpoint system of claim 71, wherein the interconnect is a distributed interconnect.
- 73. The network endpoint system of claim 72, wherein the distributed interconnect is a switch fabric.
- 74. The network endpoint system of claim 72, wherein the network endpoint system comprises a network processor.
- 75. The network endpoint system of claim 74, wherein the storage processor engine and the application processor engine each comprise a plurality of processor modules.
- 76. The network endpoint system of claim 75, further comprising a system management processor engine.
- 77. The network endpoint system of claim 76, wherein the system is contained within a single chassis.
- 78. A method of providing a network endpoint termination through the use of a network endpoint system, comprising:
providing a plurality of separate processor engines, the processor engines being assigned separate tasks in an asymmetrical multi-processor configuration; providing an interface connection to at least one of the processor engines to couple the network endpoint system to a network; communicating between the plurality of separate processor engines through a switch fabric having fixed latencies, the plurality of separate processor engines and the switch fabric being contained within a single chassis; and generating an accelerated data flow through the network endpoint system.
- 79. The method of claim 78, wherein the separate processor engines communicate as peers in a peer to peer environment.
- 80. The method of claim 79, wherein the processor engine coupling the network endpoint system to a network comprises a network processor.
- 81. The method of claim 80, further comprising performing look ahead processing within the network processor to off-load processing tasks from the other processor engines.
- 82. The method of claim 78, wherein the network endpoint system is a content delivery system.
- 83. The method of claim 82, wherein the processor engine coupling the network endpoint system to a network comprises a network processor.
- 84. The method of claim 83, further comprising performing look ahead processing within the network processor to off-load processing tasks from the other processor engines.
- 85. The method of claim 84, wherein the separate processor engines communicate as peers in a peer to peer environment.
- 86. The method of claim 84, wherein the network processor is contained within a network interface engine, the other processor engines comprising a storage processor engine and an application processor engine.
- 87. The method of claim 86, wherein the network interface engine, the storage processor engine and the application processor engine communicate as peers in a peer to peer environment.
- 88. The method of claim 87, further comprising performing at least some system management functions in a system management processor engine.
- 89. The method of claim 88, further comprising tracking system performance within the system management processor engine.
- 90. The method of claim 88, further comprising implementing system policies with the system management processor engine.
- 91. The method of claim 78, the network endpoint system being a content delivery system.
- 92. A method of providing a content delivery system through the use of a network connectable computing system, comprising:
providing a plurality of separate processor engines, the processor engines being assigned separate tasks in an asymmetrical multi-processor configuration; providing a storage processor engine, the storage processor engine being one of the plurality of separate processor engines; providing a switch fabric for communication between the plurality of separate processor engines and the storage processor engine; providing a network interface connection to at least one of the processor engines to couple the content delivery system to a network; providing a storage interface connection to the storage processor engine to couple the storage processor engine to a content storage system; and accelerating content delivery through the content delivery system.
- 93. The method of claim 92, wherein the separate processor engines and the storage processor engine communicate as peers in a peer to peer environment.
- 94. The method of claim 93, wherein the processor engine coupling the content delivery system to the network comprises a network processor.
- 95. The method of claim 94, further comprising performing look ahead processing within the network processor to off-load processing tasks from the other processor engines.
- 96. The method of claim 92, wherein the separate processor engine coupling the content delivery system to the network is a network interface processor engine comprising a network processor.
- 97. The method of claim 96, further comprising performing look ahead processing within the network processor to off-load processing tasks from the other processor engines.
- 98. The method of claim 97, wherein the separate processor engines and the storage processor engine communicate as peers in a peer to peer environment.
- 99. The method of claim 96, wherein one of the separate processor engines is an application processor engine.
- 100. The method of claim 99, wherein the network interface engine, the storage processor engine and the application processor engine communicate as peers in a peer to peer environment.
- 101. The method of claim 100, further comprising performing at least some system management functions in a system management processor engine.
- 102. The method of claim 101, further comprising tracking system performance within the system management processor engine.
- 103. The method of claim 101, further comprising implementing system policies with the system management processor engine.
- 104. A network connectable computing system, comprising:
a first processor engine; a second processor engine, the second processor engine being assigned types of tasks different from the types of tasks assigned to the first processor engine; a third processor engine, the third processor engine being assigned types of tasks that are different from the types of tasks assigned to the first and second processor engines; and a switch fabric coupled to the first, second and third processor engines, the tasks of the first, second and third processor engines being assigned such that the system operates in a staged pipeline manner through the switch fabric.
- 105. The system of claim 104, wherein the system is a network endpoint system.
- 106. The system of claim 104, wherein the first processor engine is a network interface engine comprising a network processor.
- 107. The system of claim 106, wherein the second processor engine is an application processor engine and the third processor engine is a storage processor engine.
- 108. The system of claim 107, wherein at least one of the first, second or third processor engines comprises multiple processor modules operating in parallel.
- 109. The system of claim 108, wherein the application processor engine comprises multiple processor modules operating in parallel and the storage processor engine comprises multiple processor modules operating in parallel.
- 110. The system of claim 109, wherein the network interface processor engine, the application processor engine, and the storage processor engine communicate in a peer to peer fashion.
- 111. A network connectable content delivery system, comprising:
a first processor engine; a second processor engine, the second processor engine being assigned types of tasks different from the types of tasks assigned to the first processor engine; a storage processor engine, the storage processor engine being assigned types of tasks that are different from the types of tasks assigned to the first and second processor engines, the storage processor engine being configured to be coupled to a content storage system; and a switch fabric coupled to the first, second and storage processor engines, the tasks of the first, second and storage processor engines being assigned such that the system operates in a staged pipeline manner through the switch fabric.
- 112. The system of claim 111, wherein the system is a network endpoint system.
- 113. The system of claim 111, wherein the first processor engine is a network interface engine comprising a network processor.
- 114. The system of claim 113, wherein the second processor engine is an application processor engine.
- 115. The system of claim 114, wherein at least one of the first, second or storage processor engines comprises multiple processor modules operating in parallel.
- 116. The system of claim 115, wherein the application processor engine comprises multiple processor modules operating in parallel and the storage processor engine comprises multiple processor modules operating in parallel.
- 117. The system of claim 116, wherein the network interface processor engine, the application processor engine, and the storage processor engine communicate in a peer to peer fashion.
- 118. A network connectable content delivery system, comprising:
a first processor engine; a second processor engine, the second processor engine being assigned types of tasks different from the types of tasks assigned to the first processor engine; a storage processor engine, the storage processor engine being assigned types of tasks that are different from the types of tasks assigned to the first and second processor engines, the storage processor engine being configured to be coupled to a content storage system; and a switch fabric coupled to the first, second and storage processor engines, the tasks of the first, second and storage processor engines being assigned such that the system operates in a staged pipeline manner through the switch fabric, wherein the first processor engine, the second processor engine, the storage processor engine and the switch fabric are all contained within a single chassis, and wherein at least one of the first or second processor engines performs system management functions so as to off-load management functions from the other processor engines.
- 119. The system of claim 118, wherein the first processor engine is a system management processor engine that performs at least some of the off-loaded management functions.
- 120. The system of claim 118, wherein the first processor engine is a network interface processor engine that performs at least some of the off-loaded management functions.
- 121. The system of claim 120, wherein the network interface processor engine comprises a network processor.
- 122. The system of claim 118, wherein at least some system management functions are off-loaded from the storage processor engine.
- 123. The system of claim 122, wherein the second processor engine is an application processor engine, wherein at least some system management functions are off-loaded from both the storage processor engine and the application processor engine.
- 124. The system of claim 123, wherein the system management functions comprise prioritizing data flow through the system.
- 125. The system of claim 123, wherein the system management functions comprise quality of service functions.
- 126. The system of claim 123, wherein the system management functions comprise service level agreement functions.
- 127. The system of claim 123, wherein the system management functions comprise filtering content requests.
- 128. The system of claim 127, wherein the first processor engine is a system management processor engine that performs the filtering functions.
- 129. The system of claim 127, wherein the first processor engine is a network interface processor engine that performs the filtering functions, the network interface processor engine comprising a network processor.
- 130. The system of claim 118, wherein the system management functions comprise prioritizing data flow through the system.
- 131. The system of claim 118, wherein the system management functions comprise quality of service functions.
- 132. The system of claim 118, wherein the system management functions comprise service level agreement functions.
- 133. The system of claim 118, wherein the system management functions comprise filtering content requests.
- 134. The system of claim 133, wherein the first processor engine is a system management processor engine that performs the filtering functions.
- 135. The system of claim 133, wherein the first processor engine is a network interface processor engine that performs the filtering functions, the network interface processor engine comprising a network processor.
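
The message-differentiation and prioritized-queuing logic recited in claims 1, 9 through 13, 16, 17 and 19 can be pictured with a short sketch. The following minimal C illustration assumes a data-unit header with an explicit type field and four priority queues held in the interface's own memory; the names, field layout, and queue sizes are illustrative assumptions, not taken from the specification.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical data-unit header: a single type field marks each unit
 * as carrying a data message or a control message. */
enum msg_type { MSG_DATA = 0, MSG_CONTROL = 1 };

struct data_unit {
    uint8_t  type;      /* MSG_DATA or MSG_CONTROL */
    uint8_t  priority;  /* 0 = highest; consulted by the prioritization logic */
    uint16_t length;    /* payload length in bytes */
    uint8_t  payload[256];
};

#define NUM_PRIORITIES 4
#define QUEUE_DEPTH    64

/* Per-priority FIFO queues kept in memory within the interface. */
struct msg_queue {
    struct data_unit *slots[QUEUE_DEPTH];
    size_t head, tail, count;
};

static struct msg_queue queues[NUM_PRIORITIES];

static bool enqueue(struct msg_queue *q, struct data_unit *du)
{
    if (q->count == QUEUE_DEPTH)
        return false;                      /* full: caller applies back-pressure */
    q->slots[q->tail] = du;
    q->tail = (q->tail + 1) % QUEUE_DEPTH;
    q->count++;
    return true;
}

/* Differentiate an incoming data unit and queue it: control messages
 * are promoted to the highest-priority queue so interprocess control
 * traffic is not delayed behind bulk data. */
bool classify_and_queue(struct data_unit *du)
{
    unsigned prio = (du->type == MSG_CONTROL) ? 0 : du->priority;
    if (prio >= NUM_PRIORITIES)
        prio = NUM_PRIORITIES - 1;
    return enqueue(&queues[prio], du);
}

int main(void)
{
    static struct data_unit ctrl = { .type = MSG_CONTROL, .priority = 3, .length = 8 };
    static struct data_unit bulk = { .type = MSG_DATA, .priority = 2, .length = 128 };
    classify_and_queue(&ctrl);             /* lands in queue 0 despite its field value */
    classify_and_queue(&bulk);             /* lands in queue 2 */
    printf("queue0=%zu queue2=%zu\n", queues[0].count, queues[1 + 1].count);
    return 0;
}
```

Promoting control messages to the highest-priority queue is one plausible way an interface could keep interprocess control traffic from queuing behind data traffic, consistent with the separate treatment of control and data messages in claim 1.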
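
Claims 7, 12, 18, 25 and 30 recite segmentation and reassembly between a data-unit size suited to the switch fabric and a size suited to a processor. Below is a hedged sketch of that conversion, assuming a fixed-size fabric cell carrying a sequence number and an end-of-message flag; the cell format and the 60-byte payload size are assumptions for illustration only.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define CELL_PAYLOAD 60   /* assumed fabric cell payload size */

/* Hypothetical fabric cell: sequence number plus end-of-message flag
 * let the receiving interface reassemble the original message. */
struct cell {
    uint16_t seq;
    uint8_t  last;                 /* 1 on the final cell of a message */
    uint8_t  len;                  /* valid payload bytes in this cell */
    uint8_t  payload[CELL_PAYLOAD];
};

/* Segment a processor-sized message into fabric-sized cells; returns
 * the number of cells written. */
size_t sar_segment(const uint8_t *msg, size_t msg_len,
                   struct cell *cells, size_t max_cells)
{
    size_t n = 0;
    for (size_t off = 0; off < msg_len && n < max_cells; n++) {
        size_t chunk = msg_len - off;
        if (chunk > CELL_PAYLOAD) chunk = CELL_PAYLOAD;
        cells[n].seq = (uint16_t)n;
        cells[n].len = (uint8_t)chunk;
        cells[n].last = (off + chunk == msg_len);
        memcpy(cells[n].payload, msg + off, chunk);
        off += chunk;
    }
    return n;
}

/* Reassemble cells back into a contiguous processor-sized buffer;
 * returns the reassembled length. */
size_t sar_reassemble(const struct cell *cells, size_t n_cells,
                      uint8_t *out, size_t out_cap)
{
    size_t len = 0;
    for (size_t i = 0; i < n_cells; i++) {
        if (len + cells[i].len > out_cap) break;
        memcpy(out + len, cells[i].payload, cells[i].len);
        len += cells[i].len;
        if (cells[i].last) break;
    }
    return len;
}

int main(void)
{
    uint8_t msg[150], out[150];
    struct cell cells[8];
    memset(msg, 0xAB, sizeof msg);
    size_t n = sar_segment(msg, sizeof msg, cells, 8);      /* 3 cells */
    size_t len = sar_reassemble(cells, n, out, sizeof out); /* 150 bytes */
    printf("segmented into %zu cells, reassembled %zu bytes\n", n, len);
    return 0;
}
```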
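
Claims 14, 20, 27 and 31 recite logic permitting one processor to directly read or write another processor's memory across the switch fabric. A minimal sketch of how such an access could be expressed as control messages follows; the message format, the exported region, and the serve_mem_msg() handler are entirely assumed for illustration.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Assumed control-message format for a remote memory access: the
 * requesting processor names an offset and length in the target
 * processor's memory, and the target's interface logic services it. */
enum mem_op { MEM_READ, MEM_WRITE };

struct mem_msg {
    uint8_t  op;          /* MEM_READ or MEM_WRITE */
    uint32_t addr;        /* offset into the target's exported region */
    uint32_t len;         /* bytes to transfer */
    uint8_t  data[64];    /* payload for writes, reply buffer for reads */
};

/* Region of local memory this processor exports over the fabric. */
static uint8_t exported[1024];

/* Handler run by the target's switch fabric interface: applies the
 * access directly against local memory on the requester's behalf. */
static int serve_mem_msg(struct mem_msg *m)
{
    if (m->len > sizeof m->data || m->addr + m->len > sizeof exported)
        return -1;                          /* reject out-of-range access */
    if (m->op == MEM_WRITE)
        memcpy(exported + m->addr, m->data, m->len);
    else
        memcpy(m->data, exported + m->addr, m->len);
    return 0;
}

int main(void)
{
    struct mem_msg wr = { MEM_WRITE, 16, 5, "hello" };
    struct mem_msg rd = { MEM_READ, 16, 5, {0} };
    serve_mem_msg(&wr);                     /* peer writes into our memory */
    serve_mem_msg(&rd);                     /* peer reads the same bytes back */
    printf("read back: %.5s\n", (char *)rd.data);
    return 0;
}
```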
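
Claims 32, 44, 78, 104, 111 and 118 describe processor engines with disjoint task assignments operating as a staged pipeline over the switch fabric, with the front-end network processor filtering traffic to off-load the later stages (claims 40, 59, 81). The sketch below models that data flow; the engine names, request structure, and fabric_send() stand-in are assumptions, with an in-process call standing in for an actual fixed-latency fabric transfer.

```c
#include <stdio.h>
#include <stdbool.h>

/* Illustrative engine identifiers for an asymmetric configuration:
 * each engine is assigned a different type of task. */
enum engine { NET_ENGINE, APP_ENGINE, STORAGE_ENGINE };

struct request {
    char url[64];
    bool malformed;
};

/* Stand-in for a switch-fabric transfer between engines; a real
 * system would move the request across the fabric. */
static void fabric_send(enum engine dst, struct request *req);

/* Stage 1: the network interface engine performs look-ahead filtering,
 * dropping bad requests so the later stages are off-loaded. */
static void net_stage(struct request *req)
{
    if (req->malformed) {
        printf("net: dropped malformed request\n");
        return;
    }
    fabric_send(APP_ENGINE, req);
}

/* Stage 2: the application engine resolves the request to content. */
static void app_stage(struct request *req)
{
    printf("app: resolving %s\n", req->url);
    fabric_send(STORAGE_ENGINE, req);
}

/* Stage 3: the storage engine fetches the content from storage. */
static void storage_stage(struct request *req)
{
    printf("storage: fetching content for %s\n", req->url);
}

static void fabric_send(enum engine dst, struct request *req)
{
    switch (dst) {       /* in-process dispatch stands in for the fabric */
    case APP_ENGINE:     app_stage(req);     break;
    case STORAGE_ENGINE: storage_stage(req); break;
    default: break;
    }
}

int main(void)
{
    struct request good = { "/video/clip.mpg", false };
    struct request bad  = { "", true };
    net_stage(&good);    /* flows through all three pipeline stages */
    net_stage(&bad);     /* filtered at the front end */
    return 0;
}
```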
Parent Case Info
[0001] This application claims priority from Provisional Application Serial No. 60/246,373, filed on Nov. 7, 2000 and entitled “INTERPROCESS COMMUNICATIONS WITHIN A NETWORK NODE USING SWITCH FABRIC,” and from Provisional Application Serial No. 60/187,211, filed on Mar. 3, 2000 and entitled “SYSTEM AND APPARATUS FOR INCREASING FILE SERVER BANDWIDTH,” the disclosure of each being incorporated herein by reference.
Provisional Applications (2)

| Number | Date | Country |
| --- | --- | --- |
| 60/246,373 | Nov. 7, 2000 | US |
| 60/187,211 | Mar. 3, 2000 | US |