Claims
- 1. A network processing endpoint system for responding to network requests incoming via a network, comprising:
a network processor programmed to receive the network requests and to provide load balancing of network processing for the requests; a set of processing units programmed to receive the requests from the network processor, to respond to the requests, and to deliver response data to the network processor; and an interconnection medium for directly connecting the network processor to the processing units, such that the latency of the connections is determinable.
- 2. The system of claim 1, wherein the interconnection medium is a bus.
- 3. The system of claim 1, wherein the interconnection medium is a switch fabric.
- 4. The system of claim 1, wherein the interconnection medium is shared memory.
- 5. The system of claim 1, wherein the system is contained within a single chassis.
- 6. The system of claim 1, wherein the network is the Internet and the network processor is further programmed to perform at least part of the protocol processing.
- 7. The system of claim 1, wherein the network processor is further programmed to detect failures of the processing units.
- 8. The system of claim 7, wherein the network processor is further programmed to respond to the failures.
- 9. The system of claim 1, wherein the processing units are configured as redundant pairs of processing units.
- 10. The system of claim 1, wherein the load balancing is on the basis of sessions represented by the requests.
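
By way of a non-limiting illustration (not part of the claims), the following Python sketch models the system of claims 1-10 under assumed interfaces: a network processor balances requests across directly connected processing units on a session basis (claim 10), detects a failed unit (claim 7), and responds through its redundant partner (claims 8-9). All class and method names here are hypothetical.

```python
# Illustrative sketch only; all names are hypothetical.
import zlib
from dataclasses import dataclass


@dataclass
class ProcessingUnit:
    unit_id: int
    partner_id: int          # redundant pair partner (claim 9)
    healthy: bool = True

    def respond(self, request: str) -> str:
        return f"unit {self.unit_id} handled {request!r}"


class NetworkProcessor:
    def __init__(self, units):
        self.units = {u.unit_id: u for u in units}

    def _select(self, session_id: str) -> ProcessingUnit:
        # Session-based balancing: the same session always hashes to the
        # same unit, keeping per-session state local to that unit.
        ids = sorted(self.units)
        unit = self.units[ids[zlib.crc32(session_id.encode()) % len(ids)]]
        if not unit.healthy:                     # failure detected (claim 7)
            unit = self.units[unit.partner_id]   # respond via partner (claim 8)
        return unit

    def handle(self, session_id: str, request: str) -> str:
        return self._select(session_id).respond(request)


np_engine = NetworkProcessor([ProcessingUnit(0, 1), ProcessingUnit(1, 0)])
print(np_engine.handle("session-42", "GET /index.html"))
```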
- 11. A method for processing network data at a network endpoint that responds to network requests via a network, comprising the steps of:
using a network processor to receive the network requests and to provide load balancing of network processing for the requests; using a set of processing units to receive the requests from the network processor, to respond to the requests, and to deliver response data to the network processor; and directly connecting the network processor to the processing units via an interconnection medium having determinable latency.
- 12. The method of claim 11, wherein the interconnection medium is a bus.
- 13. The method of claim 11, wherein the interconnection medium is a switch fabric.
- 14. The method of claim 11, wherein the interconnection medium is shared memory.
- 15. The method of claim 11, further comprising the step of housing the network processor, the processing units, and the interconnection medium in a single chassis.
- 16. The method of claim 11, wherein the network is the Internet and the network processor performs at least part of the protocol processing.
- 17. The method of claim 11, further comprising the step of using the network processor to detect failures of the processing units.
- 18. The method of claim 17, further comprising the step of using the network processor to respond to the failures.
- 19. The method of claim 11, wherein the load balancing is on the basis of sessions represented by the requests.
- 20. A network connectable computing system, the system being configured to be connected on at least one end to a network, the system comprising:
a network interface engine comprising at least one network processor, the network interface engine coupling data from the network to the computing system; a plurality of system processors for performing system functionality; and a distributed interconnection between the plurality of system processors and the network interface engine, wherein the system enables load balancing to improve system performance.
- 21. The network connectable computing system of claim 20, further comprising a plurality of system processor engines, each system processor engine comprising one or more of the system processors, at least two of the system processor engines performing different tasks, the system being configured such that processor resources of a first system engine may be reassigned to a second system engine which performs tasks different from the first system engine in order to perform the load balancing.
- 22. The network connectable computing system of claim 21, wherein the plurality of system processor engines includes at least one of a transport processor engine, an application processor engine or a storage processor engine.
- 23. The network connectable computing system of claim 22, wherein the first system engine is an application processor engine and the second system engine is a transport processor engine.
- 24. The network connectable computing system of claim 22, wherein the first system engine is a storage processor engine and the second system engine is a transport processor engine.
- 25. The network connectable computing system of claim 20, further comprising a first system processor engine, the first system processor engine comprising two or more of the system processors, wherein the load balancing is performed by assigning workloads between the two or more system processors of the first system processor engine.
- 26. The network connectable computing system of claim 25, the assignment of workloads performed at least in part by the network interface engine.
- 27. The network connectable computing system of claim 26, further comprising a second system processor engine, the second system processor engine comprising two or more of the system processors, wherein the load balancing is also performed by assigning workloads between the two or more system processors of the second system processor engine.
- 28. The network connectable computing system of claim 27, the assignment of workloads in the first and second system processor engines performed at least in part by the network interface engine.
- 29. The network connectable computing system of claim 28, wherein the plurality of system processor engines includes at least one of a transport processor engine, an application processor engine or a storage processor engine.
- 30. The network connectable computing system of claim 28, wherein at least the first system processor engine, the second system processor engine and the network interface engine communicate in a peer to peer environment across a distributed interconnect.
- 31. The network connectable computing system of claim 30, wherein the interconnection is a switch fabric.
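
As a non-limiting illustration of the inter-engine resource reassignment of claims 20-31, the sketch below moves a processor resource from one task-specific engine to another. The load metric and threshold policy are assumptions for illustration, not recited in the claims.

```python
# Hypothetical sketch; the rebalancing policy is assumed, not claimed.
class ProcessorEngine:
    def __init__(self, name: str, processors: int):
        self.name = name
        self.processors = processors
        self.pending = 0                 # queued work items

    def load(self) -> float:
        return self.pending / max(self.processors, 1)


def rebalance(donor: ProcessorEngine, recipient: ProcessorEngine,
              threshold: float = 2.0) -> None:
    # Move one processor when the recipient is far more loaded than the
    # donor, leaving the donor at least one processor.
    if donor.processors > 1 and recipient.load() > threshold * donor.load():
        donor.processors -= 1
        recipient.processors += 1


app = ProcessorEngine("application", processors=4)
transport = ProcessorEngine("transport", processors=2)
app.pending, transport.pending = 4, 20
rebalance(donor=app, recipient=transport)     # the claim 23 direction
print(app.processors, transport.processors)   # -> 3 3
```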
- 32. A method of configuring a network endpoint computing system for load balancing, comprising:
providing a network interface engine comprising at least one network processor, the network interface engine coupling data from the network to the computing system; providing a plurality of system processing engines for performing different endpoint related tasks within the system; providing a distributed interconnection between the plurality of system processing engines and the network interface engine; and load balancing the system to improve system performance.
- 33. The method of claim 32, wherein each system processor engine comprises one or more system processors, at least two of the system processor engines perform different tasks, and the system is configured such that processor resources of a first system engine may be reassigned to a second system engine which performs tasks different from the first system engine in order to perform the load balancing.
- 34. The method of claim 33, wherein the plurality of system processor engines includes at least one of a transport processor engine, an application processor engine or a storage processor engine.
- 35. The method of claim 34, wherein the first system engine is an application processor engine and the second system engine is a transport processor engine.
- 36. The method of claim 34, wherein the first system engine is a storage processor engine and the second system engine is a transport processor engine.
- 37. The method of claim 32, the plurality of system processing engines comprising a first system processor engine, the first system processor engine comprising two or more system processors, wherein the load balancing is performed by assigning workloads between the two or more system processors of the first system processor engine.
- 38. The method of claim 37, the assignment of workloads performed at least in part by the network interface engine.
- 39. The method of claim 37, the plurality of system processing engines further comprising a second system processor engine, the second system processor engine comprising two or more system processors, wherein the load balancing is also performed by assigning workloads between the two or more system processors of the second system processor engine.
- 40. The method of claim 39, the assignment of workloads in the first and second system processor engines performed at least in part by the network interface engine.
- 41. The method of claim 39, wherein the plurality of system processor engines includes at least one of a transport processor engine, an application processor engine or a storage processor engine.
- 42. The method of claim 39, wherein at least the first system processor engine, the second system processor engine and the network interface engine communicate in a peer to peer environment across a distributed interconnect.
- 43. The method of claim 42, wherein the distributed interconnection has a determinable latency.
- 44. The method of claim 43, the distributed interconnection being a switch fabric.
- 45. A method of operating a network endpoint computing system for load balancing, comprising:
coupling data from a network to the computing system through a network interface engine; providing a plurality of system processing engines for performing different endpoint related tasks within the system; configuring the network interface engine and the plurality of system processing engines as peers in a peer to peer environment; communicating between the peers through a distributed interconnection having determinable latencies; and load balancing the system.
- 46. The method of claim 45, wherein the load balancing comprises assigning hardware processing resources amongst different system processing engines.
- 47. The method of claim 45, wherein two or more of the system processing engines have different dedicated tasks.
- 48. The method of claim 45, wherein the load balancing comprises allocating workloads amongst separate processor resources within the same system processing engine.
- 49. The method of claim 48, wherein the load balancing is based upon a round robin load balancing.
- 50. The method of claim 48, wherein the load balancing is based upon a weighted round robin load balancing.
- 51. The method of claim 48, wherein the load balancing is based upon the type of requests contained within the data incoming to the computing system.
- 52. The method of claim 48, wherein the load balancing is based upon feedback regarding system performance provided from one or more system resources.
- 53. The method of claim 48, wherein a load balancing decision is made on a data session by data session basis.
- 54. The method of claim 48, wherein the load balancing is implemented at least in part through the network interface engine.
- 55. The method of claim 48, wherein the system processing engines are arranged in a staged pipelined configuration, load balancing decisions being performed by a plurality of the stages of the pipelined configuration.
- 56. The method of claim 45, wherein the load balancing considers at least in part system wellness data.
- 57. The method of claim 45, wherein the load balancing considers at least in part system performance feedback.
- 58. The method of claim 45, wherein the network interface engine comprises a network processor, the method further comprising analyzing incoming data packet headers with the network processor.
- 59. The method of claim 58, wherein two or more of the system processing engines have different dedicated tasks.
- 60. The method of claim 58, wherein the load balancing comprises allocating workloads amongst separate processor resources within the same system processing engine.
- 61. The method of claim 60, wherein the load balancing is based upon a round robin load balancing.
- 62. The method of claim 60, wherein the load balancing is based upon a weighted round robin load balancing.
- 63. The method of claim 60, wherein the load balancing is based upon the type of requests contained within the data incoming to the computing system.
- 64. The method of claim 60, wherein the load balancing is based upon feedback regarding system performance provided from one or more system resources.
- 65. The method of claim 60, wherein a load balancing decision is made on a data session by data session basis.
- 66. The method of claim 60, wherein the load balancing is implemented at least in part through the network interface engine.
- 67. The method of claim 60, wherein the system processing engines are arranged in a staged pipelined configuration, load balancing decisions being performed by a plurality of the stages of the pipelined configuration.
- 68. The method of claim 58, wherein the load balancing considers at least in part system wellness data.
- 69. The method of claim 58, wherein the load balancing considers at least in part system performance feedback.
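
The intra-engine balancing policies recited in claims 49-52 (and mirrored in claims 61-64) can be sketched as follows; the interfaces are assumed for illustration only.

```python
# Assumed interfaces; illustration only, not part of the claims.
import itertools


def round_robin(processors):
    """Plain round robin (claims 49, 61): cycle through the processors."""
    return itertools.cycle(processors)


def weighted_round_robin(weights):
    """Weighted round robin (claims 50, 62): each processor appears in
    the schedule in proportion to its integer weight."""
    return itertools.cycle([p for p, w in weights.items() for _ in range(w)])


def least_loaded(feedback):
    """Feedback-based selection (claims 52, 64): pick the processor that
    performance feedback reports as least loaded."""
    return min(feedback, key=feedback.get)


rr = round_robin(["p0", "p1", "p2"])
print([next(rr) for _ in range(4)])          # ['p0', 'p1', 'p2', 'p0']

wrr = weighted_round_robin({"p0": 2, "p1": 1})
print([next(wrr) for _ in range(6)])         # ['p0', 'p0', 'p1', 'p0', 'p0', 'p1']

print(least_loaded({"p0": 0.9, "p1": 0.3}))  # p1
```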
- 70. A network endpoint system for performing endpoint functionality, the system comprising:
a network interface engine comprising at least one network processor, the network interface engine coupling data from the network to the endpoint system; a plurality of system processors for performing endpoint system functionality; and a distributed interconnection between the plurality of system processors and the network interface engine, wherein the system enables workload load balancing.
- 71. The system of claim 70, further comprising a plurality of system processor engines, each system processor engine comprising one or more of the system processors, at least two of the system processor engines performing different tasks, the system being configured such that processor resources of a first system engine may be reassigned to a second system engine which performs tasks different from the first system engine in order to perform the workload load balancing.
- 72. The system of claim 71, wherein the plurality of system processor engines includes at least one of a transport processor engine, an application processor engine or a storage processor engine.
- 73. The system of claim 72, wherein the first system engine is an application processor engine and the second system engine is a transport processor engine.
- 74. The system of claim 72, wherein the first system engine is a storage processor engine and the second system engine is a transport processor engine.
- 75. The system of claim 70, further comprising a first system processor engine, the first system processor engine comprising two or more of the system processors, wherein the load balancing is performed by assigning workloads between the two or more system processors of the first system processor engine.
- 76. The system of claim 75, the assignment of workloads performed at least in part by the network interface engine.
- 77. The system of claim 76, further comprising a second system processor engine, the second system processor engine comprising two or more of the system processors, wherein the load balancing is also performed by assigning workloads between the two or more system processors of the second system processor engine.
- 78. The system of claim 75, wherein the plurality of system processors comprises at least one storage processor and at least one application processor.
- 79. The system of claim 78, wherein the network processor, the storage processor and the application processor operate in a peer to peer environment across the distributed interconnection.
- 80. The system of claim 79, wherein the distributed interconnection is a switch fabric.
- 81. The system of claim 70, wherein the network endpoint system is a content delivery system.
- 82. The system of claim 81 wherein:
the plurality of system processors comprises at least one storage processor and at least one application processor, the storage processor being configured to interface with a storage system; and the network processor, the storage processor and the application processor operate in a peer to peer environment across the distributed interconnection.
- 83. The system of claim 82 wherein the distributed interconnection is a switch fabric.
- 84. The system of claim 83, wherein the system is configured in a single chassis.
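
As a non-limiting sketch of the peer-to-peer arrangement of claims 79-84, the engines below exchange messages as equals across a shared interconnect rather than through a host CPU; the queue-based "fabric" is purely an assumed stand-in for a switch fabric.

```python
# Hypothetical stand-in for a switch fabric; names are illustrative.
import queue


class Fabric:
    def __init__(self):
        self.ports = {}

    def attach(self, name: str) -> queue.Queue:
        self.ports[name] = queue.Queue()
        return self.ports[name]

    def send(self, dst: str, msg) -> None:
        self.ports[dst].put(msg)     # any peer may address any other peer


fabric = Fabric()
for engine in ("network", "application", "storage"):
    fabric.attach(engine)

fabric.send("storage", ("read", "/content/video.mp4"))  # application -> storage
fabric.send("network", ("data", b"chunk"))              # storage -> network
print(fabric.ports["storage"].get())
```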
- 85. A network connectable computing system, the system being configured to be connected on at least one end to a network, the system comprising:
a network interface engine comprising at least one network processor, the network interface engine coupling data from the network to the computing system; a plurality of system processor engines providing system functionality processing; and a distributed interconnection between the plurality of system processor engines and the network interface engine, the distributed interconnection having a known latency, wherein the system enables load balancing to improve system performance.
- 86. The system of claim 85, wherein the network processor analyzes headers of the data packets provided to the computing system.
- 87. The system of claim 85, wherein the system is an intermediate network node system.
- 88. The system of claim 87, wherein the system is a network switch.
- 89. The system of claim 85, wherein the system is a network endpoint system.
- 90. The system of claim 85, wherein the system is a network endpoint system having at least one server or at least one server card.
- 91. The system of claim 85, wherein the system is incorporated into a network interface card.
- 92. The system of claim 90, wherein the system is a content delivery system.
- 93. The system of claim 92, wherein the distributed interconnection is a switch fabric.
- 94. The system of claim 85, wherein the system is an asymmetric multi-processing system.
- 95. The system of claim 85, wherein the plurality of system processor engines are configured to perform separate tasks.
- 96. The system of claim 95, wherein the distributed interconnection is a switch fabric and the task specific processor engines include storage or application processor engines.
- 97. The system of claim 96, wherein the task specific processor engines include storage and application processors.
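
Claims 58 and 86 recite header analysis by the network processor; a minimal sketch, assuming ordinary IPv4/TCP header offsets, derives a flow key from a packet header so that a balancing decision can be made session by session (claims 53, 65). Everything beyond the standard field layout is an assumption.

```python
# Illustration only; field offsets are standard IPv4/TCP.
import struct
import zlib


def flow_key(packet: bytes) -> tuple:
    """Extract (src ip, dst ip, proto, src port, dst port) from an IPv4
    header followed by a TCP header."""
    ihl = (packet[0] & 0x0F) * 4                      # IPv4 header length
    proto = packet[9]
    src_ip, dst_ip = struct.unpack_from("!4s4s", packet, 12)
    src_port, dst_port = struct.unpack_from("!HH", packet, ihl)
    return (src_ip, dst_ip, proto, src_port, dst_port)


def choose_engine(packet: bytes, engines: list) -> str:
    # Same flow key -> same engine: a session-by-session decision.
    digest = zlib.crc32(repr(flow_key(packet)).encode())
    return engines[digest % len(engines)]


# Minimal fake IPv4/TCP packet: IHL=5, proto=6 (TCP), 10.0.0.1 -> 10.0.0.2.
pkt = (bytes([0x45]) + bytes(8) + bytes([6]) + bytes(2)
       + bytes([10, 0, 0, 1]) + bytes([10, 0, 0, 2])
       + struct.pack("!HH", 12345, 80))
print(choose_engine(pkt, ["transport-0", "transport-1"]))
```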
Parent Case Info
[0001] This application claims priority from Provisional Application Serial No. 60/246,372, filed on Nov. 7, 2000, which is entitled "SINGLE CHASSIS NETWORK ENDPOINT SYSTEM WITH NETWORK PROCESSOR FOR LOAD BALANCING," the disclosure of which is incorporated herein by reference.
Provisional Applications (1)
| Number | Date | Country |
| --- | --- | --- |
| 60246372 | Nov 2000 | US |