Claims
- 1. A processing engine to accomplish a multiplicity of tasks, the engine comprising:
a multiplicity of processing tribes, each tribe comprising a multiplicity of context register sets and a multiplicity of processing resources for concurrent processing of a multiplicity of threads to accomplish the tasks;
a memory structure having a multiplicity of memory blocks, each block storing data for processing threads; and
an interconnect structure and control system enabling tribe-to-tribe migration of contexts to move threads from tribe to tribe;
characterized in that individual ones of the tribes have preferential access to individual ones of the multiplicity of memory blocks.
- 2. The processing engine of claim 1 wherein preferential access from an individual one of the multiplicity of tribes to an individual one of the multiplicity of memory blocks is provided by an individual one of a multiplicity of controlled memory ports.
- 3. The processing engine of claim 2 characterized in that the multiplicity of tribes, the multiplicity of memory blocks, and the multiplicity of memory ports are equal in number, wherein each tribe has a dedicated port to a memory block.
- 4. The processing engine of claim 1 wherein processing tasks are received sequentially, an individual task received creating a thread, including a program counter and context, in a first one of the multiplicity of tribes.
- 5. The processing engine of claim 4 wherein the thread operating in the first one of the tribes is migrated via the interconnect structure to a second one of the tribes before completion of the task, by moving the program counter and at least a portion of the context to registers in the second one of the tribes.
- 6. The processing engine of claim 4 wherein original assignment of tasks received to tribes is at least partially dependent on distribution of processing data among the memory blocks.
- 7. The processing engine of claim 6 wherein original assignment of tasks to tribes is at least partly software controlled.
- 8. The processing engine of claim 6 wherein original assignment of tasks to tribes is at least partly hardware controlled.
- 9. The processing engine of claim 5 wherein migration of a thread from one tribe to another tribe is at least partly dependent on distribution of processing data among the memory blocks.
- 10. The processing engine of claim 9 wherein direction and timing of migration from tribe to tribe is at least partly software controlled.
- 11. The processing engine of claim 9 wherein direction and timing of migration from tribe to tribe is at least partly hardware controlled.
- 12. The processing engine of claim 1 implemented at a first node in a data packet network, wherein the tasks are generated by receipt of data packets and processing the packets for transmission to a second node in the network.
- 13. The processing engine of claim 12 wherein the data packet network is the Internet network.
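The apparatus claims above describe an architecture of processing tribes, each with a dedicated port giving preferential access to one of an equal number of memory blocks (claims 1 through 3). The following is a minimal, hypothetical Python sketch of that structure; all class and method names are illustrative assumptions, not taken from the specification.

```python
# Illustrative model of the claimed engine: N tribes, N memory blocks,
# and N dedicated ports pairing them one-to-one (claims 1-3).
# All names here are hypothetical.

class MemoryBlock:
    """One memory block, storing data for processing threads."""
    def __init__(self, block_id):
        self.block_id = block_id
        self.data = {}

class Tribe:
    """A processing tribe holding thread contexts, with a dedicated
    port giving preferential access to one memory block."""
    def __init__(self, tribe_id, local_block):
        self.tribe_id = tribe_id
        self.local_block = local_block   # dedicated port (claim 3)
        self.contexts = []               # active thread contexts

    def local_read(self, key):
        # Preferential access: reads go through the dedicated port
        # to this tribe's own memory block.
        return self.local_block.data.get(key)

class Engine:
    """Tribes, memory blocks, and ports equal in number (claim 3)."""
    def __init__(self, n):
        self.blocks = [MemoryBlock(i) for i in range(n)]
        self.tribes = [Tribe(i, self.blocks[i]) for i in range(n)]

engine = Engine(4)
engine.blocks[2].data["flow"] = "state"
assert engine.tribes[2].local_read("flow") == "state"
```

The one-to-one pairing in `Engine.__init__` mirrors claim 3's requirement that tribes, blocks, and ports be equal in number, with each tribe holding a dedicated port to its own block.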
- 14. A method for concurrently processing a multiplicity of tasks, the method comprising the steps of:
(a) implementing in a single processing engine a multiplicity of processing tribes, each tribe comprising a multiplicity of context register sets and a multiplicity of processing resources for concurrent processing of a multiplicity of threads to accomplish the tasks;
(b) providing to the processing engine a memory structure having a multiplicity of memory blocks, each block storing data for processing threads, the memory blocks connected to the tribes in such a way that individual ones of the tribes have preferential access to individual ones of the multiplicity of memory blocks;
(c) connecting the tribes through an interconnect structure and control system enabling tribe-to-tribe migration of contexts to move threads from tribe to tribe; and
(d) initiating a thread, including a program counter and context in registers, in a first one of the multiplicity of tribes for each task received.
- 15. The method of claim 14 wherein, in step (b), preferential access from an individual one of the multiplicity of tribes to an individual one of the multiplicity of memory blocks is provided by an individual one of a multiplicity of controlled memory ports.
- 16. The method of claim 15 wherein, in step (b), the multiplicity of tribes, the multiplicity of memory blocks, and the multiplicity of memory ports are equal in number, and each tribe has a dedicated port to a memory block.
- 17. The method of claim 14 wherein, in step (d), processing tasks are received sequentially.
- 18. The method of claim 14 further comprising a step wherein the thread operating in the first one of the tribes is migrated via the interconnect structure to a second one of the tribes before completion of the task associated with the thread, by moving the program counter and at least a portion of the context to registers in the second one of the tribes.
- 19. The method of claim 14 wherein, in step (d), original assignment of tasks received to tribes is at least partially dependent on distribution of processing data among the memory blocks.
- 20. The method of claim 14 wherein original assignment of tasks to tribes is at least partly software controlled.
- 21. The method of claim 14 wherein original assignment of tasks to tribes is at least partly hardware controlled.
- 22. The method of claim 18 wherein migration of a thread from one tribe to another tribe is at least partly dependent on distribution of processing data among the memory blocks.
- 23. The method of claim 18 wherein direction and timing of migration from tribe to tribe is at least partly software controlled.
- 24. The method of claim 18 wherein direction and timing of migration from tribe to tribe is at least partly hardware controlled.
- 25. The method of claim 14 implemented at a first node in a data packet network, wherein the tasks are generated by receipt of data packets and processing the packets for transmission to a second node in the network.
- 26. The method of claim 25 wherein the data packet network is the Internet network.
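Steps (a) through (d) of the method claim, together with the migration of claim 18, can be sketched in Python: a thread is a program counter plus a context, and migration moves both to registers in a second tribe before the task completes. This is a hypothetical sketch; the function names, the `Thread` fields, and the migration trigger are all illustrative assumptions.

```python
# Illustrative sketch of method steps (a)-(d) and migration (claim 18):
# a thread carries a program counter and context; migrating it moves
# the PC and at least part of the context to another tribe.
# All names here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Thread:
    program_counter: int
    context: dict = field(default_factory=dict)

class Tribe:
    def __init__(self, tribe_id):
        self.tribe_id = tribe_id
        self.threads = []

def initiate(tribe, task):
    # Step (d): each task received creates a thread, including a
    # program counter and context, in a first tribe.
    t = Thread(program_counter=0, context={"task": task})
    tribe.threads.append(t)
    return t

def migrate(thread, src, dst):
    # Claim 18: before the task completes, move the program counter
    # and context from the first tribe to registers in the second.
    src.threads.remove(thread)
    dst.threads.append(thread)

tribes = [Tribe(i) for i in range(4)]
th = initiate(tribes[0], task="packet-123")
# Per claim 22, migration may follow the distribution of processing
# data among the memory blocks; here we simply pick tribe 1.
migrate(th, tribes[0], tribes[1])
assert th in tribes[1].threads and th not in tribes[0].threads
```

Whether migration direction and timing are decided in software or hardware (claims 23 and 24) would determine where a policy like the tribe-selection step above actually lives; the sketch leaves it as an explicit call.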
CROSS-REFERENCE TO RELATED DOCUMENTS
[0001] The present patent application is a non-provisional application claiming priority to three provisional patent applications as follows: No. 60/325,638, filed on Sep. 28, 2001; No. 60/341,689, filed on Dec. 17, 2001; and No. 60/388,278, filed on Jun. 13, 2002. Each of these priority documents is incorporated herein in its entirety by reference.
Provisional Applications (3)
| Number | Date | Country |
| --- | --- | --- |
| 60/325,638 | Sep 2001 | US |
| 60/341,689 | Dec 2001 | US |
| 60/388,278 | Jun 2002 | US |