Claims
- 1. A data distributor in a computational unit, wherein the data distributor receives data from a network in an adaptive computing engine and distributes the data to components within the computational unit, the data distributor comprising:
an input mechanism for receiving the data; a distribution mechanism responsive to a control signal for distributing the data to a selected component; and a control mechanism responsive to a control signal for distributing the data to the selected component in a selected manner.
- 2. The data distributor of claim 1, wherein a selected component includes a register.
- 3. The data distributor of claim 1, wherein a selected component includes a memory.
- 4. The data distributor of claim 1, wherein a selected manner of distributing the data includes using a look-up table.
- 5. The data distributor of claim 4, wherein a selected manner of distributing the data includes using an output port number to distribute the data.
- 6. The data distributor of claim 4, wherein a selected manner of distributing the data includes using a direct-memory address transfer to distribute the data.
- 7. The data distributor of claim 4, wherein a selected manner of distributing the data includes using an interrupt to distribute the data.
- 8. The data distributor of claim 1, wherein the input mechanism includes a pipeline register.
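Claims 1 and 4-8 describe a distributor in which a control signal selects, via a look-up table, both the destination component (register or memory) and the manner of delivery (output port, DMA-style transfer, or interrupt). A minimal sketch of that structure, with all class, method, and port names being illustrative assumptions rather than language from the claims:

```python
# Hypothetical sketch of the claimed data distributor. A look-up table,
# keyed by an output port number carried in the control signal, selects
# both the destination component (register or memory) and the manner of
# delivery (direct register write, DMA-style memory write, or
# interrupt-flagged delivery). Names are illustrative, not from the claims.

class DataDistributor:
    def __init__(self):
        self.registers = {}           # selected component: register file
        self.memory = [0] * 16        # selected component: local memory
        self.interrupts = []          # interrupts raised on delivery
        # Look-up table: output port number -> (delivery handler, target)
        self.lut = {
            0: (self._to_register, "r0"),
            1: (self._to_memory, 4),
            2: (self._via_interrupt, "data_ready"),
        }

    def _to_register(self, name, word):
        self.registers[name] = word

    def _to_memory(self, addr, word):
        # DMA-style transfer: write directly to a memory address
        self.memory[addr] = word

    def _via_interrupt(self, line, word):
        self.memory[0] = word
        self.interrupts.append(line)  # signal the receiving component

    def distribute(self, port, word):
        """The control signal (port) selects component and manner via the LUT."""
        handler, target = self.lut[port]
        handler(target, word)


d = DataDistributor()
d.distribute(0, 0xAB)   # routed to register r0
d.distribute(1, 0xCD)   # DMA-style write to memory[4]
d.distribute(2, 0xEF)   # delivered with an interrupt
```

The pipeline register of claim 8 would sit in front of `distribute`, latching the incoming network word for one cycle before routing; it is omitted here for brevity.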
- 9. A data aggregator in a computational unit in an adaptive computing engine, wherein the data aggregator aggregates data for transfer from the computational unit to the network, wherein the computational unit includes multiple components, wherein each component can request transfer of data to the network, the data aggregator comprising:
an output register coupled to the network; and an arbiter mechanism for arbitrating priority of the requests from the multiple components.
- 10. The data aggregator of claim 9, wherein one of the multiple components includes a Peek/Poke Module.
- 11. The data aggregator of claim 9, wherein one of the multiple components includes an Execution Unit.
- 12. The data aggregator of claim 9, wherein one of the multiple components includes a DMA Engine.
- 13. The data aggregator of claim 9, wherein one of the multiple components includes an HTM Message Generator.
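Claims 9-13 describe an arbiter that prioritizes transfer requests from multiple components (Peek/Poke Module, Execution Unit, DMA Engine, HTM Message Generator) and moves the winner's data into an output register coupled to the network. The sketch below assumes a fixed-priority scheme and an arbitrary priority order; the claims do not specify either, so both are illustrative assumptions:

```python
# Hypothetical sketch of the claimed data aggregator: each cycle, a
# fixed-priority arbiter grants one requesting component and latches its
# word into the output register coupled to the network. The component
# names follow claims 10-13; the priority order and the fixed-priority
# scheme itself are illustrative assumptions.

PRIORITY = [
    "Peek/Poke Module",
    "Execution Unit",
    "DMA Engine",
    "HTM Message Generator",
]

def arbitrate(requests):
    """requests: dict mapping component name -> pending output word.
    Returns (winner, word) and removes the granted request, or None
    when no component is requesting."""
    for component in PRIORITY:
        if component in requests:
            return component, requests.pop(component)
    return None

# One arbitration cycle: two components request simultaneously.
output_register = None
pending = {"DMA Engine": 0x11, "Execution Unit": 0x22}
grant = arbitrate(pending)
if grant:
    winner, output_register = grant   # Execution Unit outranks DMA Engine here
```

Other arbitration policies (round-robin, weighted) would fit the same interface; only the loop order in `arbitrate` changes.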
- 14. A method for distributing data in a computational unit, the method comprising:
receiving data from a network in an adaptive computing engine; distributing the data to components within the computational unit according to a control signal for distributing the data to a selected component and according to a control signal for distributing the data to the selected component in a selected manner.
- 15. The method of claim 14, wherein a selected component includes a register.
- 16. The method of claim 14, wherein a selected component includes a memory.
- 17. The method of claim 14, wherein a selected manner of distributing the data includes using a look-up table.
- 18. The method of claim 17, wherein a selected manner of distributing the data includes using an output port number to distribute the data.
- 19. The method of claim 17, wherein a selected manner of distributing the data includes using a direct-memory address transfer to distribute the data.
- 20. The method of claim 17, wherein a selected manner of distributing the data includes using an interrupt to distribute the data.
- 21. The method of claim 14, wherein receiving the data includes using a pipeline register.
- 22. A method for outputting data to a network from a computational unit in an adaptive computing engine, the method comprising:
arbitrating among multiple components to select a component's output data to transfer to the network.
- 23. The method of claim 22, further comprising:
sending data to an output register coupled to the network; and prioritizing the requests from the multiple components.
- 24. The method of claim 22, wherein one of the multiple components includes a Peek/Poke Module.
- 25. The method of claim 22, wherein one of the multiple components includes an Execution Unit.
- 26. The method of claim 22, wherein one of the multiple components includes a DMA Engine.
- 27. The method of claim 22, wherein one of the multiple components includes an HTM Message Generator.
CLAIM OF PRIORITY
[0001] This application claims priority from U.S. Provisional Patent Application No. 60/391,874, filed on Jun. 25, 2002 entitled “DIGITAL PROCESSING ARCHITECTURE FOR AN ADAPTIVE COMPUTING MACHINE”; which is hereby incorporated by reference as if set forth in full in this document for all purposes.
[0002] This application is related to U.S. patent application Ser. No. 09/815,122, filed on Mar. 22, 2001, entitled “ADAPTIVE INTEGRATED CIRCUITRY WITH HETEROGENEOUS AND RECONFIGURABLE MATRICES OF DIVERSE AND ADAPTIVE COMPUTATIONAL UNITS HAVING FIXED, APPLICATION SPECIFIC COMPUTATIONAL ELEMENTS.”
[0003] This application is also related to the following copending applications:
[0004] U.S. patent application No. [TBD], filed on [TBD], entitled, “HARDWARE TASK MANAGER FOR ADAPTIVE COMPUTING” (Attorney Docket 21202-003500US); and
[0005] U.S. patent application No. [TBD], filed on [TBD], entitled, “PROCESSING ARCHITECTURE FOR A RECONFIGURABLE ARITHMETIC NODE IN AN ADAPTIVE COMPUTING SYSTEM” (Attorney Docket 21202-002910US).
Provisional Applications (1)
| Number | Date | Country |
| --- | --- | --- |
| 60391874 | Jun 2002 | US |