Claims
- 1. A processor, comprising:
at least two cores, each of the at least two cores having a first level cache memory, each of the at least two cores being multi-threaded; an interconnect structure; a plurality of cache bank memories in communication with the at least two cores through the interconnect structure, each of the plurality of cache bank memories in communication with a main memory interface.
- 2. The processor of claim 1, wherein the interconnect structure includes a crossbar in communication with each of the plurality of cache bank memories and the at least two cores, and a buffer switch core in communication with each of the plurality of cache bank memories.
- 3. The processor of claim 2, further including:
an input/output bridge in communication with the crossbar and an input/output device, the input/output bridge enabling control register transfers with the input/output device.
- 4. The processor of claim 2, wherein the buffer switch core enables direct memory accesses.
- 5. The processor of claim 1, wherein the first level cache memory includes an instruction cache unit and a data cache unit.
- 6. The processor of claim 1, wherein each thread associated with the at least two cores is configured to run on a pipeline.
- 7. The processor of claim 6, wherein the pipeline is a single issue pipeline.
- 8. The processor of claim 1, wherein the cache bank memories are single ported static random access memories.
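Although the claims above stand on their own, the topology recited in claims 1, 2, 5, and 8 can be summarized in code. The following C sketch is purely illustrative: the core and bank counts, the cache sizes, and the bank-selection function are assumptions made for the example, not limitations drawn from the claims.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical parameters -- the claims recite only "at least two"
 * cores and "a plurality" of cache banks. */
#define NUM_CORES        8
#define THREADS_PER_CORE 4
#define NUM_CACHE_BANKS  8

struct l1_cache {                /* per-core first level cache (claim 5) */
    uint8_t icache[16 * 1024];   /* instruction cache unit */
    uint8_t dcache[8 * 1024];    /* data cache unit */
};

struct core {                    /* multithreaded core (claim 1) */
    struct l1_cache l1;
    int active_thread;           /* one of THREADS_PER_CORE contexts */
};

struct cache_bank {              /* single ported SRAM bank (claim 8) */
    uint8_t sram[128 * 1024];
    int     port_busy;           /* single port: one access at a time */
};

/* Crossbar routing (claim 2): any core can reach any bank.  A simple
 * address-interleaved bank-selection function is assumed here. */
static struct cache_bank *crossbar_route(struct cache_bank banks[],
                                         uint64_t paddr)
{
    size_t bank = (paddr >> 6) % NUM_CACHE_BANKS;  /* 64-byte lines */
    return &banks[bank];
}
```

Interleaving on low-order line-address bits is only one plausible bank-selection policy; the claims recite only that the crossbar places each core in communication with each cache bank.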
- 9. A server, comprising:
an application processor chip, the application processor chip including:
a plurality of multithreaded central processing unit cores, each of the plurality of multithreaded central processing unit cores having a first level cache memory; an interconnect structure; a plurality of cache bank memories in communication with the plurality of multithreaded central processing unit cores through the interconnect structure, each of the plurality of cache bank memories in communication with a main memory interface.
- 10. The server of claim 9, wherein the interconnect structure includes a crossbar in communication with each of the plurality of cache bank memories and the plurality of multithreaded central processing unit cores, and a buffer switch core in communication with each of the plurality of cache bank memories.
- 11. The server of claim 9, wherein the server is selected from the group consisting of a web server, an application server and a database server.
- 12. The server of claim 10, wherein the application processor chip includes,
an input/output bridge in communication with the crossbar and an input/output device, the input/output bridge enabling control register transfers with the input/output device.
- 13. The server of claim 9, wherein the first level cache memory includes an instruction cache unit and a data cache unit.
- 14. The server of claim 9, wherein each thread of the central processing unit cores is configured to run on a single issue pipeline.
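The input/output bridge of claims 3 and 12 enables "control register transfers with the input/output device." One conventional reading is memory-mapped register access routed through the bridge; the C sketch below assumes that reading, and the base address and register offsets are invented for illustration rather than taken from the specification.

```c
#include <stdint.h>

/* Hypothetical memory-mapped window through which the input/output
 * bridge of claims 3 and 12 exposes device control registers; the
 * address and register layout are illustrative assumptions only. */
#define IO_BRIDGE_BASE  0xFF00000000ULL

static inline volatile uint32_t *ctrl_reg(uint64_t offset)
{
    return (volatile uint32_t *)(uintptr_t)(IO_BRIDGE_BASE + offset);
}

/* A control register transfer: the crossbar forwards the access to
 * the input/output bridge, which completes it against the device. */
static inline void device_start(void)
{
    *ctrl_reg(0x10) = 1u;            /* write: set a start bit */
}

static inline uint32_t device_status(void)
{
    return *ctrl_reg(0x14);          /* read: poll a status register */
}
```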
- 15. A method for optimizing utilization of a multithreaded processor core, comprising:
accessing a processor core through a first thread operation; issuing a long latency operation through the first thread; suspending the first thread; identifying a second thread operation ready to access the processor core; and processing the second thread operation through the processor core while the first thread performs the long latency operation in the background.
- 16. The method of claim 15, wherein the method operation of identifying a second thread operation ready to access the processor core includes,
selecting the second thread operation according to a scheduling algorithm.
- 17. The method of claim 15, wherein the processor core includes four threads.
- 18. The method of claim 15, further including:
providing an integrated circuit chip having eight processor cores, wherein each of the processor cores includes four threads.
- 19. The method of claim 15, wherein the method operation of suspending the first thread includes,
obtaining a result from the long latency operation; and after obtaining the result from the long latency operation, indicating the first thread is ready to be run on the processor core.
- 20. The method of claim 15, wherein each thread of the multithreaded processor core is configured as a single issue pipeline using in-order execution.
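Claims 15 through 20 recite a method of hiding long latency operations by switching threads. The C sketch below models one way such a policy could behave: the round-robin selection stands in for the unspecified "scheduling algorithm" of claim 16, and all structure and function names are hypothetical.

```c
#include <stdbool.h>

#define THREADS_PER_CORE 4   /* four threads per core (claim 17) */

enum thread_state { READY, RUNNING, SUSPENDED };

struct thread_ctx {
    enum thread_state state;
    bool result_available;   /* set when the long latency op completes */
};

struct core_sched {
    struct thread_ctx threads[THREADS_PER_CORE];
    int current;
};

/* Claim 16: "selecting the second thread operation according to a
 * scheduling algorithm."  Round-robin is assumed here purely for
 * illustration; the claim does not name a particular algorithm. */
static int select_ready_thread(struct core_sched *c)
{
    for (int i = 1; i <= THREADS_PER_CORE; i++) {
        int t = (c->current + i) % THREADS_PER_CORE;
        if (c->threads[t].state == READY)
            return t;
    }
    return -1;               /* no thread ready: the pipeline idles */
}

/* Claim 15: on issuing a long latency operation (e.g. a cache miss),
 * suspend the issuing thread and run another in its place. */
static void on_long_latency_op(struct core_sched *c)
{
    c->threads[c->current].state = SUSPENDED;
    int next = select_ready_thread(c);
    if (next >= 0) {
        c->threads[next].state = RUNNING;
        c->current = next;   /* second thread proceeds while the
                                first waits in the background */
    }
}

/* Claim 19: once the long latency result returns, mark the suspended
 * thread ready to run on the processor core again. */
static void on_result(struct core_sched *c, int t)
{
    c->threads[t].result_available = true;
    c->threads[t].state = READY;
}
```

In this model the suspended thread never stalls the pipeline: on_long_latency_op hands the core to a ready thread, and on_result simply marks the waiting thread runnable again, matching the "background" language of claim 15.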
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority from U.S. Provisional Patent Application No. 60/345,315, filed Oct. 22, 2001 and entitled “High Performance Web Server,” which is herein incorporated by reference.
Provisional Applications (1)

| Number | Date | Country |
| --- | --- | --- |
| 60/345,315 | Oct 2001 | US |