Claims
- 1. A method for performing an application assigned to a compute engine, wherein said compute engine includes a central processing unit coupled to a coprocessor and said compute engine is coupled to a set of cache memory, said method comprising the steps of:
(a) said central processing unit initializing said coprocessor to perform said application; and (b) said coprocessor performing said application,
wherein said step (b) includes the step of:
(1) said coprocessor accessing said cache memory.
- 2. The method of claim 1, wherein said step (b)(1) includes the steps of:
(i) obtaining a physical memory address; and (ii) providing said physical memory address to said set of cache memory.
- 3. The method of claim 2, wherein said step (b)(1)(i) includes the step of:
translating a virtual address to said physical memory address within said coprocessor.
- 4. The method of claim 2, wherein said compute engine is coupled to a memory management unit, said step (b)(1)(i) including the steps of:
said coprocessor providing a virtual address to said memory management unit; and said memory management unit providing said physical memory address to said coprocessor in response to said virtual address.
- 5. The method of claim 2, wherein said coprocessor includes a set of application engines, wherein said step (b) further includes the steps of:
(2) said coprocessor initializing an application engine in said set of application engines to perform a series of steps for said application, and (3) said application engine performing said series of steps for said application.
- 6. The method of claim 5, wherein said step (b)(2) includes the steps of:
(i) providing an enable signal to said application engine; and (ii) providing data and control signals to said application engine.
- 7. The method of claim 5, wherein said step (b)(1) includes the step of:
(iii) retrieving information from said cache memory.
- 8. The method of claim 5, wherein said step (b) further includes the step of:
(4) said coprocessor querying said application engine.
- 9. The method of claim 5, wherein said step (b)(3) includes the steps of:
(i) retrieving information; and (ii) providing said information to a second application engine in said set of application engines.
- 10. The method of claim 9, wherein said step (b) includes the step of:
(4) initializing said second application engine.
- 11. The method of claim 9, wherein said application engine is a media access controller and said second application engine is a streaming output engine coupled to said cache memory, wherein said step (b)(3) includes the step of:
(iii) said streaming output engine storing said information in said cache memory.
- 12. The method of claim 11, wherein said step (b)(3)(i) includes the step of:
said media access controller retrieving said information from a communications medium.
- 13. The method of claim 9, wherein said application engine is a streaming input engine coupled to said cache memory and said second application engine is a media access controller, wherein said step (b)(3)(i) includes the step of:
said streaming input engine retrieving said information from said cache memory.
- 14. The method of claim 13, wherein said step (b)(3) includes the step of:
(iv) said media access controller providing said information to a communications network.
- 15. The method of claim 1, wherein said step (b)(1) includes the step of:
(iii) performing a full cache line transfer.
- 16. The method of claim 15, wherein said full cache line includes 64 bytes.
- 17. The method of claim 1, wherein said method includes the step of:
(c) said coprocessor informing said central processing unit that said application is complete.
- 18. The method of claim 17, wherein said step (c) includes the step of:
(1) said coprocessor providing an interrupt signal to said central processing unit.
- 19. The method of claim 17, wherein said step (c) includes the step of:
(1) said coprocessor setting a bit in an internal register.
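The following is a minimal C sketch, offered only as an illustration of the flow recited in claims 1 and 17 through 19: the central processing unit initializes the coprocessor, the coprocessor performs the application while accessing the cache memory, and completion is reported to the central processing unit either by an interrupt or by setting a bit in an internal register. The type and register names (coproc_t, CP_STATUS_DONE) are hypothetical and do not appear in the claims.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CACHE_LINE_BYTES 64          /* full cache line size, per claim 16      */
#define CP_STATUS_DONE   (1u << 0)   /* completion bit in an internal register  */

typedef struct {
    uint8_t  cache[4 * CACHE_LINE_BYTES]; /* models the attached cache memory   */
    uint32_t status;                      /* coprocessor internal register      */
    void   (*irq)(void);                  /* interrupt line to the CPU          */
} coproc_t;

/* Step (b)(1): the coprocessor accesses cache memory, here one full line. */
static void coproc_access_cache(coproc_t *cp, uint32_t phys_addr, uint8_t *out)
{
    memcpy(out, &cp->cache[phys_addr], CACHE_LINE_BYTES);
}

/* Step (b): the coprocessor performs the assigned application. */
static void coproc_run(coproc_t *cp, uint32_t phys_addr)
{
    uint8_t line[CACHE_LINE_BYTES];
    coproc_access_cache(cp, phys_addr, line);   /* (b)(1): cache access          */
    cp->status |= CP_STATUS_DONE;               /* claim 19: set internal bit    */
    if (cp->irq)
        cp->irq();                              /* claim 18: interrupt the CPU   */
}

static void cpu_isr(void) { puts("CPU: application complete"); }

int main(void)
{
    /* Step (a): the central processing unit initializes the coprocessor. */
    coproc_t cp = { .irq = cpu_isr };
    memset(cp.cache, 0xAB, sizeof cp.cache);

    coproc_run(&cp, 0);                         /* step (b)                      */
    return (cp.status & CP_STATUS_DONE) ? 0 : 1;
}
```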
- 20. A method for performing an application assigned to a compute engine, wherein said compute engine includes a central processing unit coupled to a coprocessor including a sequencer and a set of application engines coupled to said sequencer, said method comprising the steps of:
(a) said central processing unit initializing said coprocessor to perform said application; and (b) said coprocessor performing said application,
wherein said step (b) includes the step of:
(1) said coprocessor initializing an application engine in said set of application engines to perform a series of steps for said application, and (2) said application engine performing said series of steps for said application.
- 21. The method of claim 20, wherein said step (b)(1) includes the steps of:
(i) providing an enable signal to said application engine, and (ii) providing data and control signals to said application engine.
- 22. The method of claim 20, wherein said step (b) further includes the step of:
(3) said coprocessor querying said application engine.
- 23. The method of claim 20, wherein said step (b)(2) includes the steps of:
(i) retrieving information; and (ii) providing said information to a second application engine in said set of application engines.
- 24. The method of claim 20, wherein said step (b) includes the step of:
(3) initializing a second application engine.
- 25. The method of claim 24, wherein said step (b)(2) includes the step of:
(iii) transferring information from said application engine to said second application engine.
- 26. The method of claim 20, wherein said method includes the step of:
(c) said coprocessor informing said central processing unit that said application is complete.
- 27. The method of claim 26, wherein said step (c) includes the steps of:
(1) said sequencer detecting that said application engine has completed said application; and (2) said sequencer generating a signal indicating that said application is complete.
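A comparable C sketch for claims 20 through 27, again using hypothetical names (app_engine_t, sequencer_query), shows the sequencer enabling an application engine with data and control signals, querying it, and generating a completion signal once the engine has finished.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    bool     enable;    /* claim 21(i): enable signal                  */
    uint32_t data;      /* claim 21(ii): data signals                  */
    uint32_t control;   /* claim 21(ii): control signals               */
    bool     done;      /* readable by the sequencer when queried      */
} app_engine_t;

/* Step (b)(2): the engine performs its series of steps for the application. */
static void engine_step(app_engine_t *e)
{
    if (e->enable && !e->done) {
        e->data ^= e->control;   /* stand-in for the engine's real work */
        e->done = true;
    }
}

/* Claim 22: the sequencer queries the application engine. */
static bool sequencer_query(const app_engine_t *e) { return e->done; }

int main(void)
{
    app_engine_t eng = {0};

    /* Step (b)(1): the sequencer initializes the application engine. */
    eng.data    = 0x1234;
    eng.control = 0x00FF;
    eng.enable  = true;

    engine_step(&eng);                           /* step (b)(2)                 */

    if (sequencer_query(&eng))                   /* claim 27(1): detect finish  */
        puts("sequencer: application complete"); /* claim 27(2): signal finish  */
    return 0;
}
```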
- 28. A method for performing an application assigned to a compute engine, wherein said compute engine includes a central processing unit coupled to a coprocessor including a set of application engines, wherein said compute engine is coupled to a set of cache memory, said method comprising the steps of:
(a) said central processing unit initializing said coprocessor to perform said application; and (b) said coprocessor performing said application,
wherein said step (b) includes the step of:
(1) said coprocessor initializing an application engine in said set of application engines to perform a series of steps for said application, and (2) said application engine performing said series of steps for said application, wherein said step (b)(2) includes the step of:
(i) said coprocessor accessing said cache memory.
- 29. The method of claim 28, wherein said compute engine is coupled to a memory management unit, wherein said step (b)(2)(i) includes the steps of:
said coprocessor providing a virtual memory address to said memory management unit; said memory management unit providing a physical memory address to said coprocessor in response to said virtual memory address; and providing said physical memory address to said set of cache memory.
- 30. The method of claim 28, wherein said step (b) further includes the step of:
(3) said coprocessor querying said application engine.
- 31. The method of claim 28, wherein said step (b)(3) includes the steps of:
(i) retrieving information; and (ii) providing said information to a second application engine in said set of application engines.
- 32. The method of claim 31, wherein said application engine is a media access controller and said second application engine is a streaming output engine coupled to said cache memory,
wherein said step (b)(3)(i) includes the step of:
said media access controller retrieving said information from a communications medium, and wherein said step (b)(3) includes the step of:
(iii) said streaming output engine storing said information in said cache memory.
- 33. The method of claim 31, wherein said application engine is a streaming input engine coupled to said cache memory and said second application engine is a media access controller,
wherein said step (b)(3)(i) includes the step of:
said streaming input engine retrieving said information from said cache memory, and wherein said step (b)(3) includes the step of:
(iv) said media access controller providing said information to a communications medium.
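An illustrative sketch of the receive and transmit data paths of claims 32 and 33, in which fixed-size byte arrays stand in for the communications medium and the cache memory; the function names (mac_receive, soe_store_to_cache, and so on) are assumptions rather than terms from the claims.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define LINE 64

static uint8_t medium[LINE];   /* models the communications medium */
static uint8_t cache[LINE];    /* models the attached cache memory */

/* Claim 32 receive path: MAC retrieves from the medium, streaming
 * output engine stores the information in cache memory. */
static void mac_receive(uint8_t *buf)            { memcpy(buf, medium, LINE); }
static void soe_store_to_cache(const uint8_t *b) { memcpy(cache, b, LINE); }

/* Claim 33 transmit path: streaming input engine retrieves from cache
 * memory, MAC provides the information to the communications medium. */
static void sie_load_from_cache(uint8_t *b)      { memcpy(b, cache, LINE); }
static void mac_transmit(const uint8_t *b)       { memcpy(medium, b, LINE); }

int main(void)
{
    uint8_t buf[LINE];
    memset(medium, 0x5A, LINE);          /* pretend a frame has arrived */

    mac_receive(buf);                    /* claim 32: retrieve from medium  */
    soe_store_to_cache(buf);             /* claim 32: store in cache memory */

    sie_load_from_cache(buf);            /* claim 33: retrieve from cache   */
    mac_transmit(buf);                   /* claim 33: provide to medium     */

    printf("round trip %s\n", memcmp(medium, cache, LINE) ? "failed" : "ok");
    return 0;
}
```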
- 34. A method for performing an application assigned to a compute engine, wherein said compute engine includes a central processing unit coupled to a coprocessor including a sequencer and a set of application engines coupled to said sequencer, wherein said compute engine is coupled to a set of cache memory, said method comprising the steps of:
(a) said central processing unit initializing said coprocessor to perform said application; and (b) said coprocessor performing said application,
wherein said step (b) includes the step of:
(1) said sequencer initializing an application engine in said set of application engines to perform a series of steps for said application, and (2) said application engine performing said series of steps for said application, wherein said step (b)(2) includes the step of:
(i) providing a physical memory address to a cache memory; (ii) retrieving information from said physical memory address in said cache memory; and (iii) providing said information to a second application engine in said set of application engines.
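A sketch of the address path underlying claims 29 and 34, assuming a single fixed page mapping and a hypothetical physical frame base (PHYS_BASE): the memory management unit translates a virtual address to a physical address, the physical address is provided to the cache memory, and the retrieved information is handed to a second application engine.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u
#define PHYS_BASE 0x00010000u          /* illustrative physical frame        */

static uint8_t cache_mem[PAGE_SIZE];   /* models the set of cache memory     */

/* MMU: virtual-to-physical translation (claims 4 and 29); one fixed
 * mapping stands in for a real page table. */
static uint32_t mmu_translate(uint32_t vaddr)
{
    return PHYS_BASE | (vaddr & (PAGE_SIZE - 1u));   /* keep the page offset */
}

/* Claim 34 steps (i) and (ii): provide the physical address to the cache
 * and retrieve the information stored there. */
static uint8_t cache_read(uint32_t paddr)
{
    return cache_mem[paddr - PHYS_BASE];
}

/* Claim 34 step (iii): provide the information to a second application engine. */
static void second_engine_consume(uint8_t byte)
{
    printf("second engine received 0x%02X\n", byte);
}

int main(void)
{
    cache_mem[0x20] = 0x7E;                 /* data already resident in cache */

    uint32_t vaddr = 0x00400020u;           /* engine's virtual view          */
    uint32_t paddr = mmu_translate(vaddr);  /* MMU translation                */
    uint8_t  data  = cache_read(paddr);     /* physically addressed access    */
    second_engine_consume(data);            /* hand off to the second engine  */
    return 0;
}
```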
- 35. A processor readable storage medium having processor readable code embodied on said processor readable storage medium, said processor readable code for programming a processor to perform a method for performing an application assigned to a compute engine, wherein said compute engine includes a central processing unit coupled to a coprocessor and said compute engine is coupled to a set of cache memory, said method comprising the steps of:
(a) said central processing unit initializing said coprocessor to perform said application; and (b) said coprocessor performing said application,
wherein said step (b) includes the step of:
(1) said coprocessor accessing said cache memory.
- 36. The processor readable storage medium of claim 35, wherein said coprocessor includes a set of application engines, wherein said step (b) further includes the steps of:
(2) said coprocessor initializing an application engine in said set of application engines to perform a series of steps for said application, and (3) said application engine performing said series of steps for said application.
- 37. The processor readable storage medium of claim 36, wherein said step (b)(2) includes the steps of:
(i) providing an enable signal to said application engine; and (ii) providing data and control signals to said application engine.
- 38. The processor readable storage medium of claim 36, wherein said step (b) further includes the step of:
(4) said coprocessor querying said application engine.
- 39. The processor readable storage medium of claim 36, wherein said step (b)(3) includes the steps of:
(i) retrieving information; and (ii) providing said information to a second application engine in said set of application engines.
- 40. The processor readable storage medium of claim 39, wherein said step (b) includes the step of:
(5) initializing said second application engine.
- 41. The processor readable storage medium of claim 40, wherein said application engine is a media access controller and said second application engine is a streaming output engine coupled to said cache memory,
wherein said step (b)(3)(i) includes the step of:
said media access controller retrieving said information from a communications medium, and wherein said step (b)(3) includes the step of:
(iv) said streaming output engine storing said information in said cache memory.
- 42. The processor readable storage medium of claim 40, wherein said application engine is a streaming input engine coupled to said cache memory and said second application engine is a media access controller,
wherein said step (b)(3)(i) includes the step of:
said streaming input engine retrieving said information from said cache memory, and wherein said step (b)(3) includes the step of:
(v) said media access controller providing said information to a communications medium.
- 43. The processor readable storage medium of claim 35, wherein said method includes the step of:
(c) said coprocessor informing said central processing unit that said application is complete.
- 44. A processor readable storage medium having processor readable code embodied on said processor readable storage medium, said processor readable code for programming a processor to perform a method for performing an application assigned to a compute engine, wherein said compute engine includes a central processing unit coupled to a coprocessor including a sequencer and a set of application engines coupled to said sequencer, said method comprising the steps of:
(a) said central processing unit initializing said coprocessor to perform said application; and (b) said coprocessor performing said application, wherein said coprocessor accesses a set of cache memory in performing said application,
wherein said step (b) includes the step of:
(1) said coprocessor initializing an application engine in said set of application engines to perform a series of steps for said application, and (2) said application engine performing said series of steps for said application.
- 45. The processor readable storage medium of claim 44, wherein said step (b)(1) includes the steps of:
(i) providing an enable signal to said application engine, and (ii) providing data and control signals to said application engine.
- 46. The processor readable storage medium of claim 44, wherein said step (b) further includes the step of:
(3) said coprocessor querying said application engine.
- 47. The processor readable storage medium of claim 44, wherein said step (b)(2) includes the steps of:
(i) retrieving information; and (ii) providing said information to a second application engine in said set of application engines.
- 48. The processor readable storage medium of claim 44,
wherein said step (b) includes the step of:
(3) initializing a second application engine, and wherein said step (b)(2) includes the step of:
(iii) transferring information from said application engine to said second application engine.
- 49. The processor readable storage medium of claim 44, wherein said method includes the step of:
(c) said coprocessor informing said central processing unit that said application is complete, wherein said step (c) includes the steps of:
(1) said sequencer detecting that said application engine has completed said application; and (2) said sequencer generating a signal indicating that said application is complete.
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application is a continuation of, and claims priority under 35 U.S.C. §120 from, U.S. patent application Ser. No. 09/900,481, entitled “Multi-Processor System,” filed on Jul. 6, 2001, which is incorporated herein by reference.
[0002] This Application is related to the following Applications:
[0003] “Coprocessor Including a Media Access Controller,” by Frederick Gruner, Robert Hathaway, Ramesh Panwar, Elango Ganesan and Nazar Zaidi, Attorney Docket No. NEXSI-01021US0, filed the same day as the present application;
[0004] “Compute Engine Employing A Coprocessor,” by Robert Hathaway, Frederick Gruner, and Ricardo Ramirez, Attorney Docket No. NEXSI-01202US0, filed the same day as the present application;
[0005] “Streaming Input Engine Facilitating Data Transfers Between Application Engines And Memory,” by Ricardo Ramirez and Frederick Gruner, Attorney Docket No. NEXSI-01203US0, filed the same day as the present application;
[0006] “Streaming Output Engine Facilitating Data Transfers Between Application Engines And Memory,” by Ricardo Ramirez and Frederick Gruner, Attorney Docket No. NEXSI-01204US0, filed the same day as the present application;
[0007] “Transferring Data Between Cache Memory And A Media Access Controller,” by Frederick Gruner, Robert Hathaway, and Ricardo Ramirez, Attorney Docket No. NEXSI-01211US0, filed the same day as the present application;
[0008] “Processing Packets In Cache Memory,” by Frederick Gruner, Elango Ganesan, Nazar Zaidi, and Ramesh Panwar, Attorney Docket No. NEXSI-01212US0, filed the same day as the present application;
[0009] “Bandwidth Allocation For A Data Path,” by Robert Hathaway, Frederick Gruner, and Mark Bryers, Attorney Docket No. NEXSI-01213US0, filed the same day as the present application;
[0010] “Ring-Based Memory Requests In A Shared Memory Multi-Processor,” by Dave Hass, Frederick Gruner, Nazar Zaidi, Ramesh Panwar, and Mark Vilas, Attorney Docket No. NEXSI-01281US0, filed the same day as the present application;
[0011] “Managing Ownership Of A Full Cache Line Using A Store-Create Operation,” by Dave Hass, Frederick Gruner, Nazar Zaidi, and Ramesh Panwar, Attorney Docket No. NEXSI-01282US0, filed the same day as the present application;
[0012] “Sharing A Second Tier Cache Memory In A Multi-Processor,” by Dave Hass, Frederick Gruner, Nazar Zaidi, and Ramesh Panwar, Attorney Docket No. NEXSI-01283US0, filed the same day as the present application;
[0013] “First Tier Cache Memory Preventing Stale Data Storage,” by Dave Hass, Robert Hathaway, and Frederick Gruner, Attorney Docket No. NEXSI-01284US0, filed the same day as the present application; and
[0014] “Ring Based Multi-Processing System,” by Dave Hass, Mark Vilas, Fred Gruner, Ramesh Panwar, and Nazar Zaidi, Attorney Docket No. NEXSI-01028US0, filed the same day as the present application.
[0015] Each of these related Applications is incorporated herein by reference.
Continuations (1)
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 09900481 | Jul 2001 | US |
| Child | 10105979 | Mar 2002 | US |