Claims
- 1. A method for processing data in a packet from a communications medium, wherein a system processing said data includes a first set of cache memory and a second set of cache memory, wherein said first set of cache memory and said second set of cache memory are coupled to a main memory, said method comprising the steps of:
(a) transferring said data from said communications medium to said first set of cache memory; and (b) transferring said data from said first set of cache memory to said second set of cache memory in response to a request for said data, wherein said data passes from said first set of cache memory to said second set of cache memory in said step (b) without said data being stored in said main memory as part of said step (b).
- 2. The method of claim 1, wherein said step (a) is performed without storing said data in said main memory.
- 3. The method of claim 1, wherein said step (a) is performed without storing said data in a buffer having a number of bytes equal to or greater than a number of bytes in said packet.
- 4. The method of claim 1, wherein said step (b) is performed without storing said data in a buffer having a number of bytes equal to or greater than a number of bytes in said packet.
- 5. The method of claim 1, further including the steps of:
(c) assigning a set of applications to a pipeline set of compute engines, wherein a first application compute engine in said pipeline set of compute engines is coupled to a third set of cache memory; and (d) transferring said data from said second set of cache memory to said third set of cache memory in response to a request for said data from said first application compute engine, wherein said data passes from said second set of cache memory to said third set of cache memory in said step (d) without said data being stored in said main memory as part of said step (d).
- 6. The method of claim 5, further including the step of:
(e) said first application compute engine performing a first application in said set of applications, wherein said step (e) includes the step of said first application compute engine accessing said data in said third set of cache memory.
- 7. The method of claim 6, wherein said first application is an application from the set of applications consisting of virtual private networking, secure sockets layer processing, web caching, hypertext mark-up language compression, virus checking, packet reception, and packet transmission.
- 8. The method of claim 6, wherein said data has an associated MESI state upon being transferred from said second set of cache memory to said third set of cache memory, wherein said MESI state is Exclusive.
- 9. The method of claim 6, wherein said data has an associated MESI state upon being transferred from said second set of cache memory to said third set of cache memory, wherein said MESI state is Shared.
- 10. The method of claim 6, wherein said data has an associated MESI state upon being transferred from said second set of cache memory to said third set of cache memory, wherein said MESI state is Modified.
- 11. The method of claim 1, wherein said data has an associated MESI state upon being transferred from said first set of cache memory to said second set of cache memory, wherein said MESI state is Exclusive.
- 12. The method of claim 1, wherein said data has an associated MESI state upon being transferred from said first set of cache memory to said second set of cache memory, wherein said MESI state is Shared.
- 13. The method of claim 1, wherein said data has an associated MESI state upon being transferred from said first set of cache memory to said second set of cache memory, wherein said MESI state is Modified.
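As an illustrative aside to claims 8–13, the MESI state handed over on a cache-to-cache transfer can be sketched in a few lines. This is a minimal model under our own assumptions, not the claimed implementation; all names (`Mesi`, `CacheLine`, `transfer_line`) are hypothetical, and a real coherence protocol would involve snooping and directory logic the claims leave open.

```python
from dataclasses import dataclass
from enum import Enum

class Mesi(Enum):
    """The four MESI coherence states recited in the claims."""
    MODIFIED = "M"
    EXCLUSIVE = "E"
    SHARED = "S"
    INVALID = "I"

@dataclass
class CacheLine:
    data: bytes = b""
    state: Mesi = Mesi.INVALID

def transfer_line(src: CacheLine, dst: CacheLine, grant: Mesi) -> None:
    """Move a line cache-to-cache without a main-memory write-back.

    `grant` is the state given to the requester; the dependent claims
    contemplate Exclusive, Shared, or Modified grants.  In this sketch
    the source keeps a Shared copy only when the grant itself is
    Shared, and is invalidated otherwise.
    """
    dst.data = src.data
    dst.state = grant
    src.state = Mesi.SHARED if grant is Mesi.SHARED else Mesi.INVALID
```

The point of the sketch is the data path: `transfer_line` never touches a memory object at all, mirroring the "without said data being stored in said main memory" limitation.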
- 14. A method for processing data, wherein a system processing said data includes a first set of cache memory containing said data and a second set of cache memory coupled to a first application compute engine in a pipeline set of compute engines, wherein said first set of cache memory and said second set of cache memory are coupled to a main memory, said method comprising the steps of:
(a) assigning a set of applications to said pipeline set of compute engines; and (b) transferring said data from said first set of cache memory to said second set of cache memory in response to a request for said data from said first application compute engine, wherein said data passes from said first set of cache memory to said second set of cache memory in said step (b) without said data being stored in said main memory as part of said step (b).
- 15. The method of claim 14, further including the step of:
(c) said first application compute engine performing a first application in said set of applications, wherein said step (c) includes the step of said first application compute engine accessing said data in said second set of cache memory.
- 16. The method of claim 15, wherein said first application is an application from the set of applications consisting of virtual private networking, secure sockets layer processing, web caching, hypertext mark-up language compression, virus checking, packet reception, and packet transmission.
- 17. The method of claim 15, wherein said data has an associated MESI state upon being transferred from said first set of cache memory to said second set of cache memory, wherein said MESI state is Exclusive.
- 18. The method of claim 15, wherein said data has an associated MESI state upon being transferred from said first set of cache memory to said second set of cache memory, wherein said MESI state is Shared.
- 19. The method of claim 15, wherein said data has an associated MESI state upon being transferred from said first set of cache memory to said second set of cache memory, wherein said MESI state is Modified.
- 20. A method for processing data in a packet from a communications medium, wherein a system processing said data includes a first set of cache memory and a second set of cache memory, wherein said first set of cache memory and said second set of cache memory are coupled to a main memory, said method comprising the steps of:
(a) transferring said data from said communications medium to said first set of cache memory; (b) transferring said data from said first set of cache memory to said second set of cache memory in response to a request for said data; (c) assigning a set of applications to a pipeline set of compute engines, wherein a first application compute engine in said pipeline set of compute engines is coupled to a third set of cache memory; and (d) transferring said data from said second set of cache memory to said third set of cache memory in response to a request for said data from said first application compute engine, wherein said data passes from said first set of cache memory to said second set of cache memory in said step (b) without said data being stored in said main memory as part of said step (b), and wherein said data passes from said second set of cache memory to said third set of cache memory in said step (d) without said data being stored in said main memory as part of said step (d).
- 21. A method for processing data, said method comprising the steps of:
(a) receiving data into a first cache memory in a set of cache memory; (b) identifying a set of applications to be performed in relation to said data; (c) identifying a pipeline set of compute engines for performing said set of applications; and (d) transferring said data from said first cache memory to a second cache memory in said set of cache memory, without transferring said data to a memory outside said set of cache memory.
- 22. The method of claim 21, wherein said pipeline set of compute engines includes a first application compute engine coupled to said second cache memory.
- 23. The method of claim 22, further including the step of:
(e) said first application compute engine performing a first application in said set of applications, wherein said step (e) includes the step of said first application compute engine accessing said data in said second cache memory.
- 24. The method of claim 23, wherein said first application is an application from the set of applications consisting of virtual private networking, secure sockets layer processing, web caching, hypertext mark-up language compression, virus checking, packet reception, and packet transmission.
- 25. The method of claim 21, wherein said data has an associated MESI state upon being transferred from said first cache memory to said second cache memory, wherein said MESI state is Exclusive.
- 26. The method of claim 21, wherein said data has an associated MESI state upon being transferred from said first cache memory to said second cache memory, wherein said MESI state is Shared.
- 27. The method of claim 21, wherein said data has an associated MESI state upon being transferred from said first cache memory to said second cache memory, wherein said MESI state is Modified.
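Claims 21–27 recite identifying a set of applications and a pipeline set of compute engines to perform them. One simple way to realize the assignment step is a round-robin mapping, sketched below; the claims leave the assignment policy entirely open, and the application list and function names here are our own illustrative choices.

```python
# Applications drawn from the Markush group recited in claims 7, 16,
# 24, and 37 (an illustrative subset, in an arbitrary order).
APPLICATIONS = [
    "packet reception",
    "virus checking",
    "secure sockets layer processing",
    "packet transmission",
]

def assign_pipeline(applications, engine_count):
    """Assign applications round-robin to a pipeline of compute engines.

    Returns a mapping from engine index to the list of applications
    that engine performs.  This is one hypothetical policy; the claims
    do not fix how the assignment is made.
    """
    pipeline = {i: [] for i in range(engine_count)}
    for i, app in enumerate(applications):
        pipeline[i % engine_count].append(app)
    return pipeline
```

With two engines, the four applications above split alternately between engine 0 and engine 1; each engine would then access packet data in its own coupled cache, per claims 22–23.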
- 28. An apparatus comprising:
a first set of cache memory; a second set of cache memory coupled to said first set of cache memory; a second compute engine coupled to said second set of cache memory; a non-cache memory coupled to said first set of cache memory and said second set of cache memory; a first means for transferring data from a communications medium to said first set of cache memory; and a second means for transferring said data from said first set of cache memory to said second set of cache memory in response to a request for said data from said second compute engine, wherein said second means transfers said data from said first set of cache memory to said second set of cache memory without said data being stored in said non-cache memory.
- 29. The apparatus of claim 28, further including:
a third set of cache memory coupled to said first set of cache memory, said second set of cache memory, and said non-cache memory; a pipeline set of compute engines including a first application compute engine coupled to said third set of cache memory; an assignment means for assigning a set of applications to said pipeline set of compute engines; and a third means for transferring said data from said second set of cache memory to said third set of cache memory in response to a request for said data from said first application compute engine, wherein said data passes from said second set of cache memory to said third set of cache memory without said data being stored in said non-cache memory.
- 30. The apparatus of claim 29, wherein said data has an associated MESI state upon being transferred from said second set of cache memory to said third set of cache memory, wherein said MESI state is Exclusive.
- 31. The apparatus of claim 29, wherein said data has an associated MESI state upon being transferred from said second set of cache memory to said third set of cache memory, wherein said MESI state is Shared.
- 32. The apparatus of claim 29, wherein said data has an associated MESI state upon being transferred from said second set of cache memory to said third set of cache memory, wherein said MESI state is Modified.
- 33. The apparatus of claim 29, wherein said apparatus is formed on a single integrated circuit.
- 34. A method for processing data in a packet from a communications medium, wherein a system processing said data includes a set of cache memory adapted for coupling to a main memory, said method comprising the steps of:
(a) transferring said data from said communications medium to said set of cache memory, wherein said data passes from said communications medium to said set of cache memory in said step (a) without said data being stored in said main memory as part of said step (a); (b) assigning a set of applications to a pipeline set of compute engines, wherein a first application compute engine in said pipeline set of compute engines is coupled to said set of cache memory; and (c) said first application compute engine performing a first application in said set of applications, wherein said step (c) includes the step of said first application compute engine accessing said data in said set of cache memory.
- 35. The method of claim 34, wherein said step (c) is performed without storing said data in said main memory.
- 36. The method of claim 34, wherein said step (a) is performed without storing said data in a buffer having a number of bytes equal to or greater than a number of bytes in said packet.
- 37. The method of claim 34, wherein said first application is an application from the set of applications consisting of virtual private networking, secure sockets layer processing, web caching, hypertext mark-up language compression, virus checking, packet reception, and packet transmission.
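The end-to-end data path running through the independent claims (1, 20, 34) is: packet data enters a first cache directly from the communications medium, then hops cache-to-cache until the consuming compute engine reaches it, with main memory never on the path. A minimal simulation of that path, under our own assumed names (`MainMemory`, `receive_and_process`), is:

```python
class MainMemory:
    """Stand-in for main memory; counts stores so the sketch can show
    the data path bypasses it entirely."""
    def __init__(self):
        self.stores = 0

    def store(self, data):
        self.stores += 1

def receive_and_process(packet: bytes, caches: list, memory: MainMemory) -> bytes:
    """Move packet data medium -> first cache -> ... -> last cache.

    Each hop is a direct cache-to-cache copy; memory.store() is never
    called, mirroring the 'without said data being stored in said main
    memory' limitation.  An illustrative sketch, not the claimed design.
    """
    caches[0] = packet               # step (a): medium to first cache
    for i in range(1, len(caches)):  # steps (b)/(d): cache-to-cache hops
        caches[i] = caches[i - 1]
    return caches[-1]                # data as seen by the last engine
```

Running the sketch with three cache slots delivers the packet to the last cache while the main-memory store counter stays at zero, which is the property the "without said data being stored" limitations capture.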
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application is a continuation of, and claims priority under 35 U.S.C. §120 from, U.S. patent application Ser. No. 09/900,481, entitled “Multi-Processor System,” filed on Jul. 6, 2001, which is incorporated herein by reference.
[0002] This Application is related to the following Applications:
[0003] “Coprocessor Including a Media Access Controller,” by Frederick Gruner, Robert Hathaway, Ramesh Panwar, Elango Ganesan and Nazar Zaidi, Attorney Docket No. NEXSI-01021US0, filed the same day as the present application;
[0004] “Application Processing Employing A Coprocessor,” by Frederick Gruner, Robert Hathaway, Ramesh Panwar, Elango Ganesan, and Nazar Zaidi, Attorney Docket No. NEXSI-01201US0, filed the same day as the present application;
[0005] “Compute Engine Employing A Coprocessor,” by Robert Hathaway, Frederick Gruner, and Ricardo Ramirez, Attorney Docket No. NEXSI-01202US0, filed the same day as the present application;
[0006] “Streaming Input Engine Facilitating Data Transfers Between Application Engines And Memory,” by Ricardo Ramirez and Frederick Gruner, Attorney Docket No. NEXSI-01203US0, filed the same day as the present application;
[0007] “Streaming Output Engine Facilitating Data Transfers Between Application Engines And Memory,” by Ricardo Ramirez and Frederick Gruner, Attorney Docket No. NEXSI-01204US0, filed the same day as the present application;
[0008] “Transferring Data Between Cache Memory And A Media Access Controller,” by Frederick Gruner, Robert Hathaway, and Ricardo Ramirez, Attorney Docket No. NEXSI-01211US0, filed the same day as the present application;
[0009] “Bandwidth Allocation For A Data Path,” by Robert Hathaway, Frederick Gruner, and Mark Bryers, Attorney Docket No. NEXSI-01213US0, filed the same day as the present application;
[0010] “Ring-Based Memory Requests In A Shared Memory Multi-Processor,” by Dave Hass, Frederick Gruner, Nazar Zaidi, Ramesh Panwar, and Mark Vilas, Attorney Docket No. NEXSI-01281US0, filed the same day as the present application;
[0011] “Managing Ownership Of A Full Cache Line Using A Store-Create Operation,” by Dave Hass, Frederick Gruner, Nazar Zaidi, and Ramesh Panwar, Attorney Docket No. NEXSI-01282US0, filed the same day as the present application;
[0012] “Sharing A Second Tier Cache Memory In A Multi-Processor,” by Dave Hass, Frederick Gruner, Nazar Zaidi, and Ramesh Panwar, Attorney Docket No. NEXSI-01283US0, filed the same day as the present application;
[0013] “First Tier Cache Memory Preventing Stale Data Storage,” by Dave Hass, Robert Hathaway, and Frederick Gruner, Attorney Docket No. NEXSI-01284US0, filed the same day as the present application; and
[0014] “Ring Based Multi-Processing System,” by Dave Hass, Mark Vilas, Fred Gruner, Ramesh Panwar, and Nazar Zaidi, Attorney Docket No. NEXSI-01028US0, filed the same day as the present application.
[0015] Each of these related Applications is incorporated herein by reference.
Continuations (1)
| | Number | Date | Country |
|---|---|---|---|
| Parent | 09900481 | Jul 2001 | US |
| Child | 10105151 | Mar 2002 | US |