Claims
- 1. A method for managing memory access in a system including a plurality of processing clusters and a snoop controller adapted to service memory requests, wherein said snoop controller and each processing cluster in said plurality of processing clusters are coupled to a snoop ring, said method comprising the steps of:
(a) forwarding a memory request from a first processing cluster in said plurality of processing clusters to said snoop controller; (b) placing a snoop request from said snoop controller on said snoop ring in response to said memory request; (c) a second processing cluster in said plurality of processing clusters receiving said snoop request; and (d) said second processing cluster generating a response to said snoop request.
- 2. The method of claim 1, wherein said memory request requests access to a memory location and said snoop request calls for a change in ownership status of said memory location.
- 3. The method of claim 2, wherein said snoop request includes an instruction from a set of instructions consisting of:
a snoop own instruction instructing a processing cluster to transfer exclusive ownership of said memory location to another processing cluster, a snoop share instruction instructing a processing cluster to transfer shared ownership of said memory location to another processing cluster, and a snoop kill instruction instructing a processing cluster to release ownership of a memory location without performing any data transfers.
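The three snoop instruction types recited in claim 3 lend themselves to a short illustration. The following C sketch is illustrative only; the identifiers (snoop_op_t, line_state_t, apply_snoop) and the particular state transitions chosen are assumptions made for exposition and are not taken from the claims.

```c
/* Minimal sketch of the three snoop instruction types recited in claim 3.
 * All names here (snoop_op_t, line_state_t, apply_snoop) are illustrative
 * and do not appear in the claims. */
#include <stdio.h>

typedef enum { SNOOP_OWN, SNOOP_SHARE, SNOOP_KILL } snoop_op_t;

/* MESI-style ownership status a processing cluster keeps for one location. */
typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } line_state_t;

/* Apply a snoop instruction to the local copy of a memory location.
 * Returns 1 when the cluster must also transfer the location's data in its
 * response, 0 when no data transfer is required (the snoop kill case). */
static int apply_snoop(snoop_op_t op, line_state_t *state)
{
    switch (op) {
    case SNOOP_OWN:   /* hand exclusive ownership to another cluster */
        *state = INVALID;
        return 1;
    case SNOOP_SHARE: /* one common choice: downgrade the local copy to shared */
        *state = SHARED;
        return 1;
    case SNOOP_KILL:  /* release ownership; no data moves */
        *state = INVALID;
        return 0;
    }
    return 0;
}

int main(void)
{
    line_state_t line = MODIFIED;
    int data_needed = apply_snoop(SNOOP_OWN, &line);
    printf("state=%d data_needed=%d\n", line, data_needed);
    return 0;
}
```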
- 4. The method of claim 2, wherein said step (d) includes the step of:
(1) said second processing cluster modifying an ownership status for said memory location, wherein said response reflects said modification of ownership status.
- 5. The method of claim 4, wherein said step (d) includes the step of:
(2) said second processing cluster placing a MESI state indicator and contents of said memory location in said response.
- 6. The method of claim 5, including the step of:
(e) forwarding said response to said first processing cluster.
- 7. The method of claim 6, wherein each processing cluster in said plurality of processing clusters is coupled to a data ring, wherein said step (e) includes the steps of:
(1) said second processing cluster placing said response on said data ring; and (2) said first processing cluster retrieving said response from said data ring.
- 8. The method of claim 7, wherein said step (e) includes the steps of:
(3) a third processing cluster in said plurality of processing clusters retrieving said response from said data ring; and (4) said third processing cluster placing said response on said data ring.
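Claims 7 and 8 describe the response travelling hop by hop around the data ring, with an intermediate cluster retrieving the response and placing it back on the ring until the requesting cluster receives it. A minimal sketch of that forwarding pattern, assuming a four-cluster ring and using identifiers that are illustrative rather than taken from the claims, follows.

```c
/* Sketch of hop-by-hop forwarding of a snoop response around a data ring,
 * as in claims 7 and 8. Identifiers are illustrative, not from the claims. */
#include <stdio.h>

#define NUM_CLUSTERS 4

typedef struct {
    int dest_cluster;   /* cluster that issued the original memory request */
    int mesi_state;     /* ownership status reported with the response     */
    unsigned data;      /* contents of the memory location                  */
} ring_response_t;

/* Each cluster inspects the response at its ring stop; if the response is
 * not addressed to it, the cluster places it back on the ring toward the
 * next stop (claim 8, steps (3) and (4)). */
static void forward_on_data_ring(ring_response_t resp, int source_cluster)
{
    int stop = (source_cluster + 1) % NUM_CLUSTERS;
    while (stop != resp.dest_cluster) {
        printf("cluster %d retrieves and re-places the response\n", stop);
        stop = (stop + 1) % NUM_CLUSTERS;
    }
    printf("cluster %d (requester) retrieves the response\n", stop);
}

int main(void)
{
    ring_response_t resp = { .dest_cluster = 0, .mesi_state = 2, .data = 0xABCD };
    forward_on_data_ring(resp, 2);  /* response produced by cluster 2 */
    return 0;
}
```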
- 9. The method of claim 2, wherein said first processing cluster includes a compute engine coupled to a first tier cache memory and a second tier cache memory coupled to said first tier cache memory, said method further including the step of:
(f) before said step (a) is performed, said first processing cluster determining that said first tier cache memory and said second tier cache memory do not own said memory location.
- 10. The method of claim 9, wherein said step (f) includes the steps of:
(1) said compute engine issuing said memory request to said first tier cache memory; (2) determining that said first tier cache memory does not own said memory location; (3) issuing said memory request to said second tier cache memory; and (4) determining that said second tier cache memory does not own said memory location.
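Claims 9 and 10 recite checking the first tier cache and then the second tier cache before the cluster forwards the request to the snoop controller. The sketch below illustrates that ordering; cache_owns, forward_to_snoop_controller, and the hard-coded miss result are illustrative assumptions, not part of the claimed method.

```c
/* Sketch of the first/second tier miss determination in claims 9 and 10.
 * The tier structures and function names are illustrative stand-ins. */
#include <stdbool.h>
#include <stdio.h>

typedef struct { int id; /* placeholder for a tier's tag/state arrays */ } cache_t;

static bool cache_owns(const cache_t *c, unsigned long addr)
{
    (void)c; (void)addr;
    return false;               /* assume a miss in both tiers for this sketch */
}

static void forward_to_snoop_controller(unsigned long addr)
{
    printf("forwarding request for 0x%lx to the snoop controller\n", addr);
}

/* Claim 10: the compute engine tries the first tier cache, then the second
 * tier cache; only after both misses does the cluster forward the request
 * (step (a) of claim 1). */
static void issue_memory_request(cache_t *l1, cache_t *l2, unsigned long addr)
{
    if (cache_owns(l1, addr))
        return;                 /* first tier owns the location: serviced locally  */
    if (cache_owns(l2, addr))
        return;                 /* second tier owns the location: serviced locally */
    forward_to_snoop_controller(addr);
}

int main(void)
{
    cache_t l1 = { 1 }, l2 = { 2 };
    issue_memory_request(&l1, &l2, 0x1000);
    return 0;
}
```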
- 11. The method of claim 10, wherein said step (f)(4) includes the steps of:
(i) determining whether said memory request is to be serviced after a set of memory requests; and (ii) servicing said set of memory requests.
- 12. The method of claim 11, wherein said step (f)(4)(i) includes the step of:
determining whether said memory request includes a store-release opcode.
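Claims 11 and 12 recite holding a request that carries a store-release opcode until an earlier set of memory requests has been serviced. A minimal sketch of that ordering check, with an assumed pending-request queue and illustrative names that do not appear in the claims, is shown below.

```c
/* Sketch of the ordering check in claims 11 and 12: a request carrying a
 * store-release opcode is held until earlier outstanding requests have been
 * serviced. The queue type and opcode flag are illustrative only. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool store_release;     /* opcode checked in claim 12          */
    unsigned long addr;
} mem_request_t;

typedef struct {
    mem_request_t pending[8];
    int count;              /* earlier requests still outstanding  */
} request_queue_t;

static void service(const mem_request_t *r)
{
    printf("servicing request for 0x%lx\n", r->addr);
}

/* If the new request is a store-release, drain the set of pending requests
 * before servicing it (claim 11, steps (i) and (ii)). */
static void service_with_ordering(request_queue_t *q, const mem_request_t *req)
{
    if (req->store_release) {
        for (int i = 0; i < q->count; i++)
            service(&q->pending[i]);
        q->count = 0;
    }
    service(req);
}

int main(void)
{
    request_queue_t q = { .pending = { { false, 0x10 }, { false, 0x20 } }, .count = 2 };
    mem_request_t rel = { .store_release = true, .addr = 0x30 };
    service_with_ordering(&q, &rel);
    return 0;
}
```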
- 13. The method of claim 1, further including the steps of:
(g) a third processing cluster in said plurality of processing clusters receiving said snoop request; and (h) forwarding said snoop request from said third processing cluster to said second processing cluster via said snoop ring.
- 14. The method of claim 1, wherein said plurality of processing clusters includes four processing clusters.
- 15. A method for managing memory access in a system including a plurality of processing clusters and a snoop controller adapted to service memory requests, wherein said snoop controller and each processing cluster in said plurality of processing clusters are coupled to a snoop ring and each processing cluster in said plurality of processing clusters is coupled to a data ring, said method comprising the steps of:
(a) forwarding a memory request from a first processing cluster in said plurality of processing clusters to said snoop controller, wherein said memory request calls for accessing a memory location; (b) placing a snoop request from said snoop controller on said snoop ring in response to said memory request; (c) a second processing cluster in said plurality of processing clusters receiving said snoop request; (d) informing said snoop controller that said second processing cluster does not own said memory location; and (e) after said step (d), transferring data from said memory location in a main memory to said data ring.
- 16. The method of claim 15 further including the step of:
(f) said first processing cluster receiving said data from said data ring.
- 17. The method of claim 15 further including the steps of:
(g) each processing cluster in said plurality of processing clusters, other than said first processing cluster and said second processing cluster, receiving said snoop request; and (h) informing said snoop controller that each processing cluster receiving said snoop request in said step (g) does not own said memory location.
- 18. The method of claim 15 further including the step of:
(i) said first processing cluster taking ownership of said memory location.
- 19. The method of claim 15, wherein said snoop request calls for a change in ownership status of said memory location.
- 20. The method of claim 19, wherein said snoop request includes an instruction from a set of instructions consisting of:
a snoop own instruction instructing a processing cluster to transfer exclusive ownership of said memory location to another processing cluster, a snoop share instruction instructing a processing cluster to transfer shared ownership of said memory location to another processing cluster, and a snoop kill instruction instructing a processing cluster to release ownership of a memory location without performing any data transfers.
- 21. The method of claim 15, wherein said first processing cluster includes a compute engine coupled to a first tier cache memory and a second tier cache memory coupled to said first tier cache memory, said method further including the step of:
(j) before said step (a) is performed, said first processing cluster determining that said first tier cache memory and said second tier cache memory do not own said memory location.
- 22. The method of claim 21, wherein said step (j) includes the steps of:
(1) said compute engine issuing said memory request to said first tier cache memory; (2) determining that said first tier cache memory does not own said memory location; (3) issuing said memory request to said second tier cache memory; and (4) determining that said second tier cache memory does not own said memory location.
- 23. The method of claim 22, wherein said step (j)(4) includes the steps of:
(i) determining whether said memory request is to be serviced after a set of memory requests; and (ii) servicing said set of memory requests.
- 24. The method of claim 23, wherein said step (j)(4)(i) includes the step of:
determining whether said memory request includes a store-release opcode.
- 25. A method for managing memory access in a system including a plurality of processing clusters and a snoop controller adapted to service memory requests, wherein said snoop controller and each processing cluster in said plurality of processing clusters are coupled to a snoop ring, said method comprising the steps of:
(a) forwarding a memory request from a first processing cluster in said plurality of processing clusters to said snoop controller, wherein said memory request calls for accessing a memory location; (b) placing a snoop request from said snoop controller on said snoop ring in response to said memory request, wherein said snoop request calls for a change in ownership of said memory location; (c) a second processing cluster in said plurality of processing clusters receiving said snoop request; (d) said second processing cluster generating a response to said snoop request, wherein said step (d) includes the steps of:
(1) said second processing cluster modifying an ownership status for said memory location, wherein said response reflects said modification of ownership status, and (2) said second processing cluster placing contents of said memory location in said response; and (e) forwarding said response to said first processing cluster, wherein each processing cluster in said plurality of processing clusters is coupled to a data ring, wherein said step (e) includes the steps of (1) said second processing cluster placing said response on said data ring, and (2) said first processing cluster retrieving said response from said data ring.
- 26. A method for managing memory access in a system including a plurality of processing clusters and a snoop controller adapted to service memory requests, wherein said snoop controller and each processing cluster in said plurality of processing clusters are coupled to a snoop ring and each processing cluster in said plurality of processing clusters is coupled to a data ring, said method comprising the steps of:
(a) forwarding a memory request from a first processing cluster in said plurality of processing clusters to said snoop controller, wherein said memory request calls for accessing a memory location; (b) placing a snoop request from said snoop controller on said snoop ring in response to said memory request; (c) each processing cluster in said plurality of processing clusters, other than said first processing cluster, receiving said snoop request; (d) informing said snoop controller that each processing cluster receiving said snoop request in said step (c) does not own said memory location; (e) after said step (d), transferring data from said memory location in a main memory to said data ring; (f) said first processing cluster receiving said data from said data ring; and (g) said first processing cluster taking ownership of said memory location.
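Claim 26 combines the pieces into a single flow: every other cluster reports that it does not own the location, the data is then read from main memory onto the data ring, and the requesting cluster receives the data and takes ownership. The following sketch walks through that flow under illustrative assumptions (four clusters, hard-coded misses, a placeholder memory read); none of the identifiers come from the claims.

```c
/* Sketch of the end-to-end flow of claim 26: every other cluster reports a
 * miss, the line is read from main memory onto the data ring, and the
 * requesting cluster takes ownership. All identifiers are illustrative. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_CLUSTERS 4

/* Returns true if any cluster other than the requester owns the location. */
static bool snoop_all_other_clusters(int requester, unsigned long addr)
{
    for (int c = 0; c < NUM_CLUSTERS; c++) {
        if (c == requester)
            continue;
        /* each cluster informs the snoop controller it does not own addr
         * (claim 26, steps (c) and (d)); hard-coded miss for this sketch */
        printf("cluster %d: does not own 0x%lx\n", c, addr);
    }
    return false;
}

static unsigned read_main_memory(unsigned long addr)
{
    (void)addr;
    return 0xCAFE;          /* stand-in for the main memory read in step (e) */
}

static void handle_request(int requester, unsigned long addr)
{
    if (!snoop_all_other_clusters(requester, addr)) {
        unsigned data = read_main_memory(addr);   /* data placed on the data ring */
        printf("cluster %d receives 0x%x and takes ownership of 0x%lx\n",
               requester, data, addr);            /* steps (f) and (g) */
    }
}

int main(void)
{
    handle_request(0, 0x2000);
    return 0;
}
```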
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application is a continuation of, and claims priority under 35 U.S.C. §120 from, U.S. patent application Ser. No. 09/900,481, entitled “Multi-Processor System,” filed on Jul. 6, 2001, which is incorporated herein by reference.
[0002] This Application is related to the following Applications:
[0003] “Coprocessor Including a Media Access Controller,” by Frederick Gruner, Robert Hathaway, Ramesh Panwar, Elango Ganesan and Nazar Zaidi, Attorney Docket No. NEXSI-01021US0, filed the same day as the present application;
[0004] “Application Processing Employing A Coprocessor,” by Frederick Gruner, Robert Hathaway, Ramesh Panwar, Elango Ganesan, and Nazar Zaidi, Attorney Docket No. NEXSI-01201US0, filed the same day as the present application;
[0005] “Compute Engine Employing A Coprocessor,” by Robert Hathaway, Frederick Gruner, and Ricardo Ramirez, Attorney Docket No. NEXSI-01202US0, filed the same day as the present application;
[0006] “Streaming Input Engine Facilitating Data Transfers Between Application Engines And Memory,” by Ricardo Ramirez and Frederick Gruner, Attorney Docket No. NEXSI-01203US0, filed the same day as the present application;
[0007] “Streaming Output Engine Facilitating Data Transfers Between Application Engines And Memory,” by Ricardo Ramirez and Frederick Gruner, Attorney Docket No. NEXSI-01204US0, filed the same day as the present application; “Transferring Data Between Cache Memory And A Media Access Controller,” by Frederick Gruner, Robert Hathaway, and Ricardo Ramirez, Attorney Docket No. NEXSI-01211US0, filed the same day as the present application;
[0008] “Processing Packets In Cache Memory,” by Frederick Gruner, Elango Ganesan, Nazar Zaidi, and Ramesh Panwar, Attorney Docket No. NEXSI-01212US0, filed the same day as the present application;
[0009] “Bandwidth Allocation For A Data Path,” by Robert Hathaway, Frederick Gruner, and Mark Bryers, Attorney Docket No. NEXSI-01213US0, filed the same day as the present application;
[0010] “Managing Ownership Of A Full Cache Line Using A Store-Create Operation,” by Dave Hass, Frederick Gruner, Nazar Zaidi, and Ramesh Panwar, Attorney Docket No. NEXSI-01282US0, filed the same day as the present application;
[0011] “Sharing A Second Tier Cache Memory In A Multi-Processor,” by Dave Hass, Frederick Gruner, Nazar Zaidi, and Ramesh Panwar, Attorney Docket No. NEXSI-01283US0, filed the same day as the present application;
[0012] “First Tier Cache Memory Preventing Stale Data Storage,” by Dave Hass, Robert Hathaway, and Frederick Gruner, Attorney Docket No. NEXSI-01284US0, filed the same day as the present application; and
[0013] “Ring Based Multi-Processing System,” by Dave Hass, Mark Vilas, Fred Gruner, Ramesh Panwar, and Nazar Zaidi, Attorney Docket No. NEXSI-01028US0, filed the same day as the present application.
[0014] Each of these related Applications is incorporated herein by reference.
Continuations (1)

| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 09900481 | Jul 2001 | US |
| Child | 10105972 | Mar 2002 | US |