Claims
- 1. A shared memory symmetrical processing system, comprising: a first ring and a second ring for interconnecting a plurality of nodes, wherein data in said first ring flows in a direction opposite to data in said second ring, each of said plurality of nodes comprising a system control element through which internodal communications are routed, wherein said system control element comprises a plurality of controllers for employing a bus protocol wherein partial coherency results are passed in parallel with a related snoop request, each of said plurality of nodes further comprising any combination of the following:
at least one processor; cache memory; a plurality of I/O adapters; and main memory.
- 2. The shared memory symmetrical processing system as in claim 1, wherein said system control element of each of said plurality of nodes comprises a pair of latches for holding ring responses on said first ring and said second ring, wherein one of said pair of latches is used to merge a local response with a response held by said one of said pair of latches to provide an outgoing first message response for being merged with a response held by the other one of said pair of latches to provide an outgoing second message final response.
- 3. The shared memory symmetrical processing system as in claim 2, wherein a response coherency ordering table is utilized when said partial and said final responses are generated, and said outgoing first message response and said outgoing second message final response are generated in accordance with an order provided by said response coherency ordering table.
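The two-latch merge of claims 2 and 3 can be sketched as follows. This is an illustrative model only: the response names are taken from claim 9, but their relative priority and the latch reset value are assumptions for demonstration; the actual response coherency ordering table is defined by the bus protocol.

```python
# Assumed response coherency ordering table: lower rank = higher priority.
# The ordering shown here is hypothetical.
RESPONSE_ORDER = {
    "IM Reject": 0,
    "MM Reject": 1,
    "IM Hit": 2,
    "IM Cast Out": 3,
    "Memory Data": 4,
    "Read-Only Hit": 5,
    "Normal Completion": 6,
}

def merge(a, b):
    """Merge two partial responses, keeping the higher-priority one."""
    return a if RESPONSE_ORDER[a] <= RESPONSE_ORDER[b] else b

class SystemControlElement:
    """Holds one latch per ring, as described in claim 2."""
    def __init__(self):
        # Assumed neutral reset value for both ring-response latches.
        self.first_ring_latch = "Normal Completion"
        self.second_ring_latch = "Normal Completion"

    def outgoing_first_response(self, local_response):
        # Merge the local snoop result with the held first-ring response.
        self.first_ring_latch = merge(local_response, self.first_ring_latch)
        return self.first_ring_latch

    def outgoing_final_response(self):
        # Merge the first-message result with the held second-ring response
        # to produce the outgoing second message final response.
        return merge(self.first_ring_latch, self.second_ring_latch)
```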
- 4. A shared memory symmetrical processing system, comprising:
a plurality of nodes; a first ring providing switchless internodal communications between each of said plurality of nodes in a first direction; a second ring providing switchless internodal communications between each of said plurality of nodes in a second direction, said second direction being opposite to said first direction; a system control element for each of said plurality of nodes, said system control element having a pair of latches for holding ring responses on said first ring and said second ring, wherein one of said pair of latches is used to merge a local response with a response held by said one of said pair of latches to provide an outgoing first message response for being merged with a response held by the other one of said pair of latches to provide an outgoing second message final response.
- 5. The shared memory symmetrical processing system as in claim 4, wherein a response coherency ordering table is utilized when said partial and said final responses are generated, and said outgoing first message response and said outgoing second message final response are generated in accordance with an order provided by said response coherency ordering table.
- 6. The shared memory symmetrical processing system as in claim 5, wherein first messages and second messages are launched from a requesting node of said plurality of nodes, said first and second messages circulate around said first ring and said second ring, wherein said first and second messages are merged and ordered according to predetermined priority standards as they arrive at each of said plurality of nodes not comprising said requesting node to form a first and a second outgoing message response at each of said plurality of nodes not comprising said requesting node prior to returning an accumulated response to said requesting node, wherein said requesting node does not collect and return responses in order to provide said accumulated response to any one of said plurality of nodes not comprising said requesting node.
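The distributed accumulation above can be sketched as a traversal in which each remote node merges its local snoop response into the circulating message, so the requester simply consumes a fully accumulated result. All names and the priority order are illustrative assumptions, not the patent's actual table.

```python
# Assumed priority order, highest first (illustrative only).
PRIORITY = ["IM Reject", "IM Hit", "Read-Only Hit", "Normal Completion"]

def merge(a, b):
    """Keep the higher-priority of two partial responses."""
    return a if PRIORITY.index(a) <= PRIORITY.index(b) else b

def circulate(local_responses, requester):
    """Accumulate responses around one ring starting at `requester`.

    Each non-requesting node merges its locally generated response
    en route; the requester receives the accumulated response and
    does not need to collect and redistribute responses itself.
    """
    n = len(local_responses)
    acc = "Normal Completion"  # assumed neutral starting response
    for step in range(1, n):
        node = (requester + step) % n
        acc = merge(acc, local_responses[node])  # merged at each remote node
    return acc  # arrives back at the requester fully accumulated
```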
- 7. A method for maintaining cache coherency in a symmetrical multiprocessing environment, comprising:
providing a plurality of nodes each being able to communicate with each other via a ring-based topology comprising one or more communication paths between each of said plurality of nodes, each of said plurality of nodes comprising a plurality of processors, cache memory, a plurality of I/O adapters and a main memory accessible from each of said plurality of nodes; establishing a protocol for exchanging coherency information and operational status between each of said plurality of nodes; managing one or more of said communication paths between each of said plurality of nodes; circulating a plurality of bus operational messages around said ring-based topology, said bus operational messages including information pertaining to, but not limited to, any one of the following: snoop commands, addresses, responses and data; wherein information related to said bus operational messages is managed in a manner which controls latency of said bus operational messages and promotes availability of busses on said one or more communication paths.
- 8. The method as in claim 7, wherein said ring-based topology comprises a pair of rings for providing said one or more communication paths between each of said plurality of nodes wherein one of said pair of rings transmits information in a direction opposite to the other one of said pair of rings and each bus operation is initiated by launching bus operational messages onto said pair of rings simultaneously.
- 9. The method as in claim 7, wherein said operational status and said coherency information are conveyed between said plurality of nodes via said ring-based topology, said coherency information comprising IM Hit, IM Cast Out, IM Reject, MM Reject, Memory Data, Read-Only Hit, and Normal Completion.
- 10. The method as in claim 7, further comprising:
locally generating responses within one of said plurality of nodes from bus snooping actions; merging said locally generated responses with an incoming first message response received in conjunction with a snoop address of said bus snooping action; and applying a response order priority to generate an outgoing first message response.
- 11. The method as in claim 10, further comprising:
receiving an incoming second message within one of said plurality of nodes from bus snooping actions; merging an incoming second message response with said outgoing first message response to provide a cumulatively merged response; and applying said response order priority to said cumulatively merged response to generate a final outgoing second message response.
- 12. The method as in claim 7, wherein said ring-based topology comprises a pair of rings for providing said one or more communication paths between each of said plurality of nodes wherein one of said pair of rings transmits information in a direction opposite to the other one of said pair of rings further comprising:
merging a first message and a second message to form an accumulated final response, said accumulated final response being returned to a requesting node of said plurality of nodes.
- 13. The method as in claim 12, wherein responses for bus operational messages unrelated to said first or second messages are permitted to be processed and forwarded on said plurality of nodes not comprising said requesting node during a period defined by the arrival of said first message on a node of said plurality of nodes not comprising said requesting node and the arrival of said second message on said node.
- 14. The method as in claim 7, wherein said ring-based topology comprises a pair of rings for providing said one or more communication paths between each of said plurality of nodes wherein one of said pair of rings transmits information in a direction opposite to the other one of said pair of rings, further comprising:
receiving a first message on a node of said plurality of nodes, said first message being received from one of said pair of rings; receiving a second message on said node, said second message being received from the other one of said pair of rings; merging said first message with a locally generated response related to said first message to form an outgoing first message response; merging said outgoing first message response with said incoming second message; ordering said outgoing first message response and said incoming second message response to form a final outgoing second message response; wherein said final outgoing response is prevented from being forwarded on either of said pair of rings until said first message and an intermediate response, if any, are launched onto one of said pair of rings.
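The per-node merge pipeline of claim 14 can be sketched in a few lines: merge the incoming first message with the local snoop result, then merge and order against the incoming second message response to form the final outgoing response. The function names and the priority table are assumptions for illustration.

```python
# Hypothetical priority table: lower rank = higher priority.
PRIORITY = {"IM Reject": 0, "IM Hit": 1, "Read-Only Hit": 2, "Normal Completion": 3}

def ordered_merge(*responses):
    """Apply the response order priority to a set of partial responses."""
    return min(responses, key=PRIORITY.__getitem__)

def process_node(incoming_first, local_response, incoming_second):
    """Model one node's handling of the first and second messages."""
    # Merge the first message with the locally generated snoop response.
    outgoing_first = ordered_merge(incoming_first, local_response)
    # Merge with the second message response and order the result
    # to form the final outgoing second message response.
    final_outgoing = ordered_merge(outgoing_first, incoming_second)
    return outgoing_first, final_outgoing
```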
- 15. The method as in claim 7, wherein outgoing ring requests are prioritized such that said outgoing ring requests that necessitate data movements between said plurality of nodes take precedence over said outgoing ring requests that do not necessitate data movement between said plurality of nodes and said outgoing ring requests that do not necessitate data movement are further prioritized wherein a first message request takes precedence over a second message request.
- 16. The method as in claim 15, wherein said protocol permits non-data requests to be launched on said ring-based topology during cycles of a data transfer for a previously launched data operation.
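The overlap permitted by claim 16 can be sketched as a per-cycle schedule in which a multi-cycle data transfer occupies the data path while non-data requests still launch on the command path in the same cycles. The scheduling model and names are assumptions, not the patent's arbitration logic.

```python
def schedule(cycles, data_transfer_len, non_data_requests):
    """Return per-cycle (data_busy, launched_request) pairs.

    A previously launched data operation holds the data path for
    `data_transfer_len` cycles; non-data requests need only the
    command path, so one may launch each cycle regardless.
    """
    timeline = []
    pending = list(non_data_requests)
    for cycle in range(cycles):
        data_busy = cycle < data_transfer_len
        # Non-data requests are not blocked by the in-flight data transfer.
        launched = pending.pop(0) if pending else None
        timeline.append((data_busy, launched))
    return timeline
```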
- 17. The method as in claim 8, wherein an IM Cast Out data sourced from a remote cache of one of said plurality of nodes is returned on one of said pair of rings that transmits data in a direction opposite to the direction of an incoming first message related to said IM Cast Out data.
- 18. The method as in claim 17, wherein said IM Cast Out data is returned on the shortest path to a requesting node of said plurality of nodes.
- 19. The method as in claim 7, wherein data sourced from a remote main memory location on one of said plurality of nodes is returned on one of said pair of rings in the same direction as an outgoing second message related to a request for said remote main memory.
- 20. The method as in claim 19, wherein said data sourced from said remote main memory returns on the ring providing the shortest path to a requesting node of said plurality of nodes.
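The shortest-path return of claims 18 and 20 can be sketched as a hop-count comparison: the sourcing node returns data on whichever ring reaches the requester in fewer hops. The ring labels and hop model are assumptions for illustration.

```python
def shortest_return_ring(source, requester, num_nodes):
    """Pick the ring ('first' or 'second') that minimizes hops back.

    Nodes are numbered around the rings; the first ring is assumed
    to carry data in the direction of increasing node number and
    the second ring in the opposite direction.
    """
    hops_first = (requester - source) % num_nodes   # first-ring direction
    hops_second = (source - requester) % num_nodes  # opposite direction
    return "first" if hops_first <= hops_second else "second"
```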
- 21. The method as in claim 8, wherein each of said plurality of nodes comprises a system controller having a toggle switch for determining which of said pair of rings data will be returned on when said data is requested by a first message and a second message each arriving simultaneously at one of said plurality of nodes via said pair of rings.
- 22. The method as in claim 7, wherein first messages and second messages are launched from a requesting node of said plurality of nodes, said first and second messages circulate around said ring-based topology, wherein said first and second messages are merged and ordered according to predetermined priority standards as they arrive at each of said plurality of nodes not comprising said requesting node to form a first and a second outgoing message response at each of said plurality of nodes not comprising said requesting node prior to returning an accumulated response to said requesting node, wherein said requesting node does not collect and return responses in order to provide said accumulated response to any one of said plurality of nodes not comprising said requesting node.
- 23. The method as in claim 7, wherein memory data is transferred around said ring-based topology in conjunction with a final response message and separate data transfer bus transactions are not required for transference of said memory data.
- 24. A method as in claim 7, wherein an IM Cast Out data is transferred around said ring-based topology in conjunction with an intermediate IM Cast Out response message and separate data transfer bus transactions are not required for transference of said IM Cast Out data.
- 25. A method as in claim 7, wherein first messages convey both a snoop command and address information but second messages do not require forwarding of said snoop address, thereby limiting overall bus utilization.
- 26. The shared memory symmetrical processing system as in claim 1, wherein said first ring and said second ring each comprise a pair of buses, wherein one of said pair of buses is used for transference of requested data and the other one of said pair of buses is used for transference of messages comprising a combined snoop command/address and snoop responses which are ordered as the messages pass through each of said plurality of nodes.
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application is related to U.S. patent application, entitled: COHERENCY MANAGEMENT FOR A “SWITCHLESS” DISTRIBUTED SHARED MEMORY COMPUTER SYSTEM, attorney docket number POU920030054 filed contemporaneously with this application.
[0002] This application is also related to U.S. patent application, entitled: TOPOLOGY FOR SHARED MEMORY COMPUTER SYSTEM, attorney docket number POU920030055 filed contemporaneously with this application.
[0003] These co-pending applications and the present application are owned by one and the same assignee, International Business Machines Corporation of Armonk, N.Y.
[0004] The descriptions set forth in these co-pending applications are hereby incorporated into the present application by this reference.
[0005] Trademarks: IBM® is a registered trademark of International Business Machines Corporation, Armonk, N.Y., U.S.A. Other names may be registered trademarks or product names of International Business Machines Corporation or other companies.