Claims
- 1. A computer system, comprising:
a plurality of local resources including a plurality of local processors;
a local service processor coupled to the plurality of local resources, the local service processor configured to receive reset information; and
wherein the reset information is sent to a subset of the plurality of local resources including a subset of the plurality of local processors, the subset of the plurality of local resources determined by using a local routing table.
- 2. The computer system of claim 1, further comprising a configuration system module, the configuration system module coupled to the plurality of local resources and the local service processor.
- 3. The computer system of claim 2, wherein reset information is sent to a subset of the plurality of local resources through the configuration system module, wherein the configuration system module accesses the local routing table to identify the subset of the plurality of local resources.
- 4. The computer system of claim 2, wherein the local service processor is coupled to a plurality of remote service processors associated with a plurality of remote resources.
- 5. The computer system of claim 2, wherein the local routing table includes information for identifying a partition.
- 6. The computer system of claim 5, wherein the information for identifying a partition comprises a slot number.
- 7. The computer system of claim 2, wherein the local routing table includes information for identifying the type of reset signal.
- 8. The computer system of claim 2, wherein the local service processor is a primary service processor.
- 9. The computer system of claim 8, wherein the primary service processor is further configured to distribute reset information to a plurality of non-local service processors.
- 10. The computer system of claim 9, wherein the plurality of non-local service processors is determined by accessing a general routing table.
- 11. The computer system of claim 2, wherein the local service processor is a secondary service processor.
- 12. The computer system of claim 11, wherein the local service processor receives reset information from an I/O hub.
- 13. The computer system of claim 11, wherein the local service processor receives reset information from a primary service processor.
- 14. The computer system of claim 11, wherein the local service processor receives reset information over an I2C interface.
- 15. The computer system of claim 1, wherein the plurality of local resources further includes memory and an interconnection controller.
- 16. The computer system of claim 15, wherein the plurality of local processors and the interconnection controller are interconnected in a point-to-point architecture.
- 17. The computer system of claim 1, wherein the reset information identifies a cold reset.
- 18. The computer system of claim 1, wherein the reset information identifies a warm reset.
- 19. A method for distributing reset information, the method comprising:
identifying a plurality of local resources including a plurality of local processors;
identifying a local service processor coupled to the plurality of local resources, the local service processor configured to receive reset information; and
sending the reset information to a subset of the plurality of local resources including a subset of the plurality of local processors, the subset of the plurality of local resources determined by using a local routing table.
- 20. The method of claim 19, wherein a configuration system module is coupled to the plurality of local resources and the local service processor.
- 21. The method of claim 20, wherein reset information is sent to a subset of the plurality of local resources through the configuration system module, wherein the configuration system module accesses the local routing table to identify the subset of the plurality of local resources.
- 22. A computer system, comprising:
a plurality of local components including a local cluster of processors, the local cluster of processors interconnected in a point-to-point architecture and coupled to a plurality of remote clusters of processors; and
a local I/O hub configured to receive a power management request from an operating system associated with a partition and forward the power management request to a plurality of remote boxes through a coherent interface.
- 23. The computer system of claim 22, wherein the power management request is a request to change processor voltage and frequency.
- 24. The computer system of claim 22, wherein the power management request is a request to change link width and frequency.
- 25. The computer system of claim 22, wherein the coherent interface is a host bridge.
- 26. The computer system of claim 22, wherein the local I/O hub is a noncoherent device.
- 27. The computer system of claim 26, wherein the local I/O hub is further configured to send the power management request to a local configuration system module.
- 28. The computer system of claim 27, wherein a local routing table is accessed to determine a subset of the plurality of local components to receive the power management request.
- 30. The computer system of claim 28, wherein the power management request is associated with LDTSTOP.
- 31. The computer system of claim 30, wherein the plurality of local components further includes memory and an interconnection controller.
- 32. A method for distributing power management information, the method comprising:
identifying a plurality of local components including a local cluster of processors, the local cluster of processors interconnected in a point-to-point architecture and coupled to a plurality of remote clusters of processors;
receiving a power management request at a local I/O hub from an operating system associated with a partition; and
forwarding the power management request to a plurality of remote boxes through a coherent interface.
- 33. The method of claim 32, wherein the power management request is a request to change processor voltage and frequency.
- 34. The method of claim 32, wherein the power management request is a request to change link width and frequency.
- 35. The method of claim 32, wherein the power management request is a request to enter a particular sleep state.
- 36. A computer system, comprising:
means for identifying a plurality of local components including a local cluster of processors, the local cluster of processors interconnected in a point-to-point architecture and coupled to a plurality of remote clusters of processors;
means for receiving a power management request at a local I/O hub from an operating system associated with a partition; and
means for forwarding the power management request to a plurality of remote boxes through a coherent interface.
- 37. The computer system of claim 36, wherein the power management request is a request to change processor voltage and frequency.
- 38. The computer system of claim 36, wherein the power management request is a request to change link width and frequency.
- 39. The computer system of claim 36, wherein the power management request is a request to enter a particular sleep state.
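The claims above center on a local service processor that consults a local routing table, keyed by partition (e.g. a slot number, per claims 5-6) and reset-signal type (claims 7, 17-18), to select the subset of local resources that should receive reset information. The following is a minimal illustrative sketch of that lookup only; all names (`RoutingEntry`, `distribute_reset`, the resource identifiers) are hypothetical and do not appear in the patent.

```python
# Hypothetical sketch of a local routing table lookup as described in the
# claims: the table maps resources to a partition and a reset type, and the
# service processor sends reset information only to matching resources.
from dataclasses import dataclass
from typing import List

@dataclass
class RoutingEntry:
    resource_id: str   # a local resource, e.g. a processor (claim 1)
    partition: int     # partition identifier (claim 5)
    slot_number: int   # slot number identifying the partition (claim 6)
    reset_type: str    # type of reset signal, e.g. "cold" or "warm" (claims 7, 17, 18)

def distribute_reset(routing_table: List[RoutingEntry],
                     partition: int, reset_type: str) -> List[str]:
    """Return the subset of local resources that should receive the reset."""
    return [entry.resource_id for entry in routing_table
            if entry.partition == partition and entry.reset_type == reset_type]

# Example table: two processors in partition 1, one in partition 2.
table = [
    RoutingEntry("cpu0", partition=1, slot_number=0, reset_type="warm"),
    RoutingEntry("cpu1", partition=1, slot_number=1, reset_type="warm"),
    RoutingEntry("cpu2", partition=2, slot_number=2, reset_type="cold"),
]
print(distribute_reset(table, partition=1, reset_type="warm"))  # ['cpu0', 'cpu1']
```

A primary service processor (claims 8-10) could apply the same pattern with a general routing table whose entries name non-local service processors rather than local resources.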
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present application is related to U.S. application Ser. Nos. ______, ______, and ______ titled Transaction Management In Systems Having Multiple Multi-Processor Clusters (Attorney Docket No. NWISP012), Routing Mechanisms In Systems Having Multiple Multi-Processor Clusters (Attorney Docket No. NWISP013), and Address Space Management In Systems Having Multiple Multi-Processor Clusters (Attorney Docket No. NWISP014) respectively, all by David B. Glasco, Carl Zeitler, Rajesh Kota, Guru Prasadh, and Richard R. Oehler, the entireties of which are incorporated by reference for all purposes.