The present invention relates generally to partitioning of a computer system into domains, and more particularly to fault containment and error handling in a partitioned computer system with shared resources.
Multi-node computer systems are often partitioned into domains, with each domain functioning as an independent machine with its own address space. Partitioning allows the resources of a computer system to be allocated efficiently among different tasks. Domains in partitioned computer systems may dynamically share resources. When a fatal packet-processing failure occurs in a domain, however, that processing cannot be continued, and a shared resource entry is left in an intermediate state. To reset and restart operation of the failing domain, the shared resource must be reset in its entirety, which in turn requires resetting all other domains, even those that are running without failure.
One solution for error containment and recovery in a partitioned system is to use a dedicated resource for each domain, so that a failure within one domain does not affect the non-failing domains. However, dedicating a resource to each domain requires a larger amount of resources than sharing a resource among domains, because the dedicated resources must accommodate the maximum requirements of every domain in the system.
Therefore, it is desirable to provide a mechanism that would allow the system to contain an error in a failed domain so that non-failed domains remain unaffected.
The present invention is a system and method for fault containment and error handling in a logically partitioned computer system having a plurality of computer nodes coupled by an interconnect.
The system includes at least one resource that is dynamically shared by some or all of the domains. A resource definition table stores information related to the status of each resource, for example, whether the resource is allocated to a domain. The resource definition table also maintains an association between each resource and the domain to which the resource is allocated.
The system further includes a system manager having both read and write access to the resource definition table. When a packet-processing failure occurs in a domain, the system manager forces the system to temporarily postpone the initiation of new packets by putting the system into a quiesce mode. The system manager then examines the status information of the shared resources and identifies, for example, an allocated resource that has been left in an intermediate state. Using a domain identifier stored in the resource definition table, the system manager detects the failed domain associated with the allocated resource. The system manager also detects one or more non-failed domains as having no resource associated with them in the resource definition table. The system manager then exits the quiesce mode for the non-failed domains so that they resume their operations, thereby containing the error within the failed domain. Finally, the system manager handles the error in the failed domain: for example, it deallocates the allocated resource for future use by other domains and resets the failed domain. As a result, the fault is contained within the failed domain, and the non-failed domains continue their operations without being reset.
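Stated as code, the sequence above might look like the following C sketch. Everything here is illustrative: the function names, the primitives, and the single-failed-domain assumption are ours, not the specification's.

```c
#include <stdint.h>

/* Hypothetical primitives; in a real system these would be hardware
 * or firmware operations on the interconnect and domain registers. */
extern void enter_quiesce(void);             /* postpone new packets      */
extern void exit_quiesce(uint8_t domain);    /* let one domain resume     */
extern uint8_t find_failed_domain(void);     /* scan resource table       */
extern void deallocate_resources(uint8_t domain);
extern void reset_domain(uint8_t domain);
extern uint8_t num_domains(void);

/* Illustrative fault-containment flow, assuming one failed domain. */
void on_packet_failure(void)
{
    enter_quiesce();                          /* whole system pauses       */
    uint8_t failed = find_failed_domain();    /* via resource table        */

    for (uint8_t d = 0; d < num_domains(); d++)
        if (d != failed)
            exit_quiesce(d);                  /* non-failed domains resume */

    deallocate_resources(failed);             /* free stuck entries        */
    reset_domain(failed);                     /* only failed domain resets */
}
```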
Referring now to
System 100 further includes a pool of one or more shared resources 130 dynamically used by at least one domain in system 100. System 100 further includes a resource definition table 155 for storing the status of each resource and the association between each resource and the domain to which that resource is allocated, even when the resource is no longer allocated to that domain. Resource definition table 155 may be implemented as a register array with address-decoding logic that allows entries to be read or written, or alternatively as a static RAM array with separate read and write ports. Resource definition table 155 is described in more detail below in connection with
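By way of illustration, a software view of one entry in such a table might resemble the C structure below; the field names, widths, and status encoding are assumptions rather than part of the specification.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical status encoding for a shared resource entry. */
enum resource_status {
    RES_FREE,          /* not allocated to any domain         */
    RES_ALLOCATED,     /* allocated and processing normally   */
    RES_INTERMEDIATE   /* allocated but stuck mid-transaction */
};

/* Hypothetical layout of one resource definition table entry.
 * domain_id is retained even after deallocation, matching the
 * persistent association kept by table 155. */
typedef struct {
    bool    valid;       /* entry currently allocated      */
    uint8_t domain_id;   /* owning (or last owning) domain */
    uint8_t status;      /* one of enum resource_status    */
} resource_entry_t;

#define NUM_SHARED_RESOURCES 64 /* assumed pool size */
resource_entry_t resource_definition_table[NUM_SHARED_RESOURCES];
```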
System 100 further includes an external agent, system manager 140, coupled to interconnect 120. In a preferred embodiment, system manager 140 has read and write access to resource definition table 155. This advantageously allows system manager 140 to identify an allocated resource left in an intermediate state. Using the domain ID, system manager 140 identifies the failed domain associated with the allocated resource. System manager 140 also maintains a list of all domains in system 100 and a list of failed domains, which allows it to identify a non-failed domain as one having no resources associated with it in resource definition table 155.
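A minimal sketch of this identification step, reusing the hypothetical resource_entry_t layout above and assuming domain IDs small enough to fit in a 32-bit mask:

```c
#include <stdint.h>

/* Scan the table (sketch): any domain owning an entry stuck in an
 * intermediate state is treated as failed. Domains absent from the
 * returned mask have no such resources and are non-failed. */
uint32_t find_failed_domains(const resource_entry_t *table, int n)
{
    uint32_t failed = 0;
    for (int i = 0; i < n; i++)
        if (table[i].valid && table[i].status == RES_INTERMEDIATE)
            failed |= 1u << table[i].domain_id;
    return failed;
}
```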
System manager 140 has both read and write access to one or more local domain registers, for example, domain register 145. These privileges permit system manager 140 to monitor and control the state of each individual domain, for example, to quiesce domains 131, 135, and 137 as part of a reconfiguration process. If a hardware fault occurs within a domain, the domain can become deadlocked because interconnect 120 is deadlocked. In conventional computer systems, a deadlocked domain can cause errors in the operation of other domains because resources are shared across domains. Read and write access to local domain registers, such as register 145, allows system manager 140 to reset the domain state of a deadlocked domain. System manager 140 operates independently of the hardware and software running on any individual domain and thus is not affected by hardware or software faults in any individual domain of computer system 100. System manager 140 may be implemented in hardware, software, firmware, or combinations thereof, and may be part of a system controller (not shown) having a control interface (not shown) for a system administrator.
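For instance, a local domain register might expose quiesce and reset controls as individual bits writable only by the system manager. The bit assignments and memory-mapped access below are purely illustrative assumptions:

```c
#include <stdint.h>

/* Hypothetical bit layout for a local domain register such as 145. */
#define DOMREG_QUIESCE (1u << 0)  /* hold the domain in quiesce mode   */
#define DOMREG_RESET   (1u << 1)  /* reset the domain's hardware state */

/* System-manager write path, assuming a memory-mapped register. */
static inline void quiesce_domain(volatile uint32_t *domreg)
{
    *domreg |= DOMREG_QUIESCE;    /* e.g. as part of reconfiguration */
}

static inline void reset_deadlocked_domain(volatile uint32_t *domreg)
{
    *domreg |= DOMREG_RESET;      /* recover a deadlocked domain */
}
```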
Referring now to
System manager 140 puts system 100 into a “quiesce” mode, preferably using a mechanism called “bus lock,” which is issued when a node, such as CPU node 105, needs to lock down all the resources in a partitioned system. System manager 140 broadcasts a lock acquisition request to each node in all domains. Each node in system 100 receiving the request ceases issuing new processor requests. Each node guarantees sufficient resources to complete any outstanding requests directed to it, and waits until responses have been received for all of its own outstanding requests. Each node then sends a response to the lock acquisition request to system manager 140. Once responses have been received from all nodes, system 100 has been drained of all outstanding requests and enters the “quiesce” mode.
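The handshake can be pictured as a broadcast followed by a gather of responses. The sketch below is a software analogue with assumed names; the real mechanism is implemented by the interconnect hardware.

```c
#include <stdbool.h>

struct node; /* opaque handle for a CPU, memory, or I/O node (assumed) */

extern void send_lock_request(struct node *n);  /* node stops issuing */
extern bool wait_for_lock_response(struct node *n, int timeout_ms);

/* Software analogue of the "bus lock" quiesce handshake (sketch). */
bool enter_quiesce_mode(struct node **nodes, int nnodes, int timeout_ms)
{
    /* Broadcast: every node ceases issuing new processor requests. */
    for (int i = 0; i < nnodes; i++)
        send_lock_request(nodes[i]);

    /* Gather: each node responds once its outstanding requests drain. */
    for (int i = 0; i < nnodes; i++)
        if (!wait_for_lock_response(nodes[i], timeout_ms))
            return false; /* missing response: see timeout handling below */

    return true; /* system drained of outstanding requests: quiesced */
}
```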
If a request is not completed due to a packet-processing error, then no response to the lock acquisition request will be received from the affected node. System manager 140 detects this situation with a simple timeout. Once the timeout interval expires, system manager 140 examines 30 resource definition table 155 to identify an allocated resource left in an intermediate state. Using the domain ID, system manager 140 detects 40 the failed domain associated with the allocated resource. It also detects 50 one or more non-failed domains as having no allocated resource associated with them in resource definition table 155. For example, as shown in
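Putting the pieces together, the timeout path might be sketched as follows, reusing the hypothetical helpers above; all names remain assumptions.

```c
#include <stdint.h>

extern void exit_quiesce(uint8_t domain); /* hypothetical, as above */

/* Timeout path (sketch): a missing lock response implies a
 * packet-processing error; table 155 pinpoints the failed domain. */
void on_quiesce_timeout(const resource_entry_t *table, int n,
                        uint32_t all_domains)
{
    uint32_t failed = find_failed_domains(table, n);  /* steps 30, 40 */
    uint32_t ok = all_domains & ~failed;              /* step 50      */

    for (uint8_t d = 0; d < 32; d++)
        if (ok & (1u << d))
            exit_quiesce(d); /* non-failed domains resume operation */
}
```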
System manager 140 then handles the error in the failed domain. For example, it deallocates 70 the resource associated with the failed domain so that other, non-failed domains can make use of that resource. Thus, if in
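Deallocation then amounts to clearing the stuck entries so that other domains can reuse them. A sketch under the same assumed entry layout:

```c
/* Step 70 (sketch): release every resource owned by the failed domain.
 * domain_id is deliberately left in place, since table 155 retains the
 * association even after deallocation. */
void deallocate_domain_resources(resource_entry_t *table, int n,
                                 uint8_t failed_domain)
{
    for (int i = 0; i < n; i++)
        if (table[i].valid && table[i].domain_id == failed_domain) {
            table[i].valid  = false;
            table[i].status = RES_FREE; /* reusable by any domain */
        }
}
```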
In the preferred embodiment of the present invention, channel 165 beneficially allows system manager 140 to selectively reset 80 the hardware state of a deadlocked domain by reinitializing or rebooting the failed domain. Once the failed domain is reset, the process ends 90. As a result, the fault is contained within the failed domain, and the non-failed domains continue their operations without being reset.
This application is a continuation-in-part and claims priority from U.S. patent application Ser. No. 09/861,293 entitled “System and Method for Partitioning a Computer System into Domains” by Kazunori Masuyama, Patrick N. Conway, Hitoshi Oi, Jeremy Farrell, Sudheer Miryala, Yukio Nishimura, Prabhunanadan B. Narasimhamurthy, filed May 17, 2001. This application also claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Application No. 60/301,969, filed Jun. 29, 2001, and entitled “Fault Containment and Error Handling in a Partitioned System with Shared Resources” by Kazunori Masuyama, Yasushi Umezawa, Jeremy J. Farrell, Sudheer Miryala, Takeshi Shimizu, Hitoshi Oi, and Patrick N. Conway, which is incorporated by reference herein in its entirety.