The present invention relates generally to computer server systems and, more particularly, to a method and system for autonomously rebuilding a failed server onto another server.
In today's environment, a computing system often includes several components, such as servers, hard drives, and other peripheral devices. These components are generally stored in racks. For a large company, the storage racks can number in the hundreds and occupy huge amounts of floor space. Also, because the components are generally free-standing, i.e., not integrated, resources such as floppy drives, keyboards, and monitors cannot be shared.
A system has been developed by International Business Machines Corp. of Armonk, N.Y., that bundles the computing system described above into a compact operational unit. The system is known as an IBM eServer BladeCenter™. The BladeCenter is a 7U modular chassis that is capable of housing up to 14 individual server blades. A server blade, or blade, is a computer component that provides the processor, memory, hard disk storage, and firmware of an industry standard server. Each blade can be “hot-plugged” into a slot in the chassis. The chassis also houses supporting resources such as power, switch, management, and blower modules. Thus, the chassis allows the individual blades to share the supporting resources.
Currently in the BladeCenter environment, if one of the server blades fails, an administrator must intervene to identify the failing blade and then unplug, remove, and replace it with a new blade. This alone is a cumbersome task. If the administrator further wishes to retain the application and data on the failed blade's hard drive, the administrator must physically remove the hard drive from the failed blade and remount it in the new blade. This process is labor intensive, time consuming, and economically costly, particularly if the failed blade is located at a remote site.
Accordingly, a need exists for a system and method for rebuilding a failed blade onto another blade. The system and method should be autonomous, i.e., require no human intervention, and be easily implemented. The present invention addresses such a need.
A method and system for autonomously rebuilding a failed one of a plurality of servers and a computer system utilizing the same is disclosed. In a first aspect, the method comprises providing a bus for allowing a recovery mechanism to access each of the plurality of servers and utilizing the recovery mechanism to rebuild the failed server onto another server. In a second aspect, the computer system comprises a plurality of servers, a management module for monitoring and managing the plurality of servers, a recovery mechanism coupled to the management module, and a bus coupling the recovery mechanism to each of the plurality of servers, wherein the recovery mechanism rebuilds a failed server onto another of the plurality of servers.
The present invention relates generally to server systems and, more particularly, to a method and system for autonomously rebuilding a failed server onto another server. The following description is presented to enable one of ordinary skill in the art to make and use the invention and is provided in the context of a patent application and its requirements. Although the preferred embodiment of the present invention will be described in the context of a BladeCenter, various modifications to the preferred embodiment and the generic principles and features described herein will be readily apparent to those skilled in the art. Thus, the present invention is not intended to be limited to the embodiment shown but is to be accorded the widest scope consistent with the principles and features described herein.
According to a preferred embodiment of the present invention, a recovery mechanism rebuilds the hard drive of a failed server onto the hard drive of another server in response to the detection of the failed server. The recovery mechanism preferably utilizes a bus that provides access to each server and allows the data on the failed server's hard drive to be copied and transferred to a hard drive of another server. In a system and method in accordance with the present invention, the failed server is rebuilt promptly and without human intervention. An administrator is no longer required to physically remove and remount the hard drive, thereby saving time and cost. Thus, the downtime for the failed server is minimized and quality of service (QoS) is improved.
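Purely as an illustration of this sequence, the reaction to a detected failure can be sketched as an event handler in Python; the handler name, the event fields, and the rebuild call are assumptions made for the sketch and are not part of the described embodiment.

```python
# Illustrative outline only (assumed names): when a server failure is detected,
# the recovery mechanism is invoked to rebuild the failed server's hard drive
# onto another server over the shared bus.

def on_server_event(event: dict, recovery_mechanism) -> None:
    if event.get("type") == "server_failed":
        recovery_mechanism.rebuild(failed_slot=event["slot"],
                                   spare_slot=event["spare_slot"])
```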
To describe further the features of the present invention, please refer to the following discussion and Figures, which describe a computer system, such as the BladeCenter, that utilizes the preferred embodiment of the present invention.
A midplane circuit board 106 is positioned approximately in the middle of chassis 102 and includes two rows of connectors 108, 108′. Each one of the 14 slots includes one pair of midplane connectors, e.g., 108a, 108a′, located one above the other, and each pair of midplane connectors, e.g., 108a, 108a′, mates to a pair of connectors (not shown) at the rear edge of each server blade 104a.
As is shown in
The management modules 208 communicate with all of the key components of the system 100 including the switch 210, power 206, and blower 204 modules as well as the blade servers 104 themselves. The management modules 208 detect the presence, absence, and condition of each of these components. When two management modules are installed, a first module, e.g., MM1 (208a), will assume the active management role, while the second module MM2 (208b) will serve as a standby module.
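The active/standby arrangement can be sketched, purely for illustration, as a small selection function; the function name and its parameters are assumptions rather than the management module's actual interface.

```python
from typing import Optional

def select_active(mm1_present: bool, mm1_healthy: bool,
                  mm2_present: bool, mm2_healthy: bool) -> Optional[str]:
    """Return which management module should hold the active role."""
    if mm1_present and mm1_healthy:
        return "MM1"            # first installed module assumes the active role
    if mm2_present and mm2_healthy:
        return "MM2"            # standby module takes over if MM1 is absent or failed
    return None                 # no healthy management module available

assert select_active(True, True, True, True) == "MM1"   # normal case
assert select_active(True, False, True, True) == "MM2"  # standby takes over
```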
The second chassis 202 also houses up to four switching modules SM1 through SM4 (210a–210d). The primary purpose of the switch module is to provide interconnectivity between the server blades (104a–104n), management modules (208a, 208b) and the outside network infrastructure (not shown). Depending on the application, the external interfaces may be configured to meet a variety of requirements for bandwidth and function.
Referring again to
In general, the management module (208) can detect the presence, quantity, type, and revision level of each blade 104, power module 206, blower 204, and midplane 106 in the system, and can detect invalid or unsupported configurations. The management module (208) will retrieve and monitor critical information about the chassis 102 and blade servers (104a–104n), such as temperature, voltages, power supply, memory, fan and HDD status. If a problem is detected, the management module 208 can transmit a warning to a system administrator via the port 402 coupled to the management server 404. If the warning is related to a failing blade, e.g., 104a, the system administrator must replace the failed blade 104a. In order to preserve the information on the failed blade's 104a hard drive, the administrator must manually remove the hard drive and remount it into a replacement blade. This process is labor intensive and economically costly. The present invention resolves this problem.
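The monitoring and warning behavior can be sketched as follows; the thresholds, field names, and notification callback are assumptions made for the example, not the management module's actual interface.

```python
# Hypothetical monitoring sketch: the management module samples blade vital
# data and warns the administrator when a reading falls outside its range.
THRESHOLDS = {"temperature_c": (5.0, 85.0), "voltage_12v": (11.4, 12.6)}

def check_blade(slot, readings, notify):
    """Warn the administrator about any reading outside its allowed range."""
    for name, value in readings.items():
        low, high = THRESHOLDS.get(name, (float("-inf"), float("inf")))
        if not low <= value <= high:
            notify(f"blade {slot}: {name}={value} outside [{low}, {high}]")

check_blade(slot=1,
            readings={"temperature_c": 92.0, "voltage_12v": 12.1},
            notify=print)   # prints a warning for the over-temperature reading
```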
Please refer now to
The spare blade 504b is compatible with the blade type, in this case a server blade 504a, to which it has been designated as a spare. For example, within a chassis 102, several blade types, e.g., servers and storage blades, can be housed. The spare blade 504b for a server blade 504a will include system components compatible with those in the server blade 504a, i.e., the spare blade's hard drive 502 is compatible with the server blade's hard drive 502; whereas the spare blade for a storage blade will include system components compatible with those in the storage blade.
Each blade 504 includes a service processor 508 that is coupled to a central processing unit (CPU) 506. The management module 208 communicates with each blade's service processor 508 via the out-of-band serial bus 308. A standard IDE or SCSI interface bus 510 couples a plurality of peripheral devices 502, 502′, 502″, such as the hard drive 502, to the CPU 506 via a select module 512. Preferably, the select module 512 directs traffic to and from the IDE or SCSI interface bus 510 in one of two directions, to the CPU 506 or to a hard drive direct access (HDDA) bus 518. As is shown, the HDDA bus 518 preferably provides direct access to the hard drive 502 of each of the blades 504, 504a, 504b.
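A minimal sketch of the select module's two routing directions is shown below; the names (Route, SelectModule, set_route) are hypothetical and chosen only for the example.

```python
from enum import Enum

class Route(Enum):
    CPU = "cpu"     # normal operation: the blade's own processor owns the drive
    HDDA = "hdda"   # recovery: the drive is exposed on the shared HDDA bus

class SelectModule:
    def __init__(self):
        self.route = Route.CPU          # default direction at power up

    def set_route(self, route: Route) -> None:
        self.route = route              # switched by the recovery mechanism

select_module = SelectModule()
select_module.set_route(Route.HDDA)     # redirect drive traffic onto the HDDA bus
```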
According to a preferred embodiment of the present invention, a recovery mechanism 516 is coupled to the management module 208 and controls the select module 512 via a control bus 514. Therefore, the recovery mechanism 516 controls whether traffic on the IDE or SCSI interface bus 510 flows to the CPU 506 or to the HDDA bus 518. While the recovery mechanism 516 is preferably located in the management module 208, it can also be a stand-alone system coupled to the management module 208. Moreover, the functionality of the control bus 514 can be incorporated into the HDDA bus 518, as those skilled in the art would readily appreciate.
At power up and under normal conditions, e.g., when all blades 504 are operating, the recovery mechanism 516 disables the HDDA bus 518 so that each blade's processor 506 has exclusive access to its associated hard drive 502. If a blade 504a fails, however, the recovery mechanism 516 enables the HDDA bus 518, activates the select module 512 in the failed blade 504a and in the spare blade 504b, and copies data from the hard drive 502 of the failed blade 504a to the hard drive 502 of the designated spare blade 504b via the HDDA bus 518.
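This failover behavior can be sketched end to end as follows; the classes and the rebuild function are stand-ins invented for the example and do not correspond to the actual hardware interfaces.

```python
# Minimal, self-contained sketch (all names assumed): the HDDA bus is disabled
# in normal operation; on a blade failure the recovery mechanism enables it,
# switches both select modules toward the bus, and copies the failed drive.

class FakeBlade:
    def __init__(self, drive: bytes = b""):
        self.drive = bytearray(drive)   # stand-in for the blade's hard drive
        self.route = "cpu"              # normal operation: CPU owns the drive

class FakeHddaBus:
    enabled = False                     # the HDDA bus is disabled at power up

def rebuild(failed: FakeBlade, spare: FakeBlade, bus: FakeHddaBus) -> None:
    bus.enabled = True                  # enable the shared HDDA bus
    failed.route = spare.route = "hdda" # switch both select modules to the bus
    spare.drive[:] = failed.drive       # copy the failed drive to the spare

failed, spare, bus = FakeBlade(b"app+data"), FakeBlade(), FakeHddaBus()
rebuild(failed, spare, bus)
assert bytes(spare.drive) == b"app+data"
```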
While the spare blade 504b can be an “extra” blade that becomes operational only when it “replaces” a failed blade 504a, it can also be a fully operational blade 504 in a server farm. Under such circumstances, the operational blade 504 can be taken off-line and used to replace the failed blade 504a if the QoS terms required by a user of the failed blade 504a require a replacement blade and the QoS terms of a user of the operational blade 504 allow the server farm administrator to degrade overall service to that user.
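The policy decision described above can be sketched as a simple predicate; the term names (replacement_required, degradable) are assumptions chosen for the example.

```python
def may_repurpose(failed_user_terms: dict, candidate_user_terms: dict) -> bool:
    """May an operational blade be taken off-line to replace the failed one?"""
    return (failed_user_terms.get("replacement_required", False)
            and candidate_user_terms.get("degradable", False))

print(may_repurpose({"replacement_required": True}, {"degradable": True}))   # True
print(may_repurpose({"replacement_required": True}, {"degradable": False}))  # False
```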
Once the contents of the hard drive 502 of the failed blade 504a have been transferred to the spare blade 504b, the recovery mechanism 516 disables the HDDA bus 518 and restores processor 506 access to the hard drive 502 of the spare blade 504b via the control bus 514 (in step 610). At this point, the recovery mechanism 516 returns control to the management module 208, which then powers down the failed blade 504a. In step 612, the management module 208 reassigns any chassis resources, e.g., a virtual LAN, from the failed blade 504a to the spare blade 504b, and enables the spare blade 504b so that it can assume the failed blade's 504a identity and resume the same system operation. Finally, in step 614, the management module 208 can transmit an alert to the administrator that the failed blade 504a should be replaced. In a preferred embodiment, once the failed blade 504a is replaced with a new blade, that new blade can become the designated spare blade 504b.
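One possible ordering of these completion steps is sketched below; every method name is assumed for the sake of the example rather than taken from the management module's interface.

```python
# Illustrative ordering of the completion steps (all names are assumptions).
def complete_rebuild(recovery, management, failed_slot, spare_slot):
    recovery.disable_hdda_bus()                              # step 610: bus off
    recovery.restore_cpu_access(spare_slot)                  # spare CPU regains its drive
    management.power_down(failed_slot)                       # failed blade powered down
    management.reassign_resources(failed_slot, spare_slot)   # step 612: e.g., virtual LAN
    management.enable(spare_slot)                            # spare assumes the failed identity
    management.alert_admin(f"replace blade in slot {failed_slot}")  # step 614
```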
Through aspects of the present invention, a failed blade can be rebuilt onto another blade autonomously. Upon being notified of the failed blade, the recovery mechanism causes the entire contents of the hard drive of the failed blade to be transferred to the hard drive of the spare blade, which eventually assumes the identity of the failed blade. Because the failed blade is rebuilt promptly and without human intervention, the downtime for the failed blade is minimized and QoS is improved. The administrator is no longer required to physically remove and remount the hard drive, thereby saving time and cost.
While the preferred embodiment of the present invention has been described in the context of a BladeCenter environment, the functionality of the recovery mechanism 516 could be implemented in any computer environment where the servers are closely coupled. Thus, although the present invention has been described in accordance with the embodiments shown, one of ordinary skill in the art will readily recognize that there could be variations to the embodiments and those variations would be within the spirit and scope of the present invention. Accordingly, many modifications may be made by one of ordinary skill in the art without departing from the spirit and scope of the appended claims.