1. Field of the Invention
The field of the invention is data processing, or, more specifically, methods, systems, and products for cascading failover of blade servers in a data center.
2. Description of Related Art
The development of the EDVAC computer system of 1948 is often cited as the beginning of the modern computer era. Since that time, computer devices have evolved into extremely complicated systems, much more sophisticated and complex than early systems such as the EDVAC. Computer systems typically include a combination of hardware and software components, application programs, operating systems, processors, buses, memory, input/output devices, and so on. As advances in semiconductor processing and computer architecture push the performance of the computer higher and higher, more sophisticated computer software has evolved to take advantage of the higher performance of the hardware, resulting in computer systems today that are much more powerful than just a few years ago.
Complex, sophisticated computer systems today are often organized in large data centers. Blade computers in such data centers are increasingly used to run critical applications that require a high level of redundancy and fault tolerance. Modern data centers employ various failover schemes whereby the failure of one blade server can trigger an automatic replacement of that server from a pre-established backup pool of standby servers. In this way, a catastrophic loss or serious degradation of performance in one server in a data center with thousands of blade servers will trigger the automatic introduction of another server to continue the original server's workload. In prior art systems, however, the technology is primarily focused on the availability of standby resources for such failover. As such, there is a risk that over time, these backup pools of systems may not contain any system that is optimized for the workload currently running on a primary system.
Methods, apparatus, and products implement cascading failover of blade servers in a data center by transferring by a system management server a data processing workload from a failing blade server to an initial replacement blade server, with the data processing workload characterized by data processing resource requirements and the initial replacement blade server having data processing resources that do not match the data processing resource requirements; and transferring by the system management server the data processing workload from the initial replacement blade server to a subsequent replacement blade server, where the subsequent replacement blade server has data processing resources that better match the data processing resource requirements than do the data processing resources of the initial replacement blade server.
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular descriptions of example embodiments of the invention as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts of example embodiments of the invention.
Example methods, apparatus, and products for cascading failover of blade servers in a data center are described with reference to the accompanying drawings, beginning with
A server, as the term is used in this specification, refers generally to a multi-user computer that provides a service (e.g. database access, file transfer, remote access) or resources (e.g. file space) over a network connection. The term ‘server,’ as context requires, refers inclusively to the server's computer hardware as well as any server application software or operating system software running on the server. A server application is an application program that accepts connections in order to service requests from users by sending back responses. A server application can run on the same computer as the client application using it, or a server application can accept connections through a computer network. Examples of server applications include file servers, database servers, backup servers, print servers, mail servers, web servers, FTP servers, application servers, VPN servers, DHCP servers, DNS servers, WINS servers, logon servers, security servers, domain controllers, backup domain controllers, proxy servers, firewalls, and so on.
Blade servers are self-contained servers, designed for high density. A blade enclosure houses multiple blade servers and provides services such as power, cooling, networking, various interconnects and management—though different blade providers have differing principles regarding what should and should not be included in the blade itself—and sometimes in the enclosure altogether. Together, a set of blade servers installed in a blade enclosure or ‘blade center’ forms a blade system. As a practical matter, all computers, including blade servers, are implemented with electrical components requiring power that produces heat. Components such as processors, memory, hard drives, power supplies, storage and network connections, keyboards, video components, a mouse, and so on, merely support the basic computing function, yet they all add bulk, heat, complexity, and moving parts that are more prone to failure than solid-state components. In the blade paradigm, most of these functions are removed from the blade computer, being either provided by the blade enclosure (DC power), virtualized (iSCSI storage, remote console over IP), or discarded entirely (serial ports). The blade itself becomes simpler, smaller, and amenable to dense installation with many blade servers in a single blade enclosure and many, many blade servers in a data center.
The example system of
The system of
Stored in RAM (168) is a system management server application program (182), a set of computer program instructions that operate the system management server so as to carry out, automatically under program control, processes required to manage servers in the data center, including capacity planning, asset tracking, preventive maintenance, diagnostic monitoring, troubleshooting, firmware updates, blade server failover, and so on. An example of a system management server application program (182) that can be adapted for use in cascading failover of blade servers in a data center is IBM's ‘IBM Director.’
Also stored in RAM (168) in the example system management server of
The system management server also maintains in memory blade configuration information (200) for the blade servers in the data center. Such blade configuration information includes:
Also stored in RAM (168) is an operating system (154). Operating systems useful for cascading failover of blade servers in a data center according to embodiments of the present invention include UNIX™, Linux™, Microsoft XP™, AIX™, IBM's i5/OS™, and others as will occur to those of skill in the art. The operating system (154), the system management server application (182), the server failover module (184), and the blade configuration information (200) in the example of
The system management server (152) of
The example system management server (152) of
The example system management server (152) of
The example system of
Each blade server (104, 106) in this example is mapped to data storage (111), including remote computer boot storage (110), through a storage area network (‘SAN’) (112). The boot storage (110) is ‘remote’ in the sense that all the system-level software, such as a kernel and other operating system software, that is needed to operate each server is stored, not on a server (106) as such, but remotely from the server across a storage area network (‘SAN’) (112) on storage exposed to the blade servers through the SAN. The only boot-related software permanently stored on the blade servers (104, 106) themselves is a thin piece of system-level firmware required to initiate a boot from remote storage.
The SAN (112) is a network architecture that attaches remote computer storage devices (111) such as magnetic disks, optical disks, and disk arrays, for example, to blade servers so that, to the blade server's operating system, the remote storage devices appear as locally attached disk drives. The remote boot storage (110) that can be mapped to the blade servers in this example is exposed by the SAN (112) to each server (104, 106) as a separate virtual drive. Such virtual drives are often referred to or referenced by a so-called logical unit number or ‘LUN.’ A LUN is an address for an individual disk drive and, by extension, the disk device itself. A LUN, or the remote storage identified by a LUN, is normally not an entire disk drive but rather a virtual partition (or volume) of a RAID set—in such an example embodiment, a virtual disk drive that organizes a portion of RAID (Redundant Array of Inexpensive Drives) storage and presents it to an operating system on a server as an actual disk drive. Many if not most SANs use the SCSI protocol for communication between servers and disk drive devices, though they do not use its low-level physical interface, instead typically using a mapping layer. The mapping layer may be implemented, for example, with Fibre Channel (Fibre Channel Protocol or ‘FCP’ is Fibre Channel's SCSI interface), iSCSI (mapping SCSI over TCP/IP), HyperSCSI (mapping SCSI over Ethernet), Advanced Technology Attachment (‘ATA’) over Ethernet, InfiniBand (which supports mapping SCSI over InfiniBand and/or mapping TCP/IP over InfiniBand), and other mapping layers as will occur to those of skill in the art.
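For further explanation only, the following minimal sketch, expressed here in Python, illustrates one way in which mappings of blade servers to remote boot LUNs might be represented; the identifiers and the structure are illustrative assumptions, not a description of any particular SAN product.

```python
# Minimal sketch: recording which remote boot LUN the SAN exposes to each blade.
# All identifiers below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class BootLunMapping:
    blade_id: str        # slot or server identifier in the blade enclosure
    lun: str             # logical unit number exposed by the SAN
    transport: str       # mapping layer, e.g. 'FCP', 'iSCSI', 'HyperSCSI'

boot_map = [
    BootLunMapping(blade_id="blade-104", lun="LUN-0001", transport="FCP"),
    BootLunMapping(blade_id="blade-106", lun="LUN-0002", transport="iSCSI"),
]

# Look up the boot volume for a given blade before initiating a remote boot.
by_blade = {m.blade_id: m for m in boot_map}
print(by_blade["blade-104"].lun)
```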
The example system management server (152) of
In the example of
Also in the example of
The system management server (152) in typical embodiments selects the initial replacement server from a standby pool as having data processing resources that, among other servers in the pool, most closely match the data processing resource requirements. Nevertheless, in this example, the system is left with a mismatch between the data processing resource requirements and the data processing resources of the initial replacement server (114).
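For further explanation only, the following is a minimal Python sketch of one way such a closest-match selection might be carried out, assuming a simple numeric resource model; the function name and the resource names are illustrative assumptions.

```python
# Sketch: choose from the standby pool the server whose resources satisfy the
# workload's requirements with the least excess capacity (closest match).
def closest_match(standby_pool, requirements):
    """standby_pool: dict of server_id -> resources; requirements and resources
    are dicts of resource name -> quantity, e.g. {'ram_gb': 10, 'disk_tb': 100}."""
    candidates = []
    for server_id, resources in standby_pool.items():
        # Only servers that meet or exceed every requirement are eligible.
        if all(resources.get(k, 0) >= v for k, v in requirements.items()):
            excess = sum(resources.get(k, 0) - v for k, v in requirements.items())
            candidates.append((excess, server_id))
    if not candidates:
        return None          # no standby server can host the workload
    return min(candidates)[1]  # least excess capacity = closest match

pool = {'S010': {'ram_gb': 32, 'disk_tb': 500},
        'S013': {'ram_gb': 16, 'disk_tb': 300}}
print(closest_match(pool, {'ram_gb': 10, 'disk_tb': 200}))  # -> 'S013'
```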
The contents of the standby pool (104) are dynamic. Standby servers in the pool are removed from the standby pool when they are assigned as active servers to execute data processing workloads. Active servers that complete the processing of a data processing workload are returned to the standby pool. Failing servers that are repaired are returned to service, first by placing them in the standby pool and then by assigning them as active servers for a workload. And so on. The system management server (152) monitors the availability of resources provided by the standby servers in the standby pool as servers exit and enter the standby pool, comparing available resources to the active workloads.
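For further explanation only, such standby-pool bookkeeping might be sketched as follows; this is an illustrative assumption about one possible implementation, not a description of any particular system management server.

```python
# Sketch of standby-pool bookkeeping: servers leave the pool when assigned to a
# workload and re-enter it when a workload completes or a repair finishes.
class StandbyPool:
    def __init__(self):
        self._available = {}              # server_id -> resources

    def add(self, server_id, resources):
        """A repaired or idle server enters the standby pool."""
        self._available[server_id] = resources

    def assign(self, server_id):
        """Assigning a server as an active server removes it from the pool."""
        return self._available.pop(server_id)

    def available(self):
        return dict(self._available)

pool = StandbyPool()
pool.add('S013', {'disk_tb': 300})
resources = pool.assign('S013')     # server becomes active, leaves the pool
pool.add('S013', {'disk_tb': 300})  # workload finished, server returns
```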
When, in the process of comparing the data processing resource requirements of the active workloads with the resources provided by servers in the standby pool, the system management server identifies a server in the standby pool having data processing resources (217) that better match the data processing resource requirements (213) than do the data processing resources (215) of the initial replacement blade server, the system management server transfers the data processing workload (211) from the initial replacement blade server to a subsequent replacement blade server (115). That is, in such an embodiment, the subsequent replacement blade server (115) has data processing resources that better match the data processing resource requirements than do the data processing resources of the initial replacement blade server. This is the sense in which the failover is ‘cascading,’ in that the system management server transfers the workload at least twice, once to an initial replacement blade server having resources that do not match the data processing resource requirements of a workload on a failing server, and at least once more to at least one subsequent replacement blade server that has data processing resources that better match the data processing resource requirements than do the data processing resources of the initial replacement blade server. The system management server carries out a transfer of a workload by capturing and storing the processing state of the server being vacated, whether a failing blade server or an initial replacement blade server, including its memory contents, processor register values, pertinent memory addresses, network addresses, and so on; powering down that server; powering on either an initial replacement blade server or a subsequent replacement blade server; initializing the replacement blade server with the stored processing state; and continuing execution of the workload on the replacement blade server.
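For further explanation only, the transfer sequence described above might be sketched as follows; the simulated management interface and its method names are hypothetical placeholders rather than references to any real management API.

```python
# Sketch of the transfer sequence: capture state, power the source down, power
# the replacement on, initialize it with the captured state, resume execution.
# The management interface below is a simulated stand-in, not a real API.
class SimulatedManager:
    def __init__(self):
        self.power = {}          # server_id -> 'on' | 'off'
        self.saved_state = {}    # workload_id -> captured processing state

    def capture_state(self, server_id, workload_id):
        # In a real system: memory contents, register values, network addresses.
        return {'source': server_id, 'workload': workload_id}

    def power_down(self, server_id):
        self.power[server_id] = 'off'

    def power_on(self, server_id):
        self.power[server_id] = 'on'

def transfer_workload(mgmt, workload_id, source_id, replacement_id):
    # Capture and store the processing state of the server being vacated.
    state = mgmt.saved_state[workload_id] = mgmt.capture_state(source_id, workload_id)
    mgmt.power_down(source_id)      # source may be the failing server or an
                                    # initial replacement being vacated
    mgmt.power_on(replacement_id)
    # Initialize the replacement with the stored state and continue execution.
    return replacement_id, state

mgmt = SimulatedManager()
print(transfer_workload(mgmt, 'W001', 'blade-106', 'blade-114'))
```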
For further explanation of failover of blade servers in a data center according to embodiments of the present invention, here is an example of cascading failover using three blade servers labeled A, B, and C:
When a resource is described in terms of terabytes (‘TB’), readers will recognize that resource as a form of memory, RAM, long-term storage, or the like. In this example, server A provides 200 TB of resource X, which is taken as a data processing resource requirement of a workload running on server A, and, when server A fails, a system management server transfers a data processing workload from server A to server B. Server A is taken down for servicing. Server B is a server from a standby pool, and server B provides 500 TB of resource X, a quantity of resource X that is entirely adequate to meet, and indeed exceed, the needs of the workload on server A. Server B was selected for the transfer in this example because no other standby servers were available, although the transfer to server B results in an inefficient use of resources because server B provides much more of resource X than is needed by the workload. Server C later comes online in the standby pool, and the system management server then determines that server C with its 300 TB of resource X provides a better match for the data processing resource requirements of the workload than the data processing resources of the initial replacement blade server, server B, which is presently running the workload. The system management server therefore transfers the workload in cascade to server C, returning server B to the standby pool.
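For further explanation only, the cascade decision in this example reduces to a small comparison of excess capacity, sketched here under the assumption that a lower excess indicates a closer match:

```python
# Illustrative calculation for the A/B/C example: the workload requires 200 TB
# of resource X; excess capacity measures how closely a server matches.
requirement_tb = 200
server_b_tb, server_c_tb = 500, 300

excess_b = server_b_tb - requirement_tb   # 300 TB of unused resource X
excess_c = server_c_tb - requirement_tb   # 100 TB of unused resource X
better_match = 'C' if excess_c < excess_b else 'B'
print(better_match)   # -> 'C', so the workload cascades from B to C
```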
For even further explanation of failover of blade servers in a data center according to embodiments of the present invention, here is an example of cascading failover using four blade servers labeled A, B, C, and D:
Upon failure of server A, a system management server transfers a workload executing on server A to a server from a standby pool, Server B. Server A provides and the workload executing on server A requires 200 TB of resource X. Server B is selected because no other backup servers are available in the standby pool—or because server B provides the currently best match of resources to requirements—despite the fact that server B's resources of 500 TB of resource X substantially exceed what is actually required. Server B takes up execution of the workload, and server A is taken down for servicing. Server C enters the standby pool and is determined at 300 TB of resource X to provide a more exact resource match for the workload that is now running on server B. The system management server transfers the workload in cascade from server B to server C and returns server B to the standby pool. Similarly, when an even better match from server D becomes available in the standby pool, the system management server transfers the workload in a second cascade to server D and returns server C to the standby pool. Server D with its 200 TB of resource X could in fact be server A repaired and returned to availability in the standby pool, or server D could be some other server entirely.
The arrangement of servers and other devices making up the example system illustrated in
Networks in such data processing systems may support many data communications protocols, including for example TCP (Transmission Control Protocol), IP (Internet Protocol), HTTP (HyperText Transfer Protocol), WAP (Wireless Access Protocol), HDTP (Handheld Device Transport Protocol), and others as will occur to those of skill in the art. Various embodiments of the present invention may be implemented on a variety of hardware platforms in addition to those illustrated in
For further explanation,
Virtual machine managers are sometimes referred to as hypervisors, and virtual machine managers that can be adapted for use in cascading failover according to embodiments of the present invention include the IBM hypervisor named PR/SM, Oracle's VM Server for SPARC, Citrix's XenServer, Linux's KVM, VMware's ESX/ESXi, Microsoft's Hyper-V hypervisor, and others as will occur to those of skill in the art. For further explanation, an example of data processing resource requirements (213) implemented as virtual machine metadata (227) describing the data processing resource requirements of virtual machines is set forth here in Table 1:
The example of Table 1 implements virtual machine metadata describing data processing resource requirements of two virtual machines, VM001 and VM002, where virtual machine VM001 has resource requirements of 10 GB of RAM, 1 IBM Power Processor, and 100 TB of Disk Storage and virtual machine VM002 has resource requirements of 20 GB of RAM, 3 Intel Pentium Processors, and 200 TB of Disk Storage. This example records resource requirements for only two virtual machines, but readers will recognize that such an implementation could record resource requirements for any number of virtual machines. This example of virtual machine metadata is implemented with a table, but readers will recognize that a variety of data structures can be utilized to implement storage of virtual machine metadata, including, for example, linked lists, arrays, and C-style ‘structs.’
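For further explanation only, the following minimal Python sketch shows one way such virtual machine metadata might be represented in memory, populated with the two example entries described above; the data structure itself is an illustrative assumption.

```python
# Sketch: virtual machine metadata recording the resource requirements of the
# two example virtual machines described for Table 1.
from dataclasses import dataclass

@dataclass
class VirtualMachineMetadata:
    vm_id: str
    ram_gb: int
    processors: str
    disk_tb: int

vm_metadata = [
    VirtualMachineMetadata('VM001', ram_gb=10,
                           processors='1 IBM Power Processor', disk_tb=100),
    VirtualMachineMetadata('VM002', ram_gb=20,
                           processors='3 Intel Pentium Processors', disk_tb=200),
]

# The metadata of the virtual machines that make up a workload characterizes
# that workload's data processing resource requirements.
requirements = {vm.vm_id: {'ram_gb': vm.ram_gb, 'disk_tb': vm.disk_tb}
                for vm in vm_metadata}
print(requirements['VM001'])
```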
In another example alternative method of deriving data processing resource requirements for a workload executing on a failing blade server, the system management server (152) derives (221) the data processing resource requirements (213) based upon actual data processing resource utilization (229) of the data processing workload (211). In such an example, virtualization is optional; the workload (211) can run in a virtual machine or on an operating system installed directly on the hardware of a server. The system management server tracks or monitors and records the facts, for example, that a workload actually uses a particular quantity of RAM, particular computer processors or portions of the run time of particular processors, a particular quantity of disk storage, and so on. Then the system management server characterizes the data processing resource requirements of the data processing workload (211) as the actual data processing resource utilization (229) of the workload. For further explanation, an example of data processing resource requirements (213) derived as actual data processing resource utilization is set forth here in Table 2:
Each record in the example of Table 2 represents data processing resource requirements derived from actual resource utilization of various data processing workloads. Table 2 describes data processing resource requirements of two data processing workloads, W001 and W002, where workload W001 has resource requirements of 10 GB of RAM, 1 IBM Power Processor, and 100 TB of Disk Storage and workload W002 has resource requirements of 20 GB of RAM, 3 Intel Pentium Processors, and 200 TB of Disk Storage. This example Table 2 records resource requirements for only two data processing workloads, but readers will recognize that such an implementation could record resource requirements for any number of data processing workloads. This example of resource utilization taken as data processing resource requirements is implemented with a table, but readers will recognize that a variety of data structures can be utilized to implement storage of actual resource utilization, including, for example, linked lists, arrays, and C-style ‘structs.’
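For further explanation only, deriving resource requirements from monitored utilization might be sketched as follows, assuming that requirements are characterized as the peak observed use of each resource; the sample values are hypothetical.

```python
# Sketch: derive a workload's resource requirements from observed utilization
# samples recorded by the system management server. Sample data is hypothetical.
utilization_samples = {
    'W001': [{'ram_gb': 8, 'disk_tb': 90}, {'ram_gb': 10, 'disk_tb': 100}],
    'W002': [{'ram_gb': 20, 'disk_tb': 180}, {'ram_gb': 18, 'disk_tb': 200}],
}

def derive_requirements(samples):
    # Characterize requirements as the peak observed use of each resource.
    return {resource: max(s[resource] for s in samples) for resource in samples[0]}

requirements = {w: derive_requirements(s) for w, s in utilization_samples.items()}
print(requirements['W001'])   # -> {'ram_gb': 10, 'disk_tb': 100}
```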
In a further example alternative method of deriving data processing resource requirements for a workload executing on a failing blade server, the system management server (152) derives (229) the data processing resource requirements (213) based upon actual data processing resources (231) provided by blade servers (108) upon which one or more data processing workloads (211) execute. In such an example, again, virtualization is optional; the workload (211) can run in a virtual machine or on an operating system installed directly on the hardware of a server. Either way, it is the actual data processing resources (231) provided by the physical server itself (108) that are taken by the system management server as the data processing resource requirements (213) for the data processing workload (211). The system management server tracks or monitors and records the facts, for example, that blade servers upon which workloads execute actually provide particular quantities of RAM, particular computer processors, particular quantities of disk storage, and so on. Then the system management server characterizes the data processing resource requirements of data processing workloads (211) as the actual data processing resources (231) provided by the physical blade servers to which the workloads (211) are assigned for execution.
Such utilization of actual resources as resource requirements can be implemented with a table similar to Table 2, with the exception that the right column would set forth, rather than resource utilization, descriptions of actual resources provided by blade servers upon which corresponding workloads were installed.
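For further explanation only, a brief sketch of this further alternative follows, assuming a simple inventory of the resources provided by each active blade; all identifiers are hypothetical.

```python
# Sketch: take the resources provided by the blade on which a workload runs as
# that workload's requirements. Inventory values are hypothetical examples.
blade_inventory = {
    'blade-108': {'ram_gb': 10, 'disk_tb': 100},
    'blade-109': {'ram_gb': 20, 'disk_tb': 200},
}
workload_placement = {'W001': 'blade-108', 'W002': 'blade-109'}

# Each workload's requirements are simply the resources of its host blade.
requirements = {w: blade_inventory[b] for w, b in workload_placement.items()}
print(requirements['W002'])   # -> {'ram_gb': 20, 'disk_tb': 200}
```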
The method of
For further explanation,
The method of
The method of
The method of
Each record in Table 3 represents an active server executing a data processing workload. Each active server is identified by a value in the “Svr ID” column. Each active server's data processing resources are described in the column labeled “Server Resources.” The workload assigned to each active server is identified by a value in the “Workload ID” column. The data processing resource requirements for each workload are listed in the “Workload Resource Requirements” column. And the “Candidate” column sets forth a Boolean indication, “Yes” or “No,” of whether each server's data processing resources are a good match for the data processing resource requirements of the workload assigned to that server. In this particular example, the resources provided by active servers S001 and S002 do match the data processing resource requirements of the corresponding workloads W001 and W002, and the corresponding “Candidate” values, “No,” indicate that servers S001 and S002 are not candidates for workload transfer to a subsequent replacement blade server. Also in this particular example, the resources provided by active servers S003 and S004 far exceed and therefore do not match the data processing resource requirements of the corresponding workloads W003 and W004, and the corresponding “Candidate” values, “Yes,” indicate that servers S003 and S004 are good candidates for workload transfer to a subsequent replacement blade server.
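For further explanation only, one way the “Candidate” indication might be computed is sketched below, assuming a numeric resource model and an illustrative threshold for what counts as far exceeding a requirement:

```python
# Sketch: flag an active server as a transfer candidate when its resources far
# exceed its workload's requirements. The 2x threshold is an illustrative choice.
def is_candidate(server_resources, workload_requirements, factor=2.0):
    return any(server_resources.get(k, 0) >= factor * v
               for k, v in workload_requirements.items())

active = {
    'S001': ({'disk_tb': 100}, {'disk_tb': 100}),   # (resources, requirements)
    'S003': ({'disk_tb': 500}, {'disk_tb': 200}),
}
candidates = {s: ('Yes' if is_candidate(r, q) else 'No')
              for s, (r, q) in active.items()}
print(candidates)   # -> {'S001': 'No', 'S003': 'Yes'}
```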
The method of
Each record in Table 4 represents an available standby blade server (322) in a standby pool (104). Each record identifies an available standby server with a “Server ID” and provides for each server a description of the “Data Processing Resources” provided by that server. The system management server can monitor the pool of standby servers for availability of better matches, for example, by comparing the “Data Processing Resources” descriptions in Table 4 with the contents of the “Workload Resource Requirements” column in Table 3. In this example, the data processing resources provided by standby servers S010, S011, and S014 provide no better matches than the initial replacement servers already assigned to any of the workloads W001, W002, W003, and W004 according to Table 3. Server S013, however, does provide a better match for some of the workloads described in Table 3.
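For further explanation only, the comparison between standby-pool resources and the requirements of a candidate workload might be sketched as follows; the function and identifiers are illustrative assumptions that echo the examples above.

```python
# Sketch: scan the standby pool for a server that matches a candidate
# workload's requirements more closely than its current initial replacement.
def excess(resources, requirements):
    return sum(resources.get(k, 0) - v for k, v in requirements.items())

def meets(resources, requirements):
    return all(resources.get(k, 0) >= v for k, v in requirements.items())

def better_match(standby_pool, current_resources, requirements):
    current_excess = excess(current_resources, requirements)
    best = None
    for server_id, resources in standby_pool.items():
        if meets(resources, requirements) and excess(resources, requirements) < current_excess:
            if best is None or excess(resources, requirements) < excess(standby_pool[best], requirements):
                best = server_id
    return best   # None if no standby server improves on the current match

standby = {'S010': {'disk_tb': 500}, 'S013': {'disk_tb': 300}}
print(better_match(standby, {'disk_tb': 500}, {'disk_tb': 200}))  # -> 'S013'
```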
The method of
Having selected a subsequent replacement blade server, the method of
Example embodiments of the present invention are described largely in the context of a fully functional computer system for cascading failover of blade servers in a data center. Readers of skill in the art will recognize, however, that the present invention also may be embodied in a computer program product disposed on signal bearing media for use with any suitable data processing system. Such signal bearing media may be transmission media or recordable media for machine-readable information, including magnetic media, optical media, or other suitable media. Examples of recordable media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art. Examples of transmission media include telephone networks for voice communications and digital data communications networks such as, for example, Ethernets™ and networks that communicate with the Internet Protocol and the World Wide Web. Persons skilled in the art will immediately recognize that any computer system having suitable programming means will be capable of executing the steps of the method of the invention as embodied in a program product. Persons skilled in the art will recognize immediately that, although some of the example embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware or as hardware are well within the scope of the present invention.
It will be understood from the foregoing description that modifications and changes may be made in various embodiments of the present invention without departing from its true spirit. The descriptions in this specification are for purposes of illustration only and are not to be construed in a limiting sense. The scope of the present invention is limited only by the language of the following claims.