Failure recovery resolution in transplanting high performance data intensive algorithms from cluster to cloud

Information

  • Patent Grant
  • Patent Number
    9,626,261
  • Date Filed
    Wednesday, November 26, 2014
  • Date Issued
    Tuesday, April 18, 2017
Abstract
A method of providing failure recovery capabilities to a cloud environment for scientific HPC applications. An HPC application with MPI implementation extends the class of MPI programs to embed the HPC application with various degrees of fault tolerance. An MPI fault tolerance mechanism realizes a recover-and-continue solution. If an error occurs, only failed processes re-spawn, the remaining living processes remain in their original processors/nodes, and system recovery costs are thus minimized.
Description
TECHNICAL FIELD

The present disclosure is generally directed to high performance computing (HPC) and cloud computing, and more specifically to fault tolerance to provide reliability of virtual clusters on clouds where high-performance and data-intensive computing paradigms are deployed.


BACKGROUND

High-performance computing (HPC) provides accurate and rapid solutions for scientific and engineering problems based on powerful computing engines and the highly parallelized management of computing resources. Cloud computing as a technology and paradigm for the new HPC era is set to become one of the mainstream choices for high-performance computing customers and service providers. The cloud offers end users a variety of services covering the entire computing stack of hardware, software, and applications. Charges can be levied on a pay-per-use basis, and technicians can scale their computing infrastructures up or down in line with application requirements and budgets. Cloud computing technologies provide easy access to distributed infrastructures and enable customized execution environments to be easily established. The computing cloud allows users to immediately access required resources without capacity planning and freely release resources that are no longer needed.


Each cloud can support HPC with virtualized Infrastructure as a Service (IaaS). IaaS is managed by a cloud provider that enables external customers to deploy and execute applications. FIG. 1 shows the layer correspondences between the cluster computing and cloud computing models. The main challenges facing HPC-based clouds are cloud interconnection speeds and the noise of virtualized operating systems. Technical problems include system virtualization, task submission, cloud data input/output (I/O), security, and reliability. HPC applications require considerable computing power, high-performance interconnections, and rapid connectivity to storage or file systems; supercomputers commonly use InfiniBand and proprietary interconnections for this purpose. However, most clouds are built around heterogeneous distributed systems connected by low-performance interconnection mechanisms, such as Gigabit Ethernet, which do not offer optimal environments for HPC applications. Table 1 below compares the technical characteristics of the cluster computing and cloud computing models. Differences in infrastructure between cluster computing and cloud computing have increased the need to develop and deploy fault tolerance solutions on cloud computers.



TABLE 1

                      Cloud Computing                       Cluster Computing

Performance factors   1. Computation cost                   1. Computation cost
                      2. Storage cost                       2. Communication latencies
                      3. Data transfer cost (in or out      3. Data dependencies
                         for each service)                  4. Synchronization

Performance tuning    1. Specifying a particular service    1. Defining the data size
                         for a particular task;                to be distributed;
                      2. Archiving intermediate data on     2. Scheduling the send and
                         a particular storage device;          receive workload;
                      3. Choosing a set of locations for    3. Task synchronization
                         input and output data.

Fault tolerance       1. Resend                             1. Checkpointing protocols
                      2. Reroute                            2. Membership protocol
                      3. Graph scheduling                   3. System synchronization
                      4. QoS

Goal                  Minimizing the total cost of          Minimizing the total execution
                      execution while meeting all the       time; performing on users'
                      user-specified constraints.           hardware platforms.

Reliability           No                                    Yes

Task size             Single large                          Small and medium

Scalable              No                                    Yes

Switching             Low                                   High

Application           HPC, HTC                              SME interactive



SUMMARY

This disclosure is directed to a failure recovery solution for transplanting high-performance data-intensive algorithms from the cluster to the cloud.


According to one example embodiment, a method provides failure recovery capabilities to a cloud environment for scientific HPC applications. An HPC application with MPI implementation extends the class of MPI programs to embed the HPC application with various degrees of fault tolerance (FT). An MPI FT mechanism realizes a recover-and-continue solution; if an error occurs, only failed processes re-spawn, the remaining living processes remain in their original processors/nodes, and system recovery costs are thus minimized.





BRIEF DESCRIPTION OF THE DRAWINGS

For a more complete understanding of the present disclosure, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, wherein like numbers designate like objects, and in which:



FIG. 1 illustrates layer correspondences between cluster computing and cloud computing;



FIG. 2 illustrates three IaaS layers forming network resources from bottom to top: cloud resources and TCP networks; hosts and virtual machines; guest MPI applications;



FIG. 3 illustrates pseudo-code of an algorithm setting up a TCP connection using MPI initialization;



FIG. 4 shows the component launch and monitoring steps and automated recovery flow process;



FIG. 5 illustrates pseudo-code of an algorithm for MPI error handler setup, and the main function;



FIG. 6 illustrates pseudo-code for MPI spawning new communications; and



FIGS. 7A-7D illustrate an MPI/TCP failure recovery process establishing new TCP long connections.





DETAILED DESCRIPTION

Failure Model for HPC on Cloud


An HPC cloud platform provides a comprehensive set of integrated cloud management capabilities to meet users' HPC requirements. Deployed on top of an HPC cloud, the software manages the running of computing- and data-intensive distributed applications on a scalable shared grid, and accelerates parallel applications to deliver results faster and improve the utilization of available resources. An HPC cloud enables the self-service creation and management of multiple flexible clusters to deliver the performance required by computing-intensive workloads in multi-tenant HPC environments. This disclosure provides a failure recovery solution using a typical Message Passing Interface-Transmission Control Protocol (MPI-TCP) model.


HPC Fault Tolerant Model on Cloud


MPI provides a message-passing application programmer interface and supports both point-to-point and collective communication. MPI has remained the dominant model used for HPC for several decades due to its high performance, scalability, and portability.


Many current big data applications use Remote Procedure Call (RPC) to establish TCP connections for high-performance data-intensive computing. Typical examples, such as MapReduce and Pregel, require long TCP connections to build up virtual cluster networks over the Internet or cloud. Hadoop RPC forms the primary communication mechanism between the nodes in a Hadoop cluster. The Hadoop Distributed File System (HDFS) distributes file system functions across multiple machines. The Hadoop NameNode receives requests from HDFS clients in the form of Hadoop RPC requests over a TCP connection, and its listener object listens on the TCP port that serves RPC requests from the client.
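
Hadoop's RPC server is implemented in Java; purely as a schematic C illustration of the listener concept (a socket bound to a TCP port that accepts long-lived client RPC connections), and not Hadoop code, a minimal sketch might look like this:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Schematic listener: bind to a TCP port and accept client connections.
 * The port number passed in is an arbitrary illustration, not a Hadoop default. */
static int run_listener(unsigned short port)
{
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);

    if (lsock < 0) return -1;
    if (bind(lsock, (struct sockaddr *)&addr, sizeof(addr)) < 0) return -1;
    if (listen(lsock, /*backlog=*/128) < 0) return -1;

    for (;;) {
        /* Each accepted socket is one long-lived RPC connection. */
        int csock = accept(lsock, NULL, NULL);
        if (csock < 0) continue;
        /* Hand csock to a worker thread that serves RPC requests ... */
    }
}
```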


In comparison, GraphLab simultaneously uses MPJ (MPI for Java) and TCP, and simplifies the update process because users do not need to explicitly define the information flow from Map to Reduce and can simply modify data in place. For iterative computations, GraphLab's knowledge of the dependency structure lets it communicate modified data directly to the destination. GraphLab presents a simpler API to programmers, and the data graph informs GraphLab of the program's communication needs. This implementation model uses collective synchronous MPJ operations for communication.


Based on the applications listed above, this disclosure provides a modeled three-layer IaaS networked computing platform as shown at 10 in FIG. 2, which, from bottom to top, are: cloud resources and TCP networks 12; hosts and virtual machines (VMs) 14; and guest MPI applications 16 (all collectively referred to herein as network resources). The cloud provider is responsible for administering services and cloud resources, such as hardware and VMs, that customers use to deploy and execute applications. FIG. 2 summarizes the vertical cloud architecture and the scope of each cloud participant of the network resources. This disclosure identifies three types of failure of the network resources in the cloud platform: hardware/network failure, virtual machine failure, and application failure. Each of the above layers has exclusive fault tolerant functions; however, for optimal performance, collaborative failure management approaches including best effort must be considered.


Failure Detection


At the application level 16, MPI fault tolerance or virtual machine sensors can detect an application or virtual machine failure. Both the application layers 16 and virtual machine layers 14 collaborate to precisely locate errors. Errors can have three origins: MPI application, the virtual machine, and TCP network/hardware.


At the network/hardware level 12, TCP connections can be long-lived, as certain users maintain connections for hours or even days at a time. The durability of these TCP connections provides an excellent parallel platform for a group of virtual machines to run like a cluster on a cloud. If a problem occurs, heartbeating can check whether a network connection is alive: the connected peers periodically send small packets of data (heartbeats), and if a peer does not receive a heartbeat within a specified time period, the connection is considered broken. However, the base TCP protocol does not provide heartbeats, so TCP endpoints are not notified of broken connections and may live on indefinitely, forever trying to communicate with the inaccessible peer. Higher-level software must then reinitialize the affected applications. For many scientific computing applications, especially those with high-availability requirements, this missing failure notification is a critical issue that urgently requires a recovery mechanism.
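
Although the base protocol does not mandate heartbeats, most operating systems can emit per-socket keepalive probes that approximate the heartbeat behavior described above. Below is a minimal sketch assuming a Linux/POSIX socket (TCP_KEEPIDLE, TCP_KEEPINTVL, and TCP_KEEPCNT are Linux-specific options); it illustrates the idea only and is not part of the disclosed MPI-based mechanism.

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Enable keepalive probes on an already-connected socket so that a broken
 * peer is eventually reported as an error (e.g. ETIMEDOUT) instead of the
 * connection living on indefinitely.  The timing values are illustrative. */
static int enable_tcp_heartbeat(int sockfd)
{
    int on = 1;
    int idle = 60;      /* seconds of inactivity before the first probe */
    int interval = 10;  /* seconds between probes */
    int count = 5;      /* probes sent before the connection is declared dead */

    if (setsockopt(sockfd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
        return -1;
    if (setsockopt(sockfd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0)
        return -1;
    if (setsockopt(sockfd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval)) < 0)
        return -1;
    return setsockopt(sockfd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count));
}
```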


Failure Recovery


If an error originates from the program at the application layer 16, the program itself should be able to self-recover; for example, the MapReduce implementation replicates data on HDFS. If a node fails, tasks that use the data on the failed node can restart on another node that hosts a replica of the data.


If an error occurs on a virtual machine due to a hardware host failure in layer 14, the cloud administration starts a new virtual machine with the same features, allowing users to redeploy tasks and restart and synchronize the new machine. In line with an application's properties, a checkpointing and recovery process is required after a new virtual machine is generated. The checkpointing process periodically takes system snapshots and stores application status information in persistent storage units. If a failure occurs, the most recent status can be retrieved and the system recovered. User-directed checkpointing requires the application programmer to form the checkpoint and write out any data needed to restart the application. Checkpoints must be saved to persistent storage units, typically cloud-based, that will not be affected by the failure of a computing element. However, there are two disadvantages in this scenario: first, the user is responsible for ensuring that all data is saved; second, the checkpoints must be taken at particular points in the program.
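
As an illustration of user-directed checkpointing, the sketch below assumes the application state fits in one contiguous buffer and that a cloud-backed persistent volume is mounted at the hypothetical path /mnt/cloudstore; the checkpoint cadence and contents remain the programmer's responsibility, as noted above.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical checkpoint location on cloud-backed persistent storage. */
#define CKPT_PATH "/mnt/cloudstore/ckpt.bin"

/* Write the application state to persistent storage.  The programmer decides
 * what belongs in `state` and when this is called (e.g. every N iterations). */
int checkpoint_save(const void *state, size_t size)
{
    FILE *f = fopen(CKPT_PATH, "wb");
    if (!f) return -1;
    size_t written = fwrite(state, 1, size, f);
    /* Flush and close so the snapshot survives a subsequent node failure. */
    if (fflush(f) != 0 || fclose(f) != 0 || written != size) return -1;
    return 0;
}

/* Retrieve the most recent snapshot after a replacement VM is started. */
int checkpoint_restore(void *state, size_t size)
{
    FILE *f = fopen(CKPT_PATH, "rb");
    if (!f) return -1;
    size_t got = fread(state, 1, size, f);
    fclose(f);
    return got == size ? 0 : -1;
}
```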


MPI/TCP Infrastructure when Serving High-Performance Data-Intensive Computing on Cloud


This disclosure provides a method to add failure recovery capabilities to a cloud environment for scientific HPC applications. An HPC application with MPI implementation is able to extend the class of MPI programs to embed the HPC application with various degrees of fault tolerance (FT). An MPI FT mechanism realizes a recover-and-continue solution; if an error occurs, only failed processes re-spawn, the remaining living processes remain in their original processors/nodes, and system recovery costs are thus minimized.


MPI and TCP


Users can initialize a low-level MPI/TCP communication model by having the communication group use the MPI communicator (MPI_COMM) to collect distributed system data, and then deliver it to the RPC to create a long-term TCP connection. Executing a distributed application over TCP connections and on a virtual cluster involves a similar process that requires three steps: 1) initialize communicator groups using MPI; 2) pass the data to RPC; 3) all devices with TCP connections complete connection setup and enter the established state. TCP software can then operate normally. FIG. 3 shows pseudo-code for the steps of setting up a TCP connection using MPI initialization. The pseudo-code describes how MPI and TCP jointly build a Hadoop cluster.
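
Since the pseudo-code of FIG. 3 is not reproduced here, the following is a hedged C sketch of the same three steps, in which rpc_establish_connections() is a hypothetical helper standing in for the RPC layer that actually builds the long-term TCP connections.

```c
#include <mpi.h>
#include <stdlib.h>

/* Hypothetical helper provided by the RPC layer: builds long-term TCP
 * connections to every host named in `hosts` (an assumption, not an MPI API). */
extern int rpc_establish_connections(const char *hosts, int nhosts);

int main(int argc, char **argv)
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    /* Step 1: initialize the communicator group. */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &name_len);

    /* Step 2: collect the distributed system data (one fixed-size host record
     * per rank) and deliver it to every process. */
    char *all_names = malloc((size_t)size * MPI_MAX_PROCESSOR_NAME);
    MPI_Allgather(name, MPI_MAX_PROCESSOR_NAME, MPI_CHAR,
                  all_names, MPI_MAX_PROCESSOR_NAME, MPI_CHAR,
                  MPI_COMM_WORLD);

    /* Step 3: hand the host list to the RPC layer, which completes the TCP
     * handshakes and leaves the connections in the established state. */
    rpc_establish_connections(all_names, size);

    /* ... application runs over the long-lived TCP connections ... */

    free(all_names);
    MPI_Finalize();
    return 0;
}
```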


Fault Tolerant MPI Semantics and Interfaces


The MPI Forum's Fault Tolerance Working Group has defined a set of semantics and interfaces to enable fault tolerant applications and libraries to be portably constructed on top of the MPI interface, which enables applications to continue running and using MPI if one or more processes in the MPI universe fail. This disclosure assumes that the MPI implementation provides the application with a view of the failure detector that is both accurate and complete. The application is notified of a process failure when it attempts to communicate either directly or indirectly with the failed process, using the function's return code and the error handler set on the associated communicator. The application must explicitly change the error handler to MPI_ERRORS_RETURN on all communicators involved in fault handling in the application.
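
A minimal sketch of the error-handler change described above; MPI_Comm_set_errhandler and MPI_ERRORS_RETURN are standard MPI calls, while the reaction to a reported failure (here, just logging) is application-specific and shown only for illustration.

```c
#include <mpi.h>
#include <stdio.h>

/* Switch from the default MPI_ERRORS_ARE_FATAL behaviour to returning error
 * codes, so the application can observe a failed peer and react to it. */
static void enable_fault_reporting(MPI_Comm comm)
{
    MPI_Comm_set_errhandler(comm, MPI_ERRORS_RETURN);
}

/* Example of checking the return code on a point-to-point operation. */
static int send_task(MPI_Comm comm, int worker, const double *buf, int n)
{
    int rc = MPI_Send(buf, n, MPI_DOUBLE, worker, /*tag=*/0, comm);
    if (rc != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING];
        int len;
        MPI_Error_string(rc, msg, &len);
        fprintf(stderr, "worker %d unreachable: %s\n", worker, msg);
    }
    return rc;
}
```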


MPI Recovery Procedure


To minimize the impact of the failure recovery process on an MPI/TCP task running on a cloud infrastructure, this disclosure provides a component that automates the launch and monitoring processes, periodically checks MPI health, and stops and re-launches the MPI/TCP process if a failure is detected. The component implements the following launch and monitoring steps and automated recovery flow process 400 as shown in FIG. 4.


Step 401. The MPI_INIT pings and establishes connections with each virtual machine, builds a communication group comprising all communicators, and ensures that the communicators are up and available.


Step 402. The MPI process sends the size n node numbers, node names, folder path in which the MPI process will run, and file names with application instructions.


Step 403. RPC initializes independent, long-term TCP connections.


Step 404. Parallel execution enables each node to deploy multiple threads. A node is deemed to have failed if its virtual machine is in down status. The MPI implementation must be able to return an error code if a communication failure, such as an aborted process or a failed network link, occurs.


Step 405. The management process uses MPI_Comm_spawn to create workers and return an intercommunicator. This simplifies intercommunicator formation in the scenario of parallel workers because one MPI_Comm_spawn call can create multiple workers in a single intercommunicator's remote group. MPI_Comm_spawn replaces dead workers, and processing continues with no fewer workers than before (a sketch of this call follows step 408 below).


Step 406. A parallel worker's processes can inter-communicate using an ordinary intercommunicator, enabling collective operations. Fault tolerance resides in the overall manager/worker structure.


Step 407. The MPI process sends the size n node numbers, node names, folder path in which the MPI process will run, and file names with application instructions. RPC initializes independent, long-term TCP connections. Checkpoints are copied from cloud storage.


Step 408. Parallel execution enables each virtual machine (VM) to deploy multiple threads. Note that the component is independent of any particular MPI application.
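
Steps 405 and 406 center on MPI_Comm_spawn. The sketch below shows how a manager process might replace several dead workers with a single spawn call; the executable name "worker_exe" and the use of MPI_COMM_SELF as the spawning communicator are assumptions made for illustration, not details taken from FIG. 4.

```c
#include <mpi.h>
#include <stdlib.h>

/* Re-create `n_dead` workers in one call; they all land in the remote group
 * of a single new intercommunicator (step 405). */
static MPI_Comm respawn_workers(int n_dead)
{
    MPI_Comm workers;                 /* intercommunicator to the new workers */
    int *errcodes = malloc(sizeof(int) * (size_t)n_dead);

    MPI_Comm_spawn("worker_exe", MPI_ARGV_NULL, n_dead,
                   MPI_INFO_NULL, /*root=*/0, MPI_COMM_SELF,
                   &workers, errcodes);

    /* Step 406: the manager can now use ordinary point-to-point or collective
     * operations on `workers` to push checkpoint data and task assignments. */
    free(errcodes);
    return workers;
}
```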


Fault Tolerance Implementation


Focusing on communication-level fault tolerance issues, FIG. 5 and FIG. 6 illustrate an example of a common scenario based on a well-known master/worker communication program. The scenario covers program control management, failure detection, and termination detection. FIG. 5 illustrates the general procedure for setting up a fault-tolerant MPI/TCP working environment using intercommunicators and MPI error handlers. FIG. 6 shows how MPI responds by spawning new nodes and removing dead nodes when a failure occurs.



FIGS. 7A-7D show a diagram of the MPI/TCP failure recovery process.



FIG. 7A illustrates a set of virtual machines running in parallel.



FIG. 7B illustrates Node 2 failing.



FIG. 7C illustrates MPI locating a new node 3.



FIG. 7D illustrates establishing a new TCP long connection.


TCP Recovery


After the MPI connection is recovered, an RPC procedure is initialized. A client node calls its client stub using parameters pushed to the stack. The client stub packs these parameters into a message, and makes a system call to send the message from the client machine to the server machine. The operating system on the server machine passes the incoming packets to the server stub, and the server stub unpacks the parameters from the message. The server stub calls the server procedure, which forms the basis of establishing the TCP connection.
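
The stub exchange described above can be illustrated with a toy procedure that adds two integers over an already-connected TCP socket; the wire format below is an assumption made for the example, not the RPC encoding used by Hadoop or by the disclosure. Partial reads/writes and error checks are omitted for brevity.

```c
#include <stdint.h>
#include <arpa/inet.h>   /* htonl/ntohl */
#include <unistd.h>      /* read/write  */

/* Client stub: pack the parameters, send them, wait for the reply. */
static int32_t rpc_add_client(int sock, int32_t a, int32_t b)
{
    uint32_t msg[2] = { htonl((uint32_t)a), htonl((uint32_t)b) };
    uint32_t reply;
    write(sock, msg, sizeof(msg));     /* system call that ships the message */
    read(sock, &reply, sizeof(reply)); /* blocks until the server answers    */
    return (int32_t)ntohl(reply);
}

/* Server procedure: the actual work. */
static int32_t add_impl(int32_t a, int32_t b) { return a + b; }

/* Server stub: unpack the parameters, call the procedure, return the result. */
static void rpc_add_server(int sock)
{
    uint32_t msg[2], reply;
    read(sock, msg, sizeof(msg));
    reply = htonl((uint32_t)add_impl((int32_t)ntohl(msg[0]),
                                     (int32_t)ntohl(msg[1])));
    write(sock, &reply, sizeof(reply));
}
```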


Higher Level Applications


An HPC-based cloud example is the design of a distributed master/worker finite element method (FEM) computing process. The FEM process involves mesh generation, refinement, and matrix assembly. The uncertainty of mesh refinement complicates the following operations: distributing basic functional modules that incorporate message synchronization, distributing matrix data, and collecting data; however, a MapReduce/HDFS framework can maintain the consistency of FEM meshes in a distributed environment. Assuming that the computing capability of each node in a cloud is identical, the process for solving this problem is to map all tasks to a set of cloud mesh data. An independent task assigned to a worker process has a well-defined life cycle: first, the master sends a task to a worker, which the worker takes and works on; second, the worker returns the results after completing the task.


The fault tolerant scheme collaborates with checkpoint/restart techniques to handle failures during distributed processing. At least three task lists must be created: (1) waiting, (2) in progress, and (3) done. The manager-part program should mark an intercommunicator as dead when a send or receive operation fails, maintain the in-progress task list, and send the operation to the next free worker. Global performance tuning optimization can be constructed from the timing and fault tolerant modes to identify the global minimum execution time for correct computing results.
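
A hedged sketch of this bookkeeping follows; the task and worker structures are assumptions made for illustration, and the error-code check presumes the MPI_ERRORS_RETURN handler discussed earlier.

```c
#include <mpi.h>
#include <stdbool.h>

enum task_state { TASK_WAITING, TASK_IN_PROGRESS, TASK_DONE };

struct task   { int id; enum task_state state; int assigned_worker; };
struct worker { MPI_Comm intercomm; bool alive; bool busy; };

/* Dispatch one task to worker `wi`; on a communication failure, mark that
 * worker's intercommunicator dead and hand the task back to the waiting
 * list so the next free worker picks it up. */
static void dispatch(struct task *t, struct worker *workers, int wi)
{
    t->state = TASK_IN_PROGRESS;
    t->assigned_worker = wi;
    int rc = MPI_Send(&t->id, 1, MPI_INT, /*rank in remote group=*/0,
                      /*tag=*/1, workers[wi].intercomm);
    if (rc != MPI_SUCCESS) {
        workers[wi].alive = false;   /* intercommunicator considered dead  */
        t->state = TASK_WAITING;     /* task returns to the waiting list   */
    } else {
        workers[wi].busy = true;
    }
}
```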


While this disclosure has described certain embodiments and generally associated methods, alterations and permutations of these embodiments and methods will be apparent to those skilled in the art. Accordingly, the above description of example embodiments does not define or constrain this disclosure.


Other changes, substitutions, and alterations are also possible without departing from the spirit and scope of this disclosure, as defined by the following claims.

Claims
  • 1. A computing node for performing fault tolerance at an infrastructure as a service (IaaS) layer on a cloud computing platform having network resources, comprising: a component, including a processor, collecting system distributed data of the cloud computing platform using a message passing interface (MPI);the component establishing long-term transmission control protocol (TCP) interconnections of the cloud computing platform using a remote procedure call (RPC);the component automatically detecting a failure of one of the network resources in a cluster of the network resources including a plurality of MPI nodes in an MPI communication group, the MPI communication group including MPI communicators;the component recovering the failure by adding a new network resource in place of the failed network resource using combined MPI and RPC functionalities and reinitializing the MPI communication group to include the new network resource determined as a new MPI communicator in the MPI communication group; andthe component delivering information indicative of the new MPI communicator to the RPC.
  • 2. The computing node as specified in claim 1, wherein detecting the failure in the cluster of the network resources comprises: the component calling the MPI nodes;the component delivering information indicative of a failed MPI node to a MPI master node in order to spawn the new network resource as a new MPI node; andthe MPI master node broadcasting the information of the failed MPI node to all of the MPI nodes in the MPI communication group such that each MPI node is updated with the information.
  • 3. The computing node as specified in claim 2, further comprising: the component determining the new MPI node as the new MPI communicator according to information at the MPI master node;the component establishing a new connection with RPC to the new MPI node;the component spawning the new MPI communicator on the new MPI node; andthe component updating the new MPI communicator with group member information and parallel processing information.
  • 4. A computing node as specified in claim 3, further comprising: the component establishing checkpoints during parallel processing periodically; andthe component saving data of each checkpoint on a cloud storage.
  • 5. The computing node as specified in claim 4, further comprising: the component updating the new MPI node with current checkpoint data from the cloud storage; andthe component updating all of the MPI communication group members with the current checkpoint data from the cloud storage.
  • 6. The computing node as specified in claim 4, wherein: the cloud storage has a definition in MPI; andthe cloud storage is one of the MPI communication group members such that all of the MPI nodes recognize the cloud storage and can copy data to/from the cloud storage.
  • 7. The computing node as specified in claim 2, further comprising: defining a threshold time T allowing the component to determine whether one of the MPI nodes has failed, whereinin response to the master MPI node determining no response from the MPI node, the master MPI node waits a time length of time T,the component does not spawn the new MPI node in response to the MPI node with no response is recovered and responds to the master MPI node within the time T, andthe component spawns the new MPI node to replace the failed MPI node in response to the MPI node with no response not being recovered within time T.
  • 8. The computing node as specified in claim 7 wherein a time T_opt represents a time to establish the new MPI node, spawn the new MPI node, update the new MPI node information, and update the new MPI node with checkpoint data, wherein: in response to T >T_opt, the component spawns the new MPI node to replace the failed MPI node before the time length of time T expires.
  • 9. The computing node as specified in claim 8, wherein in response to T<=T_opt, the master MPI node waits until the time length of time T to decide if the non-responsive MPI node has failed.
  • 10. A computing node performing failure recovery in a parallel cloud high performance computing (HPC) system having nodes, comprising: a component, including a processor, establishing connections with a plurality of virtual machines (VMs) having communicators, and building a communication group that includes the communicators;a message passing interface (MPI) process sending node numbers, node names, a folder path on which a MPI process can run, and file names with application instructions;a remote procedure call (RPC) initializing independent, long-term transmission control protocol (TCP) connections;the MPI process returning an error code to the component in response to a communication failure occurring in one of the communicators;the component spawning a new communicator to replace the failed communicator in response to a failure in one of the communicators;the RPC re-initializing independent, long-term TCP connections for the failed communicator; andthe MPI process loading checkpoint data from storage and importing the checkpoint data to the new communicator.
  • 11. A method of performing fault tolerance at an infrastructure as a service (IaaS) layer on a cloud computing platform having network resources, comprising: collecting system distributed data of the cloud computing platform using a message passing interface (MPI);establishing long-term transmission control protocol (TCP) interconnections of the cloud computing platform using a remote procedure call (RPC);automatically detecting a failure of one of the network resources in a cluster of the network resources including a plurality of MPI nodes in an MPI communication group, the MPI communication group including MPI communicators;recovering the failure by adding a new network resource in place of the failed network resource using combined MPI and RPC functionalities and reinitializing the MPI communication group to include the new network resource determined as a new MPI communicator in the MPI communication group; anddelivering information indicative of the new MPI communicator to the RPC.
  • 12. The method as specified in claim 11, detecting the failure in the cluster of the network resources comprises calling the MPI nodes;delivering information indicative of a failed MPI node to a MPI master node in order to spawn the new network resource as a new MPI node; andbroadcasting the information of the failed MPI node to all of the MPI nodes in the MPI communication group such that each MPI node is updated with the information.
  • 13. The method as specified in claim 12, further comprising: determining the new MPI node as the new MPI communicator according to information at the MPI master node;establishing a new connection with RPC to the new MPI node;spawning the new MPI communicator on the new MPI node; andupdating the new communicator with group member information and parallel processing information.
  • 14. The method as specified in claim 13, further comprising: establishing checkpoints during parallel processing periodically; andsaving data of each checkpoint on a cloud storage.
  • 15. The method as specified in claim 14, further comprising: updating the new MPI node with current checkpoint data from the cloud storage; andupdating all of the MPI communication group members with the current checkpoint data from the cloud storage.
  • 16. The method as specified in claim 14, wherein: the cloud storage has a definition in MPI; andthe cloud storage is one of the MPI communication group members such that all of the MPI nodes recognize the cloud storage and can copy data to/from the cloud storage.
  • 17. The method as specified in claim 12, further comprising: defining a threshold time T allowing the component to determine whether one of the MPI nodes has failed, whereinin response to the master MPI node determining no response from the MPI node, the master MPI node waits a time length of time T,the component does not spawn the new MPI node in response to the MPI node with no response is recovered and responds to the master MPI node within the time T, andthe component spawns the new MPI node to replace the failed MPI node in response to the MPI node with no response not being recovered within time T.
  • 18. The method as specified in claim 17 wherein a time T_opt represents a time to establish the new MPI node, spawn the new MPI node, update the new MPI node information, and update the new MPI node with checkpoint data, wherein: in response to T >T_opt, the component spawns the new MPI node to replace the failed MPI node before the time length of time T expires.
  • 19. The method as specified in claim 18, wherein in response to T<=T_opt, the master MPI node waits until the time length of time T to decide if the non-responsive MPI node has failed.
PRIORITY CLAIM

This application claims priority to U.S. Provisional Patent Application Ser. No. 61/910,018, filed Nov. 27, 2013, entitled A FAILURE RECOVERY RESOLUTION IN TRANSPLANTING HIGH PERFORMANCE DATA INTENSIVE ALGORITHMS FROM CLUSTER TO CLOUD, the teachings of which are incorporated herein by reference in their entirety.

US Referenced Citations (40)
Number Name Date Kind
5959969 Croslin Sep 1999 A
6539542 Cousins Mar 2003 B1
6782537 Blackmore Aug 2004 B1
6865591 Garg Mar 2005 B1
7941479 Howard et al. May 2011 B2
8676760 Shen Mar 2014 B2
9286261 Tzelnic Mar 2016 B1
9348661 Archer May 2016 B2
20030187915 Sun Oct 2003 A1
20040172626 Jalan Sep 2004 A1
20060117212 Meyer Jun 2006 A1
20070025351 Cohen Feb 2007 A1
20070288935 Tannenbaum Dec 2007 A1
20080273457 Sun Nov 2008 A1
20090006810 Almasi Jan 2009 A1
20090037998 Adhya Feb 2009 A1
20090043988 Archer Feb 2009 A1
20100023723 Archer Jan 2010 A1
20100099426 Lozinski Apr 2010 A1
20100122268 Jia May 2010 A1
20100228760 Chen Sep 2010 A1
20110099420 MacDonald McAlister Apr 2011 A1
20120036208 Beisel Feb 2012 A1
20120079490 Bond Mar 2012 A1
20120089968 Varadarajan Apr 2012 A1
20120124430 Dharmasanam May 2012 A1
20120159236 Kaminsky Jun 2012 A1
20120226943 Alderman Sep 2012 A1
20130073743 Ramasamy Mar 2013 A1
20130159364 Grider Jun 2013 A1
20130159487 Patel Jun 2013 A1
20130238785 Hawk et al. Sep 2013 A1
20130311543 Howard Nov 2013 A1
20140056121 Johnsen Feb 2014 A1
20140278623 Martinez Sep 2014 A1
20140337843 Delamare Nov 2014 A1
20150106820 Lakshman Apr 2015 A1
20150256484 Cameron Sep 2015 A1
20150379864 Janchookiat Dec 2015 A1
20160055045 Souza Feb 2016 A1
Foreign Referenced Citations (2)
Number Date Country
1719831 Jan 2006 CN
101369241 Feb 2009 CN
Related Publications (1)
Number Date Country
20150149814 A1 May 2015 US
Provisional Applications (1)
Number Date Country
61910018 Nov 2013 US