APPARATUS, SYSTEM, AND METHOD FOR VOLUME-LEVEL RESTORATION OF CLUSTER SERVER DATA

Information

  • Patent Application
  • Publication Number
    20080177801
  • Date Filed
    January 22, 2007
  • Date Published
    July 24, 2008
Abstract
An apparatus, system, and method are disclosed for restoring cluster server data at a volume level. A setup module opens at least one source volume of a cluster server for a volume-level restore, flushes each buffer for the at least one source volume, closes the at least one source volume, disables file system checks for the cluster disks, saves disk signatures of the cluster disks, and disables device-level checks for the cluster disks. A copy module copies data with a volume-level restore from the at least one snapshot volume to the at least one source volume. A reset module rewrites the saved disk signatures to the cluster disks, re-enables the device-level checks for the cluster disks, and resets at least one volume attribute on the at least one source volume.
Description
BACKGROUND OF THE INVENTION

1. Field of the Invention


This invention relates to restoring data and more particularly relates to restoring cluster server data at a volume level.


2. Description of the Related Art


A data processing system may employ cluster servers for critical data processing tasks. Two or more servers may be configured as cluster servers. Each cluster server may monitor other cluster servers. If a first cluster server hangs or otherwise is unable to complete one or more tasks, a second cluster server may identify the problem and resume processing functions for the first cluster server.


Cluster servers often employ one or more cluster disks. The cluster disks may store data for each of the cluster servers. The storage capacity of the cluster disks may be divided among one or more logical volumes. The cluster disk logical volumes are referred to herein as source volumes.


The source volumes and cluster disks may employ a number of data locks and other safeguards to assure that data used by a first cluster server is not overwritten by a second cluster server. As a result, the plurality of cluster servers may share the source volumes and the cluster disks.


Because cluster servers typically perform critical tasks, a snapshot of the data stored on the source volumes is often backed up to one or more snapshot volumes. A plurality of snapshot instances may be stored on the snapshot volumes.


Occasionally, the cluster servers may require that data from a snapshot instance be restored to the source volumes. Unfortunately, because of the large numbers of discrete files and the large amount of data that must be restored, recovering data from the snapshot instance may be a lengthy process. Yet because of the critical nature of the tasks performed by the cluster server, a lengthy recovery process may be unacceptable.


SUMMARY OF THE INVENTION

From the foregoing discussion, there is a need for an apparatus, system, and method that restores cluster server data at a volume level. Beneficially, such an apparatus, system, and method would allow cluster server data to be rapidly restored, minimizing service outages.


The present invention has been developed in response to the present state of the art, and in particular, in response to the problems and needs in the art that have not yet been fully solved by currently available data restoration methods. Accordingly, the present invention has been developed to provide an apparatus, system, and method for restoring cluster server data that overcome many or all of the above-discussed shortcomings in the art.


The apparatus to restore cluster server data at a volume level is provided with a plurality of modules configured to functionally execute the steps of opening a source volume, flushing each buffer, closing the source volume, disabling file system checks, saving disk signatures, disabling device-level checks, copying data, rewriting the disk signatures, re-enabling the device-level checks, and resetting volume attributes. These modules in the described embodiments include a setup module, a copy module, and a reset module.


In one embodiment, the setup module prepares an application for recovery and directs the application to quiesce a data set of a source volume of a cluster server. The setup module opens the source volume for a volume-level restore. In addition, the setup module flushes each buffer of the source volume and closes the source volume. The setup module further disables file system checks for cluster disks associated with the source volume, saves disk signatures of the cluster disks, and disables device-level checks for the cluster disks.


The copy module copies data with a volume-level restore from a snapshot volume to the source volume. The reset module rewrites the saved disk signatures to the cluster disks. In addition, the reset module re-enables the device-level checks for the cluster disks and resets at least one volume attribute on the source volume.


In one embodiment, the reset module unmounts the source volume, mounts the source volume, and re-enables the file system checks for the cluster disks. In addition, the reset module may direct the application to run a recovery operation. The apparatus allows the data to be rapidly restored to the source volume, minimizing the time that the needed data is not available on the source volume.


A system of the present invention is also presented to restore cluster server data. The system may be embodied in a cluster server. In particular, the system, in one embodiment, includes a plurality of cluster servers, cluster disks, snapshot disks, and a computer.


The plurality of cluster servers may execute tasks in a cluster environment. The cluster disks include at least one source volume that stores data for the cluster servers. The snapshot disks include at least one snapshot volume that stores a backup instance of the at least one source volume.


The computer may be a cluster server of the plurality of cluster servers. The computer includes a setup module, a copy module, and a reset module. The setup module opens the at least one source volume of a cluster server for a volume-level restore, flushes each buffer for the at least one source volume, closes the at least one source volume, disables file system checks for the cluster disks, saves disk signatures of the cluster disks, and disables device-level checks for the cluster disks.


The copy module copies data with a volume-level restore from the at least one snapshot volume to the at least one source volume. The reset module rewrites the saved disk signatures to the cluster disks, re-enables the device-level checks for the cluster disks, and resets at least one volume attribute on the at least one source volume. The system restores the data to the at least one source volume from the at least one snapshot volume.


A method of the present invention is also presented for restoring cluster server data. The method in the disclosed embodiments substantially includes the steps to carry out the functions presented above with respect to the operation of the described apparatus and system. In one embodiment, the method includes opening a source volume, flushing each buffer, closing the source volume, disabling file system checks, saving disk signatures, disabling device-level checks, copying data, rewriting the disk signatures, re-enabling the device-level checks, and resetting volume attributes.


A setup module opens the at least one source volume of a cluster server for a volume-level restore, flushes each buffer for the at least one source volume, closes the at least one source volume, disables file system checks for the cluster disks, saves disk signatures of the cluster disks, and disables device-level checks for the cluster disks. A copy module copies data with a volume-level restore from the at least one snapshot volume to the at least one source volume. A reset module rewrites the saved disk signatures to the cluster disks, re-enables the device-level checks for the cluster disks, and resets at least one volume attribute on the at least one source volume. The method rapidly restores the data from the at least one snapshot volume to the at least one source volume.


Reference throughout this specification to features, advantages, or similar language does not imply that all of the features and advantages that may be realized with the present invention should be or are in any single embodiment of the invention. Rather, language referring to the features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one embodiment of the present invention. Thus, discussion of the features and advantages, and similar language, throughout this specification may, but does not necessarily, refer to the same embodiment.


Embodiments of the present invention restore cluster server data. In addition, the present invention may reduce the time required to restore the data, shortening the time the data is unavailable to the cluster servers. Furthermore, the described features, advantages, and characteristics of the invention may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize that the invention may be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments of the invention.


These features and advantages of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.





BRIEF DESCRIPTION OF THE DRAWINGS

In order that the advantages of the invention will be readily understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments that are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings, in which:



FIG. 1 is a schematic block diagram illustrating one embodiment of a cluster server system in accordance with the present invention;



FIG. 2 is a schematic block diagram illustrating one embodiment of a cluster server apparatus of the present invention;



FIGS. 3 and 4 are a schematic flow chart diagram illustrating one embodiment of a data restoration method of the present invention;



FIG. 5 is a schematic block diagram illustrating one embodiment of saving disk signatures of the present invention;



FIG. 6 is a schematic block diagram illustrating one embodiment of restored data of the present invention; and



FIG. 7 is a schematic block diagram illustrating one embodiment of rewriting disk signatures of the present invention.





DETAILED DESCRIPTION OF THE INVENTION

Many of the functional units described in this specification have been labeled as modules, in order to more particularly emphasize their implementation independence. For example, a module may be implemented as a hardware circuit comprising custom VLSI circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices or the like.


Modules may also be implemented in software for execution by various types of processors. An identified module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions, which may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module.


Indeed, a module of executable code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. Similarly, operational data may be identified and illustrated herein within the modules, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including different storage devices.


Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.


Furthermore, the described features, structures, or characteristics of the invention may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided, such as examples of programming, software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, etc., to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.



FIG. 1 is a schematic block diagram illustrating one embodiment of a cluster server system 100 in accordance with the present invention. The cluster server system 100 includes one or more hosts 110, a cluster connection 115, one or more cluster servers 120, cluster disks 125, and snapshot disks 130. The cluster disks 125 and snapshot disks 130 are representative of one or more hard disk drives configured as logical unit numbers (LUNs). One of skill in the art will recognize that the cluster disks 125 and snapshot disks 130 may also be configured as one or more optical storage devices, holographic storage devices, semiconductor storage devices, and the like. Although for simplicity two cluster servers 120 and one cluster connection 115 are shown, any number of cluster servers 120 and cluster connections 115 may be employed.


In one embodiment, the cluster servers 120 execute MICROSOFT® Cluster Server. The operating system may include Volume Shadow Copy Service application program interfaces as are well known to those of skill in the art.


The cluster servers 120 may provide computing services for the hosts 110. For example, the cluster servers 120 may execute tasks including application programs, data management programs, and the like. The cluster servers 120 work closely together to share these tasks.


The hosts 110 may be computer workstations, servers, mainframe computers, and the like. The hosts 110 may communicate with the cluster servers 120 through the cluster connection 115. The cluster connection 115 may be a router, a server, or the like. The cluster servers 120 may also communicate through a private cluster connection 135.


The cluster servers 120 may provide high availability computing services. Thus if a first cluster server 120a is unavailable, a second cluster server 120b may be used in place of the first cluster server 120a. The cluster servers 120 may also work together to balance data processing tasks among the cluster servers 120. Thus if the first cluster server 120a is processing too many tasks, some tasks may be shifted to the second cluster server 120b. The cluster servers 120 may also monitor each other, so that if the first cluster server 120a fails, the second cluster server 120b may assume the computing tasks of the first cluster server 120a.


The cluster servers 120 may each store data to the cluster disks 125, facilitating the sharing of tasks among the cluster servers 120. The cluster disks 125 may be organized as one or more source volumes. The cluster server system 100 may employ locks and other logical restrictions to prevent the first cluster server 120a from overwriting data of the second cluster server 120b.


Because cluster server systems 100 typically process important tasks, the data of the cluster disks 125 may be backed up to the snapshot disks 130. The snapshot disks 130 may also be organized as one or more snapshot volumes. If data of one or more source volumes is ever lost and/or corrupted, a backup instance of the data may be recovered from the snapshot volumes.


In one embodiment, the cluster disks 125 and/or the snapshot disks 130 are organized as a storage area network (SAN). Alternatively, the cluster disks 125 and the snapshot disks 130 may be organized within a single SAN.


Unfortunately, copying a large amount of data and a large number of files from the snapshot disks 130 to the cluster disks 125 may require an excessive time interval. As a result, the cluster server system 100 may be unable to provide computing services, or may provide them only at a reduced level. The present invention efficiently restores the data from the snapshot volumes to the source volumes.



FIG. 2 is a schematic block diagram illustrating one embodiment of a cluster server apparatus 200 of the present invention. The apparatus 200 may be embodied in one or more computer program products executing on a cluster server 120 of FIG. 1. The apparatus 200 includes a setup module 205, a copy module 210, and a reset module 215. The description of the apparatus 200 refers to elements of FIG. 1, like numbers referring to like elements.


In one embodiment, the setup module 205 prepares an application for recovery. The application executes on a cluster server 120. The setup module 205 further directs the application to quiesce a data set of a source volume of a cluster server 120. The source volume may reside on the cluster disks 125.


The setup module 205 opens the source volume. In addition, the setup module 205 flushes each buffer of the source volume and closes the source volume. The buffers may temporarily store data written to the source volume and/or read from the source volume. The setup module 205 further disables file system checks for the cluster disks 125 associated with the source volume. In addition, the setup module 205 saves disk signatures of the cluster disks 125 and disables device-level checks for the cluster disks 125 as will be described hereafter.


The copy module 210 copies data with a volume-level restore from a snapshot volume to the source volume. The reset module 215 rewrites the saved disk signatures to the cluster disks 125. In addition, the reset module 215 re-enables the device-level checks for the cluster disks 125 and resets at least one volume attribute on the at least one source volume.


In one embodiment, the reset module 215 unmounts the source volume, mounts the source volume, and re-enables the file system checks for the cluster disks 125. In addition, the reset module may direct the application to run a recovery operation. The cluster server apparatus 200 allows the data to be restored to the source volume.
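For illustration only, the following Python sketch outlines the sequence of operations performed by the setup module 205, the copy module 210, and the reset module 215. The helper routine and the volume and disk names are hypothetical placeholders rather than any particular operating system or cluster API; a real implementation would replace each step with the platform-specific call described in the text.

```python
"""A minimal, hypothetical sketch of the restore sequence performed by the
setup module 205, copy module 210, and reset module 215. The helper below is
a placeholder for platform-specific operations (cluster maintenance-mode
commands, volume I/O, application control) and simply logs each step."""


def _step(description):
    # Placeholder for a platform-specific operation.
    print(f"[restore] {description}")


def restore_cluster_volume(source_volume, snapshot_volume, cluster_disks):
    # Setup module 205: prepare the source volume and cluster disks.
    _step("prepare application for recovery")
    _step(f"quiesce data set on {source_volume}")
    _step(f"open {source_volume} for a volume-level restore")
    _step(f"flush buffers of {source_volume}")
    _step(f"close {source_volume}")
    _step(f"disable file system checks for {cluster_disks}")   # normal maintenance mode
    _step(f"save disk signatures of {cluster_disks}")
    _step(f"disable device-level checks for {cluster_disks}")  # extended maintenance mode

    # Copy module 210: block-level copy of the snapshot onto the source.
    _step(f"copy {snapshot_volume} -> {source_volume} at the volume level")

    # Reset module 215: restore disk identity and access.
    _step(f"rewrite saved disk signatures to {cluster_disks}")
    _step(f"re-enable device-level checks for {cluster_disks}")
    _step(f"reset volume attributes on {source_volume}")       # e.g. clear read-only
    _step(f"unmount and remount {source_volume}")
    _step(f"re-enable file system checks for {cluster_disks}")
    _step("direct the application to run its recovery operation")


if __name__ == "__main__":
    restore_cluster_volume("E:", "S:", ["Disk1", "Disk2"])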


The schematic flow chart diagrams that follow are generally set forth as logical flow chart diagrams. As such, the depicted order and labeled steps are indicative of one embodiment of the presented method. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more steps, or portions thereof, of the illustrated method. Additionally, the format and the symbols employed are provided to explain the logical steps of the method and are understood not to limit the scope of the method. Although various arrow types and line types may be employed in the flow chart diagrams, they are understood not to limit the scope of the corresponding method. Indeed, some arrows or other connectors may be used to indicate only the logical flow of the method. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted method. Additionally, the order in which a particular method occurs may or may not strictly adhere to the order of the corresponding steps shown.



FIGS. 3 and 4 are a schematic flow chart diagram illustrating one embodiment of a data restoration method 300 of the present invention. The data restoration method 300 substantially includes the steps to carry out the functions presented above with respect to the operation of the described apparatus and system of FIGS. 1 and 2. In one embodiment, the method is implemented with a computer program product comprising a computer readable medium having a computer readable program. A computer such as a cluster server 120 may execute the computer readable program.


The data restoration method 300 begins, and in one embodiment, the copy module 210 backs up 302 data from the at least one source volume to the at least one snapshot volume. In a certain embodiment, there is a corresponding snapshot volume for each source volume. The copy module 210 may regularly back up 302 data according to a policy. For example, the copy module 210 may back up 302 data hourly.
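A minimal sketch of such a backup policy follows; the volume pairs, the stubbed copy routine, and the fixed hourly interval are illustrative assumptions, and a real copy module 210 would invoke the platform's snapshot facility (for example, Volume Shadow Copy Service) rather than this stub.

```python
import time

BACKUP_INTERVAL_SECONDS = 60 * 60  # hourly policy, as in the example above


def back_up(source_volume, snapshot_volume):
    # Stub for step 302; a real implementation would trigger a snapshot copy.
    print(f"backing up {source_volume} to {snapshot_volume}")


def run_backup_policy(volume_pairs, cycles=3):
    """Back up each (source, snapshot) pair once per interval for a few cycles."""
    for _ in range(cycles):
        for source, snapshot in volume_pairs:
            back_up(source, snapshot)
        time.sleep(BACKUP_INTERVAL_SECONDS)


# Example (hypothetical drive letters): run_backup_policy([("E:", "S:")])
```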


The setup module 205 may prepare 305 an application for recovery. The application may be a database application program, a web services application, and the like. In one embodiment, the application is MICROSOFT® Exchange Server.


The setup module 205 further directs 310 the application to quiesce a data set of the at least one source volume of a cluster server 120. In addition, the setup module 205 may dismount a data set used by the application and residing on a source volume.


The setup module 205 opens 315 the at least one source volume. For example, the source volume may be configured to receive data from the at least one snapshot volume, wherein the data from the snapshot volume may overwrite the current data of the at least one source volume.


The setup module 205 flushes 320 each buffer of the at least one source volume. The buffers may be configured as a semiconductor memory, a reserved section of a hard disk drive, and the like. The opening 315 of the at least one source volume may enable the flushing 320 of the buffers. In addition, the setup module 205 closes 322 the at least one source volume.
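For illustration, the fragment below sketches steps 320 and 322 on a MICROSOFT® Windows system using the standard CreateFileW, FlushFileBuffers, and CloseHandle functions through Python's ctypes module. The drive letter is a hypothetical example, opening a volume for write access generally requires administrative privileges, and the patent does not limit the flush to this particular mechanism.

```python
import ctypes
from ctypes import wintypes

GENERIC_READ = 0x80000000
GENERIC_WRITE = 0x40000000
FILE_SHARE_READ = 0x00000001
FILE_SHARE_WRITE = 0x00000002
OPEN_EXISTING = 3
INVALID_HANDLE_VALUE = wintypes.HANDLE(-1).value

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateFileW.restype = wintypes.HANDLE
kernel32.CreateFileW.argtypes = [
    wintypes.LPCWSTR, wintypes.DWORD, wintypes.DWORD, wintypes.LPVOID,
    wintypes.DWORD, wintypes.DWORD, wintypes.HANDLE,
]
kernel32.FlushFileBuffers.argtypes = [wintypes.HANDLE]
kernel32.CloseHandle.argtypes = [wintypes.HANDLE]


def flush_and_close_volume(drive_letter="E"):
    """Open a volume, flush its buffers (step 320), and close it (step 322)."""
    handle = kernel32.CreateFileW(
        rf"\\.\{drive_letter}:",             # volume device path, e.g. \\.\E:
        GENERIC_READ | GENERIC_WRITE,        # write access needs elevation
        FILE_SHARE_READ | FILE_SHARE_WRITE,
        None, OPEN_EXISTING, 0, None,
    )
    if handle == INVALID_HANDLE_VALUE:
        raise ctypes.WinError(ctypes.get_last_error())
    try:
        if not kernel32.FlushFileBuffers(handle):  # flush cached volume data
            raise ctypes.WinError(ctypes.get_last_error())
    finally:
        kernel32.CloseHandle(handle)               # close the source volume
```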


The setup module 205 further disables 325 file system checks for the cluster disks 125 associated with the source volume. The file system checks may determine if the cluster disks 125 are functioning. In one embodiment, the setup module 205 disables 325 the file system checks by placing the cluster disks 125 in a normal cluster maintenance mode, such that functions such as “looksalive,” “isalive,” and the like cannot be used. The setup module 205 further saves 330 disk signatures of the cluster disks 125 as will be described hereafter.


Turning now to FIG. 4, the setup module 205 disables 335 device-level checks for the cluster disks 125. In one embodiment, the setup module 205 disables 335 the device-level checks by placing the cluster disks 125 in an extended maintenance mode. With the cluster disks 125 in extended maintenance mode, a cluster server 120 may be unable to write to the cluster disks 125.


The copy module 210 copies 340 data with a volume-level restore from the at least one snapshot volume to the at least one source volume. In one embodiment, the copy module 210 overwrites each source volume with the data of a snapshot volume. The copy module 210 may rapidly copy 340 the data using one or more methods as is well known to those of skill in the art. In addition, in some embodiments such as an International Business Machines Corporation (IBM) SAN Volume Controller, IBM DS6000-series, and IBM DS8000-series, the source volumes can be used before the data copy completes.
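The volume-level copy of step 340 can be pictured as a block-for-block transfer between the raw volume devices rather than a file-by-file copy. The following sketch assumes hypothetical device paths and exclusive access to both volumes; storage controllers such as those noted above typically perform an equivalent copy internally.

```python
CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB per read/write


def volume_level_copy(snapshot_device: str, source_device: str) -> int:
    """Copy raw blocks from snapshot_device to source_device; return bytes copied.

    Device paths are illustrative, e.g. r"\\.\S:" and r"\\.\E:" on Windows or
    /dev/sdX paths elsewhere; both volumes must be offline/quiesced.
    """
    copied = 0
    with open(snapshot_device, "rb") as src, open(source_device, "r+b") as dst:
        while True:
            chunk = src.read(CHUNK_SIZE)
            if not chunk:
                break
            dst.write(chunk)      # overwrite the source volume's blocks
            copied += len(chunk)
    return copied
```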


The reset module 215 rewrites 345 the saved disk signatures to the cluster disks 125 as will be described hereafter. In addition, the reset module 215 may re-enable 350 the device-level checks for the cluster disks 125. In one embodiment, the reset module 215 re-enables 350 the device-level checks by removing the cluster disks 125 from the extended maintenance mode.


The reset module 215 further resets 355 the at least one volume attribute on the at least one source volume. In one embodiment, resetting 355 the at least one volume attribute makes the at least one source volume writable. The reset module 215 unmounts 360 the at least one source volume. In addition, the reset module 215 mounts 365 the at least one source volume. Unmounting 360 and mounting 365 the at least one source volume may enable the cluster servers 120 to write to the source volumes.
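One hypothetical way to perform step 355 on a Windows system is sketched below using the diskpart utility to clear a read-only attribute so the restored volume becomes writable again; the drive letter is illustrative, diskpart requires administrative privileges, and the patent does not prescribe this particular tool.

```python
import subprocess
import tempfile


def make_volume_writable(drive_letter="E"):
    """Clear the read-only attribute on a volume via a scripted diskpart run."""
    script = f"select volume {drive_letter}\nattributes volume clear readonly\n"
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(script)
        script_path = f.name
    # diskpart /s <file> runs the scripted commands non-interactively.
    subprocess.run(["diskpart", "/s", script_path], check=True)
```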


The reset module 215 re-enables 370 the file system checks for the cluster disks 125. In one embodiment, the reset module 215 removes the cluster disks 125 from the normal cluster maintenance mode to re-enable 370 file system checks. In addition, the reset module 215 may direct 375 the application to run a recovery operation. For example, the reset module 215 may direct 375 MICROSOFT® Exchange Server to mount application databases.


The data restoration method 300 restores the cluster server data. By preparing the source volumes to receive a volume-level restore, the data restoration method 300 reduces the time required to restore the data.



FIG. 5 is a schematic block diagram illustrating one embodiment of saving disk signatures 500 of the present invention. The depiction of saving disk signatures 500 illustrates step 330 of FIG. 3. In addition, the description of saving disk signatures 500 refers to elements of FIGS. 1-3, like numbers referring to like elements.


The cluster disks 125 comprise disk signatures 505. The disk signatures 505 identify the cluster disks 125. Each storage device of the cluster disks 125 may have a disk signature 505. The setup module 205 saves 330 the disk signatures 505 to a memory module 510. The memory module 510 may be a dynamic random access memory (DRAM) residing in a cluster server 120.
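On an MBR-partitioned disk, the disk signature 505 is the four-byte value at offset 0x1B8 (440) of the first sector, so saving 330 the signatures can be as simple as reading those bytes from each cluster disk 125. The sketch below assumes hypothetical physical-drive device paths and uses a plain dictionary in place of the memory module 510.

```python
DISK_SIGNATURE_OFFSET = 0x1B8  # byte offset of the MBR disk signature
DISK_SIGNATURE_LENGTH = 4
SECTOR_SIZE = 512


def save_disk_signatures(disk_devices):
    """Return a dict mapping device path -> its 4-byte MBR disk signature.

    disk_devices might be paths such as r"\\.\PhysicalDrive1" (illustrative);
    reading raw devices generally requires administrative privileges.
    """
    signatures = {}
    for device in disk_devices:
        with open(device, "rb") as disk:
            first_sector = disk.read(SECTOR_SIZE)  # read the MBR sector
            signatures[device] = first_sector[
                DISK_SIGNATURE_OFFSET:DISK_SIGNATURE_OFFSET + DISK_SIGNATURE_LENGTH
            ]
    return signatures
```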


With the disk signatures 505 stored 330 to the memory module 510, the copy module 210 may rapidly copy 340 data from the snapshot disks 130 to the cluster disks 125. Although the disk signatures 505 may be overwritten during the copy 340 operation, the disk signatures 505 are still available for restoration to the cluster disks 125 as will be described hereafter.



FIG. 6 is a schematic block diagram illustrating one embodiment of restored data 600 of the present invention. The memory module 510 of FIG. 5 is shown. The cluster disks 125 of FIG. 5 are also shown overwritten with the data from the snapshot disks 130, referred to herein as cluster disks with restored data 605. The description of the restored data 600 refers to elements of FIGS. 1-5, like numbers referring to like elements.


The cluster disks with restored data 605 include the data needed by the cluster servers 120. However, the cluster servers 120 may not be able to access the data because the cluster disks with restored data 605 bear the disk signatures of the snapshot disks 130.



FIG. 7 is a schematic block diagram illustrating one embodiment of rewriting disk signatures 700 of the present invention. The memory module 510 and the cluster disks with restored data 605 of FIG. 6 are shown. The description of rewriting the disk signatures 700 refers to elements of FIGS. 1-6, like numbers referring to like elements.


The reset module 215 rewrites 345 the saved disk signatures 505 to the cluster disks with restored data 605. With the disk signatures 505, the cluster disks with restored data 605 can be accessed by the cluster servers 120, completing the restoration of cluster server data.
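Rewriting 345 the signatures is the mirror image of the save in FIG. 5: the saved four bytes are written back at offset 0x1B8 of each cluster disk's first sector, replacing the snapshot disks' signatures that arrived with the copied data. The sketch below assumes 512-byte sectors and the same hypothetical device paths as the earlier example.

```python
SECTOR_SIZE = 512
DISK_SIGNATURE_OFFSET = 0x1B8


def rewrite_disk_signatures(signatures):
    """signatures: dict mapping device path -> the 4-byte signature saved earlier."""
    for device, signature in signatures.items():
        # Unbuffered I/O so reads and writes stay sector-aligned on the raw device.
        with open(device, "r+b", buffering=0) as disk:
            sector = bytearray(disk.read(SECTOR_SIZE))   # current first sector
            sector[DISK_SIGNATURE_OFFSET:DISK_SIGNATURE_OFFSET + 4] = signature
            disk.seek(0)
            disk.write(bytes(sector))                    # signature restored
```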


The present invention efficiently restores data for cluster servers 120.


The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims
  • 1. An apparatus to restore cluster server data at a volume level, the apparatus comprising: a setup module configured to open at least one source volume of a cluster server for a volume-level restore, flush each buffer for the at least one source volume, close the at least one source volume, disable file system checks for cluster disks associated with the at least one source volume, save disk signatures of the cluster disks, and disable device-level checks for the cluster disks; a copy module configured to copy data with a volume-level restore from at least one snapshot volume to the at least one source volume; and a reset module configured to rewrite the saved disk signatures to the cluster disks, re-enable the device-level checks for the cluster disks, and reset at least one volume attribute on the at least one source volume.
  • 2. The apparatus of claim 1, the setup module further configured to prepare an application for recovery.
  • 3. The apparatus of claim 2, the reset module further configured to direct the application to run a recovery operation.
  • 4. The apparatus of claim 3, wherein the application is configured as a Microsoft® Exchange Server.
  • 5. The apparatus of claim 1, the setup module further configured to direct an application to quiesce a data set of the at least one source volume.
  • 6. The apparatus of claim 1, the reset module further configured to: unmount the at least one source volume; mount the at least one source volume; and re-enable the file system checks for the cluster disks.
  • 7. A computer program product comprising a computer useable medium having a computer readable program, wherein the computer readable program when executed on a computer causes the computer to: open at least one source volume of a cluster server for a volume-level restore; flush each buffer for the at least one source volume; close the at least one source volume; disable file system checks for cluster disks associated with the at least one source volume; save disk signatures of the cluster disks; disable device-level checks for the cluster disks; copy data with a volume-level restore from at least one snapshot volume to the at least one source volume; rewrite the saved disk signatures to the cluster disks; re-enable the device-level checks for the cluster disks; and reset at least one volume attribute on the at least one source volume.
  • 8. The computer program product of claim 7, wherein the computer readable code is further configured to cause the computer to prepare an application for recovery.
  • 9. The computer program product of claim 8, wherein the computer readable code is further configured to cause the computer to direct the application to quiesce a data set of the at least one source volume.
  • 10. The computer program product of claim 8, wherein the computer readable code is further configured to cause the computer to: unmount the at least one source volume; mount the at least one source volume; and re-enable the file system checks for the cluster disks.
  • 11. The computer program product of claim 10, wherein the computer readable code is further configured to cause the computer to direct the application to run a recovery operation.
  • 12. The computer program product of claim 7, wherein the computer readable code is further configured to cause the computer to back up the data to the at least one snapshot volume using Volume Shadow Copy Service application program interfaces.
  • 13. A system to restore cluster server data at a volume level, the system comprising: a plurality of cluster servers; cluster disks comprising at least one source volume configured to store data for the plurality of cluster servers; snapshot disks comprising at least one snapshot volume configured to store a backup instance of the at least one source volume; a computer comprising a setup module configured to open the at least one source volume for a volume-level restore, flush each buffer for the at least one source volume, close the at least one source volume, disable file system checks for the cluster disks, save disk signatures of the cluster disks, and disable device-level checks for the cluster disks; a copy module configured to copy data with a volume-level restore from the at least one snapshot volume to the at least one source volume; and a reset module configured to rewrite the saved disk signatures to the cluster disks, re-enable the device-level checks for the cluster disks, and reset at least one volume attribute on the at least one source volume.
  • 14. The system of claim 13, wherein the computer is configured as a cluster server of the plurality of cluster servers.
  • 15. The system of claim 13, the setup module further configured to prepare an application for recovery.
  • 16. The system of claim 13, the reset module further configured to direct the application to run a recovery operation.
  • 17. The system of claim 13, the setup module further configured to direct an application to quiesce a data set of the at least one source volume.
  • 18. The system of claim 13, the reset module further configured to: unmount the at least one source volume; mount the at least one source volume; and re-enable the file system checks for the cluster disks.
  • 19. A method for deploying computer infrastructure, comprising integrating computer-readable code into a computing system, wherein the code in combination with the computing system is capable of performing the following: preparing an application for recovery; directing the application to quiesce a data set of at least one source volume; opening the at least one source volume of a cluster server for a volume-level restore; flushing each buffer for the at least one source volume; closing the at least one source volume; disabling file system checks for cluster disks associated with the at least one source volume; saving disk signatures of the cluster disks; disabling device-level checks for the cluster disks; copying data with a volume-level restore from at least one snapshot volume to the at least one source volume; rewriting the saved disk signatures to the cluster disks; re-enabling the device-level checks for the cluster disks; resetting at least one volume attribute on the at least one source volume; unmounting the at least one source volume; mounting the at least one source volume; re-enabling the file system checks for the cluster disks; and directing the application to run a recovery operation.
  • 20. The method of claim 19, wherein the method comprises backing up the data to the at least one snapshot volume using Volume Shadow Copy Service application program interfaces.