Systems and methods for managing replicated database data

Information

  • Patent Grant
  • Patent Number
    9,396,244
  • Date Filed
    Tuesday, March 31, 2015
  • Date Issued
    Tuesday, July 19, 2016
Abstract
Systems and methods for replicating database data and generating read-only copies of the replicated data in a clean shutdown state. For example, systems can include a tracking module (e.g., a filter driver) that monitors transactions from a database application to a source storage device to generate log entries having at least one marker indicating a known good state of the application. The systems further include a computer coupled to a target storage device comprising a database and log files. The computer processes the transactions, based on the log entries, to replicate data to the target storage device; performs a first snapshot on data stored in the database and log files; replays into the database data stored in the log files; performs another snapshot on the database; and reverts the database back to a state in which the database existed at the time of the first snapshot.
Description
INCORPORATION BY REFERENCE TO ANY PRIORITY APPLICATIONS

Any and all applications for which a foreign or domestic priority claim is identified in the Application Data Sheet, or any correction thereto, are hereby incorporated by reference into this application under 37 CFR 1.57.


BACKGROUND OF THE INVENTION

1. Field of the Invention


The present disclosure relates to systems and methods for replicating data of one or more database applications.


2. Description of the Related Art


Electronic information has become an integral part of business operations such that many banks, insurance companies, brokerage firms, financial service providers, and a variety of other businesses rely on computer networks to store, manipulate, and display information that is constantly subject to change. Oftentimes, the success or failure of an important transaction may turn on the availability of electronic information that is both accurate and current. Accordingly, businesses seek reliable, cost-effective ways to protect and later access the information stored on their computer networks.


Many approaches to protecting data involve creating a copy of the data, such as backing up and/or replicating a database to one or more storage devices. Data shadowing and mirroring, or duplexing, provide for copying but can require lengthy amounts of time, consume valuable processing power and/or occupy large amounts of storage space for large databases. Moreover, such storage management systems can have a significant impact on the source or primary system.


To address these drawbacks, certain storage systems utilize snapshot techniques to preserve a read-only copy of database data. In general, a snapshot records the state of a storage device, database file system, or volume at a certain point in time. That is, the snapshot may be used to provide a point-in-time image of a live storage volume. Additional operations can then be performed using the snapshot copy without affecting the performance of the live volume.


In certain circumstances, however, a snapshot of a database does not necessarily provide easily accessible data. For instance, snapshots of data from a MICROSOFT EXCHANGE database generated while EXCHANGE is online result in a copy of the data that is in a “dirty shutdown” state, which prevents the data from being read or otherwise accessed by standard application programming interfaces (APIs). Although one option is to shut down EXCHANGE each time a snapshot is to be performed on the database, such an option is not practical because the shutdowns can be time-consuming and costly.


SUMMARY OF THE INVENTION

In view of the foregoing, a need exists for improved systems and methods for database replication. For instance, there is a need for systems and methods for generating copies of database data in a clean shutdown state without taking the native database application offline. Moreover, a need exists for systems and methods that provide snapshots of database data in a useful condition without requiring the native application, or its accompanying APIs, for access to the database data.


In certain embodiments of the invention, systems and methods are disclosed for performing substantially continuous replication of a database, such as a MICROSOFT EXCHANGE database, and for providing usable read-only copies of the replicated database. For instance, snapshot systems and methods are disclosed that provide a snapshot of a replicated MICROSOFT EXCHANGE database that reflects data in a recoverable and clean shutdown state and that can be accessed or otherwise read using standard APIs without the need for the MICROSOFT EXCHANGE application program.


Such systems and methods allow known good replication copies to be viewed as copies of production volume data. For example, this technology, in certain embodiments, further allows a management component in the computing system to directly access, copy, restore, back up or otherwise manipulate the replication copies of production data as if the data were the production data of the source system, thereby improving various system performance characteristics such as access time, reducing memory requirements and reducing the impact on source, or client, applications.


In certain embodiments, a method is disclosed for managing replicated data in a database system. The method comprises monitoring data transactions associated with a database application, the data transactions operative to write data to at least one source storage device. The method further includes copying the data to a target storage device based at least in part on the data transactions, wherein the target storage device comprises a target database and target transaction log files, and wherein said monitoring and copying is performed without shutting down the database application. The method also includes: (i) generating a first snapshot of at least a portion of the data stored in the target database and transaction log files; (ii) replaying, into the target database, data stored in the target transaction log files as one or more transaction logs; (iii) generating a second snapshot of at least a portion of the target database that is indicative of stored data from the database application in a recoverable state; and then (iv) reverting the target database back to a state in which the target database existed at the time of the first snapshot.


In certain embodiments, a system is disclosed for performing data management operations in a computer network environment. The system includes a database application configured to execute on a source computer and a first storage device coupled to the source computer to receive data transactions from the database application. The system also includes a module, such as a filter driver, configured to monitor the data transactions and to generate log entries based on the data transactions, at least one of the log entries having a marker indicative of a time of a known good state of the database application. The system further includes a second storage device comprising a target database and target transaction log files and a target computer coupled to the second storage device. The target computer is configured to: (i) process, based on the log entries, the data transactions to replicate data to the second storage device; (ii) perform a first snapshot operation on data stored in both the target database and the target transaction log files; (iii) replay into the target database data stored in the target transaction log files; (iv) perform a second snapshot operation on at least a portion of the target database; and (v) revert the target database back to a state in which the target database existed at the time of the first snapshot.


In certain embodiments, a method is disclosed for copying data generated on a source database system in a computer network. The method comprises processing replication data indicative of data transactions generated by a database application executing on a source system, the database transactions being directed to operations on a source database of a source storage device; replaying the replication data on a target storage device to copy the data transactions to the target storage device, the target storage device comprising a target database and target transaction log files; creating a first read-only copy of at least a portion of data stored in the target database and the target transaction log files; committing to the target database transaction logs stored in the target transaction log files, said committing following said creating the first read-only copy; creating, following said committing, a second read-only copy of at least a portion of the target database, the second read-only copy being indicative of database data from the database application in a recoverable state; and reverting the target database to a state in which the target database existed at the time of said creating the first read-only copy.


In certain embodiments, a system is disclosed for performing data management operations in a computer network environment. The system includes means for processing replication data indicative of database operations generated by a database application executing on a source computer, the database operations being directed to data on a source storage device; means for replaying the replication data on a target storage device to copy data to the target storage device, the target storage device comprising a target database and transaction log files; means for performing a first snapshot operation on data stored in both the target database and the target transaction log files; means for replaying into the target database data stored in the transaction log files; said means for performing being further configured to perform a second snapshot operation on at least a portion of the target database following replaying of the data stored in the transaction log files, the second snapshot being indicative of database data from the database application in a clean shutdown state; and means for reverting the target database back to a state in which the target database existed at the time of performing the first snapshot, wherein at least one of said means is executed on one or more computing devices.


For purposes of summarizing the disclosure, certain aspects, advantages and novel features of the inventions have been described herein. It is to be understood that not necessarily all such advantages may be achieved in accordance with any particular embodiment of the invention. Thus, the invention may be embodied or carried out in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other advantages as may be taught or suggested herein.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates a block diagram of a database replication system according to certain embodiments of the invention.



FIG. 2 illustrates a block diagram of an exemplary embodiment of a database replication system configured to provide a snapshot of data in a clean shutdown state.



FIG. 3 illustrates a flowchart of an exemplary embodiment of a database replication process usable by the database replication systems of FIGS. 1 and 2.



FIG. 4 illustrates a flowchart of an exemplary embodiment of a double snapshot process of the data replication process of FIG. 3.



FIG. 5 illustrates a timeline of states of data during the database replication process of FIG. 3, according to certain embodiments of the invention.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Systems and methods disclosed herein provide for the offline access of a replicated database without needing to run the native database application. Moreover, disclosed systems and methods provide for the replication of database data without shutting down the native database application to put the replicated data in a “clean shutdown” state.


In certain embodiments of the invention, systems and methods are disclosed for performing substantially continuous replication of a database, such as a MICROSOFT EXCHANGE database. For instance, certain embodiments provide for the generation of one or more snapshots of a replicated MICROSOFT EXCHANGE database, wherein the snapshot data allows for offline access to the replicated database. For instance, in certain embodiments, the replicated database can be read using JET or like APIs in place of slower APIs (e.g., MAPI) associated with MICROSOFT EXCHANGE. As a result, the direct reading of the database allows for much faster access to the replicated data.


In general, replication of database data can pose particular drawbacks. For example, standard replication of data from certain database applications can result in data that is in a recoverable state, wherein committed transactions in the database can be recovered in case of a crash. In such a state, particular computing operations of the application are complete to a point such that further operation, recovery and/or rolling back of the application can occur, based on the committed transaction data. This point of referential integrity is generally referred to herein as a known good state of the application data.


However, without shutting down the database application, the data resulting therefrom is not in a clean shutdown state (i.e., transaction logs exist that are not committed to the database). Moreover, with MICROSOFT EXCHANGE databases, for instance, data in a dirty shutdown state is evidenced by a particular bit value and prevents standard APIs from accessing the data.
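
For illustration only, the sketch below shows one way the shutdown state of a replicated EXCHANGE-style database copy might be inspected by parsing the header output of the eseutil utility; the exact invocation, output format and sample path are assumptions and are not part of this disclosure.

    # Hedged sketch: detect whether a replicated database copy is in a clean or
    # dirty shutdown state by parsing the database header printed by eseutil
    # ("/mh" dumps the header, which includes a "State" field). Parsing details
    # and the sample path are illustrative assumptions.
    import re
    import subprocess

    def shutdown_state(edb_path: str) -> str:
        """Return 'clean', 'dirty', or 'unknown' for the database header state."""
        result = subprocess.run(
            ["eseutil", "/mh", edb_path],  # assumed invocation; adjust per environment
            capture_output=True, text=True, check=True,
        )
        match = re.search(r"State:\s*(.+)", result.stdout)
        if not match:
            return "unknown"
        return "clean" if "clean" in match.group(1).lower() else "dirty"

    if __name__ == "__main__":
        # Hypothetical path; a "dirty" result means the copy cannot be read by
        # standard APIs until its outstanding transaction logs are committed.
        print(shutdown_state(r"E:\Replica\MailboxDatabase.edb"))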


The features of the systems and methods will now be described with reference to the drawings summarized above. Throughout the drawings, reference numbers are re-used to indicate correspondence between referenced elements. The drawings, associated descriptions, and specific implementations are provided to illustrate embodiments of the invention and not to limit the scope of the disclosure.


In addition, methods and processes described herein are not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate. For example, described blocks or states may be performed in an order other than that specifically disclosed, or multiple blocks or states may be combined in a single block or state.


Moreover, certain embodiments of the invention described herein can utilize one or more intelligent data replication processes and/or components that replicate application-specific data from a source system to a destination system. Certain examples of such processes and components are described in U.S. patent application Ser. No. 11/640,826, filed Dec. 18, 2006, published as U.S. Patent Application Publication No. 2007-0185938 A1, and issued as U.S. Pat. No. 7,651,593, which is hereby incorporated herein by reference in its entirety to be considered part of this specification.



FIG. 1 illustrates a block diagram of a database replication system 100 according to certain embodiments of the invention. As shown, the replication system 100 comprises a source system 102 capable of communicating with a target system 104 by sending and/or receiving data over a network 106. For instance, in certain embodiments, the target system 104 receives and/or stores a replicated copy of at least a portion of data, such as application-specific data, associated with the source system 102.


In certain embodiments, the source system 102 comprises one or more computing devices capable of processing data and can include, for example, a server computer, a workstation, a personal computer, a cell phone, a portable computing device, a handheld computing device, a personal digital assistant (PDA) or the like.


The illustrated network 106 advantageously comprises any means for communicating data between two or more systems or components. In certain embodiments, the network 106 comprises a computer network. For example, the network 106 may comprise a public network such as the Internet, a virtual private network (VPN), a token ring or TCP/IP-based network, a wide area network (WAN), a local area network (LAN), an intranet network, a point-to-point link, a wireless network, a cellular network, a wireless data transmission system, a two-way cable system, an interactive kiosk network, a satellite network, a broadband network, a baseband network, combinations of the same or the like. In embodiments wherein the source system 102 and target system 104 are part of the same computing device, the network 106 may represent a communications socket or other suitable internal data transfer path or mechanism.


As shown, the source system 102 comprises one or more applications 108 residing on and/or being executed by a computing device. For instance, the applications 108 may comprise software applications that interact with a user to process data and may include, for example, database applications (e.g., SQL applications), word processors, spreadsheets, financial applications, management applications, e-commerce applications, browsers, combinations of the same or the like.


The source system 102 further comprises one or more processes, such as filter drivers 110, that interact with data (e.g., production data) associated with the applications 108. For instance, the filter driver 110 can comprise a file system filter driver, an operating system driver, a filtering program, a data trapping program, an application, a module of the application(s) 108, an API, or other like software module or process that, among other things, monitors and/or intercepts particular application requests targeted at a database, a file system, another file system filter driver, a network attached storage (“NAS”), a storage area network (“SAN”), mass storage and/or other memory or raw data. In some embodiments, the filter driver 110 may reside in the I/O stack of the application 108 and may intercept, analyze and/or copy certain data traveling from the application 108 to storage.


In certain embodiments, the filter driver 110 can intercept data modification operations that include changes, updates and new information (e.g., data writes) with respect to the application(s) 108 of interest. For example, the filter driver 110 may locate, monitor and/or process one or more of the following with respect to a particular application 108, application type or group of applications: data management operations (e.g., data write operations, file attribute modifications), logs or journals (e.g., NTFS change journal), configuration files, file settings, control files, other files used by the application 108, combinations of the same or the like. In certain embodiments, such data may also be gathered from files across multiple storage systems within the source system 102. Furthermore, the filter driver 110 may be configured to monitor changes to particular files, such as files identified as being associated with data of the applications 108.
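
A file system filter driver of this kind is kernel-mode and platform-specific; purely as an illustration of the idea, the user-space sketch below wraps a file, records each write as a log entry before forwarding it to storage, and keeps the entries for later replication. The LogEntry fields and the WriteInterceptor class are hypothetical names introduced here, not components of the disclosed system.

    # Minimal user-space sketch of the interception idea: capture every data write
    # as a log entry (path, offset, payload, timestamp), then pass the write through
    # to the underlying storage. All names and fields are illustrative assumptions.
    import time
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class LogEntry:
        path: str
        offset: int
        data: bytes
        timestamp: float = field(default_factory=time.time)
        marker: Optional[str] = None  # e.g., a consistency-point marker

    class WriteInterceptor:
        def __init__(self, path: str):
            self.path = path
            self.entries: List[LogEntry] = []
            self._fh = open(path, "r+b")

        def write(self, offset: int, data: bytes) -> None:
            self.entries.append(LogEntry(self.path, offset, data))  # capture for replication
            self._fh.seek(offset)                                   # then pass through to storage
            self._fh.write(data)
            self._fh.flush()

        def close(self) -> None:
            self._fh.close()

    if __name__ == "__main__":
        import os, tempfile
        tmp = tempfile.NamedTemporaryFile(delete=False)
        tmp.write(b"initial contents")
        tmp.close()
        interceptor = WriteInterceptor(tmp.name)
        interceptor.write(0, b"UPDATED!")
        print(len(interceptor.entries), "log entry captured")
        interceptor.close()
        os.unlink(tmp.name)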


In certain embodiments, multiple filter drivers 110 may be deployed on the source system 102, each filter driver 110 being dedicated to data of a particular application 108. In such embodiments, not all information associated with the source system 102 may be captured by the filter drivers 110 and, thus, the impact on system performance may be reduced. In other embodiments, the filter driver 110 may be suitable for use with multiple application types and/or may be adaptable or configurable for use with multiple applications 108.


The illustrated source system 102 further comprises one or more source storage devices 120. In certain embodiments, the source storage device(s) 120 are configured to store production data associated with one or more of the applications 108. The source storage device 120 may include any type of media capable of storing data. For example, the source storage device 120 may comprise magnetic storage (such as a disk or a tape drive) or other type of mass storage. In certain embodiments, the source storage device 120 may be internal and/or external to (e.g., remote to) the computing device(s) having the applications 108 and the filter drivers 110.


As illustrated in FIG. 1, the source storage devices 120 further comprise at least one database 122 and transaction log files 124. In particular, the transaction log files 124 can comprise a set of changes, such as, for example, insertions, deletions and updates received from the application(s) 108 that are to be applied to the data in the database 122.


In certain embodiments, the transaction log files 124 are stored on at least one dedicated disk so that the logs are not affected by any disk failures that can potentially corrupt the database 122. For instance, the transaction log files 124 can be stored on a high-performance disk while the database 122 is stored on one or more slower disks. In other embodiments, the transaction log files 124 and the database 122 can be maintained on the same storage medium.


As further illustrated in FIG. 1, the target system 104 comprises a replication module 128 that communicates with one or more target storage devices 130. In certain embodiments, the target system 104 comprises any computing device capable of processing data and includes, for example, a server computer, a workstation, a personal computer or the like.


In certain embodiments, the replication module 128 is configured to monitor and/or manage the copying of data from the source system 102 to the target system 104, such as data obtained by the filter drivers 110. For example, the replication module can receive one or more log files or entries, data transactions, or other like replication data indicative of the data transactions or operations being generated by the application 108 to modify data stored on the source storage device(s) 120. In yet other embodiments, the replication module 128 is a “dumb” server or terminal that receives and executes instructions from the source system 102.
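
As a rough illustration of this replay role, the sketch below applies a stream of log entries to a target file and pauses when it encounters a consistency-point marker; the dictionary entry format is a simplification carried over from the hypothetical interceptor above and is not the patent's log format.

    # Hedged sketch of the replay side: the replication module consumes log entries
    # produced on the source and applies them to the target copy, suspending when a
    # consistency-point marker is seen. The entry format is an illustrative assumption.
    from typing import Iterable, Optional

    def replay_entries(entries: Iterable[dict], target_path: str) -> Optional[dict]:
        """Apply entries to the target; return the marker entry that paused replay, if any."""
        with open(target_path, "r+b") as target:
            for entry in entries:
                if entry.get("marker"):
                    return entry  # suspend here: the copy is at a known good state
                target.seek(entry["offset"])
                target.write(entry["data"])
        return None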


The target storage device(s) 130 may include any type of media capable of storing data, such as replication data sent from the source system 102. For example, the target storage device 130 may comprise magnetic storage (such as a disk or a tape drive) or other type of mass storage. In certain embodiments, the target storage device 130 may be internal and/or external to the computing device(s) having the replication module 128.


In certain embodiments, the source storage device 120 and/or the target storage device 130 may be implemented as one or more storage “volumes” that include physical storage disks defining an overall logical arrangement of storage space. For instance, disks within a particular volume may be organized as one or more groups of redundant array of independent (or inexpensive) disks (RAID). In certain embodiments, either or both of the storage devices 120, 130 can include multiple storage devices of the same or different media.


As shown in FIG. 1, the target storage device(s) 130 at least comprise a database 132 and transaction log files 134. In particular, the target database 132 is generally synchronized with the source database 122. Likewise, the target transaction log files 134 can comprise a replication of those log files present in the transaction log files 124 of the source storage device 120. As discussed above, in certain embodiments, the database 132 and the transaction log files 134 are advantageously maintained on separate media such that the transaction log files 134 can be stored on a relatively high-performance storage medium. In yet other embodiments, the database 132 and transaction log files 134 are stored in different volumes of the same disk.


As further shown, the replication system 100 is structured to generate read-only copies 136 of the replicated data stored in the target storage device(s) 130. For instance, such copies 136 can be generated according to one or more schedules, storage policies (e.g., user- or system-defined), or the like. In certain embodiments, the replication module 128 coordinates the generation of the copies 136. For example, the replication module 128 can comprise a data agent module executing on the replication system 100 that manages read-only copies (e.g., snapshots) of the replicated data. In yet other embodiments, a storage manager, a stand-alone application, or the like, manages the generation of the read-only copies 136.


In certain embodiments, the read-only copies 136 advantageously comprise a plurality of snapshots. For instance, the snapshots can reflect point-in-time copies of the database data. In certain embodiments, the snapshots allow for access to and/or manipulation of database data without affecting the production data on the source storage device(s) 120. Moreover, in certain embodiments, the snapshots record database data in a clean shutdown mode and allow for offline access of the data without the need for the native database application 108.


In certain embodiments, the source system 102 communicates with the associated target system 104 to verify that the two systems are synchronized. For instance, the source system 102 may receive from the target system 104 an identification (e.g., unique serial number) of a transaction log last committed by the target system 104. The source system 102 may then compare the received identification with state of the data in the source storage device(s) 120.
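
One plausible reading of this verification step is sketched below: the target reports the identifier of the last transaction log it committed, and the source compares that identifier against its own newest log. The integer generation-number scheme is an assumption made only for illustration.

    # Illustrative sketch of the synchronization check between source and target.
    # The integer "generation number" identification scheme is an assumption.
    from typing import List

    def is_synchronized(source_generations: List[int], target_last_committed: int) -> bool:
        if not source_generations:
            return True
        return target_last_committed >= max(source_generations)

    # Example: the source has produced logs 1..42; a target reporting 42 is in sync,
    # while a target reporting 40 still has logs to commit.
    assert is_synchronized(list(range(1, 43)), 42)
    assert not is_synchronized(list(range(1, 43)), 40)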



FIG. 2 illustrates further details of a replication system 200 in accordance with certain embodiments of the invention. In particular, FIG. 2 illustrates additional details of an embodiment of the replication system 100 of FIG. 1. Thus, for ease of reference and description, elements of the replication system 200 of FIG. 2 will not be redescribed in detail if they were described above. Rather, the elements in the embodiment of FIG. 2 will be given a reference numeral that retains the same last two digits as the reference numeral used in the embodiment of FIG. 1, and the last two digits will be preceded with a numeral “2.” Thus, the replication system 200 generally corresponds to the replication system 100 with certain differences that will be illuminated in the following discussion.


As shown, the replication system 200 comprises a source system 202 that communicates with a target system 204 via a network 206. In particular, the target system 204 is structured to replicate database data originating from and/or stored by the source system 202.


The illustrated source system 202 comprises a database application 208 that interacts with one or more users or programs to generate production data. For instance, the database application 208 can comprise one or more applications that require transactions to be played or committed in order for the data of the application 208 to be at a consistent point or state.


For exemplary purposes, the database application 208 will often be described hereinafter with reference to a MICROSOFT EXCHANGE environment. In yet other embodiments, other types of database applications and/or collaboration applications can be used, such as, for example, ACTIVE DIRECTORY, ORACLE, other JET-based applications or the like.


With continued reference to the source system 202, a filter driver 210 tracks file changes being generated by the application 208. For example, the filter driver 210 can be configured to populate data relating to transactions of the database application 208 in source logs to be replicated to the target system 204.


In certain embodiments, the filter driver 210 may be deployed in the stack as an I/O buffer and/or process in the data path between the database application 208 and a memory 212. In such embodiments, the filter driver 210 may intercept, snoop, supervise, trap, process or otherwise be cognizant of some or all operations (e.g., data modification operations, file modification operations, read operations and the like) from the database application 208 to its associated location(s) on the source storage device(s) 220.


In embodiments of the invention having multiple database applications 208, multiple filter drivers 210 can be used such that each filter driver 210 corresponds to a single database application 208. In such embodiments, data relating to each database application 208 of interest can be written to a particular log file established for that application. In yet other embodiments, a single filter driver 210 can communicate with multiple database applications 208.


In certain embodiments, the transactions from the database application 208 are preferably stored in the memory 212 after or while being logged by the filter driver 210. A database engine 214 then commits the transactions stored in the memory 212 to disk (source storage device(s) 220).


The source storage device(s) 220 comprise one or more media for storing production data relating to the database application 208. As shown, the source storage device 220 includes a database or information store 222 for storing the committed transaction data from the database application 208. The source storage device(s) 220 further include transaction log files 224 that represent a sequence of files to keep a secure copy on disk of the data (e.g., uncommitted transaction logs) in the memory 212.


As described above with reference to FIG. 1, the replication system 200 is configured to replicate data, such as through a continuous data replication process, to target storage device(s) 230 of the target system 204. To facilitate and/or coordinate such replication, the target system 204 includes a replication module 228.


In certain embodiments, the replication module 228 comprises and/or communicates with one or more replay threads, processes or routines that populate data to the target storage device(s) 230 during the replication process. For instance, the replication module 228 can replay data from log files that are received from the source system 202. In certain embodiments, the target storage device(s) 230 comprise one or more databases 232 and transaction log files 234 that store a copy of the data residing in, respectively, the source database(s) 222 and the transaction log files 224.


The target system 204 is further configured to generate one or more snapshots 236 of data stored in the target storage device(s) 230. In certain embodiments, the snapshot 236 comprises a read-only copy of the data stored in the database 232 and the transaction log files 234, as discussed in more detail above.


In certain embodiments, the replication system 200 can optionally include one or more additional components for further coordinating and/or facilitating replication between the source system 202 and the target system 204. For instance, FIG. 2 further illustrates the source system 202 comprising an optional data agent 216.


In certain embodiments, the data agent 216 comprises a module responsible for performing data and/or storage tasks related to the source system 202. For example, the data agent 216 may manage and/or coordinate the compilation of and/or transferring of replication data from the source system 202. In other embodiments, the data agent 216 may provide archiving, migrating, and/or recovery of system data.


In certain embodiments, the source system 202 comprises a plurality of data agents 216, each of which performs data management operations related to data associated with each database application 208. In such embodiments, the data agent 216 may be aware of the various logs, files, folders, registry files and/or system resources that are impacted by a particular database application 208.


In certain embodiments, the data agent 216 is configured to perform data management operations in accordance with one or more “storage policies” or other preferences. In certain embodiments, a storage policy includes a data structure or other information having a set of preferences and/or other storage criteria for performing a storage operation. The preferences and storage criteria may include, but are not limited to, information regarding storage locations and/or timing, relationships between system components, network pathways, retention policies, data characteristics, compression or encryption requirements, preferred system components, combinations of the same or the like.
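
For concreteness, a storage policy of this kind can be pictured as a simple data structure; every field name in the sketch below is an illustrative assumption suggested by the preferences listed above, not the patent's schema.

    # Hedged sketch of a storage policy as a plain data structure. All field names
    # are illustrative assumptions drawn from the preferences described above.
    from dataclasses import dataclass

    @dataclass
    class StoragePolicy:
        name: str
        target_volume: str
        snapshot_interval_minutes: int = 5   # how often to quiesce and snapshot
        retention_days: int = 30             # how long to keep read-only copies
        compress: bool = False
        encrypt: bool = False
        preferred_network_path: str = "LAN"

    exchange_policy = StoragePolicy(
        name="exchange-replica",
        target_volume=r"\\target\replica_vol1",
        snapshot_interval_minutes=15,
        retention_days=14,
    )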


In certain further embodiments, the replication system 200 can optionally include a storage manager 238 that communicates with the source system 202 and the target system 204. In certain embodiments, the storage manager 238 directs the performance of one or more storage operations and, in particular, the replication of data from the source system 202 to the target system 204. In further embodiments, the storage manager 238 may perform one or more of the operations or functions described above with respect to the data agent 216 and/or the replication module 228. For instance, the storage manager 238 may direct and/or coordinate the performance of one or more storage operations on the replicated data (e.g., snapshots of the replicated data) of the target storage device(s) 230.


In certain embodiments, the storage manager 238 maintains an index (not shown), such as a cache, for storing information relating to: logical relationships and associations between components of the replication system 200, user preferences, management tasks, and/or other useful data. For example, the storage manager 238 may use its index to track the location and timestamps of one or more snapshots of the replicated data.


The storage manager 238 may also use its index to track the status of data management operations to be performed, storage patterns associated with the system components such as media use, storage growth, network bandwidth, Service Level Agreement (“SLA”) compliance levels, data protection levels, storage policy information, storage criteria associated with user preferences, retention criteria, storage operation preferences, and/or other storage-related information. The index may typically reside on the storage manager's hard disk and/or other database.
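
A minimal sketch of such an index is shown below, mapping each snapshot to its location, consistency-point time and creation time; the structure and field names are assumptions introduced only for illustration.

    # Hedged sketch of an index the storage manager might keep for snapshots of
    # the replicated data. The record layout is an illustrative assumption.
    import time
    from typing import Dict, Optional

    class SnapshotIndex:
        def __init__(self) -> None:
            self._records: Dict[str, dict] = {}

        def record(self, snapshot_id: str, volume: str, consistency_point: float) -> None:
            self._records[snapshot_id] = {
                "volume": volume,
                "consistency_point": consistency_point,
                "created_at": time.time(),
            }

        def latest(self) -> Optional[str]:
            if not self._records:
                return None
            return max(self._records, key=lambda k: self._records[k]["created_at"])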


In certain embodiments, the storage manager 238 further communicates with a database (not shown) for storing system management information relating to the replication of data. For instance, the storage manager database may be configured to store storage and/or restore policies, user preferences, the status or location of system components or data, combinations of the same and the like. In yet other embodiments, the storage manager database may be configured to store information described above with respect to the storage manager index. In yet other embodiments, at least a portion of the storage manager index may be stored on the storage manager database.


In other embodiments, the storage manager 238 may alert the user or system when a particular resource of the replication system 200 is unavailable or congested or when components are unavailable due to hardware failure, software problems, or other reasons. In certain embodiments, the storage manager 238 may utilize replication system data to suggest solutions to such problems when they occur or even before they occur. For example, the storage manager 238 might alert the user that a storage device in the target system 204 is full or otherwise congested, and then suggest, based on job and data storage information contained in its index cache, an alternate storage device. In yet further embodiments, the storage manager 238 or other system component may take action to remedy the problem at issue. For example, the storage manager 238 may perform load balancing, error correction, or the like, based on information received regarding the target system 204.


In certain embodiments, the storage manager 238 may include other components and/or modules. For example, the storage manager 238 may include a jobs agent module (not shown) that monitors the status of storage operations that have been performed, that are being performed, or that are scheduled to be performed in the replication system 200.


Moreover, the storage manager 238 may include an interface agent module (not shown). In certain embodiments, the interface agent module may provide presentation logic, such as a graphical user interface (“GUI”), an API, or other interface by which users and system processes may be able to retrieve information about the status of storage operations and issue instructions to the replication system 200 regarding the performance of storage operations. For example, a user may modify the schedule of a number of pending snapshot copies or other types of copies. As another example, a user may use the GUI to view the status of all storage operations currently pending in the replication system 200 or the status of particular components in the replication system 200.


Additional details of storage manager modules useful with embodiments of the replication systems 100, 200 described herein are disclosed in U.S. patent application Ser. No. 09/354,063, filed Jul. 15, 1999, now U.S. Pat. No. 7,389,311, which is hereby incorporated herein by reference in its entirety.



FIG. 3 illustrates a flowchart of an exemplary embodiment of a database replication process 300. In certain embodiments, the process 300 creates a crash consistent copy of database data in a clean shutdown state such that the data can be read or otherwise utilized without needing the native database application or its accompanying APIs. In certain embodiments, the replication process 300 can be performed by the database replication systems 100, 200 of FIGS. 1 and 2. For exemplary purposes, the replication process 300 will be described hereinafter with respect to the components of the replication system 200 of FIG. 2.


The replication process 300 begins at Block 305, wherein the database application 208 is quiesced. For example, in certain embodiments, the Volume Shadow Copy Service (VSS) offered by MICROSOFT can be used to temporarily quiesce write requests from the database application 208, such as MICROSOFT EXCHANGE, to the source storage device 220. For instance, in such embodiments, the data agent 216 can insert a marker in the application data (e.g., transaction logs) that indicates the data is in a recoverable (e.g., crash consistent, stable) state.
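
The VSS interaction itself is Windows-specific; the sketch below illustrates only the sequencing of this block with hypothetical helpers: pause application writes, append a consistency-point marker to the replication log stream, then resume. None of the names correspond to an actual VSS or EXCHANGE API.

    # Hedged sketch of the quiesce step (Block 305): suspend application writes,
    # append a consistency-point marker to the log stream, then resume. The
    # suspend/resume helpers stand in for a VSS-style freeze and are assumptions.
    import contextlib
    import time

    @contextlib.contextmanager
    def quiesced(app):
        app.suspend_writes()            # hypothetical stand-in for a VSS freeze
        try:
            yield
        finally:
            app.resume_writes()         # hypothetical stand-in for a VSS thaw

    def mark_consistency_point(log_entries: list) -> None:
        log_entries.append({"marker": "CONSISTENCY_POINT", "timestamp": time.time()})

    class DummyApp:                     # stand-in for the database application 208
        def suspend_writes(self): print("writes suspended")
        def resume_writes(self): print("writes resumed")

    if __name__ == "__main__":
        log: list = []
        with quiesced(DummyApp()):
            mark_consistency_point(log)
        print(log)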


At Block 310, the replication module 228 suspends the replay threads on the target system 204 that are replicating the data to the target storage device 230. In certain embodiments, such suspension occurs based on the threads' detection of the markers inserted by the VSS service. At this point, the target storage device(s) 230 and the source storage device(s) 220 are preferably synchronized and/or include data that is in a crash recoverable state, although the data is not in a “clean shutdown” state.


With the replay threads suspended, the replication module 228 initiates a first snapshot of the target volume, including the database 232 and the transaction log files 234 (Block 315). In certain embodiments, the first snapshot includes a read-only copy of the database application data in a recoverable state. However, as discussed above, this data is generally not in a clean shutdown state and does not allow for access of the data without the use of the native application (e.g., MICROSOFT EXCHANGE) to interpret the data.


At Block 320, the copy of the uncommitted transaction logs stored in the target transaction log files 234 is replayed into the target database 232. In certain embodiments, such replaying is performed through the execution of one or more JET APIs. After this point, the copy of the database application 208 data in the target database 232 is advantageously in a clean shutdown state since the pending transaction logs have been committed. That is, the copy of the data in the database 232 is in a state that can be read offline without the use of the native database application 208.
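
The patent performs this replay through JET APIs; as a rough stand-in, the sketch below shells out to eseutil soft recovery, which commits outstanding transaction logs and brings an EXCHANGE-style database to a clean shutdown state. The log prefix, directories and the substitution of eseutil for direct JET calls are assumptions made only for illustration.

    # Hedged sketch of Block 320: commit the replicated transaction logs into the
    # target database so the copy reaches a clean shutdown state. The disclosed
    # embodiment uses JET APIs; this stand-in invokes eseutil soft recovery instead.
    # The log prefix and paths are illustrative assumptions for a typical layout.
    import subprocess

    def replay_logs_into_target(log_prefix: str, log_dir: str, db_dir: str) -> None:
        subprocess.run(
            ["eseutil", "/r", log_prefix, "/l", log_dir, "/d", db_dir],
            check=True,
        )

    if __name__ == "__main__":
        # Hypothetical target-volume layout for the replicated logs and database.
        replay_logs_into_target("E00", r"T:\Replica\Logs", r"T:\Replica\Database")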


At Block 325, the replication module 228 initiates a second snapshot of the target volume. Because the target transaction logs have been replayed into the target database 232, the second snapshot advantageously comprises a read-only copy of application data in both a recoverable and clean shutdown state.


However, with the replaying of the copy of the transaction logs at Block 320, the data on the target storage device(s) 230 becomes out of sync with the production data stored on the source storage device(s) 220. Thus, prior to resuming replication between the source storage device(s) 220 and the target storage device(s) 230, the replication process 300 reverts the target volume back to the state of the first snapshot (Block 330). In certain embodiments, this revert process is performed by taking a difference (e.g., file changes that have been cached) of the first and second snapshots, as is discussed in more detail with respect to FIG. 4.


Following Block 330, the replication module 228 thaws the replay threads and allows the replication between the source and target storage devices 220, 230 to resume (Block 335).
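
Taken together, Blocks 305 through 335 can be summarized in the following sketch; every object and method named here is a hypothetical stand-in for the corresponding component of FIG. 2, and the sequence is a paraphrase of the process just described rather than a definitive implementation.

    # Hedged end-to-end sketch of replication process 300 (Blocks 305-335). All
    # helpers are hypothetical stand-ins for the components described above.
    def replicate_with_clean_snapshot(source_app, replayer, target_volume):
        source_app.quiesce()                  # Block 305: pause writes at a known good state
        replayer.suspend_on_marker()          # Block 310: replay threads stop at the marker
        first = target_volume.snapshot()      # Block 315: recoverable, dirty-shutdown copy
        target_volume.replay_pending_logs()   # Block 320: commit logs -> clean shutdown state
        second = target_volume.snapshot()     # Block 325: clean-shutdown, offline-readable copy
        target_volume.revert_to(first)        # Block 330: undo the replay before resuming
        replayer.resume()                     # Block 335: thaw replay threads
        source_app.resume()
        return second                         # preserved as the usable read-only copy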


Although the database replication process 300 has been described with reference to a particular arrangement, other embodiments of the invention can include more, fewer or different blocks or states. For instance, in other embodiments, instead of snapshots, other types of read-only copies can be performed on the data in the target storage device(s) 230.


In yet other embodiments, at Block 305, the data agent 216 and/or the filter driver 210 can be advantageously configured to pause, or quiesce, the database application 208 during data replication. For instance, the data agent 216 may cause the application 208 to temporarily suspend data management operations to the source storage device 220 once the application 208 reaches a known “good,” “stable” or “recoverable” state.


In certain embodiments, the data agent 216 instructs the quiescing of the application 208 through an application programming interface (API). When the application 208 has placed itself in a known good state, the application 208 may send an acknowledgment to the data agent 216.


In certain embodiments, once the data management operations are suspended, the filter driver 210 and/or data agent 216 then inserts a logical marker or tag in the source log file denoting that a “consistency point” or “consistency recovery point” has been reached. In some embodiments, the consistency point indicates the time at which the application 208 is at a known good state.


Moreover, in certain embodiments, the target system 204 is further capable of performing one or more data management operations, such as, for example, storage operations (e.g., backup), search operations, data classification, combinations of the same or the like on the replicated data at certain consistency points. Performing data management operations on the replicated data allows for the processing of copies of application data without significantly impacting the resources of the source system. Furthermore, when copying the replicated data at consistency points, the copied data presumably represents a known good state of the application.


In certain embodiments of the invention, at Block 305 the application 208 is periodically quiesced based on particular criteria. For instance, the quiescing of the application 208 may be based on one or more system- or user-defined preferences (e.g., every five minutes). The periodic quiescing of the application 208 may be based on the desired frequency of performing replication, backup or other data modification operations on the subject data. For instance, applications 208 dealing with data-sensitive information may necessitate more frequent quiescing (and creation of consistency points) than other types of applications.


In yet other embodiments, quiescing of the application 208 may be performed based on an automatic reporting procedure. For instance, a module of the replication system 200 may be configured to gather, receive and/or analyze information associated with a failure rate and/or health of applicable servers. Additional details of such status monitoring are provided in U.S. patent application Ser. No. 11/120,619, filed May 2, 2005, now published as U.S. Patent Application Publication No. 2006-0053261 A1, and issued as U.S. Pat. No. 7,343,453, which is hereby incorporated herein by reference in its entirety.



FIG. 4 illustrates a flowchart of an exemplary embodiment of a double snapshot process 400 that may be used when performing certain blocks (e.g., Blocks 315 to 330) of the data replication process 300 of FIG. 3. For exemplary purposes, the double snapshot process 400 will be described hereinafter with respect to the components of the replication system 200 of FIG. 2.


At Block 405, the process 400 performs a first snapshot of the target volume, including the target database 232 and the target transaction log files 234. Following the first snapshot, the transaction logs are replayed into the target database 232 to put the copy of the data in a clean shutdown state. During the replay of the transaction logs (e.g., by a JET API), the overwritten bits in the target database 232 can be moved to a first cache location (Block 410).


The process 400 then performs a second snapshot of the target volume, wherein the target volume represents the application data in a clean shutdown state (Block 415). Following the second snapshot, the process 400 reverts the target volume back to the state in which it existed at the time of the first snapshot. To do so, the new bits added between the first and second snapshots (i.e., during the replaying of the transaction logs) are moved to a second cache location (Block 420). The overwritten bits stored in the first cache location are then moved back to the target volume (Block 425).
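
A block-level sketch of these cache mechanics is given below, modeling the target volume as a mapping from block identifiers to contents: overwritten blocks are parked in a first cache during the replay, post-replay blocks are moved to a second cache, and the first cache is then written back. The block-dictionary model is an illustrative assumption and ignores real volume and extent layouts.

    # Hedged sketch of the cache-based revert (Blocks 410-425). The target volume
    # is modeled as a dict of block_id -> bytes; real implementations work at the
    # volume or extent level, so this model is an illustrative assumption.
    from typing import Dict

    def replay_with_cache(volume: Dict[int, bytes], replayed: Dict[int, bytes],
                          overwritten_cache: Dict[int, bytes]) -> None:
        """Block 410: while replaying logs, save each overwritten block before changing it."""
        for block_id, new_bytes in replayed.items():
            if block_id in volume:
                overwritten_cache[block_id] = volume[block_id]
            volume[block_id] = new_bytes

    def revert_to_first_snapshot(volume: Dict[int, bytes], replayed: Dict[int, bytes],
                                 overwritten_cache: Dict[int, bytes],
                                 new_block_cache: Dict[int, bytes]) -> None:
        """Blocks 420-425: move post-replay blocks aside, then restore the saved blocks."""
        for block_id in replayed:
            new_block_cache[block_id] = volume.pop(block_id)
        volume.update(overwritten_cache)

    if __name__ == "__main__":
        volume = {1: b"File A", 2: b"File B"}
        overwritten, added = {}, {}
        changes = {2: b"File B'", 3: b"File C"}
        replay_with_cache(volume, changes, overwritten)
        revert_to_first_snapshot(volume, changes, overwritten, added)
        assert volume == {1: b"File A", 2: b"File B"}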


At Block 430, the second snapshot is preserved as a consistent recovery point in a clean shutdown state. In certain embodiments, a customized driver is used to preserve the snapshot (e.g., the snapshot is not deleted).


In certain embodiments, the first and second cache locations can be on different storage devices or volumes. In yet other embodiments, the first and second cache locations can be located on the same storage device.



FIG. 5 illustrates an exemplary timeline 500 of states of data during the database replication process 300 of FIG. 3, according to certain embodiments of the invention. In particular, the timeline 500 includes a simplified representation of the states of files on a source system (e.g., source storage device 220) and a target system (e.g., target storage device 230) during replication.


As shown, at State A, the source database includes File A and File B and the source log files include a transaction relating to the creation of File C and the modification of File B (i.e., File B′). With reference to the target system, the target database and the target log files are synchronized with the source system.


At State B, a first snapshot (i.e., Snapshot A) is taken of the target system. As discussed above with reference to Block 315 of the replication process 300, the application data from the target database is in a dirty shutdown state.


At State C, an additional transaction log for the creation of a File D has been added to the source log files. Moreover, the target transaction logs (Files B′ and C) have been replayed from the target log files to the target database (see, e.g., Block 320). Thus, the target database includes File A, File B′ and File C. At this point, the data in the target database is in a clean shutdown state, and no outstanding transaction logs are left in the target log files. As further illustrated, the system then takes a second snapshot (Snapshot B) of at least the data in the target database to create a consistent, recoverable copy of the database application data that can be accessed and utilized without the native database application which created the data (see, e.g., Block 325).


At State D, the target database is reverted back to the state in which it existed at the time Snapshot A was taken (see, e.g., Block 330), and the target log files are repopulated with transactions relating to Files B′ and C. Moreover, at State E, a transaction log relating to File D is replicated from the source system to the target system.


At State F, the transaction logs in both the source and target log files relating to File B′ and File C are replayed, respectively, into the source and target databases. Moreover, an additional transaction log relating to File E is added to the source log files and replicated to the target system such that the source and target systems are re-synchronized.


In certain embodiments of the invention, data replication systems and methods disclosed herein may be used in a modular storage management system, embodiments of which are described in more detail in U.S. Pat. No. 7,035,880, issued Apr. 5, 2006, which is hereby incorporated herein by reference in its entirety. For example, the data replication system may be part of a storage operation cell that includes combinations of hardware and/or software components directed to performing storage operations on electronic data. Exemplary storage operation cells usable with embodiments of the invention include CommCells as embodied in the QNet storage management system and the QiNetix storage management system by CommVault Systems, Inc. (Oceanport, N.J.), and as further described in U.S. patent application Ser. No. 10/877,831, filed Jun. 25, 2004, now published as U.S. Patent Application Publication No. 2005-0033800 A1, issued as U.S. Pat. No. 7,454,569, which is hereby incorporated herein by reference in its entirety.


Systems and modules described herein may comprise software, firmware, hardware, or any combination(s) of software, firmware, or hardware suitable for the purposes described herein. Software and other modules may reside on servers, workstations, personal computers, computerized tablets, PDAs, and other devices suitable for the purposes described herein. Software and other modules may be accessible via local memory, via a network, via a browser, or via other means suitable for the purposes described herein. Data structures described herein may comprise computer files, variables, programming arrays, programming structures, or any electronic information storage schemes or methods, or any combinations thereof, suitable for the purposes described herein. User interface elements described herein may comprise elements from graphical user interfaces, command line interfaces, and other interfaces suitable for the purposes described herein.


Embodiments of the invention are also described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, may be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the acts specified in the flowchart and/or block diagram block or blocks.


These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the acts specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operations to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the acts specified in the flowchart and/or block diagram block or blocks.


While certain embodiments of the inventions have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the disclosure. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the disclosure. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the disclosure.

Claims
  • 1. A system for performing data management operations, the system comprising: at least one target computer comprising computer hardware that communicates with at least one source computer to receive information about a source database created by a native application, the source database stored on a first storage device; the target computer replicates at least a portion of the source database to generate a target database on a second storage device, the target database existing in a first state at a first time, wherein the target database in the first state is in a first format that is readable by use of the native application; the target computer copies at least a portion of the target database in the first state to create a first copy, the first copy stored in the first state; at a second time subsequent to creating the first copy, the target computer commits a plurality of data transactions to the target database that puts the target database into a second state, wherein the target database in the second state is readable offline without use of the native application; the target computer copies at least a portion of the target database in the second state to create a second copy that includes data corresponding to the plurality of committed data transactions, wherein the second copy is stored in the second state that is readable offline without use of the native application; and the target computer reverts the target database in the second state back to the first state that is in a first format that is readable by use of the native application based on differences between the first copy in the first state and the second copy in the second state.
  • 2. The system of claim 1 wherein the target database is put into the second state without taking the native database application offline.
  • 3. The system of claim 1 wherein the target computer commits the plurality of data transactions into the target database prior to the plurality of data transactions being committed into the source database.
  • 4. The system of claim 1 wherein at least one of the plurality of data transactions comprise a marker indicative of a time of a known good state of a database application.
  • 5. The system of claim 1 wherein copying the portion of the target database comprises a point-in-time snapshot operation.
  • 6. The system of claim 1 wherein the target computer further comprises a replication module having a plurality of threads that commit the plurality of data transactions.
  • 7. The system of claim 1 wherein the target computer comprises at least one application programming interface that commits the plurality of data transactions.
  • 8. The system of claim 1 further comprising a filter driver that monitors the plurality of data transactions associated with the source database and generates log entries.
  • 9. The system of claim 1 wherein the second copy is a read-only copy of the target database.
  • 10. The system of claim 1 wherein creating the target database occurs without shutting down a database application.
  • 11. A method for performing data management operations, the method comprising: replicating at least a portion of a source database stored on a first storage device, to generate a target database on a second storage device, the source database created by a native application, the target database existing in a first state at a first time, wherein the target database in the first state is in a first format that is readable by use of the native application; copying at least a portion of the target database in the first state to create a first copy, the first copy stored in the first state; at a second time subsequent to creating the first copy, committing a plurality of data transactions to the target database that puts the target database into a second state, wherein the target database in the second state is readable offline without use of the native application; copying at least a portion of the target database in the second state to create a second copy that includes data corresponding to the plurality of committed data transactions, wherein the second copy is stored in the second state that is readable offline without use of the native application; and reverting the target database in the second state back to the first state that is in a first format that is readable by use of the native application based on differences between the first copy in the first state and the second copy in the second state.
  • 12. The method of claim 11 further comprising putting the target database into the second state without taking the native database application offline.
  • 13. The method of claim 11 wherein at least one of the plurality of data transactions comprises a marker indicative of a time of a known good state of a database application.
  • 14. The method of claim 11 wherein copying the portion of the target database comprises a point-in-time snapshot operation.
  • 15. The method of claim 11 wherein committing the plurality of data transactions comprises using a replication module having a plurality of threads.
  • 16. The method of claim 11 wherein committing the plurality of data transactions comprises using a programming interface.
  • 17. The method of claim 11 further comprising monitoring the plurality of data transactions associated with the source database with a filter driver and generating log entries.
  • 18. The method of claim 11 wherein the second copy is a read-only copy of the target database.
  • 19. The method of claim 11 wherein creating the target database occurs without shutting down a database application.
  • 20. The method of claim 11 wherein creating the target database occurs without shutting down a database application.
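The method recited in claims 11-20 above follows a replay-and-revert sequence: replicate to a target, snapshot the target in its first (application-readable) state, commit the replicated log transactions so the target reaches a second state that is readable offline, snapshot again, and then revert the target to the first state using the differences between the two copies. The following is a minimal, hypothetical sketch of that sequence only; the snapshot, replay, and revert helpers are assumptions standing in for whatever storage-layer and database APIs an actual implementation would use, and none of the names below come from the patent itself.

```python
# Illustrative sketch (not the patented implementation) of the
# snapshot -> replay -> snapshot -> revert sequence described in the claims.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class TargetVolume:
    """Toy stand-in for the target database volume and its replicated log."""
    pages: Dict[int, bytes] = field(default_factory=dict)              # database pages
    pending_log: List[Tuple[int, bytes]] = field(default_factory=list)  # replicated, uncommitted transactions

    def snapshot(self) -> Dict[int, bytes]:
        """Point-in-time image of the volume (modeled here as a plain copy)."""
        return dict(self.pages)

    def replay_pending_log(self) -> None:
        """Commit replicated log entries so the database reaches a clean, offline-readable state."""
        for page_id, data in self.pending_log:
            self.pages[page_id] = data
        self.pending_log.clear()

    def revert(self, before: Dict[int, bytes], after: Dict[int, bytes]) -> None:
        """Undo only the pages that differ between the two snapshots."""
        for page_id in after:
            if before.get(page_id) != after.get(page_id):
                if page_id in before:
                    self.pages[page_id] = before[page_id]
                else:
                    self.pages.pop(page_id, None)


def make_offline_readable_copy(volume: TargetVolume) -> Dict[int, bytes]:
    """Produce a clean (offline-readable) copy without leaving the target in that state."""
    first_copy = volume.snapshot()           # first state: readable via the native application
    volume.replay_pending_log()              # second state: clean shutdown, readable offline
    second_copy = volume.snapshot()          # copy including the committed transactions
    volume.revert(first_copy, second_copy)   # return the live target to the first state
    return second_copy


if __name__ == "__main__":
    vol = TargetVolume(pages={1: b"base"}, pending_log=[(1, b"updated"), (2, b"new page")])
    clean_copy = make_offline_readable_copy(vol)
    print(clean_copy)   # {1: b'updated', 2: b'new page'}  -- offline-readable copy
    print(vol.pages)    # {1: b'base'}                     -- target reverted to first state
```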
US Referenced Citations (642)
Number Name Date Kind
4296465 Lemak Oct 1981 A
4686620 Ng Aug 1987 A
4995035 Cole et al. Feb 1991 A
5005122 Griffin et al. Apr 1991 A
5093912 Dong et al. Mar 1992 A
5133065 Cheffetz et al. Jul 1992 A
5193154 Kitajima et al. Mar 1993 A
5212772 Masters May 1993 A
5226157 Nakano et al. Jul 1993 A
5231668 Kravitz Jul 1993 A
5239647 Anglin et al. Aug 1993 A
5241668 Eastridge et al. Aug 1993 A
5241670 Eastridge et al. Aug 1993 A
5263154 Eastridge et al. Nov 1993 A
5265159 Kung Nov 1993 A
5276860 Fortier et al. Jan 1994 A
5276867 Kenley et al. Jan 1994 A
5287500 Stoppani, Jr. Feb 1994 A
5301351 Jippo Apr 1994 A
5311509 Heddes et al. May 1994 A
5317731 Dias et al. May 1994 A
5321816 Rogan et al. Jun 1994 A
5333315 Saether et al. Jul 1994 A
5347653 Flynn et al. Sep 1994 A
5369757 Spiro et al. Nov 1994 A
5403639 Belsan et al. Apr 1995 A
5410700 Fecteau et al. Apr 1995 A
5448724 Hayashi Sep 1995 A
5455926 Keele et al. Oct 1995 A
5487072 Kant Jan 1996 A
5491810 Allen Feb 1996 A
5495607 Pisello et al. Feb 1996 A
5504873 Martin et al. Apr 1996 A
5544345 Carpenter et al. Aug 1996 A
5544347 Yanai et al. Aug 1996 A
5546536 Davis et al. Aug 1996 A
5555404 Torbjornsen et al. Sep 1996 A
5559957 Balk Sep 1996 A
5559991 Kanfi Sep 1996 A
5598546 Blomgren Jan 1997 A
5604862 Midgely et al. Feb 1997 A
5606693 Nilsen et al. Feb 1997 A
5615392 Harrison et al. Mar 1997 A
5619644 Crockett et al. Apr 1997 A
5638509 Dunphy et al. Jun 1997 A
5642496 Kanfi Jun 1997 A
5668986 Nilsen et al. Sep 1997 A
5673381 Huai et al. Sep 1997 A
5675511 Prasad et al. Oct 1997 A
5677900 Nishida et al. Oct 1997 A
5682513 Candelaria et al. Oct 1997 A
5687343 Fecteau et al. Nov 1997 A
5689706 Rao et al. Nov 1997 A
5699361 Ding et al. Dec 1997 A
5719786 Nelson et al. Feb 1998 A
5720026 Uemura et al. Feb 1998 A
5729743 Squibb Mar 1998 A
5737747 Vishlitzky et al. Apr 1998 A
5742792 Yanai et al. Apr 1998 A
5745753 Mosher, Jr. Apr 1998 A
5751997 Kullick et al. May 1998 A
5758359 Saxon May 1998 A
5761677 Senator et al. Jun 1998 A
5761734 Pfeffer et al. Jun 1998 A
5764972 Crouse et al. Jun 1998 A
5765173 Cane et al. Jun 1998 A
5778395 Whiting et al. Jul 1998 A
5790114 Geaghan et al. Aug 1998 A
5790828 Jost Aug 1998 A
5802265 Bressoud et al. Sep 1998 A
5805920 Sprenkle et al. Sep 1998 A
5812398 Nielsen Sep 1998 A
5813009 Johnson et al. Sep 1998 A
5813017 Morris Sep 1998 A
5829046 Tzelnic et al. Oct 1998 A
5860104 Witt et al. Jan 1999 A
5875478 Blumenau Feb 1999 A
5875481 Ashton et al. Feb 1999 A
5878408 Van Huben et al. Mar 1999 A
5887134 Ebrahim Mar 1999 A
5901327 Ofek May 1999 A
5907621 Bachman et al. May 1999 A
5907672 Matze et al. May 1999 A
5924102 Perks Jul 1999 A
5926836 Blumenau Jul 1999 A
5933104 Kimura Aug 1999 A
5933601 Fanshier et al. Aug 1999 A
5950205 Aviani, Jr. Sep 1999 A
5956519 Wise et al. Sep 1999 A
5958005 Thorne et al. Sep 1999 A
5970233 Liu et al. Oct 1999 A
5970255 Tran et al. Oct 1999 A
5974563 Beeler, Jr. Oct 1999 A
5987478 See et al. Nov 1999 A
5991779 Bejar Nov 1999 A
5995091 Near et al. Nov 1999 A
6003089 Shaffer et al. Dec 1999 A
6009274 Fletcher et al. Dec 1999 A
6012090 Chung et al. Jan 2000 A
6021415 Cannon et al. Feb 2000 A
6021475 Nguyen et al. Feb 2000 A
6023710 Steiner et al. Feb 2000 A
6026414 Anglin Feb 2000 A
6049889 Steely, Jr. et al. Apr 2000 A
6052735 Ulrich et al. Apr 2000 A
6058066 Norris et al. May 2000 A
6061692 Thomas et al. May 2000 A
6072490 Bates et al. Jun 2000 A
6076148 Kedem et al. Jun 2000 A
6088697 Crockett et al. Jul 2000 A
6094416 Ying Jul 2000 A
6105129 Meier et al. Aug 2000 A
6112239 Kenner et al. Aug 2000 A
6122668 Teng et al. Sep 2000 A
6131095 Low et al. Oct 2000 A
6131148 West et al. Oct 2000 A
6131190 Sidwell Oct 2000 A
6137864 Yaker Oct 2000 A
6148377 Carter et al. Nov 2000 A
6148412 Cannon et al. Nov 2000 A
6154787 Urevig et al. Nov 2000 A
6154852 Amundson et al. Nov 2000 A
6158044 Tibbetts Dec 2000 A
6161111 Mutalik et al. Dec 2000 A
6163856 Dion et al. Dec 2000 A
6167402 Yeager Dec 2000 A
6175829 Li et al. Jan 2001 B1
6195695 Cheston et al. Feb 2001 B1
6205450 Kanome et al. Mar 2001 B1
6212512 Barney et al. Apr 2001 B1
6212521 Minami et al. Apr 2001 B1
6230164 Rikieta et al. May 2001 B1
6260068 Zalewski et al. Jul 2001 B1
6260069 Anglin Jul 2001 B1
6269431 Dunham Jul 2001 B1
6275953 Vahalia et al. Aug 2001 B1
6279078 Sicola et al. Aug 2001 B1
6292783 Rohler Sep 2001 B1
6301592 Aoyama et al. Oct 2001 B1
6304880 Kishi Oct 2001 B1
6311193 Sekido et al. Oct 2001 B1
6324581 Xu et al. Nov 2001 B1
6328766 Long Dec 2001 B1
6330570 Crighton Dec 2001 B1
6330642 Carteau Dec 2001 B1
6343324 Hubis et al. Jan 2002 B1
6350199 Williams et al. Feb 2002 B1
RE37601 Eastridge et al. Mar 2002 E
6353878 Dunham Mar 2002 B1
6356801 Goodman et al. Mar 2002 B1
6363464 Mangione Mar 2002 B1
6366986 St. Pierre et al. Apr 2002 B1
6366988 Skiba et al. Apr 2002 B1
6374336 Peters et al. Apr 2002 B1
6374363 Wu et al. Apr 2002 B1
6389432 Pothapragada et al. May 2002 B1
6397308 Ofek et al. May 2002 B1
6418478 Ignatius et al. Jul 2002 B1
6421711 Blumenau et al. Jul 2002 B1
6434681 Amangau Aug 2002 B1
6438595 Blumenau et al. Aug 2002 B1
6466950 Ono Oct 2002 B1
6473775 Kusters et al. Oct 2002 B1
6487561 Ofek et al. Nov 2002 B1
6487644 Huebsch et al. Nov 2002 B1
6487645 Clark et al. Nov 2002 B1
6502205 Yanai et al. Dec 2002 B1
6516314 Birkler et al. Feb 2003 B1
6516327 Zondervan et al. Feb 2003 B1
6516348 MacFarlane et al. Feb 2003 B1
6519679 Devireddy et al. Feb 2003 B2
6538669 Lagueux, Jr. et al. Mar 2003 B1
6539462 Mikkelsen et al. Mar 2003 B1
6542468 Hatakeyama Apr 2003 B1
6542909 Tamer et al. Apr 2003 B1
6542972 Ignatius et al. Apr 2003 B2
6564228 O'Connor May 2003 B1
6564229 Baweja et al. May 2003 B1
6564271 Micalizzi, Jr. et al. May 2003 B2
6581143 Gagne et al. Jun 2003 B2
6604118 Kleiman et al. Aug 2003 B2
6604149 Deo et al. Aug 2003 B1
6611849 Raff et al. Aug 2003 B1
6615223 Shih et al. Sep 2003 B1
6629189 Sandstrom Sep 2003 B1
6631477 LeCrone et al. Oct 2003 B1
6631493 Ottesen et al. Oct 2003 B2
6647396 Parnell et al. Nov 2003 B2
6647473 Golds et al. Nov 2003 B1
6651075 Kusters et al. Nov 2003 B1
6654825 Clapp et al. Nov 2003 B2
6658436 Oshinsky et al. Dec 2003 B2
6658526 Nguyen et al. Dec 2003 B2
6662198 Satyanarayanan et al. Dec 2003 B2
6665815 Goldstein et al. Dec 2003 B1
6681230 Blott et al. Jan 2004 B1
6691209 O'Connell Feb 2004 B1
6721767 De Meno et al. Apr 2004 B2
6728733 Tokui Apr 2004 B2
6732124 Koseki et al. May 2004 B1
6732125 Autrey et al. May 2004 B1
6742092 Huebsch et al. May 2004 B1
6748504 Sawdon et al. Jun 2004 B2
6751635 Chen et al. Jun 2004 B1
6757794 Cabrera et al. Jun 2004 B2
6760723 Oshinsky et al. Jul 2004 B2
6763351 Subramaniam et al. Jul 2004 B1
6789161 Blendermann et al. Sep 2004 B1
6792472 Otterness et al. Sep 2004 B1
6792518 Armangau et al. Sep 2004 B2
6799258 Linde Sep 2004 B1
6820035 Zahavi Nov 2004 B1
6836779 Poulin Dec 2004 B2
6839724 Manchanda et al. Jan 2005 B2
6871163 Hiller et al. Mar 2005 B2
6871271 Ohran et al. Mar 2005 B2
6880051 Timpanaro-Perrotta Apr 2005 B2
6886020 Zahavi et al. Apr 2005 B1
6892211 Hitz et al. May 2005 B2
6912482 Kaiser Jun 2005 B2
6925476 Multer et al. Aug 2005 B1
6925512 Louzoun et al. Aug 2005 B2
6938135 Kekre et al. Aug 2005 B1
6938180 Dysert et al. Aug 2005 B1
6941393 Secatch Sep 2005 B2
6944796 Joshi et al. Sep 2005 B2
6952705 Knoblock et al. Oct 2005 B2
6952758 Chron et al. Oct 2005 B2
6954834 Slater et al. Oct 2005 B2
6968351 Butterworth Nov 2005 B2
6973553 Archibald, Jr. et al. Dec 2005 B1
6978265 Schumacher Dec 2005 B2
6981177 Beattie Dec 2005 B2
6983351 Gibble et al. Jan 2006 B2
6993539 Federwisch et al. Jan 2006 B2
7003519 Biettron et al. Feb 2006 B1
7003641 Prahlad et al. Feb 2006 B2
7007046 Manley et al. Feb 2006 B2
7020669 McCann et al. Mar 2006 B2
7032131 Lubbers et al. Apr 2006 B2
7035880 Crescenti et al. Apr 2006 B1
7039661 Ranade May 2006 B1
7051050 Chen et al. May 2006 B2
7062761 Slavin et al. Jun 2006 B2
7065538 Aronoff et al. Jun 2006 B2
7068597 Fijolek et al. Jun 2006 B1
7082441 Zahavi et al. Jul 2006 B1
7085787 Beier et al. Aug 2006 B2
7085904 Mizuno et al. Aug 2006 B2
7093012 Olstad et al. Aug 2006 B2
7096315 Takeda et al. Aug 2006 B2
7103731 Gibble et al. Sep 2006 B2
7103740 Colgrove et al. Sep 2006 B1
7106691 Decaluwe et al. Sep 2006 B1
7107298 Prahlad et al. Sep 2006 B2
7107395 Ofek et al. Sep 2006 B1
7111021 Lewis et al. Sep 2006 B1
7111189 Sicola et al. Sep 2006 B1
7120757 Tsuge Oct 2006 B2
7130860 Pachet Oct 2006 B2
7130970 Devassy et al. Oct 2006 B2
7139932 Watanabe Nov 2006 B2
7155465 Lee et al. Dec 2006 B2
7155633 Tuma et al. Dec 2006 B2
7158985 Liskov Jan 2007 B1
7177866 Holenstein et al. Feb 2007 B2
7181477 Saika et al. Feb 2007 B2
7188292 Cordina et al. Mar 2007 B2
7191198 Asano et al. Mar 2007 B2
7194454 Hansen et al. Mar 2007 B2
7194487 Kekre et al. Mar 2007 B1
7200620 Gupta Apr 2007 B2
7203807 Urabe et al. Apr 2007 B2
7209972 Ignatius et al. Apr 2007 B1
7225204 Manley et al. May 2007 B2
7225208 Midgley et al. May 2007 B2
7225210 Guthrie, II May 2007 B2
7228456 Lecrone et al. Jun 2007 B2
7231391 Aronoff et al. Jun 2007 B2
7231544 Tan et al. Jun 2007 B2
7234115 Sprauve et al. Jun 2007 B1
7246140 Therrien et al. Jul 2007 B2
7246207 Kottomtharayil et al. Jul 2007 B2
7250963 Yuri et al. Jul 2007 B2
7257689 Baird Aug 2007 B1
7269612 Devarakonda et al. Sep 2007 B2
7269641 Powers et al. Sep 2007 B2
7272606 Borthakur et al. Sep 2007 B2
7275138 Saika Sep 2007 B2
7275177 Amangau et al. Sep 2007 B2
7278142 Bandhole et al. Oct 2007 B2
7284153 Okbay et al. Oct 2007 B2
7287047 Kavuri Oct 2007 B2
7293133 Colgrove et al. Nov 2007 B1
7296125 Ohran Nov 2007 B2
7315923 Retnamma et al. Jan 2008 B2
7318134 Oliveira et al. Jan 2008 B1
7340652 Jarvis et al. Mar 2008 B2
7343356 Prahlad et al. Mar 2008 B2
7343365 Farnham et al. Mar 2008 B2
7343453 Prahlad et al. Mar 2008 B2
7343459 Prahlad et al. Mar 2008 B2
7346623 Prahlad et al. Mar 2008 B2
7346751 Prahlad et al. Mar 2008 B2
7356657 Mikami Apr 2008 B2
7359917 Winter et al. Apr 2008 B2
7363444 Ji Apr 2008 B2
7370232 Safford May 2008 B2
7373364 Chapman May 2008 B1
7380072 Kottomtharayil et al. May 2008 B2
7383293 Gupta et al. Jun 2008 B2
7389311 Crescenti et al. Jun 2008 B1
7392360 Aharoni et al. Jun 2008 B1
7395282 Crescenti et al. Jul 2008 B1
7401064 Arone et al. Jul 2008 B1
7409509 Devassy et al. Aug 2008 B2
7415488 Muth et al. Aug 2008 B1
7428657 Yamasaki Sep 2008 B2
7430587 Malone et al. Sep 2008 B2
7433301 Akahane et al. Oct 2008 B2
7440982 Lu et al. Oct 2008 B2
7454569 Kavuri et al. Nov 2008 B2
7457980 Yang et al. Nov 2008 B2
7461230 Gupta et al. Dec 2008 B1
7464236 Sano et al. Dec 2008 B2
7467167 Patterson Dec 2008 B2
7467267 Mayock Dec 2008 B1
7469262 Baskaran et al. Dec 2008 B2
7472238 Gokhale Dec 2008 B1
7472312 Jarvis et al. Dec 2008 B2
7475284 Koike Jan 2009 B2
7484054 Kottomtharayil et al. Jan 2009 B2
7490207 Amarendran Feb 2009 B2
7496589 Jain et al. Feb 2009 B1
7496690 Beverly et al. Feb 2009 B2
7500053 Kavuri et al. Mar 2009 B1
7500150 Sharma et al. Mar 2009 B2
7502902 Sato Mar 2009 B2
7509316 Greenblatt et al. Mar 2009 B2
7512601 Cucerzan et al. Mar 2009 B2
7516088 Johnson et al. Apr 2009 B2
7519726 Palliyil et al. Apr 2009 B2
7523483 Dogan Apr 2009 B2
7529745 Ahluwalia et al. May 2009 B2
7529748 Wen et al. May 2009 B2
7529782 Prahlad et al. May 2009 B2
7529898 Nguyen et al. May 2009 B2
7532340 Koppich et al. May 2009 B2
7533181 Dawson et al. May 2009 B2
7536291 Retnamma et al. May 2009 B1
7539707 Prahlad et al. May 2009 B2
7539835 Kaiser May 2009 B2
7543125 Gokhale Jun 2009 B2
7546324 Prahlad et al. Jun 2009 B2
7546364 Raman et al. Jun 2009 B2
7565572 Yamasaki Jul 2009 B2
7581077 Ignatius et al. Aug 2009 B2
7593966 Therrien et al. Sep 2009 B2
7596586 Gokhale et al. Sep 2009 B2
7606841 Ranade Oct 2009 B1
7606844 Kottomtharayil Oct 2009 B2
7607037 LeCrone et al. Oct 2009 B1
7613748 Brockway et al. Nov 2009 B2
7613750 Valiyaparambil et al. Nov 2009 B2
7617253 Prahlad et al. Nov 2009 B2
7617262 Prahlad et al. Nov 2009 B2
7617321 Clark Nov 2009 B2
7617369 Bezbaruah et al. Nov 2009 B1
7617541 Plotkin et al. Nov 2009 B2
7627598 Burke Dec 2009 B1
7627617 Kavuri et al. Dec 2009 B2
7634477 Hinshaw Dec 2009 B2
7636743 Erofeev Dec 2009 B2
7651593 Prahlad et al. Jan 2010 B2
7661028 Erofeev Feb 2010 B2
7668798 Scanlon et al. Feb 2010 B2
7669029 Mishra et al. Feb 2010 B1
7672979 Appellof et al. Mar 2010 B1
7673000 Smoot et al. Mar 2010 B2
7685126 Patel et al. Mar 2010 B2
7689467 Belanger et al. Mar 2010 B1
7694086 Bezbaruah et al. Apr 2010 B1
7702533 Barnard et al. Apr 2010 B2
7702670 Duprey et al. Apr 2010 B1
7707184 Zhang et al. Apr 2010 B1
7716171 Kryger May 2010 B2
7734715 Hyakutake et al. Jun 2010 B2
7739235 Rousseau et al. Jun 2010 B2
7809691 Karmarkar et al. Oct 2010 B1
7810067 Kaelicke et al. Oct 2010 B2
7831553 Prahlad et al. Nov 2010 B2
7831622 Prahlad et al. Nov 2010 B2
7840533 Prahlad et al. Nov 2010 B2
7840537 Gokhale et al. Nov 2010 B2
7870355 Erofeev Jan 2011 B2
7904681 Bappe Mar 2011 B1
7930476 Castelli et al. Apr 2011 B1
7962455 Erofeev Jun 2011 B2
7962709 Agrawal Jun 2011 B2
8005795 Galipeau et al. Aug 2011 B2
8024294 Kottomtharayil Sep 2011 B2
8078655 Grubov et al. Dec 2011 B2
8121983 Prahlad et al. Feb 2012 B2
8166263 Prahlad Apr 2012 B2
8190565 Prahlad et al. May 2012 B2
8195623 Prahlad et al. Jun 2012 B2
8204859 Ngo Jun 2012 B2
8219524 Gokhale Jul 2012 B2
8271830 Erofeev Sep 2012 B2
8285684 Prahlad et al. Oct 2012 B2
8291101 Yan et al. Oct 2012 B1
8352422 Prahlad et al. Jan 2013 B2
8463751 Kottomtharayil Jun 2013 B2
8489656 Erofeev Jul 2013 B2
8504515 Prahlad et al. Aug 2013 B2
8504517 Agrawal Aug 2013 B2
8572038 Erofeev Oct 2013 B2
8589347 Erofeev Nov 2013 B2
8655850 Ngo et al. Feb 2014 B2
8656218 Erofeev Feb 2014 B2
8666942 Ngo Mar 2014 B2
8725694 Kottomtharayil May 2014 B2
8725698 Prahlad et al. May 2014 B2
8726242 Ngo May 2014 B2
8745105 Erofeev Jun 2014 B2
8793221 Prahlad et al. Jul 2014 B2
8805818 Zane et al. Aug 2014 B2
8868494 Agrawal Oct 2014 B2
8935210 Kottomtharayil Jan 2015 B2
9002785 Prahlad et al. Apr 2015 B2
9002799 Ngo et al. Apr 2015 B2
9003374 Ngo Apr 2015 B2
9020898 Prahlad et al. Apr 2015 B2
9047357 Ngo Jun 2015 B2
9208210 Erofeev Dec 2015 B2
20010029512 Oshinsky et al. Oct 2001 A1
20010029517 De Meno et al. Oct 2001 A1
20010032172 Moulinet et al. Oct 2001 A1
20010035866 Finger et al. Nov 2001 A1
20010042222 Kedem et al. Nov 2001 A1
20010044807 Kleiman et al. Nov 2001 A1
20020002557 Straube et al. Jan 2002 A1
20020004883 Nguyen et al. Jan 2002 A1
20020019909 D'Errico Feb 2002 A1
20020023051 Kunzle et al. Feb 2002 A1
20020040376 Yamanaka et al. Apr 2002 A1
20020042869 Tate et al. Apr 2002 A1
20020049626 Mathias et al. Apr 2002 A1
20020049718 Kleiman et al. Apr 2002 A1
20020049738 Epstein Apr 2002 A1
20020049778 Bell et al. Apr 2002 A1
20020062230 Morag et al. May 2002 A1
20020069324 Gerasimov et al. Jun 2002 A1
20020083055 Pachet et al. Jun 2002 A1
20020091712 Martin et al. Jul 2002 A1
20020103848 Giacomini et al. Aug 2002 A1
20020107877 Whiting et al. Aug 2002 A1
20020112134 Ohran et al. Aug 2002 A1
20020120741 Webb et al. Aug 2002 A1
20020124137 Ulrich et al. Sep 2002 A1
20020133511 Hostetter et al. Sep 2002 A1
20020133512 Milillo et al. Sep 2002 A1
20020161753 Inaba et al. Oct 2002 A1
20020174107 Poulin Nov 2002 A1
20020174139 Midgley et al. Nov 2002 A1
20020174416 Bates et al. Nov 2002 A1
20020181395 Foster et al. Dec 2002 A1
20030005119 Mercier et al. Jan 2003 A1
20030018657 Monday Jan 2003 A1
20030023893 Lee et al. Jan 2003 A1
20030028736 Berkowitz et al. Feb 2003 A1
20030033308 Patel et al. Feb 2003 A1
20030061491 Jaskiewicz et al. Mar 2003 A1
20030079018 Lolayekar et al. Apr 2003 A1
20030097296 Putt May 2003 A1
20030126200 Wolff Jul 2003 A1
20030131278 Fujibayashi Jul 2003 A1
20030135783 Martin et al. Jul 2003 A1
20030161338 Ng et al. Aug 2003 A1
20030167380 Green et al. Sep 2003 A1
20030177149 Coombs Sep 2003 A1
20030177321 Watanabe Sep 2003 A1
20030187847 Lubbers et al. Oct 2003 A1
20030225800 Kavuri Dec 2003 A1
20040006572 Hoshino et al. Jan 2004 A1
20040006578 Yu Jan 2004 A1
20040010487 Prahlad et al. Jan 2004 A1
20040015468 Beier et al. Jan 2004 A1
20040039679 Norton et al. Feb 2004 A1
20040078632 Infante et al. Apr 2004 A1
20040098425 Wiss et al. May 2004 A1
20040107199 Dairymple et al. Jun 2004 A1
20040117438 Considine et al. Jun 2004 A1
20040117572 Welsh et al. Jun 2004 A1
20040133634 Luke et al. Jul 2004 A1
20040139128 Becker et al. Jul 2004 A1
20040158588 Pruet Aug 2004 A1
20040193625 Sutoh Sep 2004 A1
20040193953 Callahan et al. Sep 2004 A1
20040205206 Naik et al. Oct 2004 A1
20040212639 Smoot et al. Oct 2004 A1
20040215724 Smoot et al. Oct 2004 A1
20040225437 Endo et al. Nov 2004 A1
20040230615 Blanco et al. Nov 2004 A1
20040230829 Dogan et al. Nov 2004 A1
20040236958 Teicher et al. Nov 2004 A1
20040249883 Srinivasan et al. Dec 2004 A1
20040250033 Prahlad et al. Dec 2004 A1
20040254919 Giuseppini Dec 2004 A1
20040260678 Verbowski et al. Dec 2004 A1
20040267777 Sugimura et al. Dec 2004 A1
20040267835 Zwilling et al. Dec 2004 A1
20040267836 Armangau et al. Dec 2004 A1
20050015409 Cheng et al. Jan 2005 A1
20050027892 McCabe et al. Feb 2005 A1
20050033800 Kavuri et al. Feb 2005 A1
20050044114 Kottomtharayil et al. Feb 2005 A1
20050055445 Gupta et al. Mar 2005 A1
20050060613 Cheng Mar 2005 A1
20050071389 Gupta et al. Mar 2005 A1
20050071391 Fuerderer et al. Mar 2005 A1
20050080928 Beverly et al. Apr 2005 A1
20050086443 Mizuno et al. Apr 2005 A1
20050108292 Burton et al. May 2005 A1
20050114406 Borthakur et al. May 2005 A1
20050131900 Palliyll et al. Jun 2005 A1
20050138306 Panchbudhe et al. Jun 2005 A1
20050144202 Chen Jun 2005 A1
20050172073 Voigt Aug 2005 A1
20050187982 Sato Aug 2005 A1
20050187992 Prahlad et al. Aug 2005 A1
20050188109 Shiga et al. Aug 2005 A1
20050188254 Urabe et al. Aug 2005 A1
20050193026 Prahlad et al. Sep 2005 A1
20050198083 Saika et al. Sep 2005 A1
20050228875 Monitzer et al. Oct 2005 A1
20050246376 Lu et al. Nov 2005 A1
20050246510 Retnamma et al. Nov 2005 A1
20050254456 Sakai Nov 2005 A1
20050268068 Ignatius et al. Dec 2005 A1
20060005048 Osaki et al. Jan 2006 A1
20060010154 Prahlad et al. Jan 2006 A1
20060010227 Atluri Jan 2006 A1
20060010341 Kodama Jan 2006 A1
20060020616 Hardy et al. Jan 2006 A1
20060034454 Damgaard et al. Feb 2006 A1
20060036901 Yang et al. Feb 2006 A1
20060047805 Byrd et al. Mar 2006 A1
20060047931 Saika Mar 2006 A1
20060092861 Corday et al. May 2006 A1
20060107089 Jansz et al. May 2006 A1
20060120401 Harada et al. Jun 2006 A1
20060129537 Torii et al. Jun 2006 A1
20060136685 Griv et al. Jun 2006 A1
20060155946 Ji Jul 2006 A1
20060171315 Choi et al. Aug 2006 A1
20060174075 Sutoh Aug 2006 A1
20060215564 Breitgand et al. Sep 2006 A1
20060230244 Amarendran et al. Oct 2006 A1
20060242371 Shono et al. Oct 2006 A1
20060242489 Brockway et al. Oct 2006 A1
20070033437 Kawamura Feb 2007 A1
20070043956 El Far et al. Feb 2007 A1
20070050547 Sano Mar 2007 A1
20070055737 Yamashita et al. Mar 2007 A1
20070094467 Yamasaki Apr 2007 A1
20070100867 Celik et al. May 2007 A1
20070112897 Asano et al. May 2007 A1
20070113006 Elliott et al. May 2007 A1
20070124347 Vivian et al. May 2007 A1
20070124348 Claborn et al. May 2007 A1
20070130373 Kalwitz Jun 2007 A1
20070143371 Kottomtharayil Jun 2007 A1
20070143756 Gokhale Jun 2007 A1
20070179990 Zimran et al. Aug 2007 A1
20070183224 Erofeev Aug 2007 A1
20070185852 Erofeev Aug 2007 A1
20070185937 Prahlad et al. Aug 2007 A1
20070185938 Prahlad et al. Aug 2007 A1
20070185939 Prahlad et al. Aug 2007 A1
20070185940 Prahlad et al. Aug 2007 A1
20070186042 Kottomtharayil et al. Aug 2007 A1
20070186068 Agrawal Aug 2007 A1
20070226438 Erofeev Sep 2007 A1
20070233756 D'Souza et al. Oct 2007 A1
20070244571 Wilson et al. Oct 2007 A1
20070260609 Tulyani Nov 2007 A1
20070276848 Kim Nov 2007 A1
20070288536 Sen et al. Dec 2007 A1
20080016126 Kottomtharayil et al. Jan 2008 A1
20080016293 Saika Jan 2008 A1
20080059515 Fulton Mar 2008 A1
20080077634 Quakenbush Mar 2008 A1
20080077636 Gupta et al. Mar 2008 A1
20080103916 Camarador et al. May 2008 A1
20080104357 Kim et al. May 2008 A1
20080114815 Sutoh May 2008 A1
20080147878 Kottomtharayil et al. Jun 2008 A1
20080183775 Prahlad et al. Jul 2008 A1
20080205301 Burton et al. Aug 2008 A1
20080208933 Lyon Aug 2008 A1
20080228987 Yagi Sep 2008 A1
20080229037 Bunte et al. Sep 2008 A1
20080243914 Prahlad et al. Oct 2008 A1
20080243957 Prahlad et al. Oct 2008 A1
20080243958 Prahlad et al. Oct 2008 A1
20080244205 Amano et al. Oct 2008 A1
20080250178 Haustein et al. Oct 2008 A1
20080306954 Hornqvist Dec 2008 A1
20080313497 Hirakawa Dec 2008 A1
20090013014 Kern Jan 2009 A1
20090044046 Yamasaki Feb 2009 A1
20090113056 Tameshige et al. Apr 2009 A1
20090150462 McClanahan et al. Jun 2009 A1
20090182963 Prahlad et al. Jul 2009 A1
20090187944 White et al. Jul 2009 A1
20090300079 Shitomi Dec 2009 A1
20090319534 Gokhale Dec 2009 A1
20090319585 Gokhale Dec 2009 A1
20100005259 Prahlad Jan 2010 A1
20100049753 Prahlad et al. Feb 2010 A1
20100094808 Erofeev Apr 2010 A1
20100100529 Erofeev Apr 2010 A1
20100131461 Prahlad et al. May 2010 A1
20100131467 Prahlad et al. May 2010 A1
20100145909 Ngo Jun 2010 A1
20100153338 Ngo et al. Jun 2010 A1
20100179941 Agrawal et al. Jul 2010 A1
20100205150 Prahlad et al. Aug 2010 A1
20100211571 Prahlad et al. Aug 2010 A1
20110066599 Prahlad et al. Mar 2011 A1
20120011336 Saika Jan 2012 A1
20130006942 Prahlad et al. Jan 2013 A1
20140067764 Prahlad et al. Mar 2014 A1
20140164327 Ngo Jun 2014 A1
20140181022 Ngo Jun 2014 A1
20140181029 Erofeev Jun 2014 A1
20140244586 Ngo Aug 2014 A1
20140324772 Prahlad et al. Oct 2014 A1
20150186061 Kottomtharayil Jul 2015 A1
20150199375 Prahlad et al. Jul 2015 A1
20150248444 Prahlad et al. Sep 2015 A1
Foreign Referenced Citations (34)
Number Date Country
2006331932 Dec 2006 AU
2632935 Dec 2006 CA
0259912 Mar 1988 EP
0405926 Jan 1991 EP
0467546 Jan 1992 EP
0774715 May 1997 EP
0809184 Nov 1997 EP
0862304 Sep 1998 EP
0899662 Mar 1999 EP
0981090 Feb 2000 EP
1174795 Feb 2000 EP
1349089 Jan 2003 EP
1349088 Oct 2003 EP
1579331 Sep 2005 EP
1974296 Oct 2008 EP
2256952 Dec 1992 GB
2411030 Aug 2005 GB
05189281 Jul 1993 JP
06274605 Sep 1994 JP
09016463 Jan 1997 JP
11259348 Sep 1999 JP
2000347811 Dec 2000 JP
WO 9303549 Feb 1993 WO
WO 9513580 May 1995 WO
WO 9839707 Sep 1998 WO
WO 9912098 Mar 1999 WO
WO 9914692 Mar 1999 WO
WO 02095632 Nov 2002 WO
WO 03028183 Apr 2003 WO
WO 2004034197 Apr 2004 WO
WO 2005055093 Jun 2005 WO
WO 2005086032 Sep 2005 WO
WO 2007053314 May 2007 WO
WO 2010068570 Jun 2010 WO
Non-Patent Literature Citations (46)
Entry
U.S. Appl. No. 14/038,540, filed Sep. 26, 2013, Erofeev.
Armstead et al., “Implementation of a Campus-Wide Distributed Mass Storage Service: The Dream vs. Reality,” IEEE, 1995, pp. 190-199.
Arneson, “Development of Omniserver; Mass Storage Systems,” Control Data Corporation, 1990, pp. 88-93.
Arneson, “Mass Storage Archiving in Network Environments” IEEE, 1998, pp. 45-50.
Ashton, et al., "Two Decades of policy-based storage management for the IBM mainframe computer", www.research.ibm.com, published Apr. 10, 2003, printed Jan. 3, 2009, 19 pages.
Cabrera, et al. “ADSM: A Multi-Platform, Scalable, Back-up and Archive Mass Storage System,” Digest of Papers, Compcon '95, Proceedings of the 40th IEEE Computer Society International Conference, Mar. 5, 1995-Mar. 9, 1995, pp. 420-427, San Francisco, CA.
Calvett, Andrew, “SQL Server 2005 Snapshots”, published Apr. 3, 2006, http:/www.simple-talk.com/contnet/print.aspx?article=137, 6 pages.
Eitel, “Backup and Storage Management in Distributed Heterogeneous Environments,” IEEE, 1994, pp. 124-126.
Gait, “The Optical File Cabinet: A Random-Access File system for Write-Once Optical Disks,” IEEE Computer, vol. 21, No. 6, pp. 11-22 (1988).
Gray, et al. “Transaction processing: concepts and techniques” 1994, Morgan Kaufmann Publishers, USA, 646-655.B7.
Harrington, “The RFP Process: How to Hire a Third Party”, Transportation & Distribution, Sep. 1988, vol. 39, Issue 9, in 5 pages.
http://en.wikipedia.org/wiki/Naive_Bayes_classifier, printed on Jun. 1, 2010, in 7 pages.
IBM, “Intelligent Selection of Logs Required During Recovery Processing”, ip.com, Sep. 16, 2002, 4 pages.
IBM, “Near Zero Impact Backup and Data Replication Appliance”, ip.com, Oct. 18, 2004, 5 pages.
Jander, “Launching Storage-Area Net,” Data Communications, US, McGraw Hill, NY, vol. 27, No. 4(Mar. 21, 1998), pp. 64-72.
Kashyap, et al., “Professional Services Automation: A knowledge Management approach using LSI and Domain specific Ontologies”, FLAIRS-01 Proceedings, 2001, pp. 300-302.
Lyon, J., Design considerations in replicated database systems for disaster protection, COMPCON 1988, Feb. 29, 1988, pp. 428-430.
Microsoft Corporation, “Microsoft Exchange Server: Best Practices for Exchange Database Management,” 1998.
Park, et al., “An Efficient Logging Scheme for Recoverable Distributed Shared Memory Systems”, IEEE, 1997, 9 pages.
Rosenblum et al., “The Design and Implementation of a Log-Structure File System,” Operating Systems Review SIGOPS, vol. 25, No. 5, New York, US, pp. 1-15 (May 1991).
The Oracle8 Replication Manual, Part No. A58245-01; Chapters 1-2; Dec. 1, 1997; obtained from website: http://download-west.oracle.com/docs/cd/A64702_01/doc/server.805/a58245/toc.htm on May 20, 2009.
Veritas Software Corporation, “Veritas Volume Manager 3.2, Administrator's Guide,” Aug. 2001, 360 pages.
Wiesmann, M, Database replication techniques: a three parameter classification, Oct. 16, 2000, pp. 206-215.
Final Office Action for Japanese Application No. 2003531581, Mail Date Mar. 24, 2009, 6 pages.
International Search Report and Written Opinion dated Nov. 13, 2009, PCT/US2007/081681.
First Office Action for Japanese Application No. 2003531581, Mail Date Jul. 8, 2008, 8 pages.
International Preliminary Report on Patentability, PCT Application No. PCT/US2009/066880, mailed Jun. 23, 2011, in 9 pages.
Canadian Office Action dated Sep. 24, 2012, Application No. 2,632,935, 2 pages.
European Examination Report; Application No. 06848901.2, Apr. 1, 2009, pp. 7.
Examiner's First Report ; Application No. 2006331932, May 11, 2011 in 2 pages.
Canadian Office Action dated Dec. 29, 2010, Application No. CA2546304.
Examiner's Report for Australian Application No. 2003279847, Dated Dec. 9, 2008, 4 pages.
First Office Action in Canadian application No. 2,632,935 dated Feb. 16, 2012, in 5 pages.
International Search Report dated May 15, 2007, PCT/US2006/048273.
Second Examination Report in EU Appl. No. 06 848 901.2-2201 dated Dec. 3, 2010.
International Search Report and Written Opinion dated Mar. 25, 2010, PCT/US2009/066880.
International Search Report and Written Opinion issued in PCT Application No. PCT/US2011/030396, mailed Jul. 18, 2011, in 20 pages.
International Preliminary Report on Patentability and Written Opinion in PCT/US2011/030396 mailed Oct. 2, 2012.
International Search Report and Written Opinion issued in PCT Application No. PCT/US2011/38436, mailed Sep. 21, 2011, in 18 pages.
International Preliminary Report on Patentability and Written Opinion in PCT/US2011/038436 mailed Dec. 4, 2012.
International Search Report dated Dec. 28, 2009, PCT/US204/038324.
International Search Report and Written Opinion dated Jan. 11, 2006 , PCT/US2004/038455.
Exam Report in Australian Application No. 2009324800 dated Jun. 17, 2013.
U.S. Appl. No. 14/592,770, filed Jan. 8, 2015, Kottomtharayil.
U.S. Appl. No. 14/645,982, filed Mar. 12, 2015, Prahlad, et al.
U.S. Appl. No. 14/668,752, filed Mar. 25, 2015, Prahlad, et al.
Related Publications (1)
Number Date Country
20150205853 A1 Jul 2015 US
Provisional Applications (1)
Number Date Country
61121418 Dec 2008 US
Continuations (3)
Number Date Country
Parent 14193945 Feb 2014 US
Child 14675506 US
Parent 13523709 Jun 2012 US
Child 14193945 US
Parent 12424115 Apr 2009 US
Child 13523709 US