Current storage management systems employ a number of different methods to perform storage operations on electronic data. For example, data can be stored in primary storage as a primary copy or in secondary storage as various types of secondary copies, including a backup copy, a snapshot copy, a hierarchical storage management (“HSM”) copy, an archive copy, and other types of copies.
A primary copy of data is generally a production copy or other “live” version of the data which is used by a software application and is generally in the native format of that application. Primary copy data may be maintained in a local memory or other high-speed storage device that allows for relatively fast data access if necessary. Such primary copy data is typically intended for short term retention (e.g., several hours or days) before some or all of the data is stored as one or more secondary copies, for example to prevent loss of data in the event a problem occurred with the data stored in primary storage.
Secondary copies include point-in-time data and are typically intended for long-term retention (e.g., weeks, months, or years depending on retention criteria, for example as specified in a storage policy as further described herein) before some or all of the data is moved to other storage or discarded. Secondary copies may be indexed so users can browse and restore the data at another point in time. After certain primary copy data is backed up, a pointer or other location indicia, such as a stub, may be placed in the primary copy to indicate the current location of that data.
One form of secondary copy is a snapshot copy. From an end-user viewpoint, a snapshot may be seen as an instant image of the primary copy data at a given point in time. A snapshot generally captures the directory structure of a primary copy volume at a particular moment in time, and also preserves file attributes and contents. In some embodiments, a snapshot may exist as a virtual file system, parallel to the actual file system. Users typically gain read-only access to the record of files and directories of the snapshot. By electing to restore primary copy data from a snapshot taken at a given point in time, users may also return the current file system to the prior state of the file system that existed when the snapshot was taken.
A snapshot may be created instantly, using a minimum of file space, but may still function as a conventional file system backup when stored at or near the file system. A snapshot may not actually create another physical copy of all the data, but may simply create pointers that are able to map files and directories to specific disk blocks. The snapshot may be a copy of a set of files and/or directories as they were at a particular point in the past. That is, the snapshot is an image, or representation, of a volume of data at a point in time. A snapshot may serve as a secondary copy of a primary volume of data, such as data in a file system, an Exchange server, a SQL database, an Oracle database, and so on. The snapshot may be an image of files, folders, directories, and other data objects within a volume, or an image of the blocks of the volume.
Data storage systems utilize snapshots for a variety of reasons. One typical use of snapshots is to copy a volume of data without disabling access to the volume for a long period. After performing the snapshot, the data storage system can then copy the data set by leveraging the snapshot of the data set. Thus, the data storage system can perform a full backup of a primary volume even while the primary volume is active and generating real-time data. Although performing a snapshot (i.e., taking an image of the data set) is a fast process, the snapshot is typically not, by itself, an effective or reliable backup copy of a data set, because it does not actually contain the content of the data set. Restoring data from snapshots can be especially cumbersome, because a restoration process cannot restore the data set using snapshots alone. Recovery of individual files or folders can be particularly cumbersome, because typical systems often recover an entire snapshot in order to restore an individual file or folder imaged by the snapshot.
However, the speed of performing, or taking, a snapshot can often be a great benefit to data storage systems that are required to store large amounts of data. Thus, utilizing snapshots in ways other than those described above may provide significant utility to data storage systems, because snapshots are fast, are space efficient, and facilitate off-host data storage operations, among other advantages.
The need exists for a system that overcomes the above problems, as well as one that provides additional benefits. Overall, the examples herein of some prior or related systems and their associated limitations are intended to be illustrative and not exclusive. Other limitations of existing or prior systems will become apparent to those of skill in the art upon reading the following Detailed Description.
Overview
Described in detail herein is a system and method that employs snapshots as data sources, such as backup copies of data. Instead of treating a snapshot only as a picture of a disk, the system employs snapshots as a data source that can be backed up or otherwise copied to tape or magnetic disk. The system can then seamlessly restore individual files from tape or disk using snapshots. The system creates a data structure, such as an index, that describes what is on a disk (as often defined by a file system for that disk). The index may provide a list of files on the disk, and location information indicating where each file is located, with respect to the snapshot.
In some examples, the system creates a secondary copy of data by storing a snapshot with an index associated with and/or related to the snapshot. The snapshot identifies the data stored in the secondary copy, and the index provides application-specific context information that facilitates retrieving data identified by the snapshot. In these examples, the system may store a combination of a snapshot and associated index to storage media, such as to tape or disk, and use the stored combination as a data source, such as a backup copy of a primary volume of data.
The system may create the index in a number of ways, as long as the index can be used in combination with a snapshot to facilitate data storage and/or recovery via the snapshot. For example, an index agent may receive a snapshot of a data set, receive application context information associated with the snapshot, store the snapshot, and store the application context information in an index that identifies individual files from the data set imaged by the snapshot.
In some examples, the system provides for the recovery, or restoration, of data from a snapshot-based data source. The recovery may be transparent to a user (that is, the user does not know what mechanism is used during a restore process) and/or seamless with respect to other types of data sources. For example, the system may restore a data set by restoring a full backup of the data set using a snapshot-based secondary copy of the data set and incremental backups using other secondary copies.
In some cases, the system restores individual files using snapshots and associated indices. For example, the system may receive a request to restore a specific file or portion of a file, identify a snapshot containing an image of a volume containing the file, look to an index associated with the snapshot to identify the file, and retrieve the file (or a copy of the file) from information in the associated index. Thus, the system facilitates granular recovery of data objects within a data set without requiring a data recovery system to restore entire snapshots or secondary copies.
The system will now be described with respect to various examples. The following description provides specific details for a thorough understanding of, and enabling description for, these examples of the system. However, one skilled in the art will understand that the system may be practiced without these details. In other instances, well-known structures and functions have not been shown or described in detail to avoid unnecessarily obscuring the description of the examples of the system.
The terminology used in the description presented below is intended to be interpreted in its broadest reasonable manner, even though it is being used in conjunction with a detailed description of certain specific examples of the system. Certain terms may even be emphasized below; however, any terminology intended to be interpreted in any restricted manner will be overtly and specifically defined as such in this Detailed Description section.
Suitable System
Referring to
Referring to
Aspects of the system can be embodied in a special purpose computer or data processor that is specifically programmed, configured, or constructed to perform one or more of the computer-executable instructions explained in detail herein. Aspects of the system can also be practiced in distributed computing environments where tasks or modules are performed by remote processing devices, which are linked through a communications network, such as a Local Area Network (LAN), Wide Area Network (WAN), Storage Area Network (SAN), Fibre Channel, or the Internet. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
Aspects of the system may be stored or distributed on computer-readable media, including magnetically or optically readable computer discs, hard-wired or preprogrammed chips (e.g., EEPROM semiconductor chips), nanotechnology memory, biological memory, or other data storage media. Indeed, computer implemented instructions, data structures, screen displays, and other data under aspects of the system may be distributed over the Internet or over other networks (including wireless networks), on a propagated signal on a propagation medium (e.g., an electromagnetic wave(s), a sound wave, etc.) over a period of time, or they may be provided on any analog or digital network (packet switched, circuit switched, or other scheme). Those skilled in the relevant art will recognize that portions of the system reside on a server computer, while corresponding portions reside on a client computer, and thus, while certain hardware platforms are described herein, aspects of the system are equally applicable to nodes on a network.
For example, the data storage system 200 contains a storage manager 210, one or more clients 111, one or more media agents 112, and one or more storage devices 113. Storage manager 210 controls media agents 112, which may be responsible for transferring data to storage devices 113. Storage manager 210 includes a jobs agent 211, a management agent 212, a database 213, and/or an interface module 214. Storage manager 210 communicates with client(s) 111. One or more clients 111 may access data to be stored by the system from database 222 via a data agent 221. The system uses media agents 112, which contain databases 231, to transfer and store data into storage devices 113. Client databases 222 may contain data files and other information, while media agent databases 231 may contain indices and other data structures that assist and implement the storage of data into secondary storage devices.
The data storage system may include software and/or hardware components and modules used in data storage operations. The components may be storage resources that function to copy data during storage operations. The components may also perform storage operations (or storage management operations) other than those used in data stores. For example, some resources may create, store, retrieve, and/or migrate primary or secondary data copies. Additionally, some resources may create indices and other tables relied upon by the data storage system and other data recovery systems. The secondary copies may include snapshot copies and associated indices, but may also include other backup copies such as HSM (hierarchical storage management) copies, archive copies, and so on. The resources may also perform storage management functions that may communicate information to higher-level components, such as global management resources.
In some examples, the system performs storage operations based on storage policies, as mentioned above. For example, a storage policy includes a set of preferences or other criteria to be considered during storage operations. The storage policy may determine or define a storage location and/or set of preferences about how the system transfers data to the location and what processes the system performs on the data before, during, or after the data transfer. In some cases, a storage policy may define a logical bucket in which to transfer, store or copy data from a source to a data store, such as storage media. Storage policies may be stored in storage manager 210, or may be stored in other resources, such as a global manager, a media agent, and so on. Further details regarding storage management and resources for storage management will now be discussed.
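For illustration only, a storage policy might be represented as a simple mapping; this is a sketch, and the field names shown are assumptions rather than the system's actual schema:

```python
# Sketch of a storage policy as a simple mapping (field names are assumptions).
storage_policy = {
    # Where copies go: a logical "bucket" of destination resources
    "destination": {"media_agent": "media_agent_1", "storage_device": "tape_library_A"},
    # Preferences about how the system transfers data to that location
    "transfer": {"compress": True, "encrypt": True, "network_path": "SAN"},
    # Processing to perform before, during, or after the transfer
    "operations": ["index", "verify"],
    # Retention criteria (e.g., weeks, months, or years, as discussed above)
    "retention_days": 365,
}
```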
Referring to
Snapshots as Data Sources, such as Backup Copies of a Data Set
The system may store one or more snapshots with an associated index in order to create a snapshot-based data source, such as a secondary copy of a primary volume of data. Data may be stored in various types of volumes, including primary copies or production copies, as well as various secondary copies, such as snapshots, backup copies, archival copies, and so on.
The system creates snapshots of blocks or chunks of data in a data store, along with an associated index that keeps track of the files imaged by the snapshot (e.g., which blocks are associated with which files and what applications are associated with the files). Thus, a snapshot becomes a way of storing data that includes application-specific data. The snapshots and associated index can then be used to create auxiliary copies, synthetic full copies, and other secondary copies, and to perform partial or full restores. Using snapshots as a data source allows a data storage system to be very flexible. Also, the system can manage the snapshots, such as by backing them up and deleting any original versions from the system.
The system creates snapshots using a variety of mechanisms. In some examples, the system employs hardware-based snapshot mechanisms when creating snapshots. Examples of suitable hardware-based snapshot mechanisms include EMC's Symmetrix and CLARiiON, Hitachi Data Systems (HDS), Network Appliance's Snapshot, and so on.
In some examples, the system employs software-based snapshot mechanisms. For example, the system may leverage continuous data replication (CDR) or discrete data replication (DDR) when creating snapshots of a volume of data. CDR generates recovery points for a volume, which can be used as point-in-time snapshots of the volume. Thus, leveraging the recovery points as snapshots enables the system to generate point-in-time copies (snapshots) of a volume of data while maintaining a live copy of the volume. Of course, other mechanisms are possible.
Further, if the data storage system employs hardware having particular capabilities, such as the ability to take mirror copies or multiple snapshots, that functionality may be utilized by the snapshot and associated index. In addition, snapshots may be manipulated with application programming interfaces (APIs) provided by hardware and software providers.
Referring to
The system may employ a number of different mechanisms when moving snapshots to secondary storage, such as magnetic tape. In some examples, the system performs block-level or chunk-based migration or transfer of snapshots from primary storage to secondary storage.
Briefly, block-level migration, or block-based data migration, involves transferring or migrating disk blocks from a primary data store (e.g., a disk partition or volume) to secondary media. Using block-level migration, a data storage system transfers to secondary storage blocks on a disk that have not been recently accessed, freeing up space on the disk. Chunked file migration, or chunk-based data migration, involves splitting a data object into two or more portions, creating an index that tracks the portions, and storing the data object to secondary storage via those portions. Among other things, chunk-based migration provides for fast and efficient storage of a data object. Additionally, chunk-based migration facilitates fast and efficient recall of a data object, such as a snapshot of a large database or virtual machine file. For example, if a user modifies a migrated file, chunk-based migration enables a data restore component to retrieve from secondary storage only the chunk containing the modified portion of the file, and to migrate only that chunk back, rather than the entire file. Further details regarding block-level and/or chunk-based data migration may be found in U.S. Provisional Patent Application No. 61/096,587, filed on Sep. 12, 2008, entitled TRANSFERRING OR MIGRATING PORTIONS OF DATA OBJECTS, SUCH AS BLOCK-LEVEL DATA MIGRATION OR CHUNK-BASED DATA MIGRATION, which is hereby incorporated by reference in its entirety.
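As a rough illustration of the chunk-based approach (a sketch only; the chunk size, key scheme, and index layout are assumptions, not details from the referenced application):

```python
CHUNK_SIZE = 4  # tiny chunk size for illustration; real systems use far larger chunks

def migrate_in_chunks(data: bytes, store: dict) -> list:
    """Split a data object into chunks, store each chunk to (simulated)
    secondary storage, and return an index tracking the portions."""
    index = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        key = f"chunk-{offset}"  # hypothetical secondary-storage key
        store[key] = chunk
        index.append({"offset": offset, "length": len(chunk), "key": key})
    return index

def recall_chunk(index: list, store: dict, byte_offset: int) -> bytes:
    """Recall only the chunk containing the requested byte, not the whole object."""
    for entry in index:
        if entry["offset"] <= byte_offset < entry["offset"] + entry["length"]:
            return store[entry["key"]]
    raise KeyError("offset not covered by index")

store = {}
idx = migrate_in_chunks(b"abcdefghij", store)
assert recall_chunk(idx, store, 5) == b"efgh"  # recalls one chunk, not the object
```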
The snapshot agent 410 creates, takes, produces, and/or generates a snapshot or multiple snapshots of a data source, such as a primary volume of data or a secondary copy of a primary volume. As discussed herein, the snapshot is a representation of a set of data objects at a given point in time. The snapshot may be a complete image of a data set, or may be an incremental image of a data set. Further details with respect to the snapshot process and the types of snapshots may be found in U.S. patent application Ser. No. 10/990,353, filed on Nov. 15, 2004, entitled SYSTEM AND METHOD FOR PERFORMING AN IMAGE LEVEL SNAPSHOT AND FOR RESTORING PARTIAL VOLUME DATA.
Information regarding a snapshot is stored in a data structure. For example, such a data structure may be organized generally as follows:
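A minimal sketch of such a structure, with Python-style field names assumed from the field descriptions that follow:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SnapshotInfo:
    """Illustrative per-snapshot record; field names are assumptions."""
    snapshot_id: str                      # uniquely identifies the snapshot
    engine_id: str                        # engine that performed the snapshot
    source_id: str                        # source of the data that was imaged
    destination_id: str                   # where the snapshot is stored
    creation_time: str                    # timestamp of when the snapshot was made
    group_id: Optional[str] = None        # group to which the snapshot belongs
    snapshot_type: str = "full"           # type of the snapshot (e.g., full, incremental)
    storage_operation_ids: List[str] = field(default_factory=list)  # associated operations
    flags: int = 0                        # bits indicating various snapshot state
    prunable: bool = False                # whether the snapshot can be pruned
```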
In the above data structure, the Snapshot Identifiers may include information used to uniquely identify the snapshot. The Snapshot Engine Identifiers may include information used to identify the engine that performed the snapshot. Source Identifiers and Destination Identifiers may include information about the source of the data of which a snapshot was made and where the snapshot is stored, respectively. Creation Time may be a timestamp indicating when the snapshot was made. The Snapshot Group Identifiers may identify a group to which the snapshot belongs. The Snapshot Type may include information identifying a type of the snapshot. The Storage Operation Identifiers may include information identifying a storage operation and/or storage operation elements associated with the snapshot. Flags may include one or more flags or bits set to indicate various types of information regarding the snapshot, and Snapshot Pruning Information may include information about whether or not the snapshot can be pruned.
The index agent 420 creates, generates, and/or builds a data structure, such as an index, to be associated with one or more snapshots. As described more fully below, the index may be a two-tier index, a three-tier index, or another configuration, depending on the needs of the system. The two-tier index may include a first entry that contains information identifying a data object, such as a file or folder, and a second entry that identifies where the file or folder is located. As an alternative, the second entry may indicate where an archive file (the file stripped of its native format) is located.
The three-tier index includes the first and second entries, as well as a third entry that contains the application-specific data discussed herein. For example, the third entry, or tier, may contain information identifying an original mount point for an associated snapshot.
The three-tier index may track specific files on a snapshot that are of interest. It describes what is on the disk (or tape) as a whole, not just the per-file descriptions provided by the second tier. The third tier may include an entry that identifies where to find data within the snapshot when needed, based on an indication of what files were on the disk when the snapshot was taken and where they were located.
For example, the index agent 420 creates the index 425 relative to a file system associated with the disk, so as to describe all the files on that disk and their locations. The index tracks an original mount point, so recovery systems can find network-accessible data even as the data moves among network resources. For example, an original file named “system.txt” may have an original mount point at “E:/mount/snap1/user1/system.txt,” but the snapshot imaging the file may subsequently be remounted at a mount point at “F:/user1/system.txt.” The index, via the third tier, may track such information, such as information associated with movement of the files.
Thus, the file system identifies or presents the files of interest to the index agent, which creates the new index. The index maps contextual information associated with a snapshot of a volume, and the index data associates an application with its files of interest. Alternatively or additionally, the system may employ content indexing functions to discover content and provide that as a separate content index. Further details may be found in U.S. patent application Ser. No. 12/058,457, filed on Mar. 28, 2008, entitled METHOD AND SYSTEM FOR OFFLINE INDEXING OF CONTENT AND CLASSIFYING STORED DATA.
In some examples, the system creates an archive file when creating an archive copy or other secondary copies of a data set, such as a data set originating in a file system. The creation of an archive file enables the system, when storing or restoring data, to have both a logical view and a physical view of stored data. The logical view, represented by the archive file, enables the system to store data having a format that is neutral (or independent) with respect to data type. The physical view, represented by an index of locations on stored physical media, enables the system to locate the data stored on the physical media as chunks, tape extents, or blocks of the archive file.
The three-tier index may include two entries associated with a location of the file, such as information identifying a snapshot that imaged the file, as well as information identifying a location on secondary storage that contains the file. An additional entry provides application-specific data for the file, such as metadata. Thus, in some cases, the system creates a backup copy of a primary volume that includes a snapshot of the primary volume and a three-tier index that contains information associated with an identification of the file, information identifying a location of an archive file associated with the file, and information providing application context about the file (such as an original mount point for the snapshot).
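Putting the tiers together, a hedged sketch of a single entry (names and layout are illustrative, reusing the “system.txt” example above):

```python
three_tier_entry = {
    # Tier 1: identifies the data object
    "name": "system.txt",
    # Tier 2: locates the object, e.g., the snapshot that imaged it and
    # the archive file holding it on secondary storage
    "location": {"snapshot": "snap1", "archive_file": "archive1"},
    # Tier 3: application-specific context, such as the snapshot's
    # original (and any subsequent) mount point
    "context": {"original_mount_point": "E:/mount/snap1/user1/",
                "current_mount_point": "F:/user1/"},
}
```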
Referring to
As one example, the index 500 includes information associated with a data object named “Invention.txt.” This information includes a location of the archive file for the data object at “archive1” and information identifying a mount point for the snapshot that imaged the data object, at “C://snap1/user1.” The index 500 may contain information about some files imaged by a snapshot (such as certain files of interest), or may contain information about all the files imaged by the snapshot. The system may build the index as follows.
Referring to
In step 620, the index agent 420 receives or obtains context information associated with the snapshot. The index agent may query some or all of the data storage resources, such as a storage manager or jobs agent, to retrieve data associated with systems and applications that created the snapshot. For example, the index agent may query the Volume Snapshot Service (VSS) used to create the snapshot. The index agent may retrieve information for each of the individual files imaged by the snapshot, for the entire snapshot, or both. The application context information may include information about resources utilized by the snapshot agent (such as mount points), information from or about the file system and/or applications that created the snapshot, and so on.
In addition, the index agent calls the snapshot APIs to identify information associated with the snapshot. Examples of information received from the snapshot APIs include unique snapshot identifiers (which may be received from the snapshot hardware or generated by the index agent), source host information identifying the computing resource that originated the underlying data from which the snapshot was created, volume information, client identifiers, path identifiers, creation time, control host identifiers, source information, server identifiers, job identifiers, and so on. For example, the system, via an agent stored on a Microsoft Exchange server, may interact with an external RAID array on the Exchange server via APIs in order to retrieve information associated with snapshots performed on the Exchange server.
In step 630, the media agent stores the snapshot to storage media. For example, the media agent 112 transfers the snapshot 415 to storage media 430 using one or more of the data paths described with respect to
In step 640, the system stores the received application context information in an index that identifies individual files from the data set imaged by the snapshot, and in step 650, stores the index to the storage media. That is, the system builds an index, such as the three tier index described herein, to track information within the snapshot such that an original location of the data imaged by the snapshot can be determined from the index.
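A condensed sketch of steps 630 through 650 (names and data layout are assumptions; a dict stands in for storage media, and the application context gathered in step 620 is passed in):

```python
def build_snapshot_copy(snapshot_id, files, context_info, media):
    """Store a snapshot together with an index of application context so the
    stored pair can serve as a data source (e.g., a backup copy)."""
    # Step 630: transfer the snapshot image itself to the storage media
    media[snapshot_id] = {"image": f"<blocks of {snapshot_id}>"}
    # Step 640: record the application context against each file imaged
    index = {
        name: {"snapshot": snapshot_id, "location": loc, "context": context_info}
        for name, loc in files.items()
    }
    # Step 650: store the index on the same media as the snapshot
    media[snapshot_id]["index"] = index
    return index

# Usage mirroring the "snap1" example that follows (values are illustrative):
media = {}
build_snapshot_copy(
    "snap1",
    files={"user1/system.txt": "tape4:offset100-230"},
    context_info={"mount_point": "D:/users", "engine": "hardsnapB"},
    media=media,
)
```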
As an example, the system, via an index agent, receives a snapshot taken of a primary volume, such as “snap1.” The system queries a VSS, and determines the snapshot occurred at the mount point “D:/users,” and was performed by a mechanism known to the system as “hardsnapB.” The system then stores the snapshot and the associated information to a magnetic tape, named “tape4,” at location “offset100-230.” The system then updates an index, such as an index at a media agent that stored the snapshot, to include information associating the name of the tape with the name of the snapshot stored on the tape. Thus, an example index entry may be as follows:
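One plausible rendering of such an entry, assembled from the details just described (the column names are assumptions), is:

Snapshot | Mount Point | Engine | Media | Location |
---|---|---|---|---|
snap1 | D:/users | hardsnapB | tape4 | offset100-230 |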
The system may store the entry at the media agent or at other storage resources, such as a global manager. In addition, the system stores the entry along with the snapshot on the tape, to facilitate restoration of the data via the snapshot, effectively creating a copy of data (i.e., a data source), using a snapshot of the data.
In addition to creating the index, the system may add data to an existing archive table file, or other tables, to recognize that a particular data copy is a snapshot. For example, a flag may be set in the archive table file to indicate to the system that a copy is a snapshot-based copy. This may facilitate discovery of the copy, for example.
In some examples, the three-tier or multiple entry index may be stored in different locations across a data storage system. For example, information associated with the location of a snapshot on secondary storage (such as tape offset information) and the application specific information may be stored in a cache of a media agent that transfers the snapshot to the secondary storage, while the snapshot metadata may be stored by a data management component. Of course, the various indices may be stored in other locations in order to meet the needs of the system.
Data Recovery Using Snapshot-Based Data Sources
As described herein, the recovery of data, such as individual files, may be performed by restoring data from snapshot-based secondary copies, such as backup copies. Referring to
In step 720, the system identifies the snapshot that imaged the selected file. For example, the system may include a table, map or other data structure of file names and associated snapshots, and use the map to identify a snapshot that imaged the file (e.g., table 500 of
In step 740, the system retrieves information from the associated index. For example, the system retrieves the information associated with the selected file, such as information for an archive file associated with the selected file, information associated with the file system that created the selected file, and so on.
In step 750, the system locates and restores the selected file. For example, using the retrieved information from the associated index, the system locates the archive file and application specific information for the selected file, and restores the file.
As an example, a user wishes to restore “email.txt” from a data archive. A data recovery system receives input from the user to restore the file (step 710). The system, via table 500 of
Thus, by utilizing a snapshot-based data source as the vehicle for data recovery, the system is able to combine the speed of restoration associated with snapshots with the granularity associated with other backup methods, such as restoring individual files. The descriptive information in the index enables the system to quickly and efficiently identify the specific location of files imaged by the snapshot. That is, the combination of an image of a volume of data (via a snapshot) and knowledge of the mechanisms and resources used to create the file system (via an associated index) enables the system to restore data quickly and efficiently.
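A sketch of this granular restore path (function names, the snapshot mapping, and the media layout are assumptions; the “snap1” value is illustrative):

```python
def restore_file(filename, snapshot_map, media):
    """Granular restore of a single file from a snapshot-based copy."""
    # Step 720: identify the snapshot that imaged the file
    snapshot_id = snapshot_map[filename]
    # Step 740: retrieve the file's entry from the index stored with the snapshot
    entry = media[snapshot_id]["index"][filename]
    # Step 750: locate and restore just this file via its location info,
    # without restoring the entire snapshot or secondary copy
    return f"<contents of {entry['location']} in {snapshot_id}>"

# Usage mirroring the "email.txt" example above (values are illustrative):
snapshot_map = {"email.txt": "snap1"}
media = {"snap1": {"index": {"email.txt": {"location": "tape4:offset100-230"}}}}
print(restore_file("email.txt", snapshot_map, media))
```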
Seamless Restoration of Data
As described herein, in some cases it may be advantageous to mix or use multiple, different data storage operations when creating a secondary copy of a primary volume of data. For example, a data storage system may create a full backup of a volume of data at a first point in time, and then incrementally back up the volume at subsequent points in time, copying only the changes or modifications made to the volume after the full backup was created. Often, the full backup is more time-intensive and system-sensitive than the incremental backups, because more data is stored and more system resources are used during the full backup. Thus, a snapshot may be used to create the full backup, and other operations, such as continuous data replication of changes, copy-on-write snapshots, and so on, may be used for the subsequent incremental backups.
Referring to
In step 820, the system creates an incremental copy at a second, subsequent time. The system may employ continuous data protection (CDP) or other copy mechanisms, and may transfer data directly to tape or other storage media. CDP is advantageous because it virtually ensures an error-free transfer of data to the tape or to another, often remote, data store. In step 830, the system reviews the volume for any changes to the volume. The system may also review a change journal or other similar data structure. When changes are identified, routine 800 proceeds back to step 820 and performs an additional backup. Thus, a secondary copy of a volume of data is created and continually updated using backup processes well suited to the various tasks involved.
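A compact sketch of this loop; the snapshot and change-journal calls are stand-ins, not actual system interfaces:

```python
def take_snapshot(volume):
    """Stand-in for the fast, snapshot-based full backup of the volume."""
    return {"volume": volume, "kind": "snapshot-full"}

def read_change_journal(volume):
    """Stand-in for reviewing the volume or a change journal (step 830)."""
    return []  # no changes found in this sketch

def protect_volume(volume, media, cycles=3):
    """Full backup from a snapshot at a first time, then incremental
    copies (e.g., via CDP) whenever changes are identified."""
    media.append(("full", take_snapshot(volume)))
    for _ in range(cycles):
        changes = read_change_journal(volume)
        if changes:
            # Step 820: incremental copy of only the changed data
            media.append(("incremental", changes))

media = []
protect_volume("primary_volume_1", media)
```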
In order to restore data from such a secondary copy, the system may treat the different data sources in a similar fashion, thereby providing seamless and efficient data recovery to a user. For example, should the user wish to restore the entire volume, the system employs the fast snapshot-based recovery of the original data along with the safe, protected recovery of any subsequent changes to the data set. In addition, the system may provide for the restoration of individual files in a fast and effective manner, because the snapshot-based full backup is able to restore individual files without performing a full restore. Of course, in some cases it may be advantageous to create incremental copies as a snapshot-based data source. Additionally, the system may create part of a secondary copy as a snapshot-based data source and the rest of the secondary copy using other data storage operations.
In some examples, the system restores data from different types of secondary copies having different formats, such as snapshots and archive copies of data. The system may create and leverage an index that normalizes, or translates, the different data formats during data recovery. The index may include information that identifies the original location of data, information that identifies the current location of the data, and/or information that identifies the type of media containing the data. Thus, during a restore process, the system may review this index in order to determine a relative path to requested data. In some cases, the system may provide a user with some or all versions under management by the system. The system may facilitate searches across the index, such as those described in U.S. patent application Ser. No. 11/931,034, filed on Oct. 31, 2007, entitled METHOD AND SYSTEM FOR SEARCHING STORED DATA, which is incorporated by reference in its entirety.
For example, a request for “file1.doc” causes the system to review an index associating “file1.doc” with an original mount point (D:/snapshot1/) for a snapshot that imaged a volume containing “file1.doc,” and a current location of the snapshot (X:/tape1/snapshot1/file1.doc) now stored in a non-native format. The system can then convert the retrieved copy of the requested file to a native format, identify a path to the original mount point, and provide the requested file.
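A sketch of such a normalized lookup (the index fields and the format-conversion step are assumptions):

```python
normalizing_index = {
    "file1.doc": {
        "original_mount_point": "D:/snapshot1/",
        "current_location": "X:/tape1/snapshot1/file1.doc",
        "media_type": "tape",
        "native_format": False,
    }
}

def convert_to_native(data):
    return data  # placeholder: a real system would translate the stored format

def restore_via_index(name):
    entry = normalizing_index[name]
    data = f"<raw bytes at {entry['current_location']}>"  # stand-in media read
    if not entry["native_format"]:
        data = convert_to_native(data)
    # Re-associate the restored file with its original mount point
    return entry["original_mount_point"] + name, data

print(restore_via_index("file1.doc"))
```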
The system may be employed by current data storage systems that have snapshot capabilities. For example, the index agent and/or the snapshot agent may be introduced into a data storage system that performs snapshots, but does not utilize them as data sources, enabling the data storage system to perform the data storage operations described herein.
Conclusion
From the foregoing, it will be appreciated that specific examples of the data storage system have been described herein for purposes of illustration, but that various modifications may be made without deviating from the spirit and scope of the system. For example, although files have been described, other types of content such as user settings, application data, emails, and other data objects can be imaged by snapshots. Accordingly, the system is not limited except as by the appended claims.
Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense; that is to say, in the sense of “including, but not limited to.” The word “coupled,” as generally used herein, refers to two or more elements that may be either directly connected, or connected by way of one or more intermediate elements. Additionally, the words “herein,” “above,” “below,” and words of similar import, when used in this application, shall refer to this application as a whole and not to any particular portions of this application. Where the context permits, words in the above Detailed Description using the singular or plural number may also include the plural or singular number, respectively. The word “or,” in reference to a list of two or more items, covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list.
The above detailed description of embodiments of the system is not intended to be exhaustive or to limit the system to the precise form disclosed above. While specific embodiments of, and examples for, the system are described above for illustrative purposes, various equivalent modifications are possible within the scope of the system, as those skilled in the relevant art will recognize. For example, while processes or blocks are presented in a given order, alternative embodiments may perform routines having steps, or employ systems having blocks, in a different order, and some processes or blocks may be deleted, moved, added, subdivided, combined, and/or modified. Each of these processes or blocks may be implemented in a variety of different ways. Also, while processes or blocks are at times shown as being performed in series, these processes or blocks may instead be performed in parallel, or may be performed at different times.
The teachings of the system provided herein can be applied to other systems, not necessarily the system described above. The elements and acts of the various embodiments described above can be combined to provide further embodiments.
These and other changes can be made to the system in light of the above Detailed Description. While the above description details certain embodiments of the system and describes the best mode contemplated, no matter how detailed the above appears in text, the system can be practiced in many ways. Details of the system may vary considerably in implementation details, while still being encompassed by the system disclosed herein. As noted above, particular terminology used when describing certain features or aspects of the system should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the system with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the system to the specific embodiments disclosed in the specification, unless the above Detailed Description section explicitly defines such terms. Accordingly, the actual scope of the system encompasses not only the disclosed embodiments, but also all equivalent ways of practicing or implementing the system under the claims.
While certain aspects of the system are presented below in certain claim forms, the applicant contemplates the various aspects of the system in any number of claim forms. For example, while only one aspect of the system is recited as a means-plus-function claim under 35 U.S.C sec. 112, sixth paragraph, other aspects may likewise be embodied as a means-plus-function claim, or in other forms, such as being embodied in a computer-readable medium. (Any claims intended to be treated under 35 U.S.C. § 112, ¶6 will begin with the words “means for”.) Accordingly, the applicant reserves the right to add additional claims after filing the application to pursue such additional claim forms for other aspects of the system.
This application is a continuation of U.S. patent application Ser. No. 14/618,241, filed Feb. 10, 2015, which is a continuation of U.S. patent application Ser. No. 12/558,947, filed on Sep. 14, 2009, entitled USING A SNAPSHOT AS A DATA SOURCE, now U.S. Pat. No. 8,959,299, which claims priority to U.S. Patent Application No. 61/097,407, filed on Sep. 16, 2008, entitled USING A SNAPSHOT AS A DATA SOURCE, each of which is incorporated by reference in its entirety. This application is related to U.S. patent application Ser. No. 10/990,353, filed on Nov. 15, 2004, entitled SYSTEM AND METHOD FOR PERFORMING AN IMAGE LEVEL SNAPSHOT AND FOR RESTORING PARTIAL VOLUME DATA, now U.S. Pat. No. 7,539,707, and U.S. patent application Ser. No. 12/058,487, filed on Mar. 28, 2008, entitled METHOD AND SYSTEM FOR OFFLINE INDEXING OF CONTENT AND CLASSIFYING STORED DATA, now U.S. Pat. No. 8,170,995, each of which is incorporated by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
4686620 | Ng | Aug 1987 | A |
4995035 | Cole et al. | Feb 1991 | A |
5005122 | Griffin et al. | Apr 1991 | A |
5093912 | Dong et al. | Mar 1992 | A |
5133065 | Cheffetz et al. | Jul 1992 | A |
5193154 | Kitajima et al. | Mar 1993 | A |
5212772 | Masters | May 1993 | A |
5226157 | Nakano et al. | Jul 1993 | A |
5239647 | Anglin et al. | Aug 1993 | A |
5241668 | Eastridge et al. | Aug 1993 | A |
5241670 | Eastridge et al. | Aug 1993 | A |
5263154 | Eastridge et al. | Nov 1993 | A |
5276860 | Fortier et al. | Jan 1994 | A |
5276867 | Kenley et al. | Jan 1994 | A |
5287500 | Stoppani, Jr. | Feb 1994 | A |
5317731 | Dias et al. | May 1994 | A |
5321816 | Rogan et al. | Jun 1994 | A |
5333315 | Saether et al. | Jul 1994 | A |
5347653 | Flynn et al. | Sep 1994 | A |
5369757 | Spiro et al. | Nov 1994 | A |
5403639 | Belsan et al. | Apr 1995 | A |
5410700 | Fecteau et al. | Apr 1995 | A |
5448724 | Hayashi et al. | Sep 1995 | A |
5485606 | Midgdey et al. | Jan 1996 | A |
5491810 | Allen | Feb 1996 | A |
5495607 | Pisello et al. | Feb 1996 | A |
5504873 | Martin et al. | Apr 1996 | A |
5544345 | Carpenter et al. | Aug 1996 | A |
5544347 | Yanai et al. | Aug 1996 | A |
5559957 | Balk | Sep 1996 | A |
5559991 | Kanfi | Sep 1996 | A |
5604862 | Midgely et al. | Feb 1997 | A |
5619644 | Crockett et al. | Apr 1997 | A |
5638509 | Dunphy et al. | Jun 1997 | A |
5642496 | Kanfi | Jun 1997 | A |
5673381 | Huai et al. | Sep 1997 | A |
5699361 | Ding et al. | Dec 1997 | A |
5720026 | Uemura et al. | Feb 1998 | A |
5729743 | Squibb | Mar 1998 | A |
5751997 | Kullick et al. | May 1998 | A |
5758359 | Saxon | May 1998 | A |
5761677 | Senator et al. | Jun 1998 | A |
5764972 | Crouse et al. | Jun 1998 | A |
5765173 | Cane et al. | Jun 1998 | A |
5778395 | Whiting et al. | Jul 1998 | A |
5790114 | Geaghan et al. | Aug 1998 | A |
5812398 | Nielsen | Sep 1998 | A |
5813009 | Johnson et al. | Sep 1998 | A |
5813017 | Morris | Sep 1998 | A |
5819292 | Hitz et al. | Oct 1998 | A |
5875478 | Blumenau | Feb 1999 | A |
5878408 | Van Huben et al. | Mar 1999 | A |
5887134 | Ebrahim | Mar 1999 | A |
5901327 | Ofek | May 1999 | A |
5907672 | Matze et al. | May 1999 | A |
5924102 | Perks | Jul 1999 | A |
5950205 | Aviani, Jr. | Sep 1999 | A |
5974563 | Beeler, Jr. | Oct 1999 | A |
6021415 | Cannon et al. | Feb 2000 | A |
6021475 | Nguyen et al. | Feb 2000 | A |
6026414 | Anglin | Feb 2000 | A |
6052735 | Ulrich et al. | Apr 2000 | A |
6072490 | Bates et al. | Jun 2000 | A |
6076148 | Kedem et al. | Jun 2000 | A |
6094416 | Ying | Jul 2000 | A |
6101585 | Brown et al. | Aug 2000 | A |
6131095 | Low et al. | Oct 2000 | A |
6131148 | West et al. | Oct 2000 | A |
6131190 | Sidwell | Oct 2000 | A |
6148412 | Cannon et al. | Nov 2000 | A |
6154787 | Urevig et al. | Nov 2000 | A |
6161111 | Mutalik et al. | Dec 2000 | A |
6167402 | Yeager | Dec 2000 | A |
6195695 | Cheston et al. | Feb 2001 | B1 |
6205450 | Kanome | Mar 2001 | B1 |
6212512 | Barney et al. | Apr 2001 | B1 |
6260069 | Anglin | Jul 2001 | B1 |
6269431 | Dunham | Jul 2001 | B1 |
6275953 | Vahalia et al. | Aug 2001 | B1 |
6301592 | Aoyama et al. | Oct 2001 | B1 |
6311193 | Sekido | Oct 2001 | B1 |
6324581 | Xu et al. | Nov 2001 | B1 |
6328766 | Long | Dec 2001 | B1 |
6330570 | Crighton et al. | Dec 2001 | B1 |
6330642 | Carteau | Dec 2001 | B1 |
6343324 | Hubis et al. | Jan 2002 | B1 |
RE37601 | Eastridge et al. | Mar 2002 | E |
6356801 | Goodman et al. | Mar 2002 | B1 |
6366986 | St. Pierre et al. | Apr 2002 | B1 |
6366988 | Skiba et al. | Apr 2002 | B1 |
6374363 | Wu et al. | Apr 2002 | B1 |
6389432 | Pothapragada et al. | May 2002 | B1 |
6418478 | Ignatius et al. | Jul 2002 | B1 |
6421711 | Blumenau et al. | Jul 2002 | B1 |
6434681 | Armangau | Aug 2002 | B1 |
6473775 | Kusters et al. | Oct 2002 | B1 |
6487561 | Ofek et al. | Nov 2002 | B1 |
6519679 | Devireddy et al. | Feb 2003 | B2 |
6538669 | Lagueux, Jr. et al. | Mar 2003 | B1 |
6542972 | Ignatius et al. | Apr 2003 | B2 |
6564228 | O'Connor | May 2003 | B1 |
6604118 | Kleiman et al. | Aug 2003 | B2 |
6631477 | LeCrone et al. | Oct 2003 | B1 |
6643671 | Milillo et al. | Nov 2003 | B2 |
6647473 | Golds et al. | Nov 2003 | B1 |
6651075 | Kusters et al. | Nov 2003 | B1 |
6658526 | Nguyen et al. | Dec 2003 | B2 |
6662198 | Satyanarayanan et al. | Dec 2003 | B2 |
6665815 | Goldstein et al. | Dec 2003 | B1 |
6721767 | DeMeno et al. | Apr 2004 | B2 |
6728736 | Hostetter et al. | Apr 2004 | B2 |
6732125 | Autrey et al. | May 2004 | B1 |
6760723 | Oshinsky et al. | Jul 2004 | B2 |
6792518 | Armangau et al. | Sep 2004 | B2 |
6799258 | Linde | Sep 2004 | B1 |
6826661 | Umbehocker et al. | Nov 2004 | B2 |
6832299 | Shimada | Dec 2004 | B2 |
7386532 | Kiessig et al. | Feb 2005 | B2 |
6871271 | Ohran et al. | Mar 2005 | B2 |
6880051 | Timanaro-Perrotta | Apr 2005 | B2 |
6898688 | Martin et al. | May 2005 | B2 |
6912627 | Matsunami et al. | Jun 2005 | B2 |
6915313 | Yao | Jul 2005 | B2 |
6938135 | Kekre et al. | Aug 2005 | B1 |
6948038 | Berkowitz et al. | Sep 2005 | B2 |
6948089 | Fujibayashi | Sep 2005 | B2 |
6954834 | Slater et al. | Oct 2005 | B2 |
6959310 | Eshel et al. | Oct 2005 | B2 |
6981114 | Wu et al. | Dec 2005 | B1 |
6981177 | Beattie | Dec 2005 | B2 |
6993539 | Federwisch et al. | Jan 2006 | B2 |
7003641 | Prahlad et al. | Feb 2006 | B2 |
7035880 | Crescenti et al. | Apr 2006 | B1 |
7080088 | Lau | Jul 2006 | B1 |
7165079 | Chen et al. | Jan 2007 | B1 |
7174352 | Kleiman et al. | Feb 2007 | B2 |
7209972 | Ignatius et al. | Apr 2007 | B1 |
7225204 | Manley et al. | May 2007 | B2 |
7225208 | Midgley et al. | May 2007 | B2 |
7225210 | Guthrie | May 2007 | B2 |
7231544 | Tan et al. | Jun 2007 | B2 |
7234115 | Sprauve et al. | Jun 2007 | B1 |
7237075 | Welsh et al. | Jun 2007 | B2 |
7240219 | Teicher et al. | Jul 2007 | B2 |
7275177 | Armangau et al. | Sep 2007 | B2 |
7296125 | Ohran | Nov 2007 | B2 |
7346623 | Prahlad et al. | Mar 2008 | B2 |
7383538 | Bates et al. | Jun 2008 | B2 |
7395282 | Crescenti et al. | Jul 2008 | B1 |
7406048 | Datta et al. | Jul 2008 | B2 |
7412583 | Burton et al. | Aug 2008 | B2 |
7426052 | Cox et al. | Sep 2008 | B2 |
7454443 | Ram et al. | Nov 2008 | B2 |
7480779 | Tsuji | Jan 2009 | B2 |
7509316 | Greenblatt et al. | Mar 2009 | B2 |
7523278 | Thompson et al. | Apr 2009 | B2 |
7529782 | Prahlad et al. | May 2009 | B2 |
7539707 | Prahlad et al. | May 2009 | B2 |
7539735 | Fruchtman et al. | May 2009 | B2 |
7549028 | Thompson et al. | Jun 2009 | B2 |
7565572 | Yamasaki | Jul 2009 | B2 |
7567991 | Armangau et al. | Jul 2009 | B2 |
7568080 | Prahlad et al. | Jul 2009 | B2 |
7580950 | Kavuri et al. | Aug 2009 | B2 |
7587563 | Teterin et al. | Sep 2009 | B1 |
7596611 | Satish et al. | Sep 2009 | B1 |
7600219 | Tsantillis | Oct 2009 | B2 |
7620666 | Root et al. | Nov 2009 | B1 |
7651593 | Prahlad et al. | Jan 2010 | B2 |
7668884 | Prahlad et al. | Feb 2010 | B2 |
7707184 | Zhang et al. | Apr 2010 | B1 |
7716171 | Kryger | May 2010 | B2 |
7716183 | Lee | May 2010 | B2 |
7725440 | Reed et al. | May 2010 | B2 |
7734578 | Prahlad et al. | Jun 2010 | B2 |
7761456 | Cram et al. | Jul 2010 | B1 |
7840533 | Prahlad et al. | Nov 2010 | B2 |
7840537 | Gokhale et al. | Nov 2010 | B2 |
7844577 | Becker et al. | Nov 2010 | B2 |
7873806 | Prahlad et al. | Jan 2011 | B2 |
7882077 | Gokhale et al. | Feb 2011 | B2 |
7933927 | Dee et al. | Apr 2011 | B2 |
7979389 | Prahlad et al. | Jul 2011 | B2 |
8055625 | Prahlad et al. | Nov 2011 | B2 |
8095511 | Zwilling et al. | Jan 2012 | B2 |
8117410 | Lu et al. | Feb 2012 | B2 |
8140786 | Bunte et al. | Mar 2012 | B2 |
8140794 | Prahlad et al. | Mar 2012 | B2 |
8161077 | Zha et al. | Apr 2012 | B2 |
8170995 | Prahlad et al. | May 2012 | B2 |
8195623 | Prahlad et al. | Jun 2012 | B2 |
8219524 | Gokhale | Jul 2012 | B2 |
8250033 | De Souter et al. | Aug 2012 | B1 |
8285671 | Prahlad et al. | Oct 2012 | B2 |
8307177 | Prahlad et al. | Nov 2012 | B2 |
8401996 | Muller et al. | Mar 2013 | B2 |
8433682 | Ngo | Apr 2013 | B2 |
8433872 | Prahlad et al. | Apr 2013 | B2 |
8442944 | Prahlad et al. | May 2013 | B2 |
8468518 | Wipfel | Jun 2013 | B2 |
8489830 | Wu et al. | Jul 2013 | B2 |
8543998 | Barringer | Sep 2013 | B2 |
8544016 | Friendman et al. | Sep 2013 | B2 |
8578120 | Attarde et al. | Nov 2013 | B2 |
8583594 | Prahlad et al. | Nov 2013 | B2 |
8595191 | Prahlad et al. | Nov 2013 | B2 |
8655846 | Prahlad et al. | Feb 2014 | B2 |
8719767 | Bansod | May 2014 | B2 |
8726242 | Ngo | May 2014 | B2 |
8793222 | Stringham | Jul 2014 | B1 |
8805953 | Murphy et al. | Aug 2014 | B2 |
8898411 | Prahlad et al. | Nov 2014 | B2 |
8959299 | Ngo et al. | Feb 2015 | B2 |
9015181 | Kottomtharayil et al. | Apr 2015 | B2 |
9092500 | Varadharajan et al. | Jul 2015 | B2 |
9268602 | Prahlad et al. | Feb 2016 | B2 |
9298559 | Ngo | Mar 2016 | B2 |
10379957 | Ngo | Aug 2019 | B2 |
10402277 | Ngo et al. | Sep 2019 | B2 |
20020107877 | Whiting et al. | Aug 2002 | A1 |
20030018657 | Monday | Jan 2003 | A1 |
20030028514 | Lord et al. | Feb 2003 | A1 |
20030028736 | Berkowitz et al. | Feb 2003 | A1 |
20030033346 | Carlson et al. | Feb 2003 | A1 |
20030140070 | Kaczmarski et al. | Jul 2003 | A1 |
20030158861 | Sawdon et al. | Aug 2003 | A1 |
20030167380 | Green et al. | Sep 2003 | A1 |
20030177149 | Coombs | Sep 2003 | A1 |
20030195886 | Vishlitzky et al. | Oct 2003 | A1 |
20040139125 | Strassburg et al. | Jul 2004 | A1 |
20040170374 | Bender et al. | Sep 2004 | A1 |
20040230566 | Balijepalli et al. | Nov 2004 | A1 |
20040250033 | Prahlad et al. | Dec 2004 | A1 |
20040260678 | Verbowski et al. | Dec 2004 | A1 |
20050086241 | Ram et al. | Apr 2005 | A1 |
20050203864 | Schmidt et al. | Sep 2005 | A1 |
20050216788 | Mani-Meitav et al. | Sep 2005 | A1 |
20060190460 | Chandrasekaran et al. | Aug 2006 | A1 |
20060224846 | Amarendran et al. | Oct 2006 | A1 |
20070043790 | Kryger | Feb 2007 | A1 |
20070185925 | Prahlad et al. | Aug 2007 | A1 |
20070185938 | Prahlad et al. | Aug 2007 | A1 |
20070185939 | Prahland et al. | Aug 2007 | A1 |
20070185940 | Prahlad et al. | Aug 2007 | A1 |
20070220320 | Sen | Sep 2007 | A1 |
20080028009 | Ngo | Jan 2008 | A1 |
20080091655 | Gokhale et al. | Apr 2008 | A1 |
20080183775 | Prahlad et al. | Jul 2008 | A1 |
20080228771 | Prahlad et al. | Sep 2008 | A1 |
20080229037 | Bunte et al. | Sep 2008 | A1 |
20080243879 | Gokhale et al. | Oct 2008 | A1 |
20080243953 | Wu et al. | Oct 2008 | A1 |
20080294605 | Prahlad et al. | Nov 2008 | A1 |
20090182963 | Prahlad et al. | Jul 2009 | A1 |
20090260007 | Beaty et al. | Oct 2009 | A1 |
20090276771 | Nickolov et al. | Nov 2009 | A1 |
20090300641 | Friedman et al. | Dec 2009 | A1 |
20090319534 | Gokhale | Dec 2009 | A1 |
20090319585 | Gokhale | Dec 2009 | A1 |
20100070474 | Lad | Mar 2010 | A1 |
20100070725 | Prahlad et al. | Mar 2010 | A1 |
20100076934 | Pershin et al. | Mar 2010 | A1 |
20100077165 | Lu et al. | Mar 2010 | A1 |
20100082672 | Kottomtharayil et al. | Apr 2010 | A1 |
20100122248 | Robinson et al. | May 2010 | A1 |
20100211547 | Kamei et al. | Aug 2010 | A1 |
20100250824 | Belay | Sep 2010 | A1 |
20100257142 | Murphy et al. | Oct 2010 | A1 |
20100293144 | Bonnet | Nov 2010 | A1 |
20100293146 | Bonnet | Nov 2010 | A1 |
20100312754 | Bear et al. | Dec 2010 | A1 |
20100313185 | Gupta et al. | Dec 2010 | A1 |
20110131187 | Prahlad et al. | Jun 2011 | A1 |
20110153697 | Nickolov et al. | Jun 2011 | A1 |
20110264620 | Prahlad et al. | Oct 2011 | A1 |
20130013563 | Prahlad et al. | Jan 2013 | A1 |
20130246360 | Ngo | Apr 2013 | A1 |
20130332610 | Beveridge | Dec 2013 | A1 |
20140075440 | Prahlad et al. | Mar 2014 | A1 |
20140114922 | Prahlad et al. | Apr 2014 | A1 |
20140279950 | Shapiro et al. | Sep 2014 | A1 |
20150169413 | Ngo et al. | Jun 2015 | A1 |
20150193229 | Bansod et al. | Jul 2015 | A1 |
20160224429 | Prahlad | Aug 2016 | A1 |
20160246680 | Ngo | Aug 2016 | A1 |
20160299908 | Bansod et al. | Oct 2016 | A1 |
20190324860 | Ngo | Oct 2019 | A1 |
Number | Date | Country |
---|---|---|
0259912 | Mar 1988 | EP |
0405926 | Jan 1991 | EP |
0467546 | Jan 1992 | EP |
0774715 | May 1997 | EP |
0809184 | Nov 1997 | EP |
0838758 | Apr 1998 | EP |
0899662 | Mar 1999 | EP |
0981090 | Feb 2000 | EP |
1349088 | Oct 2003 | EP |
1380947 | Jan 2004 | EP |
1579331 | Sep 2005 | EP |
2256952 | Dec 1992 | GB |
2411030 | Aug 2005 | GB |
05189281 | Jul 1993 | JP |
06274605 | Sep 1994 | JP |
09016463 | Jan 1997 | JP |
11259348 | Sep 1999 | JP |
2000347811 | Dec 2000 | JP |
9303549 | Feb 1993 | WO |
9513580 | May 1995 | WO |
9912098 | Mar 1999 | WO |
2001004755 | Jan 2001 | WO |
2002088943 | Nov 2002 | WO |
WO 02088943 | Nov 2002 | WO |
2003028183 | Apr 2003 | WO |
2003046768 | Jun 2003 | WO |
2004034197 | Apr 2004 | WO |
2007021997 | Feb 2007 | WO |
2008080143 | Jul 2008 | WO |
Entry |
---|
“Software Builds and the Virtual Machine,” Dr. Dobb's, Jan. 23, 2008, 2 pages. |
Armstead et al., “Implementation of a Campwide Distributed Mass Storage Service: The Dream vs. Reality,” IEEE, Sep. 11-14, 1995, pp. 190-199. |
Arneson, “Mass Storage Archiving in Network Environments,” Digest of Papers, Ninth IEEE Symposium on Mass Storage Systems, Oct. 31, 1988-Nov. 3, 1988, pp. 45-50, Monterey, CA. |
Cabrera et al., “ADSM: A Multi-Platform, Scalable, Backup and Archive Mass Storage System,” Digest of Papers, Compcon '95, Proceedings of the 40th IEEE Computer Society International Conference, Mar. 5, 1995-Mar. 9, 1995, pp. 420-427, San Francisco, CA. |
CNET Reviews, “IPStor Enterprise Edition ZeroImpact Backup Enabler Option—(V.4.0) Manufacturer Description”, May 8, 2004, 1 page. |
CommVault Partner Advantage, “CommVault First to Market with Complete ‘Zero Impact’ Backup Solutions for Mixed Windows and UNIX Environments”, <http://partners.commvault.com/microsoft/microsoft_news_story.asp?id=164>, Sep. 25, 2002, 2 pages. |
CommVault Systems, Inc., “CommVault Galaxy Express 7.0 Backup & Recovery,” copyright date 1999-2007, 4 pages. |
CommVault Systems, Inc., “CommVault QiNetix: Architecture Overview,” CommVault Systems White Paper, 2005, 35 pages. |
CommVault Systems, Inc., “CommVault Simpana Software with SnapBackup,” copyright date 1999-2009, 6 pages. |
Commvault, “Remote Backup,” <http://documentation.commvault.com/commvault/release_8_0_0/books_online_1/english_us/features/ddr/ddr.htm>, internet accessed on Dec. 17, 2009, 8 pages. |
CommVault, “Snap Backup,” <http://documentation.commvault.com/commvault/release_8_0_0/books_online_1/english_us/features/snap_backup/snap_backup.htm>, internet accessed on Dec. 17, 2009, 7 pages. |
CommVault, “Snapshots,” <http://documentation.commvault.com/commvault/release_8_0_0/books_online_1/english_us/features/snapshots/snapshots.htm>, internet accessed on Dec. 15, 2009, 2 pages. |
CommVault, “Volume Shadow Services (VSS),” <http://documentation.commvault.com/commvault/release_8_0_0/books_online_1/english_us//features/snapshots/vss/vss.htm>, internet accessed on Dec. 23, 2009, 1 page. |
Eitel, “Backup and Storage Management in Distributed Heterogeneous Environments,” IEEE, Jun. 12-16, 1994, pp. 124-126. |
EMC Corporation, “EMC CLARiiON CX Series,” May 2006, 7 pages. |
EMC Corporation, “EMC CLARiiON CX3 UltraScale Series,” Feb. 2008, 6 pages. |
EMC Corporation, “EMC Symmetrix DMX Series,” Jan. 2008, 8 pages. |
European Examination Report in European Application No. 09815090.7, dated Jan. 2, 2017, 8 pages. |
Extended European Search Report in Application No. 09815090.7, dated Oct. 25, 2012, 8 pages. |
FalconStor Software, “Impact-free Backup of Vmware Environments”, http://www.falconstor.com/dmdocuments/HyperTrac_for_VMware_SB_HR.pdf>, 2011, 2 pages. |
FalconStor Software, “Unified Backup & DR for Vmware Environments”, http://www.falconstor.com/dmdocuments/UniBU-DR_CDP_ SB_100520.pdf>, 2001, 2 pages. |
FalconStor Software, “Zero-impact Unified Backup & DR”, <http://www.falconstor.com/solutions/solutions-for-server-virtualization/vmware-solutions/zero-impact-unified-backup-a-dr>, undated, internet accessed May 2, 2012, 1 page. |
Fegreus, CommVault Simpana 8.0, Mar. 3, 2010, http://www.virtual-strategy.com/2010/03/03/commvault-simpana. |
Gait, J., “The Optical File Cabinet: A Random-Access File System for Write-Once Optical Disks,” IEEE Computer, vol. 21, No. 6, pp. 11-22 (Jun. 1988). |
Garimella, N., “Understanding and Exploiting Snapshot Technology for Data Protection, Part 1: Snapshot Technology Overview,” <http://www.ibm.com/developerworks/tivoli/library/t-snaptsml/index.html>internet accessed on Dec. 22, 2009, 8 pages. |
Harriman-Polanski, CommVault Galaxy Enhances Data Protection, Reprinted from Dell Power Solutions, May 2006. |
Hitachi Data Systems, “Hitachi HiCommand Protection Manager Software,” Feb. 2007, 2 pages. |
International Search Report and Written Opinion for International Application No. PCT/US09/57102, dated Nov. 6, 2009, 14 pages. |
International Search Report and Written Opinion for International Application No. PCT/US10/62146, dated Feb. 18, 2011, 9 pages. |
International Search Report and Written Opinion for International Application No. PCT/US10/62158; dated Feb. 23, 2011, 8 pages. |
International Search Report and Written Opinion for International Application No. PCT/US2004/038323, dated Feb. 19, 2008, 10 pages. |
Jander, M., “Launching Storage-Area Net,” Data Communications, US, McGraw Hill, NY, vol. 27, No. 4 (Mar. 21, 1998), pp. 64-72. |
Managing Data More Effectively in Virtualized Environments with CommVault® Simpana® Universal Virtual Software Agent, © 1999-2009. |
Marshall, David, “Veeam's SureBackup transforms VMware image backups,” <http://www.infoworld.com/print/117315>, internet accessed on Mar. 23, 2010, 4 pages. |
Microsoft TechNet, “How Volume Shadow Copy Service Works,” <http://technet.microsoft.com/en-us/library/cc785914(WS.10,printer).aspx>, internet accessed on Dec. 17, 2009, 4 pages. |
Microsoft TechNet, “Overview of Exchange Server Backup Methods,” <http://technet.microsoft.com/en-us/library/aa996125(EXCHG.65,printer).aspx>, internet accessed on Dec. 29, 2009, 3 pages. |
Microsoft TechNet, “What is Volume Shadow Copy Service?” Mar. 28, 2003, 5 pages. |
Microsoft, “Microsoft System Center Data Protection Manager 2007: Microsoft Exchange Server,” undated, 2 pages. |
Microsoft, “Microsoft System Center Data Protection Manager 2007: Microsoft SharePoint Products and Technologies,” undated, 2 pages. |
Microsoft, “Microsoft System Center Data Protection Manager 2007: Product Overview,” undated, 2 pages. |
Microsoft.com, “XADM: Hot Split Snapshot Backups of Exchange,” <http://support.microsoft.com/kb/311898/>, internet accessed on Dec. 29, 2009, 5 pages. |
MSDN, “Backup Sequence Diagram,” <http://msdn.microsoft.com/en-us/library/ms986539(EXCHG.65,printer).aspx>, internet accessed on Dec. 30, 2009, 1 page. |
MSDN, “Exchange Transaction Logs and Checkpoint Files,” <http://msdn.microsoft.com/en-us/library/ms986143(EXCHG.65,printer).aspx>, internet accessed on Dec. 30, 2009, 1 page. |
MSDN, “Identifying Required Transaction Logs,” <http://msdn.microsoft.com/en-us/library/ms986606(EXCHG.65,printer).aspx>, internet accessed on Dec. 30, 2009, 1 page. |
MSDN, “Overview of Processing a Backup Under VSS,” <http://msdn.microsoft.com/en-us/library/aa384589(VS.85,printer).aspx>, internet accessed on Dec. 18, 2009, 3 pages. |
MSExchange.org, “Exchange log disk is full, Prevention and Remedies,” <http://www.msexchange.org/articles/exchange-log-disk-full.html?printversion>, internet accessed on Dec. 30, 2009, 7 pages. |
NetApp, “NetApp SnapManager for Microsoft Exchange,” 2009, 2 pages. |
Network Appliance, Inc., “Network Appliance Snapshot Technology,” copyright 2004, 1 page. |
OpenAir.com, Product Update—Jun. 21, 2001, <http://web.archive.org/web/20011007153900/http://www.openair.com/home/n_p_update062101.html>, Oct. 2001, 3 pages. |
Oracle Corporation, “Realizing the Superior Value of Oracle ZFS Storage Appliance,” Oracle White Paper, Redwood Shores, California, Mar. 2015, 12 pages. |
Partial Supplementary European Search Report in Application No. 10841622.3, dated Feb. 11, 2015, 5 pages. |
Robinson, Simon, “CommVault Unveils QiNetix to Unite Data Movement with Storage Management”, 451 Research, Oct. 11, 2002, 3 pages. |
Rosenblum et al., “The Design and Implementation of a Log-Structured File System,” Operating Systems Review SIGOPS, vol. 25, No. 5, New York, US, pp. 1-15 (May 1991). |
Tanenbaum, Andrew S. Structured Computer Organization, 1984, Prentice-Hall, Inc. second edition, pp. 10-12. |
Veeam Software, “The New Standard for Data Protection,” internet accessed on Mar. 23, 2010, 2 pages. |
Veritas Software Corporation, “Veritas Volume Manager 3.2, Administrator's Guide,” Aug. 2001, 360 pages. |
Wikipedia.org, “Snapshot (computer storage),” <http://en.wikipedia.org/w/index.php?title=Snapshot_(computer_storage)>, internet accessed on Dec. 15, 2009, 3 pages. |
Dell Storage Engineering,“ Deploying Solaris 11 with EqualLogic Arrays,” Dell, Inc., Feb. 2014, 17 pages. |
Number | Date | Country | |
---|---|---|---|
20200057696 A1 | Feb 2020 | US |
Number | Date | Country | |
---|---|---|---|
61097407 | Sep 2008 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14618241 | Feb 2015 | US |
Child | 16553090 | US | |
Parent | 12558947 | Sep 2009 | US |
Child | 14618241 | US |