Single instancing in a data management system is the process of attempting to store only a single instance of a file or data object on a storage device. In certain single instancing systems, a separate folder on the file system of the storage device is created for each backup or copy job performed. The files or data objects that are to be stored as a result of the backup or copy job are then placed in the separate folder.
Because there may be numerous computing systems in a data management system, each requiring multiple backup or copy jobs, these techniques may result in the creation of numerous folders, each containing numerous files. For example, if there are hundreds of computing systems, each having thousands of files or data objects to be backed up or copied, backing up or copying all of their files or data objects may potentially result in the creation of millions of files on the secondary storage device.
Certain file systems may not be capable of storing millions of files or more. Other file systems may be well-equipped to handle storing millions of files or more, but may not perform optimally in such situations. Accordingly, a system that provides for the backup or copy of large numbers of files across multiple computing systems would have significant utility.
The need exists for a system that overcomes the above problems, as well as one that provides additional benefits. Overall, the examples herein of some prior or related systems and their associated limitations are intended to be illustrative and not exclusive. Other limitations of existing or prior systems will become apparent to those of skill in the art upon reading the following Detailed Description.
In the drawings, the same reference numbers and acronyms identify elements or acts with the same or similar functionality for ease of understanding and convenience. To easily identify the discussion of any particular element or act, the most significant digit or digits in a reference number generally refer to the Figure number in which that element is first introduced (e.g., element 102 is first introduced and discussed with respect to FIG. 1).
The headings provided herein are for convenience only and do not necessarily affect the scope or meaning of the claimed invention.
Overview
Described in detail herein are systems and methods for managing single instanced data (alternatively called deduplicated data) in a data storage network. Using a single instance database and other constructs (e.g., sparse files), data density on archival media (e.g., magnetic tape) is improved, and the number of files per storage operation is reduced. According to one aspect of a method for managing single instanced data, for each storage operation, a chunk folder is created on a storage device that stores single instanced data. The chunk folder contains three files: 1) a container file that contains data objects that have been single instanced; 2) a container file that contains data objects that were not eligible for single instancing; and 3) an index file used to track the location of data objects within the other two files. A second storage operation subsequent to a first storage operation contains references to data objects in the chunk folder created by the first storage operation instead of the data objects themselves.
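To make the chunk folder layout concrete, the following is a minimal Python sketch of creating the per-storage-operation folder and its three files; the function and file names are hypothetical and merely stand in for whatever naming convention an actual implementation would use.

```python
import os

def create_chunk_folder(root: str, job_id: str) -> dict:
    """Create a chunk folder for one storage operation, holding an "S" file
    for single instanced data objects, an "N" file for objects not eligible
    for single instancing, and an index file tracking object locations."""
    chunk = os.path.join(root, f"chunk_{job_id}")
    os.makedirs(chunk, exist_ok=True)
    paths = {name: os.path.join(chunk, name) for name in ("S", "N", "index")}
    for path in paths.values():
        open(path, "ab").close()  # create the file if it does not yet exist
    return paths
```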
By storing multiple data objects in a small number of container files (as few as two), the storing of each data object as a separate file on the file system of the storage device can be avoided. This reduces the number of files that would be stored on the file system of the storage device, thereby ensuring that the storage device can adequately store the data of computing devices in the data storage network. Therefore, the file system of the storage device may not necessarily have to contend with storing excessively large numbers of files, such as millions of files or more. Accordingly, these techniques enable very large numbers of data objects to be stored without regard to limitations of the file system of the storage device.
Various examples of the invention will now be described. The following description provides specific details for a thorough understanding and enabling description of these examples. One skilled in the art will understand, however, that the system may be practiced without many of these details. Additionally, some well-known structures or functions may not be shown or described in detail, so as to avoid unnecessarily obscuring the relevant description of the various examples.
The terminology used in the description presented below is intended to be interpreted in its broadest reasonable manner, even though it is being used in conjunction with a detailed description of certain specific examples of the system. Certain terms may even be emphasized below; however, any terminology intended to be interpreted in any restricted manner will be overtly and specifically defined as such in this Detailed Description.
Aspects of the invention will now be described in detail with respect to
Suitable Environments
The media agent 104 includes several components, each performing a distinct function: a data object identification component 110, an identifier generation component 120, an identifier comparison component 125, and a criteria evaluation component 130. The data object identification component 110 identifies files or data objects, such as in response to a storage operation. The identifier generation component 120 generates an identifier for the file or data object (identifiers are discussed in more detail below). The identifier comparison component 125 compares identifiers of various files or data objects to determine if the files or data objects contain similar data (for example, the identifier comparison component 125 can compare identifiers of two or more files or data objects to determine if the files or data objects contain the same data, even though metadata of the two or more files or data objects, such as access control lists (ACLs) or descriptive metadata that describes the files or data objects (e.g., file name, file size, file author, etc.), may differ). The criteria evaluation component 130 evaluates aspects of files or data objects against a set of criteria. The media agent 104 may also contain other components that perform other functions.
The clients 102, as part of their functioning, utilize data, which includes files, directories, metadata (e.g., ACLs, descriptive metadata, and any other streams associated with the data), and other data objects. (More details as to the storage operations involving ACLs may be found in the assignee's U.S. patent application Ser. No. 12/058,518, entitled SYSTEM AND METHOD FOR STORAGE OPERATION ACCESS SECURITY, the entirety of which is incorporated by reference herein.) The data on the clients 102 is generally a primary copy (e.g., a production copy). During a copy, backup, archive or other storage operation, the clients 102 send a copy of each data object in the data to the media agent 104. The media agent 104 generates an identifier for each data object.
Examples of identifiers include a hash value, message digest, checksum, digital fingerprint, digital signature or other sequence of bytes that substantially uniquely identifies the file or data object in the data storage system. For example, identifiers could be generated using Message Digest Algorithm 5 (MD5) or Secure Hash Algorithm (SHA)-512. In some instances, the phrase “substantially unique” is used to modify the term “identifier” because algorithms used to produce hash values may result in collisions, where two different data objects, when hashed, result in the same hash value. However, depending upon the algorithm or cryptographic hash function used, collisions should be suitably rare and thus the identifier generated for a file or data object should be unique throughout the system. The term “probabilistically unique identifier” may also be used. In this case, the phrase “probabilistically unique” is used to indicate that collisions should be low-probability occurrences, and, therefore, the identifier should be unique throughout the system. In some examples, data object metadata (e.g., file name, file size) is also used to generate the identifier for the data object.
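As an illustration only, such an identifier could be computed with a standard library hash function; the sketch below uses SHA-512, one of the algorithms named above, and optionally mixes in metadata, though an actual implementation may differ.

```python
import hashlib

def generate_identifier(data: bytes, metadata: bytes = b"") -> str:
    """Compute a substantially unique identifier for a data object.
    Collisions are possible in principle but should be suitably rare."""
    digest = hashlib.sha512()
    digest.update(data)
    digest.update(metadata)  # optional: fold in file name, file size, etc.
    return digest.hexdigest()
```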
After generating the identifier for a data object, the media agent 104 determines whether the data object should be stored on the storage device 103, which stores a secondary copy (e.g., a backup copy) of the data of the clients 102. To determine this, the media agent 104 accesses the single instance database 105 to check if a copy or instance of the data object has already been stored on the storage device 103. The single instance database 105 utilizes one or more tables or other data structures to store the identifiers of the data objects that have already been stored on the storage device 103. If a copy or instance of the data object has not already been stored on the storage device 103, the media agent 104 sends the copy of the data object to the storage device 103 for storage and adds its identifier to the single instance database 105. If a copy or instance of the data object has already been stored, the media agent 104 can avoid sending another copy to the storage device 103. In this case, the media agent 104 may add a reference (e.g., to an index in the single instance database 105, such as by incrementing a reference count in the index) to the already stored instance of the data object, and may store only a pointer to the data object on the storage device 103. As explained below, adding a reference to the already stored instance of the data object enables storing only a single instance of the data object while still keeping track of other instances of the data object that do not need to be stored.
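The store-or-reference decision can be sketched as follows; here `si_db` is a plain dictionary standing in for the single instance database 105, and `container` is any writable binary file object standing in for the storage device 103, both hypothetical simplifications.

```python
import hashlib

def store_or_reference(data: bytes, si_db: dict, container) -> str:
    """Store the first instance of a data object, or merely increment the
    reference count for an instance that is already stored."""
    identifier = hashlib.sha512(data).hexdigest()
    if identifier in si_db:
        si_db[identifier] += 1   # already stored: add a reference only
    else:
        si_db[identifier] = 1    # first instance: store the data itself
        container.write(data)
    return identifier
```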
In some examples, instead of the clients 102 sending the data objects to the media agent 104 and the media agent 104 generating the identifiers, the clients 102 can themselves generate an identifier for each data object and transmit the identifiers to the media agent 104 for lookup in the single instance database 105. If the media agent 104 determines that an instance of a data object has not already been stored on the storage device 103, the media agent 104 can instruct the client 102 to send it a copy of the data object, which it then stores on the storage device 103. Alternatively, the client 102 itself can send the copy of the data object to the storage device 103. More details as to the generation of the identifiers may be found in the assignee's U.S. patent application Ser. No. 12/058,367, entitled SYSTEM AND METHOD FOR STORING REDUNDANT INFORMATION, the entirety of which is incorporated by reference herein. In some examples, the media agent 104 generates the identifier on data already stored on the storage device 103 or on other storage devices (e.g., secondarily stored data is single instanced).
The media agent 104 can support encrypted data objects. For example, one client 102 could generate an identifier for a data object and then encrypt the data object using one encryption algorithm, while another client 102 could generate an identifier for another data object and then encrypt that data object using another encryption algorithm. If the two data objects are identical (meaning the two objects have the same data, while their metadata, such as ACLs or descriptors, could be different), they will both have the same identifier. The media agent 104 can then store both encrypted instances of the data object or only a single encrypted instance. In some examples, the media agent 104 stores a key or other mechanism to be used to encrypt and/or decrypt data. The media agent 104 can also support compressed data objects. In general, the same compression algorithm may be used to compress data objects. Therefore, the media agent 104 can generate an identifier for a data object before or after it has been compressed. More details as to how the media agent 104 can support encryption and compression in a single instancing system may be found in the assignee's U.S. patent application Ser. No. 12/145,342, entitled APPLICATION-AWARE AND REMOTE SINGLE INSTANCE DATA MANAGEMENT, the entirety of which is incorporated by reference herein.
Suitable Data Structures and Examples
The chunk folder 202 and the files 204-208 may be equivalent to a directory and files (or folder and files) on a file system. For example, the chunk folder 202 may be a directory and the files 204-208 may be files located within the directory. As another example, the chunk folder 202 may be a file and the files 204-208 may be portions of the file. As another example, the files 204-208 may be collections of bytes grouped together. Those of skill in the art will understand that the chunk folder 202 and the files 204-208 may be embodied in various data structures and are not limited to a directory and files within the directory.
The media agent 104 places data objects in the “S” file 208 that meet certain criteria for single instancing. These criteria may include the following: 1) that the data object has been determined to be data or of type data (as opposed to metadata or of type metadata); and 2) that the data object is larger than a pre-configured size, such as 64 Kb. Type data is generally the payload portion of a file or data object (e.g., a file's contents) and type metadata is generally the metadata portion of the file or data object (e.g., metadata such as file name, file author, etc.). The pre-configured size may be configurable by an administrator or other user with the appropriate permissions. For example, if the administrator wants all data objects of type data to be single instanced, the administrator can set the pre-configured size to 0 Kb. As another example, if the administrator wants only data objects of type data greater than 128 Kb to be single instanced, the administrator can set the pre-configured size to 128 Kb.
The media agent 104 determines if a data object meets these criteria by evaluating aspects of the data object (e.g., its type, its size) against the criteria. If the criteria are met, and the data object has not already been stored on the storage device 103 (which the media agent determines by generating an identifier for the data object and looking up the identifier in the single instance database 105), the media agent 104 places the data object in the “S” file 208. The media agent 104 may also apply other criteria that the data object must meet for single instancing (e.g., criteria based upon characterizing or classifying the data object using techniques such as those described in commonly assigned U.S. patent application Ser. No. 11/564,119, entitled SYSTEMS AND METHODS FOR CLASSIFYING AND TRANSFERRING INFORMATION IN A STORAGE NETWORK, the entirety of which is incorporated by reference herein).
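A minimal sketch of this criteria evaluation might look like the following; the 64 Kb default mirrors the example above, and setting `min_size` to 0 reproduces the administrator's configuration in which all data objects of type data are single instanced.

```python
DEFAULT_MIN_SIZE = 64 * 1024  # pre-configured size; adjustable by an administrator

def eligible_for_single_instancing(object_type: str, size_in_bytes: int,
                                   min_size: int = DEFAULT_MIN_SIZE) -> bool:
    """Apply the two criteria above: the object must be of type data
    (not metadata) and larger than the pre-configured size."""
    return object_type == "data" and size_in_bytes > min_size
```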
For each data object that is placed in the “S” file 208, the media agent 104 adds a reference to the data object in the index file 204, called an internal reference. For example, the internal reference may be a pointer or link to the location of the data object in the “S” file. As further described herein, the media agent 104 maintains a primary table that contains all the single instance records of all data objects for which an identifier was created. The media agent 104 may add as the internal reference a record of the already stored instance of the data object from the primary table.
The media agent 104 places data objects in the “N” file 206 that do not meet the above criteria for single instancing. For example, a data object may be metadata (e.g., ACLs for a file that is placed in the “S” file, file descriptor information, etc.). In this case, the data object will be placed in the “N” file. As another example, a data object may be smaller than the pre-configured size, e.g., smaller than 64 Kb. In this case, the overhead of generating the data object's identifier and looking up the identifier in the single instance database 105 may outweigh any benefit of single instancing the data object. Therefore, the data object is placed in the “N” file. For each data object that is placed in the “N” file 206, the media agent 104 may also add a reference to the data object in the index file 204, called an internal reference. For example, the internal reference may be a pointer or link to the location(s) of the data object in the “N” file. A new “N” file may be created during each storage operation job.
The second storage operation would result in the creation of the second chunk folder 202′ illustrated in
In some cases, instead of always placing data objects in the “N” file 206 that do not meet the above criteria for single instancing, the media agent 104 generates an identifier for the data object, looks up the identifier in the single instance database 105 to see if the data object has already been stored, and if not, places it in the “S” file 208. If the data object has already been stored, the media agent would then add a pointer to the location of the instance of the previously stored data object in the index file 204. For example, this variation on the process could be used to single instance metadata instead of always storing it in the “N” file 206.
One advantage of the data structures 200, 210, 220 illustrated in
By storing multiple data objects in a small number of container files (as few as two), the storing of each data object as a separate file on the file system of the storage device can be avoided. This reduces the number of files that would be stored on the file system of the storage device, thereby ensuring that the storage device can adequately store the data of computing devices in the data storage network. Therefore, the file system of the storage device may not necessarily have to contend with storing excessively large numbers of files, such as millions of files or more. Accordingly, these techniques enable very large numbers of data objects to be stored without regard to limitations of the file system of the storage device.
Even if the media agent 104 performs numerous storage operations using these data structures 200, 210, this will result in far fewer files on the storage device 103 than storage operations where each involved data object is stored as a separate file. Another advantage is that the index files 204 could be used to replicate the data stored in the single instance database 105 or reconstruct the single instance database 105 if its data is ever lost or corrupted. This is because the index files 204 may store essentially the same information as what is stored in the single instance database 105.
However, the storage of data objects in containers such as the “N” file 206 and the “S” file 208 may create additional complexities when it comes time to prune or delete data objects involved in previous storage operations. This is because the data objects are not stored as files on the file system and thus cannot be directly referenced by the file system. For example, consider a first storage operation, involving a first file and a second file, and a second storage operation, involving the first file and a third file, both occurring on the same day. Further consider that the first storage operation's files are eligible to be pruned after 15 days and the second storage operation's files are eligible to be pruned after 30 days. Using the techniques described herein, the first storage operation would store the first and second files in an “S” file 208 and the second storage operation would store a pointer to the first file in an “N” file 206 and the third file in another “S” file 208.
After 15 days have elapsed, the first and second files are eligible to be pruned. The first file is referenced by the “N” file 206 of the second storage operation, and cannot yet be pruned. However, the second file, because it is not referenced by any “N” files 206 in any other storage operations, can be pruned. Using the index file 204 corresponding to the “S” file, the media agent 104 locates the second file within the “S” file 208. The media agent 104 can then instruct the operating system (e.g., a Windows operating system, a Unix operating system, a Linux operating system, etc.) of the storage device 103 to convert the “S” file 208 into a sparse file. A sparse file is a well-known type of file having data within it but not filling the file's logical space (e.g., data at the beginning and end of the file, with a hole or empty space in between). In converting the “S” file 208 into a sparse file, the portions corresponding to the second file may be zeroed out. These portions are then available for storage of other files or data objects by the operating system on storage devices (e.g., on magnetic disks, but sparse files may be used on other types of storage devices, such as tape or optical disks). Additionally or alternatively, the “S” file may be designated as a sparse file upon its creation.
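One way to zero out a pruned object's region is sketched below, under the assumption that the operating system treats zeroed ranges of a sparse file as reclaimable; actual hole punching (e.g., via an operating system API) is platform-specific and not shown.

```python
def zero_out_region(path: str, offset: int, length: int,
                    chunk: int = 1 << 20) -> None:
    """Overwrite the bytes of a pruned data object with zeros, one chunk
    at a time, so the range can be treated as a hole in a sparse file."""
    with open(path, "r+b") as f:
        f.seek(offset)
        remaining = length
        while remaining > 0:
            step = min(chunk, remaining)
            f.write(b"\x00" * step)
            remaining -= step
```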
After 30 days have elapsed, the first and third files are eligible to be pruned. Assuming that there are no intervening storage operations involving files that reference either of these files, both the first and third files can be pruned. The chunk folders 202 corresponding to the first and second storage operations can be deleted, thereby deleting the index files 204, the “N” files 206, and the “S” files 208 and recovering the space previously allocated for their storage. (The process for pruning data objects is discussed in greater detail with reference to, e.g.,
Accordingly, the data structures 200, 210, 220 illustrated in
After having been stored on the storage device 103, files contained in chunks may be moved to secondary storage, such as to disk drives or to tapes in tape drives. More details as to these operations may be found in the previously-referenced U.S. patent application Ser. No. 12/058,367. In moving chunks to secondary storage, they may be converted into an archive file format. In some examples, the techniques described herein may be used to single instance data already stored on secondary storage.
Referring to
Referring to
As an example, the data structures illustrated in
If the operating system of the media agent 104 supports sparse files, then when the media agent 104 creates container files 1310/1311/1313, it can create them as sparse files. As previously described, a sparse file is a type of file that may include empty space (e.g., a sparse file may have real data within it, such as at the beginning of the file and/or at the end of the file, but may also have empty space in it that is not storing actual data, such as a contiguous range of bytes all having a value of zero). Having the container files 1310/1311/1313 be sparse files allows the media agent 104 to free up space in the container files 1310/1311/1313 when blocks of data in the container files 1310/1311/1313 no longer need to be stored on the storage devices 103. In some examples, the media agent 104 creates a new container file 1310/1311/1313 when a container file either includes 100 blocks of data or exceeds 50 Mb in size. In other examples, the media agent 104 creates a new container file 1310/1311/1313 when a container file satisfies other criteria (e.g., it contains from approximately 100 to approximately 1,000 blocks, or its size exceeds approximately 50 Mb to 1 Gb). Those of skill in the art will understand that the media agent 104 can create a new container file 1310/1311/1313 when other criteria are met.
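The rollover criteria can be expressed as a simple predicate; the constants below reflect the 100-block / 50 Mb example in the text and would be configurable in practice.

```python
MAX_BLOCKS = 100               # example block-count criterion
MAX_BYTES = 50 * 1024 * 1024   # example 50 Mb size criterion

def needs_new_container(block_count: int, size_in_bytes: int) -> bool:
    """Return True when the media agent should start a new container file."""
    return block_count >= MAX_BLOCKS or size_in_bytes > MAX_BYTES
```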
In some cases, a file on which a storage operation is performed may comprise a large number of data blocks. For example, a 100 Mb file may comprise 400 data blocks of size 256 Kb. If such a file is to be stored, its data blocks may span more than one container file, or even more than one chunk folder. As another example, a database file of 20 Gb may comprise over 40,000 data blocks of size 512 Kb. If such a database file is to be stored, its data blocks will likely span multiple container files, multiple chunk folders, and potentially multiple volume folders. As described in detail herein, restoring such files may thus require accessing multiple container files, chunk folders, and/or volume folders to obtain the requisite data blocks.
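The block counts in these examples follow directly from ceiling division of file size by block size, as this short sketch verifies:

```python
def blocks_needed(file_size: int, block_size: int) -> int:
    """Number of fixed-size data blocks a file occupies (rounded up)."""
    return -(-file_size // block_size)  # ceiling division

assert blocks_needed(100 * 1024 * 1024, 256 * 1024) == 400   # 100 Mb file
assert blocks_needed(20 * 1024**3, 512 * 1024) == 40960      # 20 Gb database file
```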
One advantage of the data structures illustrated in
Another advantage is that the data storage system enables a reduction in the number of blocks of data stored on the storage devices 103, while still maintaining at least one instance of each block of data in primary data. In examples where the data storage system stores a variable number of instances of blocks of data, blocks of data can be distributed across two or more storage devices 103, thereby adding a further aspect of redundancy.
Another advantage is that the metadata files 1306/1307, the metadata index files 1308/1309, the container files 1310/1311/1313, and/or the container index files 1312/1314 could be used to replicate the data stored in the single instance database 105 or reconstruct the single instance database 105 if the data of the single instance database 105 is ever lost and/or corrupted.
The storage of data blocks in the container files may create additional complexities when it comes time to prune (delete) data blocks that the data storage system no longer needs to retain. This is because the data blocks are not stored as files on the file system on the storage device 103 and thus cannot be directly referenced by the file system. As described in detail with reference to
In some examples, the tracking of data via the container index files 1312/1314, the metadata index files 1308/1309, and/or the primary and secondary tables 1200/1250 is performed by a driver, agent, or additional file system that is layered on top of the existing file system of the storage device 103. This driver/agent/additional file system allows the data storage system to efficiently keep track of very large numbers of blocks of data, without regard to any limitations of the file systems of the storage devices 103. Accordingly, the data storage system can store very large numbers of blocks of data.
Accordingly, the data structures illustrated in
Restoring Data
At step 310 the media agent 104 is consulted to determine an archive file ID and an offset of the data object to be restored. The media agent 104 can determine this information from a data structure, such as a tree index (for example, a c-tree, which, in some examples, is a type of self-balancing b-tree), that it maintains for each archive file. For example, an archive file may be based on files 1 through n, with file 1 at offset 1, file 2 at offset 2, and so on, through file n at offset n. The media agent 104 maintains one tree index per full storage operation cycle. (A storage operation cycle consists of a cycle from one full storage operation of a set of data, including any intervening incremental storage operations, until another full storage operation is performed.)
The media agent 104 may also maintain a multiple-part identifier, such as a five-part identifier, that includes an enterprise or domain identifier (e.g., an identifier of a grouping of clients), a client identifier to identify the client/host, an application type, a storage operation set identifier to identify when the storage operation data was obtained, and a subclient identifier to provide a further level of granularity within an enterprise to identify an origin, location, or use of the data (e.g., a file system on a client could be a subclient, or a database on a client could be a subclient).
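Such a five-part identifier might be modeled as a simple record; the field names below are hypothetical, chosen only to mirror the parts enumerated above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StorageOperationId:
    """Five-part identifier for locating storage operation data."""
    enterprise_id: str   # enterprise or domain (a grouping of clients)
    client_id: str       # the client/host the data came from
    app_type: str        # application type
    set_id: str          # storage operation set (when the data was obtained)
    subclient_id: str    # finer granularity, e.g., a file system or database
```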
Using the data structure maintained for the archive file, the media agent 104 determines the archive file ID and offset within the archive file of the data object to be restored. The media agent 104 then needs to determine which chunk contains the data object. To do so, the media agent 104 consults another server, such as a storage manager (discussed below), that has a data structure that maps the archive file ID and offset to the specific media (as well as the specific chunk file within the specific media, optionally). For example, the storage manager may maintain a database table that maps the archive file ID to specific media, such as the archive file ID to a bar code number for a magnetic tape cartridge storing that archive file.
At step 315, the secondary storage is accessed and the specific media, such as a specific tape cartridge in an automated tape library, is accessed. At step 320 the specific chunk folder 202 is opened, and the index file 204 is accessed. At step 325, the index file 204 is parsed until the stream header corresponding to the data object to be restored is accessed. At step 330, the location of the file is determined from the stream data. The stream data indicates the location of the data object to be restored, which is either in the “S” file 208 within the chunk folder 202 or within an “S” file 208 in another chunk folder 202 (or in the “N” file for data objects that did not meet the criteria for single instancing). At step 335 the data object is retrieved or opened, and the data object is read and streamed back to restore it. Each data object may have a piece of data appended to it (e.g., an EOF marker) that indicates to the reader when to stop reading the data object. A similar piece of data may be prepended to the data object (e.g., a BOF marker). The process 300 then concludes.
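The core of this restore flow, reduced to a sketch: scan the index file's stream headers for the requested object, then read it from the container file its stream data names. The entry layout here (`object_id`, `container`, `offset`, `length`) is hypothetical.

```python
def restore_data_object(index_entries, object_id, open_container):
    """Locate a data object via the index file and read it back from the
    "S" or "N" file its stream data points to."""
    for entry in index_entries:              # parse until the header matches
        if entry["object_id"] == object_id:
            with open_container(entry["container"]) as f:
                f.seek(entry["offset"])      # location from the stream data
                return f.read(entry["length"])
    raise KeyError(f"object {object_id!r} not found in index")
```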
Pruning Data
Consider the example of a client for which a storage operation job was performed on Jan. 1, 2008, resulting in the creation of an archive file. A retention policy provides that the archive file has to be retained for 30 days. On Jan. 31, 2008, the archive file becomes prunable and thus can be deleted. Deleting the archive file may require deleting data stored in one or more chunks on one or more media. However, the archive file may not be able to be deleted if it is referenced by data objects within other archive files. This is to avoid orphaning data objects, e.g., deleting a data object that is still referenced in another archive file. The system keeps track of references to data objects in order to avoid orphaning them.
To assist in pruning, the single instance database 105 maintains a primary table and a secondary table. The primary table contains all the single instance records of all data objects for which an identifier was created. For each record in the primary table, the secondary table contains a record that may reference the record in the primary table.
The secondary table 950 has a secondary record ID column 960 which may contain primary keys, an archive file ID column 965 that contains the archive file ID, a file column 970 that contains the same identifier of the file or data object as in the primary table 900, and a referenceIN column 975 that contains an identifier (in the form of an archive file ID and an offset) of a file or data object that references the archive file. The secondary table 950 also has a referenceOUT column 980 that contains an identifier (in the form of an archive file ID and an offset) of a referenced file or data object. The secondary table 950 may also contain other columns (not shown).
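The two tables might be declared as follows; this SQLite sketch uses hypothetical column names that mirror the columns described above, and encodes each reference as an "archive file ID and offset" pair.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE primary_table (
    record_id  INTEGER PRIMARY KEY,
    identifier TEXT NOT NULL         -- substantially unique object identifier
);
CREATE TABLE secondary_table (
    secondary_record_id INTEGER PRIMARY KEY,
    archive_file_id     INTEGER NOT NULL,
    identifier          TEXT NOT NULL,  -- same identifier as in primary_table
    reference_in        TEXT,           -- "archive_file_id:offset" referencing this archive file
    reference_out       TEXT            -- "archive_file_id:offset" that this entry references
);
""")
```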
If the archive file has references out, the process 400 continues to step 420, where the references out are deleted. At step 425, the media agent 104 determines if the archive files referenced by the references out have other references in. If there are no other references in, at step 430, the media agent 104 prunes the archive files referenced by the references out.
If the archive file does not have any references out (step 415), or if it does, and if the archive files referenced by the references out have other references in (step 425), the process 400 continues at step 435. At this step, the media agent 104 determines if the archive file has references in. If it does have references in, this means the archive file cannot be pruned. The process continues at step 440, where the media agent 104 deletes the references in. At step 445 the media agent adds a reference to the archive file to a deleted archive file table (discussed below).
If the archive file does not have any references in (step 435), the media agent 104 prunes the archive file. The media agent 104 then creates an entry in the deleted archive file table for the pruned archive file (if an entry does not already exist) and adds a deleted timestamp to the entry.
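Putting steps 415 through 455 together, the pruning logic can be sketched as below; `refs_out` and `refs_in` are dictionaries of sets standing in for the secondary table 950, and `prune` is a stand-in callback that performs the actual deletion.

```python
from time import time

def prune_archive_file(af, refs_out, refs_in, deleted_table, prune):
    """Prune an archive file, following the reference bookkeeping above."""
    # Steps 415-430: delete references out; prune targets left unreferenced.
    for target in refs_out.pop(af, set()):
        ins = refs_in.get(target, set())
        ins.discard(af)
        if not ins:
            prune(target)
            deleted_table[target] = time()   # deleted timestamp
    # Steps 435-455: a file with remaining references in cannot be pruned yet.
    if refs_in.get(af):
        refs_in[af].clear()                  # delete the references in
        deleted_table[af] = None             # blank timestamp: prune later
    else:
        prune(af)
        deleted_table[af] = time()
```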
The process 400 will now be explained using the examples of the records shown in the primary and secondary tables 900, 950. At time T1, the process 400 begins. At step 405, the media agent 104 receives a selection of AF1 to prune. At step 410 the media agent 104 looks up AF1 in the primary 900 and secondary 950 tables. At step 415, the media agent 104 determines that AF1 has a reference out, shown by entry 994 in the secondary table 950. (Entry 992 is shown in the secondary table 950 with strikethrough to indicate that it was previously deleted during an operation to prune AF0.) At step 420 the media agent deletes this reference out by deleting entry 994 from the secondary table 950. At step 425 the media agent 104 determines if AF0 has any other references in. Since the only reference in for AF0 is from AF1 (which is to be pruned), AF0 does not have any other references in. At step 430 the media agent 104 then prunes AF0 and adds a timestamp indicating that AF0 was pruned at time T1 at entry 1052 of the deleted archive file table 1000.
At step 435 the media agent 104 determines if AF1 has any references in. AF1 has a reference in from AF3, shown in entry 996 of the secondary table 950. The media agent thus cannot prune AF1. At step 440, the media agent deletes the references in to AF1 by deleting entry 996 from the secondary table 950. At step 445, the media agent adds entry 1054 to the deleted archive file table 1000, leaving the deleted timestamp blank. The blank timestamp indicates that AF1 should be pruned. The process 400 then concludes.
At time T2, the process 400 begins anew. At step 405, the media agent 104 receives a selection of AF3 to prune. At step 410 the media agent 104 looks up AF3 in the primary 900 and secondary 950 tables. At step 415, the media agent 104 determines that AF3 has a reference out, shown by entry 998 in the secondary table 950, that references AF1. At step 420 the media agent deletes entry 998 from the secondary table 950. At step 425 the media agent 104 determines if AF1 has any other references in. Since the only reference in for AF1 is from AF3 (which is to be pruned), AF1 does not have any other references in, and can now be pruned. At step 430 the media agent 104 then prunes AF1 and adds a timestamp indicating that AF1 was pruned at time T2 at entry 1054 of the deleted archive file table 1000. This entry now indicates that AF1 has been pruned at time T2.
At step 435, the media agent 104 determines if AF3 has any references in. AF3 has no references in listed in the secondary table 950. The media agent thus can prune AF3. At step 450, the media agent 104 prunes AF3. At step 455, the media agent 104 adds the entry 1056 to the deleted archive file table 1000 with a deleted timestamp as T2. The process 400 then concludes.
The pruning process 400 thus enables the system to maximize available storage space for storing archive files by storing them efficiently and then deleting or pruning them when it is no longer necessary to store them. The pruning process 400 may have additional or fewer steps than the ones described, or their order may vary other than what is described. For example, instead of the media agent adding a timestamp to an entry in the deleted archive file table 1000 to indicate when the archive file was pruned, the media agent may simply delete the entry from the deleted archive file table 1000. As another example, entries in the primary table 900 may also be deleted when the corresponding archive files are deleted. Those of skill in the art will understand that other variations are of course possible.
As previously noted, the data structures illustrated in
At step 1407 the media agent 104 determines the file, e.g., archive file, and the volume folders 1302 and chunk folder 1304 corresponding to the job to be pruned. The media agent 104 may do so, for example, by analyzing various data structures to determine this information. At step 1410 the media agent 104 deletes the metadata file 1306 and the metadata index file 1308 in the chunk folder 1304. The media agent 104 can delete the metadata file 1306 and the metadata index file 1308 in this example because these files contain data that is not referenced by any other data.
At step 1415 the media agent 104 accesses the container file 1310 and the container index file 1312 in the chunk folder 1304. The media agent 104 begins iterating through the data blocks in the container files 1310. At step 1420, beginning with a first block in the container file 1310, the media agent 104 accesses the primary table 1200 in the single instance database 105. The media agent 104 determines from the primary table 1200 whether the reference count of a data block in the container file 1310 is equal to zero. If so, this indicates that there are no references to the data block. The process 1400 then continues at step 1425, where the media agent 104 sets the entry in the container index file 1312 corresponding to the data block equal to zero, indicating that there are no references to the data block and that it is therefore prunable.
If the reference count of a data block is not equal to zero, then the data block is not prunable, and the process 1400 continues at step 1430. At this step, the media agent 104 determines whether there are more data blocks in the container file 1310. If so, the process 1400 returns to step 1420, where it accesses the next data block. If there are no more data blocks in the container file 1310, the process 1400 continues at step 1432, where the media agent 104 determines whether all the entries in the container index file 1312 corresponding to the container file 1310 are equal to zero. As illustrated in
However, if the container file 1310 did not contain any referenced data blocks, then at step 1433, the media agent 104 would delete the container file 1310. The process would then continue at step 1435, where the media agent 104 determines whether there are more container files. According to the example as illustrated in
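The per-block loop of steps 1420 through 1433 reduces to the following sketch, where `container_index` maps block IDs to a referenced flag (standing in for the container index file) and `ref_counts` stands in for the primary table 1200:

```python
def mark_unreferenced_blocks(container_index: dict, ref_counts: dict) -> bool:
    """Zero the index entry of every block whose reference count is zero;
    return True when the whole container file has become prunable."""
    for block_id in container_index:
        if ref_counts.get(block_id, 0) == 0:
            container_index[block_id] = 0    # no references: block is prunable
    return all(flag == 0 for flag in container_index.values())
```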
After processing container files 1310/1311, the process 1400 continues at step 1440, where the media agent 104 determines whether to free up storage space in the container files 1310/1311. The media agent 104 may do so using various techniques. For example, if the operating system of the media agent 104 supports sparse files, then the media agent 104 may free up space by zeroing out the bytes in the container files corresponding to the space to be freed up. If a number of contiguous blocks (e.g., a threshold number of contiguous blocks, such as three contiguous blocks) have corresponding entries in the container index file 1312 indicating that the blocks are not being referred to, the media agent 104 may mark these portions of the container files 1310/1311 as available for storage by the operating system or the file system. The media agent 104 may do so by calling an API of the operating system to mark the unreferenced portions of the container files 1310/1311 as available for storage.
The media agent 104 may use certain optimizations to manage the number of times portions of the container file are marked as available for storage, such as only zeroing out bytes in container files when a threshold number of unreferenced contiguous blocks is reached (e.g., three unreferenced contiguous blocks). These optimizations may result in less overhead for the operating system because they reduce the number of contiguous ranges of zero-value bytes in the container files 1310/1311 that the operating system must keep track of (e.g., they reduce the amount of metadata about portions of the container files 1310/1311 that are available for storage).
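That threshold optimization can be sketched as a scan for sufficiently long runs of unreferenced blocks; only these runs would be zeroed out. The list-of-flags representation is a hypothetical simplification of the container index file.

```python
def zeroable_runs(entries, threshold=3):
    """Yield (start, length) runs of at least `threshold` contiguous
    unreferenced blocks (entries: 1 = referenced, 0 = unreferenced)."""
    run_start = None
    for i, referenced in enumerate(list(entries) + [1]):  # sentinel closes a final run
        if not referenced and run_start is None:
            run_start = i
        elif referenced and run_start is not None:
            if i - run_start >= threshold:
                yield run_start, i - run_start
            run_start = None
```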
If the operating system of the media agent 104 does not support sparse files, then the media agent 104 may free up space by truncating either the beginning or the end of the container files 1310/1311 (removing or deleting data at the beginning or end of the container files 1310/1311). The media agent 104 may do so by calling an API of the operating system, or by operating directly on the container files 1310/1311. For example, if a certain number of the last blocks of the container file are not being referred to, the media agent 104 may truncate these portions of the container files 1310/1311. Other techniques may be used to free up space in the container files 1310/1311 for storage of other data. At step 1445 the media agent 104 frees up space in the container files 1310/1311. The process 1400 then concludes.
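The truncation fallback could look like this sketch, which drops any unreferenced blocks at the end of a container file; truncating at the beginning of a file is less uniform across operating systems and is omitted here.

```python
import os

def truncate_unreferenced_tail(path: str, entries, block_size: int) -> None:
    """Shrink a container file by cutting off trailing unreferenced blocks
    (entries: 1 = referenced, 0 = unreferenced)."""
    keep = len(entries)
    while keep and entries[keep - 1] == 0:
        keep -= 1
    os.truncate(path, keep * block_size)
```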
As a result of the process 1400, the chunk folder 1304 would contain only the container files 1310/1311 and the container index file 1312. At a later time, when the chunk folder 1305 is pruned (i.e., when the job that created this chunk folder is selected to be pruned), the container files 1310/1311 in the chunk folder 1304 can be deleted, because they no longer contain data blocks that are referenced by other data. Therefore, pruning data corresponding to a job may also result in pruning data corresponding to an earlier job, because the data corresponding to the earlier job is no longer referenced by the later job.
Although the process 1400 is described with reference to the pruning of data corresponding to jobs (one or more storage operations), other data can also be pruned. For example, an administrator may wish to delete SI data but retain non-SI data. In such case, the administrator may instruct the media agent 104 to delete the container files 1310/1311/1313 but retain the metadata files 1306/1307 and metadata index files 1308/1309. As another example, an administrator or storage policy may delete one or more specific files. In such case, the media agent 104 deletes the data blocks in the container files 1310/1311/1313 corresponding to the specific files but retains other data blocks. The process 1400 may include fewer or more steps than those described herein to accommodate these other pruning examples. Those of skill in the art will understand that data can be pruned in various fashions and therefore, that the process 1400 is not limited to the steps described herein.
One advantage of the process 1400 and the techniques described herein is that they enable the deletion of data on the storage devices 103 that no longer needs to be stored while still retaining data that needs to be stored, and doing so in a space-efficient manner. Space previously allocated for data blocks that no longer need to be stored can be reclaimed by the data storage system, and used to store other data. Accordingly, the techniques described herein provide for efficient use of available storage space (available on physical media).
Suitable System
The above system may be incorporated within a data storage system and may be subjected to a data stream during a data copy operation. Referring to
The secondary storage device receives the data from the media agent 512 and stores the data as a secondary copy, such as a backup copy. Secondary storage devices may be magnetic tapes, optical disks, USB devices and other similar media, disk and tape drives, and so on. Of course, the system may employ other configurations of data stream components not shown in
The system 650 may generally include combinations of hardware and software components associated with performing storage operations on electronic data. Storage operations include copying, backing up, creating, storing, retrieving, and/or migrating primary storage data (e.g., data stores 660 and/or 662) and secondary storage data (which may include, for example, snapshot copies, backup copies, hierarchical storage management (HSM) copies, archive copies, and other types of copies of electronic data stored on storage devices 615). The system 650 may provide one or more integrated management consoles for users or system processes to interface with in order to perform certain storage operations on electronic data as further described herein. Such integrated management consoles may be displayed at a central control facility or several similar consoles distributed throughout multiple network locations to provide global or geographically specific network data storage information.
In one example, storage operations may be performed according to various storage preferences, for example, as expressed by a user preference, a storage policy, a schedule policy, and/or a retention policy. A “storage policy” is generally a data structure or other information source that includes a set of preferences and other storage criteria associated with performing a storage operation. The preferences and storage criteria may include, but are not limited to, a storage location, relationships between system components, network pathways to utilize in a storage operation, data characteristics, compression or encryption requirements, preferred system components to utilize in a storage operation, a single instancing or variable instancing policy to apply to the data, and/or other criteria relating to a storage operation. For example, a storage policy may indicate that certain data is to be stored in the storage device 615, retained for a specified period of time before being aged to another tier of secondary storage, copied to the storage device 615 using a specified number of data streams, etc.
A “schedule policy” may specify a frequency with which to perform storage operations and a window of time within which to perform them. For example, a schedule policy may specify that a storage operation is to be performed every Saturday morning from 2:00 a.m. to 4:00 a.m. In some cases, the storage policy includes information generally specified by the schedule policy. (Put another way, the storage policy includes the schedule policy.) A “retention policy” may specify how long data is to be retained at specific tiers of storage or what criteria must be met before data may be pruned or moved from one tier of storage to another tier of storage. Storage policies, schedule policies and/or retention policies may be stored in a database of the storage manager 605, to archive media as metadata for use in restore operations or other storage operations, or to other locations or components of the system 650.
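As a purely illustrative encoding, the preferences carried by these three policy types might be grouped as follows; every key name here is hypothetical.

```python
# A storage policy that, per the text, folds in schedule and retention details.
storage_policy = {
    "storage_device": "storage_device_615",   # where the data is to be stored
    "data_streams": 4,                        # streams used for the copy
    "single_instancing": True,                # single/variable instancing policy
    "retention_days": 30,                     # retention policy: keep 30 days
    "schedule": {                             # schedule policy: when to run
        "day": "Saturday",
        "window": ("02:00", "04:00"),
    },
}
```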
The system 650 may comprise a storage operation cell that is one of multiple storage operation cells arranged in a hierarchy or other organization. Storage operation cells may be related to backup cells and provide some or all of the functionality of backup cells as described in the assignee's U.S. patent application Ser. No. 09/354,058, now U.S. Pat. No. 7,395,282, which is incorporated herein by reference in its entirety. However, storage operation cells may also perform additional types of storage operations and other types of storage management functions that are not generally offered by backup cells.
Storage operation cells may contain not only physical devices, but also may represent logical concepts, organizations, and hierarchies. For example, a first storage operation cell may be configured to perform a first type of storage operations such as HSM operations, which may include backup or other types of data migration, and may include a variety of physical components including a storage manager 605 (or management agent 631), a secondary storage computing device 665, a client 630, and other components as described herein. A second storage operation cell may contain the same or similar physical components; however, it may be configured to perform a second type of storage operations, such as storage resource management (SRM) operations, and may include monitoring a primary data copy or performing other known SRM operations.
Thus, as can be seen from the above, although the first and second storage operation cells are logically distinct entities configured to perform different management functions (i.e., HSM and SRM, respectively), each storage operation cell may contain the same or similar physical devices. Alternatively, different storage operation cells may contain some of the same physical devices and not others. For example, a storage operation cell configured to perform SRM tasks may contain a secondary storage computing device 665, client 630, or other network device connected to a primary storage volume, while a storage operation cell configured to perform HSM tasks may instead include a secondary storage computing device 665, client 630, or other network device connected to a secondary storage volume and not contain the elements or components associated with and including the primary storage volume. (The term “connected” as used herein does not necessarily require a physical connection; rather, it could refer to two devices that are operably coupled to each other, communicably coupled to each other, in communication with each other, or more generally, refer to the capability of two devices to communicate with each other.) These two storage operation cells, however, may each include a different storage manager 605 that coordinates storage operations via the same secondary storage computing devices 665 and storage devices 615. This “overlapping” configuration allows storage resources to be accessed by more than one storage manager 605, such that multiple paths exist to each storage device 615 facilitating failover, load balancing, and promoting robust data access via alternative routes.
Alternatively or additionally, the same storage manager 605 may control two or more storage operation cells (whether or not each storage operation cell has its own dedicated storage manager 605). Moreover, in certain embodiments, the extent or type of overlap may be user-defined (through a control console) or may be automatically configured to optimize data storage and/or retrieval.
Data agent 695 may be a software module or part of a software module that is generally responsible for performing storage operations on the data of the client 630 stored in data store 660/662 or other memory location. Each client 630 may have at least one data agent 695 and the system 650 can support multiple clients 630. Data agent 695 may be distributed between client 630 and storage manager 605 (and any other intermediate components), or it may be deployed from a remote location or its functions approximated by a remote process that performs some or all of the functions of data agent 695.
The overall system 650 may employ multiple data agents 695, each of which may perform storage operations on data associated with a different application. For example, different individual data agents 695 may be designed to handle Microsoft Exchange data, Lotus Notes data, Microsoft Windows 2000 file system data, Microsoft Active Directory Objects data, and other types of data known in the art. Other embodiments may employ one or more generic data agents 695 that can handle and process multiple data types rather than using the specialized data agents described above.
If a client 630 has two or more types of data, one data agent 695 may be required for each data type to perform storage operations on the data of the client 630. For example, to back up, migrate, and restore all the data on a Microsoft Exchange 2000 server, the client 630 may use one Microsoft Exchange 2000 Mailbox data agent 695 to back up the Exchange 2000 mailboxes, one Microsoft Exchange 2000 Database data agent 695 to back up the Exchange 2000 databases, one Microsoft Exchange 2000 Public Folder data agent 695 to back up the Exchange 2000 Public Folders, and one Microsoft Windows 2000 File System data agent 695 to back up the file system of the client 630. These data agents 695 would be treated as four separate data agents 695 by the system even though they reside on the same client 630.
Alternatively, the overall system 650 may use one or more generic data agents 695, each of which may be capable of handling two or more data types. For example, one generic data agent 695 may be used to back up, migrate and restore Microsoft Exchange 2000 Mailbox data and Microsoft Exchange 2000 Database data while another generic data agent 695 may handle Microsoft Exchange 2000 Public Folder data and Microsoft Windows 2000 File System data, etc.
Data agents 695 may be responsible for arranging or packing data to be copied or migrated into a certain format such as an archive file. Nonetheless, it will be understood that this represents only one example, and any suitable packing or containerization technique or transfer methodology may be used if desired. Such an archive file may include metadata, a list of the files or data objects copied, and the files and data objects themselves. Moreover, any data moved by the data agents may be tracked within the system by updating indexes associated with appropriate storage managers 605 or secondary storage computing devices 665. As used herein, a file or a data object refers to any collection or grouping of bytes of data that can be viewed as one or more logical units.
Generally speaking, storage manager 605 may be a software module or other application that coordinates and controls storage operations performed by the system 650. Storage manager 605 may communicate with some or all elements of the system 650, including clients 630, data agents 695, secondary storage computing devices 665, and storage devices 615, to initiate and manage storage operations (e.g., backups, migrations, data recovery operations, etc.).
Storage manager 605 may include a jobs agent 620 that monitors the status of some or all storage operations previously performed, currently being performed, or scheduled to be performed by the system 650. (One or more storage operations are alternatively referred to herein as a “job” or “jobs.”) Jobs agent 620 may be communicatively coupled to an interface agent 625 (e.g., a software module or application). Interface agent 625 may include information processing and display software, such as a graphical user interface (“GUI”), an application programming interface (“API”), or other interactive interface through which users and system processes can retrieve information about the status of storage operations. For example, in an arrangement of multiple storage operation cells, through interface agent 625, users may optionally issue instructions to various storage operation cells regarding performance of the storage operations as described and contemplated herein. For example, a user may modify a schedule concerning the number of pending snapshot copies or other types of copies scheduled as needed to suit particular needs or requirements. As another example, a user may employ the GUI to view the status of pending storage operations in some or all of the storage operation cells in a given network or to monitor the status of certain components in a particular storage operation cell (e.g., the amount of storage capacity left in a particular storage device 615).
Storage manager 605 may also include a management agent 631 that is typically implemented as a software module or application program. In general, management agent 631 provides an interface that allows various management agents 631 in other storage operation cells to communicate with one another. For example, assume a certain network configuration includes multiple storage operation cells hierarchically arranged or otherwise logically related in a WAN or LAN configuration. With this arrangement, each storage operation cell may be connected to the other through each respective interface agent 625. This allows each storage operation cell to send and receive certain pertinent information from other storage operation cells, including status information, routing information, information regarding capacity and utilization, etc. These communications paths may also be used to convey information and instructions regarding storage operations.
For example, a management agent 631 in a first storage operation cell may communicate with a management agent 631 in a second storage operation cell regarding the status of storage operations in the second storage operation cell. Another illustrative example includes the case where a management agent 631 in a first storage operation cell communicates with a management agent 631 in a second storage operation cell to control storage manager 605 (and other components) of the second storage operation cell via management agent 631 contained in storage manager 605.
Another illustrative example is the case where management agent 631 in a first storage operation cell communicates directly with and controls the components in a second storage operation cell and bypasses the storage manager 605 in the second storage operation cell. If desired, storage operation cells can also be organized hierarchically such that hierarchically superior cells control or pass information to hierarchically subordinate cells or vice versa.
Storage manager 605 may also maintain an index, a database, or other data structure 611. The data stored in database 611 may be used to indicate logical associations between components of the system, user preferences, management tasks, media containerization, data storage information, or other useful data. For example, the storage manager 605 may use data from database 611 to track logical associations between secondary storage computing devices 665 and storage devices 615 (or the movement of data, as containerized, from primary to secondary storage).
Generally speaking, the secondary storage computing device 665, which may also be referred to as a media agent, may be implemented as a software module that conveys data, as directed by storage manager 605, between a client 630 and one or more storage devices 615 such as a tape library, a magnetic media storage device, an optical media storage device, or any other suitable storage device. In one embodiment, secondary storage computing device 665 may be communicatively coupled to and control a storage device 615. A secondary storage computing device 665 may be considered to be associated with a particular storage device 615 if that secondary storage computing device 665 is capable of routing and storing data to that particular storage device 615.
In operation, a secondary storage computing device 665 associated with a particular storage device 615 may instruct the storage device to use a robotic arm or other retrieval means to load or eject particular storage media, and to subsequently archive, migrate, or restore data to or from that media. Secondary storage computing device 665 may communicate with a storage device 615 via a suitable communications path, such as a SCSI or Fibre Channel communications link. In some embodiments, the storage device 615 may be communicatively coupled to the storage manager 605 via a SAN.
Each secondary storage computing device 665 may maintain an index, a database, or other data structure 661 that stores index data generated during storage operations for secondary storage (SS), including data used to create a metabase (MB). For example, performing storage operations on Microsoft Exchange data may generate index data. Such index data provides a secondary storage computing device 665 or other external device with a fast and efficient mechanism for locating stored or backed-up data. Thus, a secondary storage computing device index 661, or a database 611 of a storage manager 605, may store data associating a client 630 with a particular secondary storage computing device 665 or storage device 615 (for example, as specified in a storage policy), while a database or other data structure in secondary storage computing device 665 may indicate where specifically the data of the client 630 is stored in storage device 615, what specific files were stored, and other information associated with the storage of that data. In some embodiments, such index data may be stored along with the data backed up in a storage device 615, with an additional copy of the index data written to an index cache in a secondary storage computing device. Thus the data is readily available for use in storage operations and other activities without first having to be retrieved from the storage device 615.
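The two-level lookup described above can be sketched as follows; the dictionary contents and names are hypothetical stand-ins for database 611 and index 661.

```python
# Level 1 (database 611): which media agent and storage device hold a client's data.
storage_manager_db = {
    "client-630": {"media_agent": "ma-665", "storage_device": "dev-615"},
}
# Level 2 (index 661): where on that device the data lives and what was stored.
media_agent_index = {
    ("client-630", "dev-615"): {"path": "/archives/chunk_0001",
                                "files": ["report.docx", "mail.pst"]},
}

def locate(client):
    assoc = storage_manager_db[client]                       # first lookup
    return assoc, media_agent_index[(client, assoc["storage_device"])]  # second lookup

print(locate("client-630"))
```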
Generally speaking, information stored in an index cache reflects particulars about storage operations that have recently occurred. After a certain period of time, this information is sent to secondary storage and tracked there. It may later need to be retrieved and uploaded back into a cache or other memory in a secondary storage computing device before data can be retrieved from storage device 615. In some embodiments, the cached information may include information regarding the format or containerization of archives or other files stored on storage device 615.
One or more of the secondary storage computing devices 665 may also maintain one or more single instance databases 623. Single instancing (alternatively called data deduplication) generally refers to storing in secondary storage only a single instance of each data object (or data block) in a set of data (e.g., primary data). More details as to single instancing may be found in one or more of the following commonly-assigned U.S. patent applications: 1) U.S. patent application Ser. No. 11/269,512 (entitled SYSTEM AND METHOD TO SUPPORT SINGLE INSTANCE STORAGE OPERATIONS); 2) U.S. patent application Ser. No. 12/145,347 (entitled APPLICATION-AWARE AND REMOTE SINGLE INSTANCE DATA MANAGEMENT); 3) U.S. patent application Ser. No. 12/145,342 (entitled APPLICATION-AWARE AND REMOTE SINGLE INSTANCE DATA MANAGEMENT); 4) U.S. patent application Ser. No. 11/963,623 (entitled SYSTEM AND METHOD FOR STORING REDUNDANT INFORMATION); and 5) U.S. patent application Ser. No. 11/950,376 (entitled SYSTEMS AND METHODS FOR CREATING COPIES OF DATA SUCH AS ARCHIVE COPIES), each of which is incorporated by reference herein in its entirety.
In some examples, the secondary storage computing devices 665 maintain one or more variable instance databases. Variable instancing generally refers to storing in secondary storage one or more instances, but fewer than the total number of instances, of each data block (or data object) in a set of data (e.g., primary data). More details as to variable instancing may be found in the commonly-assigned U.S. Pat. App. No. 61/164,803 (entitled STORING A VARIABLE NUMBER OF INSTANCES OF DATA OBJECTS).
In some embodiments, certain components may reside and execute on the same computer. For example, in some embodiments, a client 630, such as a data agent 695 or a storage manager 605, coordinates and directs local archiving, migration, and retrieval application functions as further described in the previously-referenced U.S. patent application Ser. No. 09/610,738. This client 630 can function independently or together with other similar clients 630.
Moreover, in operation, a storage manager 605 or other management module may keep track of certain information that allows the storage manager 605 to select, designate, or otherwise identify metabases to be searched in response to certain queries as further described herein. Movement of data between primary and secondary storage may also involve movement of associated metadata and other tracking information as further described herein.
In some examples, primary data may be organized into one or more sub-clients. A sub-client is a portion of the data of one or more clients 630, and can contain either all of the data of the clients 630 or a designated subset thereof.
Block Level Single Instancing
Instead of single instancing files or data objects, single instancing can be performed at the block level. Files can be broken into blocks and transmitted using the techniques described herein. The blocks are typically of fixed size, e.g., 64 KB. An identifier is created for each block, and a lookup of the identifier is performed in the single instance database 105 to determine whether the block has already been stored. If it has not, the block is stored. If it has, a reference to the block can be stored, using the techniques described herein.
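For illustration only, a minimal sketch of this loop follows. The use of SHA-256 as the identifier function, and the in-memory stand-ins for the single instance database 105 and a container file, are assumptions for the example rather than details of the disclosed system.

```python
import hashlib

BLOCK_SIZE = 64 * 1024           # fixed-size blocks, e.g., 64 KB
single_instance_db = {}          # stand-in for single instance database 105
container = []                   # stand-in for a container file

def store_file_block_level(path):
    """Hash each fixed-size block, store only blocks not seen before,
    and record a reference for every block of the file."""
    refs = []
    with open(path, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            block_id = hashlib.sha256(block).hexdigest()  # identifier for the block
            if block_id not in single_instance_db:        # lookup: already stored?
                single_instance_db[block_id] = len(container)
                container.append(block)                   # store the block itself
            refs.append(block_id)                         # store only a reference
    return refs
```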
For example, a data storage system may include multiple computing devices (e.g., client computing devices) that store primary data (e.g., production data such as system files, user files, etc.). The data storage system may also include a secondary storage computing device, a single instance database, and one or more storage devices that store copies of the primary data (e.g., secondary copies, tertiary copies, etc.). The secondary storage computing device receives blocks of data from the computing devices and accesses the single instance database to determine whether the blocks of data are unique (unique meaning that no instances of the blocks of data are already stored on the storage devices). If a block of data is unique, the secondary storage computing device stores it in a file on a storage device. If not, the secondary storage computing device can avoid storing the block of data on the storage devices.
The primary data of the computing devices can be divided into data that is eligible for single instancing and data that is not eligible for single instancing. An example of the former is data such as operating system and/or application files; an example of the latter is metadata (e.g., Master File Table information). A file typically comprises one or more blocks as tracked by the file systems of the computing devices.
The computing devices align data that is eligible for single instancing into blocks of data (each of which may comprise one or more blocks as tracked by the file systems of the computing devices) and generate identifiers for the blocks of data, which the secondary storage computing device uses to determine whether the blocks of data are unique. This allows the secondary storage computing device to avoid generating the identifiers itself, an operation that may be computationally expensive and/or time-consuming. Distributed identifier generation thus apportions a potentially lengthy operation across numerous computing devices, freeing up the secondary storage computing device to perform other operations (e.g., storing data, retrieving data, pruning data, etc.).
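For example, the client-side half of this arrangement might look like the following sketch; the function name is invented, and SHA-256 is again an assumed identifier function.

```python
import hashlib

def client_prepare_stream(blocks):
    """Run on each client computing device: pair every aligned block of
    data with its identifier, so the secondary storage computing device
    never has to compute identifiers itself."""
    for block in blocks:
        yield hashlib.sha256(block).hexdigest(), block
```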
The computing devices send the blocks of data and other data (e.g., metadata and/or the data that is not eligible for single instancing) in a data stream to the secondary storage computing device. The secondary storage computing device receives the data stream and stores the blocks of data and their identifiers in buffers in random access memory (RAM). To determine whether a block of data is already stored on a storage device, the secondary storage computing device analyzes data structures in the single instance database in view of the block's identifier. If the block is already stored, the secondary storage computing device 1) stores a link to the already stored block of data in a metadata file and 2) discards the block of data from the memory buffer. If it is not, the secondary storage computing device stores the block of data in a container file.
Because the size of a block of data and its associated metadata is typically less than the size of a memory buffer, the secondary storage computing device can keep a single block of data in a single memory buffer while it looks up the block's identifier in the single instance database. This allows the secondary storage computing device to avoid writing the block of data to disk (an operation that is typically slower than keeping the block of data in a RAM buffer) until it determines that the block needs to be stored in a container file on a storage device. The secondary storage computing device stores data that is not eligible for single instancing in metadata files.
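The receive loop sketched below follows the two preceding paragraphs; the metadata-file and container-file formats shown are illustrative stand-ins, not the actual on-disk layout.

```python
def media_agent_consume(stream, sidb, metadata_file, container_file):
    """stream yields (identifier, block) pairs; sidb maps identifiers to
    container-file offsets; metadata_file is text, container_file binary."""
    for block_id, block in stream:           # block is held in a RAM buffer
        if block_id in sidb:                 # already on the storage device?
            offset = sidb[block_id]
            metadata_file.write(f"LINK {block_id} {offset}\n")  # 1) store a link
            # 2) the buffered block is simply dropped; nothing hits the disk
        else:
            offset = container_file.tell()
            container_file.write(block)      # first write to disk happens here
            sidb[block_id] = offset
```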
By storing multiple blocks of data in a single container file, the secondary storage computing device avoids storing each block of data as a separate file on the file systems of the storage devices. This reduces the number of files that would be stored on the file systems of the storage devices, thereby ensuring that the storage devices can adequately store the data of the computing devices in the data storage system.
One advantage of these techniques is that they significantly reduce the number of files stored on a file system of a computing device or storage device. This is at least partly due to the storage of data blocks within the container files. Even if the secondary storage computing device performs numerous storage operations, these techniques will result in storing far fewer files on the file system than storage operations where each data block is stored as a separate file. Therefore, the file system of the computing device or storage device may not necessarily have to contend with storing excessively large numbers of files, such as millions of files or more. Accordingly, these techniques enable very large numbers of blocks of data to be stored without regard to limitations of the file system of the computing device or storage device.
However, the storage of blocks of data in container files may create additional complexities when it comes time to prune or delete data. A container file may contain blocks of data that are referenced by links in metadata files; such referenced blocks typically still need to be stored on the storage devices, so the container file cannot simply be deleted. Furthermore, because the blocks of data are not stored as individual files on the file systems of the storage devices, they cannot be directly referenced by the file system.
The systems and methods described herein provide solutions to these problems. First, the secondary storage computing device creates the container files as sparse files (typically on operating systems that support sparse files, e.g., Windows operating systems). A sparse file is a type of file that may include empty space (e.g., a sparse file may have real data within it, such as at the beginning and/or end of the file, but may also have empty space that is not storing actual data, such as a contiguous range of bytes all having a value of zero). Second, the secondary storage computing device maintains a separate index that stores an indication of whether blocks of data in the container files are referred to by links in metadata files. In some examples, this can be thought of as creating another file system on top of the existing file systems of the storage devices, one that keeps track of the blocks of data in the container files.
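A sketch of these two constructs follows, with invented names. On Windows an explicit call to mark the file sparse (e.g., via FSCTL_SET_SPARSE) would also be needed, which this sketch omits.

```python
def create_sparse_container(path, logical_size):
    """Create a container file whose unwritten ranges occupy no physical
    space, on file systems that support sparse files."""
    with open(path, "wb") as f:
        f.truncate(logical_size)   # sets the logical size without writing data

# Separate index tracking whether blocks in container files are still
# referred to by links in metadata files (structure invented for illustration).
reference_index = {
    ("container_0001", 0): 3,       # (container file, block offset) -> link count
    ("container_0001", 65536): 0,   # zero links: candidate for pruning
}
```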
When a block of data is not referred to and does not need to be stored, the secondary storage computing device can prune it. To prune data, the secondary storage computing device accesses the separate index to determine the blocks of data that are not referred to by links. On operating systems that support sparse files, the secondary storage computing device can free up space in the container files corresponding to those blocks of data by marking the portions of the physical media corresponding to the unreferenced portions of the container file as available for storage (e.g., by zeroing out the corresponding bytes in the container files). On operating systems that do not support sparse files, the secondary storage computing device can free up space in the container files by truncating the extreme portions of the container files (e.g., the beginnings and/or the ends of the container files), thereby making the corresponding portions of the physical media available to store other data. Freeing up space in container files allows the operating system to utilize the freed-up space in other fashions (e.g., other programs may utilize the freed-up space).
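Continuing the sketch above, pruning might look like the following. Zeroing a range only reclaims physical space if the file system deallocates the zeroed bytes of a sparse file; some platforms would instead use an explicit hole-punching call, and platforms without sparse-file support would fall back to truncating the ends of the container file as described above.

```python
BLOCK_SIZE = 64 * 1024

def prune_unreferenced(container_path, unreferenced_offsets):
    """Zero out container-file ranges whose blocks have no remaining links,
    per the separate reference index."""
    with open(container_path, "r+b") as f:
        for offset in unreferenced_offsets:
            f.seek(offset)
            f.write(b"\x00" * BLOCK_SIZE)   # freed space in the sparse file
```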
Siloing a Single Instance Store
The combination of the data stored on the storage device 103 and the single instance database 105 can be termed a “single instance store.” Siloing a single instance store refers to moving all of the information stored in the single instance store to secondary storage, such as tape, to create a silo of the single instance store. When this occurs, a new single instance store, comprising a new single instance database 105 and a new data structure (e.g., a new collection of one or more chunk folders 202) on the storage device 103, is created, and single instancing of data objects essentially starts over from the beginning.
This process can be repeated on a periodic or ad hoc basis to create multiple silos of single instance stores. Consider the following example, in which a single instance store is siloed every 15 days. Starting on day 1, secondary copies of numerous data objects are created on the storage device 103 using the techniques described herein. These data objects are backed up to tape, and the tapes are sent offsite for storage (although the tapes need not actually be sent offsite). On day 2, any incremental changes to the data objects are picked up, copied over to the storage device 103, and backed up to tape, and the tape is sent offsite for storage. This continues until day 15, when the entire single instance store is backed up to tape. On day 16, a new single instance store is created, and the above process starts anew. One advantage of this process is that up to 15 days' worth of changes to data objects can be easily recovered.
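The example cycle can be summarized in a short driver sketch; the four callables are placeholders for the operations described above, not functions of the disclosed system.

```python
from datetime import timedelta

SILO_WINDOW_DAYS = 15   # the configurable siloing window

def run_silo_cycle(start, copy_full, copy_incremental, silo_to_tape, new_store):
    copy_full()                                # day 1: copies of all data objects
    for day in range(2, SILO_WINDOW_DAYS):     # days 2-14: daily incrementals
        copy_incremental(start + timedelta(days=day - 1))
    silo_to_tape()                             # day 15: back up the whole store
    new_store()                                # day 16: start over from scratch
```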
Another advantage of this process is that it may reduce the number of secondary media, such as tapes, needed to restore files. The longer the period of time between silos of a single instance store, the more tapes that could potentially be needed to restore files. For example, a very large number of files could have references spread across a large number of tapes, all of which would need to be mounted in order to restore all of those files. This could slow the restore process to unacceptable levels.
Although a fifteen-day window has been described for siloing a single instance store, this window may be configured based upon the storage needs of the implementer of the system, and may be optimized based upon the rate of change of the data being stored. For example, if the rate of change of the data is very small, a longer siloing window can be used: most of the data is being single instanced, so tapes will be consumed at a slower rate. If the rate of change of the data is very high, a shorter siloing window may be necessary: comparatively little of the data can be single instanced, so tapes will be consumed at a faster rate. The window can also be based on other factors, including tape usage, tape hardware, tape access times, number of restores, etc., in order to optimize the siloing window for the storage needs of the implementer of the system.
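As a toy illustration only (the thresholds below are invented, not taken from this disclosure), a window-selection heuristic based on the rate of change might look like:

```python
def choose_silo_window(daily_change_rate, base_days=15):
    """daily_change_rate: fraction of primary data changing per day."""
    if daily_change_rate < 0.01:       # little new data: most blocks deduplicate
        return base_days * 2           # tapes fill slowly, so silo less often
    if daily_change_rate > 0.10:       # much new data: little deduplication
        return max(1, base_days // 3)  # tapes fill quickly, so silo sooner
    return base_days
```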
Systems and modules described herein may comprise software, firmware, hardware, or any combination(s) of software, firmware, or hardware suitable for the purposes described herein. Software and other modules may reside on servers, workstations, personal computers, computerized tablets, PDAs, and other devices suitable for the purposes described herein. Modules described herein may be executed by a general-purpose computer, e.g., a server computer, wireless device, or personal computer. Those skilled in the relevant art will appreciate that aspects of the invention can be practiced with other communications, data processing, or computer system configurations, including: Internet appliances, hand-held devices (including personal digital assistants (PDAs)), wearable computers, all manner of cellular or mobile phones, multi-processor systems, microprocessor-based or programmable consumer electronics, set-top boxes, network PCs, mini-computers, mainframe computers, and the like. Indeed, the terms “computer,” “server,” “host,” “host system,” and the like, are generally used interchangeably herein and refer to any of the above devices and systems, as well as any data processor. Furthermore, aspects of the invention can be embodied in a special purpose computer or data processor that is specifically programmed, configured, or constructed to perform one or more of the computer-executable instructions explained in detail herein.
Software and other modules may be accessible via local memory, a network, a browser, or other application in an ASP context, or via another means suitable for the purposes described herein. Examples of the technology can also be practiced in distributed computing environments where tasks or modules are performed by remote processing devices, which are linked through a communications network, such as a Local Area Network (LAN), Wide Area Network (WAN), or the Internet. In a distributed computing environment, program modules may be located in both local and remote memory storage devices. Data structures described herein may comprise computer files, variables, programming arrays, programming structures, or any electronic information storage schemes or methods, or any combinations thereof, suitable for the purposes described herein. User interface elements described herein may comprise elements from graphical user interfaces, command line interfaces, and other interfaces suitable for the purposes described herein.
Examples of the technology may be stored or distributed on computer-readable media, including magnetically or optically readable computer disks, hard-wired or preprogrammed chips (e.g., EEPROM semiconductor chips), nanotechnology memory, biological memory, or other data storage media. Indeed, computer-implemented instructions, data structures, screen displays, and other data under aspects of the invention may be distributed over the Internet or over other networks (including wireless networks), on a propagated signal on a propagation medium (e.g., an electromagnetic wave(s), a sound wave, etc.) over a period of time, or they may be provided on any analog or digital network (packet switched, circuit switched, or other scheme).
Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense; that is to say, in the sense of “including, but not limited to.” As used herein, the terms “connected,” “coupled,” or any variant thereof, means any connection or coupling, either direct or indirect, between two or more elements; the coupling or connection between the elements can be physical, logical, or a combination thereof. Additionally, the words “herein,” “above,” “below,” and words of similar import, when used in this application, refer to this application as a whole and not to any particular portions of this application. Where the context permits, words in the above Detailed Description using the singular or plural number may also include the plural or singular number respectively. The word “or,” in reference to a list of two or more items, covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list.
The above Detailed Description is not intended to be exhaustive or to limit the invention to the precise form disclosed above. While specific examples for the invention are described above for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize. For example, while processes or blocks are presented in a given order, alternative implementations may perform routines having steps, or employ systems having blocks, in a different order, and some processes or blocks may be deleted, moved, added, subdivided, combined, and/or modified to provide alternatives or subcombinations. Each of these processes or blocks may be implemented in a variety of different ways. Also, while processes or blocks are at times shown as being performed in series, these processes or blocks may instead be performed or implemented in parallel, or may be performed at different times. Further, any specific numbers noted herein are only examples; alternative implementations may employ differing values or ranges.
The teachings of the invention provided herein can be applied to other systems, not necessarily the system described above. The elements and acts of the various examples described above can be combined to provide further implementations of the invention.
Any patents and applications and other references noted above, including any that may be listed in accompanying filing papers, are incorporated herein by reference. Aspects of the invention can be modified, if necessary, to employ the systems, functions, and concepts of the various references described above to provide yet further implementations of the invention.
These and other changes can be made to the invention in light of the above Detailed Description. While the above description describes certain examples of the invention and describes the best mode contemplated, no matter how detailed the above appears in text, the invention can be practiced in many ways. Details of the system may vary considerably in its specific implementation, while still being encompassed by the invention disclosed herein. As noted above, particular terminology used when describing certain features or aspects of the invention should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the invention with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the invention to the specific examples disclosed in the specification, unless the above Detailed Description section explicitly defines such terms. Accordingly, the actual scope of the invention encompasses not only the disclosed examples, but also all equivalent ways of practicing or implementing the invention under the claims.
While certain aspects of the invention are presented below in certain claim forms, the applicant contemplates the various aspects of the invention in any number of claim forms. For example, while only one aspect of the invention is recited as a means-plus-function claim under 35 U.S.C. § 112, ¶6, other aspects may likewise be embodied as a means-plus-function claim, or in other forms, such as being embodied in a computer-readable medium. (Any claims intended to be treated under 35 U.S.C. § 112, ¶6, will begin with the words “means for”.) Accordingly, the applicant reserves the right to add additional claims after filing the application to pursue such additional claim forms for other aspects of the invention.
This application is a continuation of U.S. patent application Ser. No. 12/565,576, filed on Sep. 23, 2009, entitled SYSTEMS AND METHODS FOR MANAGING SINGLE INSTANCING DATA, which claims the benefit of U.S. Patent Application No. 61/100,686, filed on Sep. 26, 2008, entitled SYSTEMS AND METHODS FOR MANAGING SINGLE INSTANCING DATA, and is related to U.S. Patent Application No. 61/180,791, filed on May 22, 2009, entitled BLOCK-LEVEL SINGLE INSTANCING, each of which is incorporated by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
4686620 | Ng | Aug 1987 | A |
4713755 | Worley, Jr. et al. | Dec 1987 | A |
4995035 | Cole et al. | Feb 1991 | A |
5005122 | Griffin et al. | Apr 1991 | A |
5093912 | Dong et al. | Mar 1992 | A |
5133065 | Cheffetz et al. | Jul 1992 | A |
5193154 | Kitajima et al. | Mar 1993 | A |
5212772 | Masters | May 1993 | A |
5226157 | Nakano et al. | Jul 1993 | A |
5239647 | Anglin et al. | Aug 1993 | A |
5241668 | Eastridge et al. | Aug 1993 | A |
5241670 | Eastridge et al. | Aug 1993 | A |
5276860 | Fortier et al. | Jan 1994 | A |
5276867 | Kenley et al. | Jan 1994 | A |
5287500 | Stoppani, Jr. | Feb 1994 | A |
5321816 | Rogan et al. | Jun 1994 | A |
5333315 | Saether et al. | Jul 1994 | A |
5347653 | Flynn et al. | Sep 1994 | A |
5410700 | Fecteau et al. | Apr 1995 | A |
5437012 | Mahajan | Jul 1995 | A |
5448724 | Hayashi | Sep 1995 | A |
5491810 | Allen | Feb 1996 | A |
5495607 | Pisello et al. | Feb 1996 | A |
5504873 | Martin et al. | Apr 1996 | A |
5544345 | Carpenter et al. | Aug 1996 | A |
5544347 | Yanai et al. | Aug 1996 | A |
5559957 | Balk | Sep 1996 | A |
5604862 | Midgely et al. | Feb 1997 | A |
5606686 | Tarui et al. | Feb 1997 | A |
5619644 | Crockett et al. | Apr 1997 | A |
5628004 | Gormley et al. | May 1997 | A |
5634052 | Morris | May 1997 | A |
5638509 | Dunphy et al. | Jun 1997 | A |
5673381 | Huai et al. | Sep 1997 | A |
5699361 | Ding et al. | Dec 1997 | A |
5729743 | Squibb | Mar 1998 | A |
5742792 | Yanai et al. | Apr 1998 | A |
5751997 | Kullick et al. | May 1998 | A |
5758359 | Saxon | May 1998 | A |
5761677 | Senator et al. | Jun 1998 | A |
5764972 | Crouse et al. | Jun 1998 | A |
5778395 | Whiting et al. | Jul 1998 | A |
5794229 | French et al. | Aug 1998 | A |
5806057 | Gormley et al. | Sep 1998 | A |
5812398 | Nielsen | Sep 1998 | A |
5813008 | Benson et al. | Sep 1998 | A |
5813009 | Johnson et al. | Sep 1998 | A |
5813017 | Morris | Sep 1998 | A |
5822780 | Schutzman | Oct 1998 | A |
5842222 | Lin | Nov 1998 | A |
5862325 | Reed et al. | Jan 1999 | A |
5875478 | Blumenau | Feb 1999 | A |
5887134 | Ebrahim | Mar 1999 | A |
5901327 | Ofek | May 1999 | A |
5924102 | Perks | Jul 1999 | A |
5940833 | Benson | Aug 1999 | A |
5950205 | Aviani, Jr. | Sep 1999 | A |
5974563 | Beeler, Jr. | Oct 1999 | A |
5990810 | Williams | Nov 1999 | A |
6021415 | Cannon et al. | Feb 2000 | A |
6026414 | Anglin | Feb 2000 | A |
6052735 | Ulrich et al. | Apr 2000 | A |
6073133 | Chrabaszcz | Jun 2000 | A |
6076148 | Kedem | Jun 2000 | A |
6094416 | Ying | Jul 2000 | A |
6125369 | Wu et al. | Sep 2000 | A |
6131095 | Low et al. | Oct 2000 | A |
6131190 | Sidwell | Oct 2000 | A |
6148412 | Cannon et al. | Nov 2000 | A |
6154787 | Urevig et al. | Nov 2000 | A |
6161111 | Mutalik et al. | Dec 2000 | A |
6167402 | Yeager | Dec 2000 | A |
6212512 | Barney et al. | Apr 2001 | B1 |
6260069 | Anglin | Jul 2001 | B1 |
6269431 | Dunham | Jul 2001 | B1 |
6275953 | Vahalia et al. | Aug 2001 | B1 |
6301592 | Aoyama et al. | Oct 2001 | B1 |
6311252 | Raz | Oct 2001 | B1 |
6324544 | Alam et al. | Nov 2001 | B1 |
6324581 | Xu et al. | Nov 2001 | B1 |
6328766 | Long | Dec 2001 | B1 |
6330570 | Crighton | Dec 2001 | B1 |
6330642 | Carteau | Dec 2001 | B1 |
6343324 | Hubis et al. | Jan 2002 | B1 |
RE37601 | Eastridge et al. | Mar 2002 | E |
6356801 | Goodman et al. | Mar 2002 | B1 |
6389432 | Pothapragada et al. | May 2002 | B1 |
6418478 | Ignatius et al. | Jul 2002 | B1 |
6421711 | Blumenau et al. | Jul 2002 | B1 |
6477544 | Bolosky | Nov 2002 | B1 |
6487561 | Ofek et al. | Nov 2002 | B1 |
6513051 | Bolosky et al. | Jan 2003 | B1 |
6519679 | Devireddy et al. | Feb 2003 | B2 |
6538669 | Lagueux, Jr. et al. | Mar 2003 | B1 |
6564228 | O'Connor | May 2003 | B1 |
6609157 | Deo et al. | Aug 2003 | B2 |
6609183 | Ohran | Aug 2003 | B2 |
6609187 | Merrell et al. | Aug 2003 | B1 |
6658526 | Nguyen et al. | Dec 2003 | B2 |
6675177 | Webb | Jan 2004 | B1 |
6704730 | Moulton et al. | Mar 2004 | B2 |
6708195 | Borman | Mar 2004 | B1 |
6745304 | Playe | Jun 2004 | B2 |
6757699 | Lowry | Jun 2004 | B2 |
6757794 | Cabrera et al. | Jun 2004 | B2 |
6795903 | Schultz et al. | Sep 2004 | B2 |
6810398 | Moulton | Oct 2004 | B2 |
6862674 | Dice et al. | Mar 2005 | B2 |
6865655 | Andersen | Mar 2005 | B1 |
6868417 | Kazar et al. | Mar 2005 | B2 |
6889297 | Krapp et al. | May 2005 | B2 |
6901493 | Maffezzoni | May 2005 | B1 |
6912645 | Dorward et al. | Jun 2005 | B2 |
6928459 | Sawdon et al. | Aug 2005 | B1 |
6952758 | Chron et al. | Oct 2005 | B2 |
6959368 | St. Pierre et al. | Oct 2005 | B1 |
6973553 | Archibald, Jr. et al. | Dec 2005 | B1 |
6976039 | Chefalas et al. | Dec 2005 | B2 |
6993162 | Stephany et al. | Jan 2006 | B2 |
7017113 | Bourbakis et al. | Mar 2006 | B2 |
7035876 | Kawai et al. | Apr 2006 | B2 |
7035880 | Crescenti et al. | Apr 2006 | B1 |
7035943 | Yamane et al. | Apr 2006 | B2 |
7085904 | Mizuno et al. | Aug 2006 | B2 |
7089383 | Ji et al. | Aug 2006 | B2 |
7089395 | Jacobson et al. | Aug 2006 | B2 |
7092956 | Ruediger | Aug 2006 | B2 |
7103740 | Colgrove et al. | Sep 2006 | B1 |
7107418 | Ohran | Sep 2006 | B2 |
7111173 | Scheidt | Sep 2006 | B1 |
7117246 | Christenson et al. | Oct 2006 | B2 |
7139808 | Anderson et al. | Nov 2006 | B2 |
7143091 | Charnock et al. | Nov 2006 | B2 |
7143108 | George | Nov 2006 | B1 |
7191290 | Ackaouy et al. | Mar 2007 | B1 |
7200604 | Forman et al. | Apr 2007 | B2 |
7200621 | Beck et al. | Apr 2007 | B2 |
7246272 | Cabezas et al. | Jul 2007 | B2 |
7272606 | Borthakur et al. | Sep 2007 | B2 |
7287252 | Bussiere et al. | Oct 2007 | B2 |
7290102 | Lubbers et al. | Oct 2007 | B2 |
7310655 | Dussud | Dec 2007 | B2 |
7320059 | Armangau et al. | Jan 2008 | B1 |
7325110 | Kubo et al. | Jan 2008 | B2 |
7330997 | Odom | Feb 2008 | B1 |
7343459 | Prahlad et al. | Mar 2008 | B2 |
7370003 | Pych | May 2008 | B2 |
7376805 | Stroberger et al. | May 2008 | B2 |
7383304 | Shimada et al. | Jun 2008 | B2 |
7383462 | Osaki et al. | Jun 2008 | B2 |
7389345 | Adams | Jun 2008 | B1 |
7395282 | Crescenti et al. | Jul 2008 | B1 |
7409522 | Fair et al. | Aug 2008 | B1 |
7444382 | Malik | Oct 2008 | B2 |
7444387 | Douceur et al. | Oct 2008 | B2 |
7478113 | De Spiegeleer et al. | Jan 2009 | B1 |
7480782 | Garthwaite | Jan 2009 | B2 |
7487245 | Douceur et al. | Feb 2009 | B2 |
7490207 | Amarendran et al. | Feb 2009 | B2 |
7493314 | Huang et al. | Feb 2009 | B2 |
7493456 | Brittain et al. | Feb 2009 | B2 |
7496604 | Sutton, Jr. et al. | Feb 2009 | B2 |
7512745 | Gschwind et al. | Mar 2009 | B2 |
7519726 | Palliyil et al. | Apr 2009 | B2 |
7533331 | Brown et al. | May 2009 | B2 |
7536440 | Budd et al. | May 2009 | B2 |
7546428 | McAndrews | Jun 2009 | B1 |
7568080 | Prahlad et al. | Jul 2009 | B2 |
7577687 | Bank et al. | Aug 2009 | B2 |
7603529 | MacHardy et al. | Oct 2009 | B1 |
7613748 | Brockway et al. | Nov 2009 | B2 |
7617297 | Bruce et al. | Nov 2009 | B2 |
7631120 | Darcy | Dec 2009 | B2 |
7631194 | Wahlert et al. | Dec 2009 | B2 |
7636824 | Tormasov | Dec 2009 | B1 |
7647462 | Wolfgang et al. | Jan 2010 | B2 |
7657550 | Prahlad et al. | Feb 2010 | B2 |
7661028 | Erofeev | Feb 2010 | B2 |
7668884 | Prahlad et al. | Feb 2010 | B2 |
7672779 | Fuchs | Mar 2010 | B2 |
7672981 | Faibish et al. | Mar 2010 | B1 |
7676590 | Silverman et al. | Mar 2010 | B2 |
7685126 | Patel et al. | Mar 2010 | B2 |
7685177 | Hagerstrom et al. | Mar 2010 | B1 |
7685384 | Shavit | Mar 2010 | B2 |
7685459 | De Spiegeleer et al. | Mar 2010 | B1 |
7698699 | Rogers et al. | Apr 2010 | B2 |
7721292 | Frasier et al. | May 2010 | B2 |
7734581 | Gu et al. | Jun 2010 | B2 |
7747579 | Prahlad et al. | Jun 2010 | B2 |
7747659 | Bacon et al. | Jun 2010 | B2 |
7778979 | Hatonen et al. | Aug 2010 | B2 |
7786881 | Burchard et al. | Aug 2010 | B2 |
7788230 | Dile et al. | Aug 2010 | B2 |
7814142 | Mamou et al. | Oct 2010 | B2 |
7818287 | Torii et al. | Oct 2010 | B2 |
7818495 | Tanaka et al. | Oct 2010 | B2 |
7818531 | Barrall | Oct 2010 | B2 |
7831707 | Bardsley | Nov 2010 | B2 |
7831795 | Prahlad et al. | Nov 2010 | B2 |
7836161 | Scheid | Nov 2010 | B2 |
7840537 | Gokhale et al. | Nov 2010 | B2 |
7853750 | Stager | Dec 2010 | B2 |
7856414 | Zee | Dec 2010 | B2 |
7865678 | Arakawa et al. | Jan 2011 | B2 |
7870105 | Arakawa et al. | Jan 2011 | B2 |
7870486 | Wang et al. | Jan 2011 | B2 |
7873599 | Ishii et al. | Jan 2011 | B2 |
7873806 | Prahlad et al. | Jan 2011 | B2 |
7882077 | Gokhale et al. | Feb 2011 | B2 |
7899990 | Moll et al. | Mar 2011 | B2 |
7921077 | Ting et al. | Apr 2011 | B2 |
7953706 | Prahlad et al. | May 2011 | B2 |
7962452 | Anglin | Jun 2011 | B2 |
8028106 | Bondurant et al. | Sep 2011 | B2 |
8037028 | Prahlad et al. | Oct 2011 | B2 |
8041907 | Wu et al. | Oct 2011 | B1 |
8051367 | Arai et al. | Nov 2011 | B2 |
8054765 | Passey et al. | Nov 2011 | B2 |
8055618 | Anglin | Nov 2011 | B2 |
8055627 | Prahlad et al. | Nov 2011 | B2 |
8055745 | Atluri | Nov 2011 | B2 |
8086799 | Mondal et al. | Dec 2011 | B2 |
8108429 | Sim-Tang et al. | Jan 2012 | B2 |
8112357 | Mueller et al. | Feb 2012 | B2 |
8131687 | Bates et al. | Mar 2012 | B2 |
8140786 | Bunte et al. | Mar 2012 | B2 |
8156092 | Hewett et al. | Apr 2012 | B2 |
8156279 | Tanaka et al. | Apr 2012 | B2 |
8161003 | Kavuri | Apr 2012 | B2 |
8165221 | Zheng et al. | Apr 2012 | B2 |
8166263 | Prahlad et al. | Apr 2012 | B2 |
8170994 | Tsaur et al. | May 2012 | B2 |
8190823 | Waltermann et al. | May 2012 | B2 |
8190835 | Yueh | May 2012 | B1 |
8219524 | Gokhale | Jul 2012 | B2 |
8234444 | Bates et al. | Jul 2012 | B2 |
8271992 | Chatley et al. | Sep 2012 | B2 |
8285683 | Prahlad et al. | Oct 2012 | B2 |
8295875 | Masuda | Oct 2012 | B2 |
8296260 | Ting et al. | Oct 2012 | B2 |
8315984 | Frandzel | Nov 2012 | B2 |
8346730 | Srinivasan et al. | Jan 2013 | B2 |
8375008 | Gomes | Feb 2013 | B1 |
8380957 | Prahlad et al. | Feb 2013 | B2 |
8392677 | Bunte et al. | Mar 2013 | B2 |
8401996 | Muller et al. | Mar 2013 | B2 |
8412677 | Klose | Apr 2013 | B2 |
8412682 | Zheng et al. | Apr 2013 | B2 |
8548953 | Wong et al. | Oct 2013 | B2 |
8578120 | Attarde et al. | Nov 2013 | B2 |
8620845 | Stoakes et al. | Dec 2013 | B2 |
8626723 | Ben-Shaul et al. | Jan 2014 | B2 |
8712969 | Prahlad et al. | Apr 2014 | B2 |
8712974 | Datuashvili et al. | Apr 2014 | B2 |
8725687 | Klose | May 2014 | B2 |
8769185 | Chung | Jul 2014 | B2 |
8782368 | Lillibridge et al. | Jul 2014 | B2 |
8880797 | Yueh | Nov 2014 | B2 |
8909881 | Bunte et al. | Dec 2014 | B2 |
8935492 | Gokhale et al. | Jan 2015 | B2 |
8965852 | Jayaraman | Feb 2015 | B2 |
9015181 | Kottomtharayil et al. | Apr 2015 | B2 |
9058117 | Attarde et al. | Jun 2015 | B2 |
9098495 | Gokhale | Aug 2015 | B2 |
10089337 | Senthilnathan et al. | Oct 2018 | B2 |
10262003 | Kottomtharayil | Apr 2019 | B2 |
10324897 | Amarendran et al. | Jun 2019 | B2 |
10324914 | Kumarasamy | Jun 2019 | B2 |
10678758 | Dornemann | Jun 2020 | B2 |
20010037323 | Moulton et al. | Nov 2001 | A1 |
20020055972 | Weinman | May 2002 | A1 |
20020065892 | Malik | May 2002 | A1 |
20020099806 | Balsamo et al. | Jul 2002 | A1 |
20020107877 | Whiting et al. | Aug 2002 | A1 |
20020169934 | Krapp et al. | Nov 2002 | A1 |
20030004922 | Schmidt et al. | Jan 2003 | A1 |
20030033308 | Patel et al. | Feb 2003 | A1 |
20030097359 | Ruediger | May 2003 | A1 |
20030105716 | Sutton et al. | Jun 2003 | A1 |
20030110190 | Achiwa et al. | Jun 2003 | A1 |
20030135704 | Martin | Jul 2003 | A1 |
20030167318 | Robbin et al. | Sep 2003 | A1 |
20030172368 | Alumbaugh et al. | Sep 2003 | A1 |
20030177149 | Coombs | Sep 2003 | A1 |
20030191849 | Leong | Oct 2003 | A1 |
20030236763 | Kilduff | Dec 2003 | A1 |
20040006702 | Johnson | Jan 2004 | A1 |
20040093259 | Pych | May 2004 | A1 |
20040148306 | Moulton et al. | Jul 2004 | A1 |
20040167898 | Margolus et al. | Aug 2004 | A1 |
20040230817 | Ma | Nov 2004 | A1 |
20050033756 | Kottomtharayil et al. | Feb 2005 | A1 |
20050055359 | Kawai et al. | Mar 2005 | A1 |
20050060643 | Glass et al. | Mar 2005 | A1 |
20050066190 | Martin | Mar 2005 | A1 |
20050097150 | McKeon et al. | May 2005 | A1 |
20050114406 | Borthakur et al. | May 2005 | A1 |
20050138081 | Alshab et al. | Jun 2005 | A1 |
20050149589 | Bacon et al. | Jul 2005 | A1 |
20050177603 | Shavit | Aug 2005 | A1 |
20050193028 | Oswalt | Sep 2005 | A1 |
20050203864 | Schmidt et al. | Sep 2005 | A1 |
20050234823 | Schimpf | Oct 2005 | A1 |
20050262110 | Gu et al. | Nov 2005 | A1 |
20050262193 | Mamou et al. | Nov 2005 | A1 |
20050286466 | Tagg et al. | Dec 2005 | A1 |
20060005048 | Osaki et al. | Jan 2006 | A1 |
20060010227 | Atluri | Jan 2006 | A1 |
20060047894 | Okumura | Mar 2006 | A1 |
20060053305 | Wahlert et al. | Mar 2006 | A1 |
20060056623 | Gligor et al. | Mar 2006 | A1 |
20060089954 | Anschutz | Apr 2006 | A1 |
20060095470 | Cochran et al. | May 2006 | A1 |
20060123313 | Brown et al. | Jun 2006 | A1 |
20060129875 | Barrall | Jun 2006 | A1 |
20060156064 | Damani et al. | Jul 2006 | A1 |
20060174112 | Wray | Aug 2006 | A1 |
20060224846 | Amarendran et al. | Oct 2006 | A1 |
20060230244 | Amarendran et al. | Oct 2006 | A1 |
20070022145 | Kavuri | Jan 2007 | A1 |
20070118573 | Gadiraju | May 2007 | A1 |
20070118705 | Arakawa et al. | May 2007 | A1 |
20070136200 | Frank et al. | Jun 2007 | A1 |
20070156998 | Gorobets | Jul 2007 | A1 |
20070179995 | Prahlad et al. | Aug 2007 | A1 |
20070185879 | Roublev | Aug 2007 | A1 |
20070208788 | Chakravarty et al. | Sep 2007 | A1 |
20070255758 | Zheng et al. | Nov 2007 | A1 |
20070255909 | Gschwind et al. | Nov 2007 | A1 |
20070271316 | Hollebeek | Nov 2007 | A1 |
20080005141 | Zheng et al. | Jan 2008 | A1 |
20080016467 | Chambers et al. | Jan 2008 | A1 |
20080028007 | Ishii et al. | Jan 2008 | A1 |
20080034045 | Bardsley | Feb 2008 | A1 |
20080082736 | Chow et al. | Apr 2008 | A1 |
20080091881 | Brittain et al. | Apr 2008 | A1 |
20080098083 | Shergill et al. | Apr 2008 | A1 |
20080125170 | Masuda | May 2008 | A1 |
20080162320 | Mueller et al. | Jul 2008 | A1 |
20080162467 | Fuchs | Jul 2008 | A1 |
20080162518 | Bollinger et al. | Jul 2008 | A1 |
20080184001 | Stager | Jul 2008 | A1 |
20080243879 | Gokhale et al. | Oct 2008 | A1 |
20080243914 | Prahlad et al. | Oct 2008 | A1 |
20080243957 | Prahlad et al. | Oct 2008 | A1 |
20080243958 | Prahlad et al. | Oct 2008 | A1 |
20080244204 | Cremelie et al. | Oct 2008 | A1 |
20080294696 | Frandzel | Nov 2008 | A1 |
20090012984 | Ravid et al. | Jan 2009 | A1 |
20090013140 | Bondurant et al. | Jan 2009 | A1 |
20090049260 | Upadhyayula | Feb 2009 | A1 |
20090063528 | Yueh | Mar 2009 | A1 |
20090083341 | Parees | Mar 2009 | A1 |
20090083344 | Inoue et al. | Mar 2009 | A1 |
20090083610 | Arai et al. | Mar 2009 | A1 |
20090106369 | Chen et al. | Apr 2009 | A1 |
20090106480 | Chuna | Apr 2009 | A1 |
20090112870 | Ozzie et al. | Apr 2009 | A1 |
20090132619 | Arakawa et al. | May 2009 | A1 |
20090132764 | Moll et al. | May 2009 | A1 |
20090144285 | Chatley et al. | Jun 2009 | A1 |
20090150498 | Branda et al. | Jun 2009 | A1 |
20090192978 | Hewett et al. | Jul 2009 | A1 |
20090204636 | Li et al. | Aug 2009 | A1 |
20090204649 | Wono et al. | Aug 2009 | A1 |
20090235022 | Bates et al. | Sep 2009 | A1 |
20090268903 | Bojinov et al. | Oct 2009 | A1 |
20090271402 | Srinivasan et al. | Oct 2009 | A1 |
20090271454 | Anglin et al. | Oct 2009 | A1 |
20090319534 | Gokhale | Dec 2009 | A1 |
20100036887 | Anglin et al. | Feb 2010 | A1 |
20100070715 | Waltermann et al. | Mar 2010 | A1 |
20100077161 | Stoakes et al. | Mar 2010 | A1 |
20100082529 | Mace et al. | Apr 2010 | A1 |
20100082672 | Kottomtharayil et al. | Apr 2010 | A1 |
20100088296 | Periyagaram et al. | Apr 2010 | A1 |
20100094817 | Ben-Shaul et al. | Apr 2010 | A1 |
20100161554 | Datuashvili et al. | Jun 2010 | A1 |
20100223441 | Lillibridge et al. | Sep 2010 | A1 |
20110035357 | Ting et al. | Feb 2011 | A1 |
20110125720 | Jayaraman | May 2011 | A1 |
20120271793 | Gokhale | Oct 2012 | A1 |
20130218842 | Muller et al. | Aug 2013 | A1 |
20130262386 | Kottomtharayil et al. | Oct 2013 | A1 |
20140188805 | Vijayan | Jul 2014 | A1 |
20140233366 | Prahlad et al. | Aug 2014 | A1 |
20140250088 | Klose | Sep 2014 | A1 |
20150134924 | Gokhale et al. | May 2015 | A1 |
20150199242 | Attarde | Jul 2015 | A1 |
20190188188 | Kottomtharayil et al. | Jun 2019 | A1 |
20190192978 | Eatedali et al. | Jun 2019 | A1 |
20190266139 | Kumarasamy et al. | Aug 2019 | A1 |
20190278748 | Amarendran | Sep 2019 | A1 |
Number | Date | Country |
---|---|---|
0259912 | Mar 1988 | EP |
0405926 | Jan 1991 | EP |
0467546 | Jan 1992 | EP |
0774715 | May 1997 | EP |
0809184 | Nov 1997 | EP |
0899662 | Mar 1999 | EP |
0981090 | Feb 2000 | EP |
WO-9513580 | May 1995 | WO |
WO-9912098 | Mar 1999 | WO |
WO-03027891 | Apr 2003 | WO |
WO-2006052872 | May 2006 | WO |
2008070688 | Jun 2008 | WO |
2008080140 | Jul 2008 | WO |
Entry |
---|
Extended European Search Report for 09816825.5; dated Oct. 27, 2015, 15 pages. |
Australian Examination Report dated Feb. 14, 2012 for Australian Application No. 2009296695 in 3 pages. |
Australian Examination Report dated Jun. 7, 2013 for Australian Application No. 2009296695 in 3 pages. |
Australian Examination Report dated Sep. 12, 2013 for Australian Application No. 2009296695 in 6 pages. |
Australian Examination Report dated Oct. 2, 2014 for Australian Application No. 2013206404 in 2 pages. |
Canadian Examination Report dated Nov. 7, 2014 for Canadian Application No. 2729078 in 5 pages. |
Partial Supplementary European Search Report dated Apr. 15, 2015 for European Application No. 09816825.5 in 6 pages. |
European Examination Report dated Jan. 13, 2020 for European Application No. 09816825.5 in 7 pages. |
Armstead et al., “Implementation of a Campus-wide Distributed Mass Storage Service: The Dream vs. Reality,” IEEE, Sep. 11-14, 1995, pp. 190-199. |
Arneson, “Mass Storage Archiving in Network Environments,” Digest of Papers, Ninth IEEE Symposium on Mass Storage Systems, Oct. 31, 1988-Nov. 3, 1988, pp. 45-50, Monterey, CA. |
Cabrera et al., “ADSM: A Multi-Platform, Scalable, Backup and Archive Mass Storage System,” Digest of Papers, Compcon '95, Proceedings of the 40th IEEE Computer Society International Conference, Mar. 5, 1995-Mar. 9, 1995, pp. 420-427, San Francisco, CA. |
Commvault Systems, Inc., “Continuous Data Replicator 7.0,” Product Data Sheet, 2007, 6 pages. |
CommVault Systems, Inc., “Deduplication—How To,” <http://documentation.commvault.com/commvault/release_8_0_0/books_online_1/english_US/features/single_instance/single_instance_how_to.htm>, earliest known publication date: Jan. 26, 2009, 7 pages. |
CommVault Systems, Inc., “Deduplication,” <http://documentation.commvault.com/commvault/release_8_0_0/books_online_1/english_US/features/single_instance/single_instance.htm>, earliest known publication date: Jan. 26, 2009, 9 pages. |
Diligent Technologies “HyperFactor,” <http://www.diligent.com/products:protecTIER-1:HyperFactor-1>, Internet accessed on Dec. 5, 2008, 2 pages. |
Eitel, “Backup and Storage Management in Distributed Heterogeneous Environments,” IEEE, Jun. 12-16, 1994, pp. 124-126. |
Enterprise Storage Management, “What Is Hierarchical Storage Management?”, Jun. 19, 2005, p. 1, http://web.archive.org/web/20050619000521/http://www.enterprisestoragemanagement.com/faq/hierarchical-storage-management.shtml. |
Enterprise Storage Management, “What Is an Incremental Backup?”, Oct. 26, 2005, pp. 1-2, http://web.archive.org/web/20051026010908/http://www.enterprisestoragemanagement.com/faq/incremental-backup.shtml. |
Extended European Search Report for EP07865192.4; dated May 2, 2013, 7 pages. |
Federal Information Processing Standards Publication 180-2, “Secure Hash Standard”, Aug. 1, 2002, <http://csrc.nist.gov/publications/fips/fips180-2/fips180-2withchangenotice.pdf>, 83 pages. |
Gait, J., “The Optical File Cabinet: A Random-Access File System for Write-Once Optical Disks,” IEEE Computer, vol. 21, No. 6, pp. 11-22 (Jun. 1988). |
Geer, D., “Reducing the Storage Burden Via Data Deduplication,” IEEE, Computer Journal, vol. 41, Issue 12, Dec. 2008, pp. 15-17. |
Handy, Jim, “The Cache Memory Book: The Authoritative Reference on Cache Design,” Second Edition, 1998, pp. 64-67 and pp. 204-205. |
International Search Report and Written Opinion for PCT/US07/86421, dated Apr. 18, 2008, 9 pages. |
International Search Report for Application No. PCT/US09/58137, dated Dec. 23, 2009, 14 pages. |
International Search Report for Application No. PCT/US10/34676, dated Nov. 29, 2010, 9 pages. |
International Search Report for Application No. PCT/US11/54378, dated May 2, 2012, 8 pages. |
Jander, M., “Launching Storage-Area Net,” Data Communications, US, McGraw Hill, NY, vol. 27, No. 4 (Mar. 21, 1998), pp. 64-72. |
Kornblum, Jesse, “Identifying Almost Identical Files Using Context Triggered Piecewise Hashing,” www.sciencedirect.com, Digital Investigation 3S (2006), pp. S91-S97. |
Kulkarni P. et al., “Redundancy elimination within large collections of files,” Proceedings of the USENIX Annual Technical Conference, Jul. 2, 2004, pp. 59-72. |
Lortu Software Development, “Kondar Technology-Deduplication,” <http://www.lortu.com/en/deduplication.asp>, Internet accessed on Dec. 5, 2008, 3 pages. |
Menezes et al., “Handbook of Applied Cryptography”, CRC Press, 1996, <http://www.cacr.math.uwaterloo.ca/hac/about/chap9.pdf>, 64 pages. |
Microsoft, “Computer Dictionary”, p. 249, Fifth Edition, 2002, 3 pages. |
Microsoft, “Computer Dictionary”, pp. 142, 150, 192, and 538, Fifth Edition, 2002, 6 pages. |
Overland Storage, “Data Deduplication,” <http://www.overlandstorage.com/topics/data_deduplication.html>, Internet accessed on Dec. 5, 2008, 2 pages. |
Quantum Corporation, “Data De-Duplication Background: A Technical White Paper,” May 2008, 13 pages. |
Rosenblum et al., “The Design and Implementation of a Log-Structured File System,” Operating Systems Review SIGOPS, vol. 25, No. 5, New York, US, pp. 1-15 (May 1991). |
SearchStorage, “File System”, Nov. 1998, <http://searchstorage.techtarget.com/definition/file-system>, 10 pages. |
Sharif, A., “Cache Memory,” Sep. 2005, http://searchstorage.techtarget.com/definition/cache-memory, pp. 1-26. |
Webopedia, “Cache,” Apr. 11, 2001, http://web.archive.org/web/20010411033304/http://www.webopedia.com/TERM/c/cache.html, pp. 1-4. |
Webopedia, “Data Deduplication”, Aug. 31, 2006, <http://web.archive.org/web/20060913030559/http://www.webopedia.com/TERM/D/data_deduplication.html>, 2 pages. |
Examination Report dated Dec. 14, 2018 in European Patent Application No. 09816825.5, 7 pages. |
U.S. Appl. No. 12/565,576, filed Sep. 23, 2009, now U.S. Pat. No. 9,015,181, titled Systems and Methods for Managing Single Instancing Data. |
U.S. Appl. No. 12/647,906, filed Dec. 28, 2009, now U.S. Pat. No. 8,578,120, titled Block-Level Single Instancing. |
U.S. Appl. No. 14/049,463, filed Oct. 9, 2013, now U.S. Pat. No. 9,058,117, titled Block-Level Single Instancing. |
U.S. Appl. No. 14/668,450, filed Mar. 5, 2015, titled Block-Level Single Instancing. |
Canada Office Action for Application No. 2706007, dated Jul. 30, 2014, 2 pages. |
Number | Date | Country |
---|---|---|
20150205678 A1 | Jul 2015 | US |
Number | Date | Country |
---|---|---|
61100686 | Sep 2008 | US |
 | Number | Date | Country |
---|---|---|---|
Parent | 12565576 | Sep 2009 | US |
Child | 14674229 | | US |