The present invention relates to methods and apparatus for implementing data de-duplication in respect of serial-access storage media.
Existing storage devices frequently offer data compression (short-dictionary-type redundancy elimination); for example, LTO (Linear Tape-Open) tape drives may use SLDC (Streaming Lossless Data Compression, an algorithm very similar to Adaptive Lossless Data Compression). This type of redundancy elimination is not fully effective when handling the large-scale data duplication frequently found in data supplied to storage devices for backup or archiving; such data often contains copies of files or other large sections of repeated data.
For such large-scale redundancy elimination, a class of techniques known as ‘data de-duplication’ has been developed. In general terms, data de-duplication, when applied to the storage of input subject data on a storage medium, involves identifying chunks of repeated data in the input subject data, storing the first occurrence of the chunk data and, for subsequent occurrences of that chunk of data, storing only a pointer to the corresponding stored data chunk. When retrieving the data from the storage medium, the original data can be reconstructed by replacing the chunk pointers read from the storage medium with the corresponding chunk data.
As it is possible for the same data chunk to occur both at or near the beginning of the subject data and at or near the end of the subject data, the chunk data has to be available throughout the recovery of the original data from the storage medium. As a result, data de-duplication is well suited for use with random-access storage media such as disc.
Application of data de-duplication to the storage of data on streaming media (that is, serially-accessed media such as tape) is not attractive because retrieving the full chunk data from the media upon encountering a stored chunk pointer requires the media to be repositioned, which is inevitably very time consuming. Furthermore, although it would be possible to avoid media repositioning by storing all data chunks read from the media in a random-access cache memory for the duration of the recovery operation, this would require a very large, and therefore very expensive, cache memory.
According to the present invention, there is provided a data storage method and apparatus, for storing data to a serial access medium, as set out in accompanying claims 1 and 13 respectively.
Further according to the present invention, there is provided a method and apparatus for reconstructing a subject data stream from data items read from a serial-access storage medium, as set out in accompanying claims 7 and 17 respectively.
Embodiments of the invention will now be described, by way of non-limiting example, with reference to the accompanying diagrammatic drawings.
The tape drive 10 functionally comprises a tape read/write subsystem 10A and a chunk processing subsystem 10B, both under the control of a common controller 18.
The read/write subsystem 10A comprises a tape transport 11 for moving a storage tape 12 relative to a read/write head 13, a write channel 15 for organizing data to be supplied to the read/write head 13 into the appropriate format for writing to tape, and a read channel 16 for reversing the formatting of data read from tape by the read/write head 13. The write channel 15 will generally also be arranged to effect error-correction coding and low-level data compression, with the read channel 16 being correspondingly arranged to effect decompression and error correction.
The chunk processing subsystem 10B provides the input/output interface of the tape drive 10 and is arranged to implement data de-duplication for input subject data to be stored to tape and later retrieved. More particularly, the chunk processing subsystem 10B comprises a chunk processing block 14, a memory 19 holding databases built up during chunk processing, and a random-access cache memory 50 used during data retrieval.
The data de-duplication method implemented by the chunk processing subsystem 10B will next be described in detail, first with respect to the processing effected during data storage and then with respect to the processing effected during data retrieval.
An input subject data stream received at the processing block 14 is divided into chunks (for example 7 KB in size) and a hash of each subject-data chunk is dynamically generated by dedicated hardware circuitry (not separately shown but part of block 14) or any other suitable means. Each hash forms, with very high probability, a unique identifier of the subject data making up the chunk concerned such that chunks giving rise to the same hash value can be reliably considered to comprise the same subject data. In general terms, the chunk subject-data hashes are used to detect duplicate chunks of subject data and each such duplicate chunk is then replaced by its hash. (As used herein, reference to a ‘chunk of subject data’ is to be understood as a reference to the subject data making up a chunk rather than to the specific chunk concerned). The data output by the processing block 14 to the write channel 15 thus comprises a succession of data items, each data item being either a chunk of subject data where this is the first occurrence of that data as a chunk in the input subject-data stream, or the hash of a chunk where the subject data of the chunk is a duplicate of that of a previously occurring chunk. Each data item (or just selected data items, such as those comprising subject data) may also include metadata about the corresponding chunk, this metadata being placed, for example, at the start of the data item.
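By way of illustration only, the following sketch shows how an input subject-data stream might be divided into chunks and each chunk paired with a hash serving as its identifier. The 7 KB chunk size is the example given above; the use of SHA-1 and the function name chunk_and_hash are merely assumptions for the purpose of the sketch, since the description does not prescribe a particular hash algorithm.

```python
import hashlib

CHUNK_SIZE = 7 * 1024  # example 7 KB chunk size mentioned above

def chunk_and_hash(subject_data: bytes):
    """Divide the input subject data into chunks and pair each chunk with
    a hash that, with very high probability, uniquely identifies the
    subject data making up that chunk."""
    for offset in range(0, len(subject_data), CHUNK_SIZE):
        chunk = subject_data[offset:offset + CHUNK_SIZE]
        # SHA-1 is used purely for illustration; any suitably
        # collision-resistant digest could serve as the chunk-data ID.
        yield hashlib.sha1(chunk).digest(), chunk
```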
Each chunk in the input subject-data stream has an associated logical location in that data stream, and each data item is written to tape along with a location mark allowing the logical location of the data item in the original data to be determined. The general format of the data stored to the tape 12 is thus a succession of data items 26, 27, each associated with a location mark 25.
The logical location of a chunk in the input data stream, and thus of the corresponding data item stored to tape, is for example expressed by the serial number of the chunk either within the whole input subject data being stored or within a sub-unit, such as a record, of that data—in the latter case, the full logical location of a chunk would also require a sub-unit identifier, such as a record serial number, as well as the chunk serial number. The logical location (hereinafter just ‘location’) of each chunk of the input subject data provides a unique identifier of the chunk and is tracked by the processing block 14 (or alternatively by the controller 18).
The location marks 25 written to tape can comprise the absolute location of the corresponding data items, relative (in particular, incremental) location indicators, or a mixture of the two. For example, the location marks 25 can comprise a standard codeword or other boundary indicator marking the start of a new data item 26, 27 and constituting an incremental location indicator. Where incremental location indicators are used, absolute location can be determined by counting the incremental location indicators from a previous absolute location (either an absolute location mark or some other absolute mark such as the BOD mark 21).
Each location mark 25 may also provide an indication of whether the following data item is a chunk-subject-data data item 26 or a chunk-hash data item 27.
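Purely as an illustration of how absolute and incremental location marks might be combined, the following sketch resolves the absolute logical location of the data item following a mark by counting incremental marks from the last known absolute location. The LocationMark structure and its field names are hypothetical, not taken from the description.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LocationMark:
    # None => a bare boundary indicator acting only as an incremental
    # location indicator; otherwise the absolute chunk serial number.
    absolute_location: Optional[int]
    # Whether the following data item is a chunk-hash item 27 rather
    # than a chunk-subject-data item 26.
    next_item_is_hash: bool

def resolve_location(last_absolute: int, marks_since_absolute: int,
                     mark: LocationMark) -> int:
    """Determine the absolute location of the data item following `mark`,
    counting incremental marks from the previous absolute location (an
    absolute location mark or, for example, the BOD mark 21)."""
    if mark.absolute_location is not None:
        return mark.absolute_location
    return last_absolute + marks_since_absolute
```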
During the course of chunk processing by block 14, two databases 31, 35 are built up and maintained in the memory 19: the Chunk database (‘Chunk DB’) 31 and the duplicated-chunk database (‘DupC DB’) 35.
The Chunk database 31 comprises a respective multi-field entry 32 for each unique chunk of subject data encountered in the input subject-data stream, each entry 32 comprising a field storing the hash of the chunk of subject data and a field storing the location of the first occurrence of a chunk comprising that subject data (this location being abbreviated herein to ‘FOL’, First Occurrence Location). The DupC database 35 comprises a respective multi-field entry 36 for each chunk of subject data duplicated one or more times in the input subject-data stream, each entry 36 comprising a field storing the hash of the chunk of subject data, a field storing the first occurrence location, FOL, of a chunk comprising that subject data, and a field storing the number of repetitions (duplicates) 37 of the chunk subject data concerned (or a related indicator such as the total number of occurrences of the chunk, this of course being one more than the number of repetitions).
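A minimal sketch of the two databases, assuming each is keyed by the chunk hash, might look as follows; the type and field names are illustrative only.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class ChunkEntry:
    """One entry 32 of the Chunk DB 31 (keyed by the chunk hash)."""
    first_occurrence_loc: int   # FOL of the first chunk with this data

@dataclass
class DupCEntry:
    """One entry 36 of the DupC DB 35 (keyed by the chunk hash)."""
    first_occurrence_loc: int   # FOL of the duplicated subject data
    repetitions: int            # number of duplicates 37 of the data

ChunkDB = Dict[bytes, ChunkEntry]   # chunk hash -> entry 32
DupCDB = Dict[bytes, DupCEntry]     # chunk hash -> entry 36
```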
The process carried out by the chunk processing subsystem 10B during data storage is depicted in the accompanying flow chart and is, in general terms, as follows: the hash of each chunk of the input subject data is looked up in the Chunk DB 31; if no matching entry is found, the chunk is the first occurrence of its subject data, so a new entry 32 is created and the chunk subject data is passed to the write channel 15 as a data item 26; if a matching entry is found, the chunk is a duplicate, so the corresponding DupC DB entry 36 is created or its repetitions count 37 incremented, and only the chunk hash is passed to the write channel 15 as a data item 27.
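Continuing the sketches above, a hedged outline of this storage-time loop might look as follows; the names are illustrative and the actual flow chart may order the steps differently.

```python
def deduplicate_for_storage(hashed_chunks, chunk_db, dupc_db):
    """Yield (location, data_item) pairs for writing to tape: first
    occurrences carry the chunk subject data (item 26), duplicates carry
    only the chunk hash (item 27)."""
    for location, (chunk_hash, chunk_data) in enumerate(hashed_chunks):
        if chunk_hash in chunk_db:
            # Duplicate subject data: create or extend the DupC DB entry
            # and store only the hash as the data item.
            fol = chunk_db[chunk_hash].first_occurrence_loc
            entry = dupc_db.setdefault(chunk_hash, DupCEntry(fol, 0))
            entry.repetitions += 1
            yield location, ('HASH', chunk_hash)
        else:
            # First occurrence: remember where it is and store the
            # subject data itself.
            chunk_db[chunk_hash] = ChunkEntry(location)
            yield location, ('DATA', chunk_data)
```

For example, deduplicate_for_storage(chunk_and_hash(data), {}, {}) would produce the succession of data items described above, with the location taken simply as the chunk serial number.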
At the end of processing in accordance with the storage-time flow chart, the contents of the DupC DB 35 are written out to the tape 12 so that they are available when the stored data is subsequently read back.
The copies of the Chunk DB 31 and DupC DB 35 present in the memory 19 are deleted once the data storage operation has been completed.
With regard to the required size of the memory 19, if every entry in the Chunk DB 31 takes up 32 bytes, then for a 1 TB tape and a 7 KB chunk size (giving approximately 1.5×10^8 chunks) up to 5×10^9 bytes of memory are needed for the Chunk DB. Assuming a similar number of bytes per entry, the size of the DupC DB 35 may range from zero (no duplicates) to that of the Chunk DB 31 (every chunk duplicated once); the total space required for both DBs is still, however, around 5×10^9 bytes, since the two databases together never hold more entries than there are chunks in the input subject data.
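The figures quoted follow directly from the stated assumptions, for instance:

```python
TAPE_CAPACITY   = 10**12      # 1 TB of subject data
CHUNK_SIZE      = 7 * 1024    # 7 KB chunks
BYTES_PER_ENTRY = 32          # assumed size of one DB entry

num_chunks     = TAPE_CAPACITY // CHUNK_SIZE    # ~1.4e8, roughly the 1.5x10^8 quoted above
chunk_db_bytes = num_chunks * BYTES_PER_ENTRY   # ~4.5e9, i.e. around 5x10^9 bytes
print(num_chunks, chunk_db_bytes)               # 139508928 4464285696
```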
Processing effected during data retrieval (read-back from the tape 12) will now be described. At the start of retrieval, the DupC DB 35 previously written to the tape is read back into the memory 19.
The data items 26, 27 are then read in turn from the tape 12 and their respective locations are tracked based on the associated location marks 25. The processing of each data item 26, 27 by the chunk processing subsystem 10B to reconstruct the original subject data stream is depicted in the accompanying flow chart and proceeds, in general terms, as follows: a chunk-subject-data data item 26 is output directly and, where the DupC DB 35 indicates that the same subject data will be needed again, a copy is also placed in the cache 50; a chunk-hash data item 27 is replaced on output by the corresponding chunk subject data taken from the cache 50, the repetitions count 37 for that subject data being decremented and the corresponding cache space freed once the count reaches zero.
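A correspondingly hedged sketch of this read-back processing, reusing the structures from the storage-time sketches, is given below; re-deriving the hash from the chunk subject data assumes the chunk-data ID is derivable from the data, a point discussed in the variants later on.

```python
import hashlib

def reconstruct(data_items, dupc_db):
    """Rebuild the original subject data stream from (location, data_item)
    pairs read from tape, holding duplicated chunk subject data in a
    random-access cache (cache 50) only for as long as it is needed."""
    cache = {}                                   # chunk-data ID -> subject data
    for location, (kind, payload) in data_items:
        if kind == 'DATA':                       # chunk-subject-data item 26
            chunk_hash = hashlib.sha1(payload).digest()
            if chunk_hash in dupc_db:            # this data will recur later
                cache[chunk_hash] = payload
            yield payload
        else:                                    # chunk-hash item 27
            yield cache[payload]
            entry = dupc_db[payload]
            entry.repetitions -= 1
            if entry.repetitions == 0:           # last duplicate just output
                del cache[payload]               # free the cache space
```

Feeding the output of deduplicate_for_storage into reconstruct, together with the DupC DB built during storage, reproduces the original chunks in order.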
At the end of processing of the stored data items, the original subject data stream has been fully reconstructed, and the copy of the DupC DB 35 in the memory 19 and the contents of the cache 50 can be discarded.
A number of general variants of the above-described embodiment of the invention are possible. For example, in the foregoing the hash of a chunk's subject data has been used as an identifier of the subject data making up that chunk (this identifier is hereinafter referred to as the ‘chunk-data identifier’ or ‘chunk-data ID’). Note that the chunk-data ID is an identifier of the subject data making up a chunk and not an identification of a specific chunk that comprises that data; such a chunk-specific identification is provided by the chunk's logical location. Alternatives to using the hash of the subject data of a chunk as the chunk-data identifier are possible.
In another variant, instead of recording the number of repetitions 37 of each duplicated chunk of subject data in the DupC DB 35 and decrementing this value as each duplicate of the chunk is encountered on read-back (to determine when the cached chunk subject data is no longer needed), it would alternatively be possible to record in the DupC DB 35 the Last Occurrence Location (LOL) of the chunk of subject data in the original subject data. This is simply done during data storage by recording, in the DupC DB entry for each duplicated chunk, the logical location of each duplicate of the chunk as it is encountered, the latest such location overwriting an earlier one. During read-back, the LOL data for a DupC DB entry would not need updating each time a copy of the related chunk of subject data was output from the cache 50; it is simply necessary to determine when the LOL data matches the location of the current chunk-ID data item 27 (since no further duplication of the chunk data will thereafter be required, the corresponding cache space can be freed up). It should, however, be noted that use of Last Occurrence Location (LOL) data rather than repetitions data to judge when a particular chunk of subject data can be removed from the cache means that the absolute location of each chunk-ID data item 27 must be determinable during read-back so that it can be compared with the recorded LOL.
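A sketch of the cache-release test under this LOL variant, assuming each DupC DB entry now records a last-occurrence location rather than a repetitions count (the names below are illustrative):

```python
def release_if_last_occurrence(cache, dupc_lol, chunk_hash, item_location):
    """After outputting the cached subject data for a chunk-ID data item,
    free the cache space only if that item sits at the recorded Last
    Occurrence Location; no per-duplicate decrement is needed."""
    if dupc_lol[chunk_hash] == item_location:
        del cache[chunk_hash]
```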
With regard to reconstruction of the subject data during read-back (
It will be appreciated that the operations of data storage and data retrieval may be carried out by different tape drives 10 and, indeed, can be carried out by separate, dedicated pieces of equipment rather than by equipment that performs both functions.
It will also be appreciated that the size of the chunks need not be constant but can be varied during the course of data storage to better suit characteristics of the subject data being stored.
It is also possible to provide embodiments of the methods and apparatus of the invention in which the DupC database 35 does not include last-occurrence data (such as a repetitions indicator or LOL), each entry in the database simply serving to link a chunk-data ID with the FOL of the corresponding subject data. The DupC database is then used during data retrieval to indicate where non-cached chunks of subject data can be located on tape. Where data retrieval starts from BOD (Beginning Of Data), it would, of course, be possible to dynamically build up a table associating each chunk-data ID with the corresponding FOL (provided the chunk-data ID of a chunk of subject data was either derivable from the subject data or stored as metadata with the corresponding data item). However, the use of the DupC database is both more efficient (since it only contains entries for duplicated chunks) and allows data retrieval to be started part way through the stored data (though this would in all probability give rise to a greater number of tape repositionings than if retrieval had been started from BOD).
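To illustrate how the DupC database might be used in this variant to locate non-cached chunks of subject data on tape, the following hypothetical fragment assumes an accessor tape.read_item_at that repositions the tape and reads back the data item stored at a given logical location; this accessor is not part of the description.

```python
def fetch_chunk_subject_data(chunk_hash, cache, dupc_fol, tape):
    """Return the subject data for a chunk-hash item: from the cache if
    available, otherwise by repositioning the tape to the first-occurrence
    location (FOL) recorded in the DupC DB and reading it back."""
    if chunk_hash in cache:
        return cache[chunk_hash]
    return tape.read_item_at(dupc_fol[chunk_hash])  # hypothetical accessor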
Although as described, the DupC database 35 only contains entries in respect of duplicated chunks, it may in fact contain an entry for every chunk (for example, for holding metadata of interest); however, this would take up extra memory space and in that respect is not efficient.