In computer systems, backup of data to a backup data store can provide data redundancy that allows for data to be restored after a data loss event.
Various embodiments are depicted in the accompanying drawings for illustrative purposes, and should in no way be interpreted as limiting the scope of this disclosure. In addition, various features of different disclosed embodiments can be combined to form additional embodiments, which are part of this disclosure.
While certain embodiments are described, these embodiments are presented by way of example only, and are not intended to limit the scope of protection. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms. Furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the scope of protection.
The headings provided herein are for convenience only and do not necessarily affect the scope or meaning of the claims. Disclosed herein are example configurations and embodiments relating to backing up data in a computing system.
Overview
According to various data deduplication processes, backing up of data involves splitting files into smaller chunks of data and, in order to save space and/or data transfer time, only saving those chunks that have changed during backup. In certain embodiments, a hash value is calculated for each hashable chunk of a file, such that the changed chunks may be identified by comparing hash values and identifying chunks that have changed hash values associated therewith. Such a process may provide a number of benefits. For example, if a chunk has not changed, such chunk is not saved, thereby saving data storage resources. In addition, if a hash value for a particular chunk in a file is found in another file, such chunk may be reused for the second and/or subsequent files; the redundant chunk is not saved again, and a corresponding entry in a metadata list of hashes for this file is instead updated to reflect the relationship.
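A minimal sketch of this hash-based change detection is shown below. It is offered for illustration only and is not taken from the embodiments themselves; the fixed 64 KB chunk size, the use of SHA-256, and the helper names are assumptions.

```python
import hashlib

CHUNK_SIZE = 64 * 1024  # assumed fixed chunk size; chunking may be variable in practice

def hash_chunks(path):
    """Split a file into fixed-size chunks and return the hash value of each chunk."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

def changed_chunk_indexes(new_hashes, saved_hashes):
    """Return the indexes of chunks whose hashes differ from the saved metadata list."""
    return [i for i, h in enumerate(new_hashes)
            if i >= len(saved_hashes) or saved_hashes[i] != h]
```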
For containerized backup, certain solutions provide for modified or new chunks associated with a file to be appended to a comprehensive file, or blob, wherein the comprehensive file/blob is continually expanded to include the modified/new chunks. However, certain backup systems may not allow for appending of chunks to files/blobs within the backup system, or such operation may be inconvenient or impractical. For example, a backup server may not provide a public-facing application programming interface (API) that would allow for a client system to cause modified/new chunks to be saved in the backup server without downloading the entire relevant file to the client first. Therefore, it may be necessary in such systems to pull a file down from the backup system, append modified/new chunks thereto, and push the entire file/blob including the appended chunks back up to the backup system, which may introduce various data transfer inefficiencies.
Certain embodiments disclosed herein provide for improved backup efficiency in systems that may not allow for appended writing of chunks by generating stand-alone files for each modified/new chunk to be backed up. In certain embodiments, unique file chunks are associated with unique hash values that provide mapping information indicating where the chunk file resides in non-volatile storage of the backup system. With individual chunk files saved to the backup data store, there may be no need to append new chunks to a single blob or other comprehensive data structure.
Data Storage System
The host system 110 may comprise one or more computing devices including one or more processors configured to execute code. The host 110 may further comprise one or more data storage modules, such as the illustrated host data store 114.
It may be desirable for the host system 110 to implement data redundancy by backing up user data to the backup system 120 in order to reduce the risk of data loss, or for other reasons. The host 110 may be configured to back up at least a portion of data stored in the host data store 114 to one or more external backup systems, including the backup system 120. The backup system 120 may be configured to receive data from the host system 110 over the interface 175 and back up such data in one or more nonvolatile storage modules 140, as directed by a backup client. The illustrated system 100 shows a backup client 132 implemented within a controller 130 of the backup system 120. Alternatively or additionally, the system 100 may be configured to implement data backup as directed by a backup client 112 that is a component of the host system 110. In certain embodiments described below, backup operations are advantageously directed by the backup client 112 of the host system 110. However, it should be understood that backup client logic may reside in any desirable or practical location within the scope of the present disclosure.
In certain embodiments, the backup system 120 comprises a direct-attached storage device (DAS). Alternatively, the backup system 120 may be a remote backup server system coupled to the host system 110 over a computer network, such as the Internet.
The backup client 112 of the host system 110 may issue read and/or write commands to the backup system 120 directing the backup system to save copies of data in the nonvolatile storage 140 of the backup system 120. In certain embodiments, the backup system 120 further maintains certain metadata tables or other data facilitating efficient backup and maintenance of user data in the non-volatile storage 140.
The backup client logic may be configured to implement data deduplication, which may involve identifying and removing or preventing duplication of data within the data storage 140 without compromising data fidelity and/or integrity. Deduplication may provide resiliency during hardware failures, and may further provide checksum validation on data and metadata, as well as redundancy for metadata and chunk data (e.g., frequently-accessed data chunks).
In order to facilitate deduplication functionality, the backup client (132 and/or 112) may segment files into smaller-sized chunks (e.g., 32-128 KB), which may be variable in size in certain embodiments. The backup client may further identify duplicate chunks and maintain a single copy of each chunk. Redundant copies of the chunks may be replaced by references to the single copy, such references being described in metadata table(s). In certain embodiments, chunks are compressed and then organized into container files for containerized backup.
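The following toy model, assuming in-memory storage and the same hypothetical fixed chunk size, illustrates how duplicate chunks can be kept as a single copy while files are recorded as ordered lists of chunk-hash references; the class and attribute names are illustrative only.

```python
import hashlib

class ChunkStore:
    """Toy in-memory model of single-copy chunk storage with metadata references."""

    def __init__(self):
        self.chunks = {}     # chunk hash -> chunk bytes (one copy per unique chunk)
        self.manifests = {}  # file name -> ordered list of chunk hashes (references)

    def add_file(self, name, data, chunk_size=64 * 1024):
        hashes = []
        for offset in range(0, len(data), chunk_size):
            chunk = data[offset:offset + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.chunks:   # duplicate chunks are stored only once
                self.chunks[digest] = chunk
            hashes.append(digest)           # redundant copies become references
        self.manifests[name] = hashes
        return hashes
```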
The term “chunk” is used herein according to its broad and ordinary meaning and may refer to any allocation or collection of data in any type of form or structure. Furthermore, references to “chunks” herein may be applicable to any type of data structure, such as chunks, heaps, blobs, files, blocks, pages, or other data structure. Chunks described herein may be of fixed or arbitrary size. In certain embodiments, chunk size may change dynamically.
For data deduplication purposes, files may be replaced with stubs that point to data blocks that are stored within a common chunk store of the non-volatile storage 140. During file access, the correct blocks/chunks may be assembled and served to the host system 110. The backup client (112/132) may further implement one or more data maintenance operations with respect to the non-volatile storage 140, such as optimization, garbage collection, wear leveling, and/or scrubbing.
In order to implement deduplication functionality, the backup system 120 and/or host system 110 may maintain metadata indicating associations between files and the chunks associated therewith, as well as hash values or other identifiers associated with the various files and/or chunks. Such metadata allows for the identification of modifications or changes in user data that necessitate backing up or saving of the modified or changed file data according to the relevant deduplication protocol.
In certain embodiments, the system 100 is configured to save each hashable unit of a file as a separate chunk file. Such chunk files may be named with the hexadecimal hash value associated with the respective chunks. In certain embodiments, separate chunk files are saved with a ".hash" file extension. The saving of chunks as separate chunk files may at least partially alleviate the need to append chunks to an ever-growing file or blob, as described above. Therefore, where backup repositories do not provide a method of appending chunks to a remote file, embodiments disclosed herein may allow for saving of modified/new chunks by the host system 110 in the backup system 120. For example, rather than calling an append function for appending the chunk to the file, a simple file save function may be called by the host system 110 to save the separate chunk files in the backup system 120.
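A possible shape of such a file save operation is sketched below; the save_chunk_file helper, the SHA-256 hash, and the flat destination directory are assumptions made for illustration and do not reflect any particular repository API.

```python
import hashlib
import os

def save_chunk_file(backup_root, chunk):
    """Save a chunk as a stand-alone ".hash" file named by its hash value."""
    digest = hashlib.sha256(chunk).hexdigest().upper()
    path = os.path.join(backup_root, digest + ".hash")
    if not os.path.exists(path):       # an identical chunk was already backed up
        with open(path, "wb") as f:    # plain file save; no append operation required
            f.write(chunk)
    return digest
```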
Although not illustrated, the backup system 120 and host system 110 may include various other components and/or features, such as volatile and/or nonvolatile memory modules, data transmission channels/interfaces, processors, state machines, or other computational components.
Chunk-Level Files
In certain embodiments, a file of user data may comprise a plurality of chunks of user data, each chunk representing a sub-portion of the user data of the file.
As described above, in certain backup solutions, modified/new chunks may be appended to an ever-growing file, wherein file offset and chunk length data may be maintained in the backup data store for locating such chunks. However, as not all backup repositories may be configured to support appending chunks to files in such a manner, it may be desirable for individual chunks to be saved as separate files in certain situations, as disclosed herein. Furthermore, the separate chunk files may be named with the hash value (e.g., hexadecimal value) of the respective chunk. For illustration purposes, a chunk file may be named as, for example, “08EF8A59CC3B17D9.hash,” or other hash value and/or file extension.
The separate files stored for individual chunks of data are referred to herein as "chunk files."
As described above, data deduplication is often used to backup only those portions of a file that have changed or are new and to reuse chunks of files across many files. In order to determine changes and/or new file chunks, a hash algorithm may be run over one or more portions (e.g., chunks) of a file that has changed to generate a hash value for each chunk. In certain embodiments, the hash algorithm produces a unique hash value in order to avoid collisions that may otherwise cause problems during data reassembly.
The backup client may be configured to compare the newly calculated hash values to a list of saved hash values associated with the file. If the hash value has changed for a chunk or chunks of the file, only those changed chunks are saved as chunk files in certain embodiments. If a hash value is recognized as already being present in the backup destination repository, due to the uniqueness of the hash values, the new chunk may not need to be saved, and may be marked as already being saved.
Filename Directory Location Identification
In certain embodiments, the path to the storage location of a hash file is encoded into the filename of the chunk file for ease of lookup. As a result, there may not be a need to save a separate path value into the data store that indexes the files that have been backed up. As shown, certain characters, symbols or bits/bytes of the filename may be associated with different levels of the filesystem hierarchy, and may identify a file location path within the hierarchy. As the hash values associated with the various saved chunk files may be, by their nature, unique values, certain embodiments therefore take advantage of such uniqueness for the purpose of reducing the amount of metadata necessary to locate the chunk files in the file directory.
With respect to the example filename "08EF8B," the first two symbols "08" may identify a highest-level (Level 0) folder 401 of the relevant directory hierarchy. The file 450 may therefore be identified as being saved in a subfolder, or child folder, of the parent folder 401. The remaining symbols of the filename may identify each of the remaining levels of the hierarchy. For example, as shown, the third and fourth symbols "EF" may identify the next-level folder 411 under which the file 450 is stored.
The last two characters "8B" identify the child subfolder 422 in which the file 450 is stored. As shown, the file 450 may be a chunk file whose filename is the storage-location-identifying string described above, and the chunk file 450 further includes the chunk data.
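The mapping from a hash-based filename to a directory path might be implemented as sketched below; the three-level hierarchy and two-character folder names follow the "08EF8B" example, while the function name and root argument are hypothetical.

```python
import os

def chunk_file_path(backup_root, hash_hex, levels=3):
    """Derive a chunk file's directory path from its hash-based filename.

    Each pair of hex characters selects the folder at one level of the
    hierarchy, so no separate path metadata needs to be stored.
    """
    folders = [hash_hex[2 * i:2 * i + 2] for i in range(levels)]
    return os.path.join(backup_root, *folders, hash_hex + ".hash")

# Example: chunk_file_path("/backup", "08EF8B") -> "/backup/08/EF/8B/08EF8B.hash"
```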
By implementing the storage location path in this manner, embodiments disclosed herein may provide for reduced metadata requirements, as it may not be necessary to store file offsets, byte counts, or storage path information in the data store. Directory trees like the one described above allow the storage location of a chunk file to be derived from its filename alone.
File Backup/Restore Processes
At block 504, the process 500 involves calculating hashes of hashable chunks of the file. Such step may involve identifying hashable chunk portions of the file and calculating hash values associated with one or more of the hashable chunks. In certain embodiments, the process 500 involves calculating hashes for only those chunks of the file that have been modified or are new.
At block 506, the process 500 involves comparing the calculated hashes of the file with saved hashes associated with the file. Based on such comparison, the process 500 involves determining, at block 508, which of the hashable chunks of the file have changed. For example, the process 500 may involve determining whether a newly-generated hash value already exists in the relevant storage module; if the generated hash does not already exist in the storage module, such absence may indicate that the associated chunk is new or modified, and therefore needs to be backed up.
At block 510, the process 500 involves saving the modified or new chunks as separate files in the backup data store. The separate chunk files may include certain metadata or chunk identifier data, as well as the chunk data. That is, instead of appending new/modified chunks to an ever-growing file, the chunk may be saved as a standalone file. This may allow the backup client to use standard file I/O operations across different backup destination repository media types, including third-party destinations that do not support file append operations.
In certain embodiments, as shown at block 512, the separate chunk files have filenames that include the hash value of the respective chunk being saved. For example, the chunk file may be named with a hexadecimal representation of the hash of the chunk, and may have a ".hash" file extension. As described in greater detail above, in certain embodiments, the hash value filenames may be used to identify a storage location where the chunk file is stored.
At block 514, the process 500 involves marking the new or modified chunk as saved to the backup data store. If additional chunks remain to be saved as chunk files, the process 500 involves looping back to block 510 from the decision block 516.
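A rough driver tying blocks 504-516 together might look like the sketch below; it reuses the hypothetical hash_chunks, changed_chunk_indexes, CHUNK_SIZE, and save_chunk_file helpers from the earlier sketches and is not a definitive implementation of the process 500.

```python
def backup_file(path, saved_hashes, backup_root):
    """Illustrative driver for process 500, reusing the hypothetical helpers above."""
    new_hashes = hash_chunks(path)                              # block 504
    changed = changed_chunk_indexes(new_hashes, saved_hashes)   # blocks 506-508
    with open(path, "rb") as f:
        for index in changed:
            f.seek(index * CHUNK_SIZE)
            chunk = f.read(CHUNK_SIZE)
            save_chunk_file(backup_root, chunk)   # blocks 510-512: separate ".hash" file
            # block 514: the chunk is now recorded as saved; block 516 loops
    return new_hashes   # updated hash list to persist as the file's metadata
```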
At block 604, the process 600 involves retrieving a list of chunks associated with the requested file. For example, the list may be maintained in the backup system, such as in the backup data store, as a mechanism for tracking the locations of chunk files associated with particular files. The chunks may be hashable chunks of the file that constitute the requested version of the file. The process may involve consulting the backup data store for the file to retrieve the list and/or the identified chunks.
At block 606, the process 600 involves creating a temporary file corresponding to the requested restored file. The temporary file may be built up as the restored file to provide back to the host. The blocks 608-612 illustrate a loop for iteratively retrieving each of the individual chunk files associated with the requested file. That is, at block 608, a chunk file associated with the requested file is retrieved from the backup data store, and the chunk of the retrieved chunk file is appended to the temporary file at block 610. Because hashable chunks of the file are read as individual files, rather than as offsets and lengths within a larger file, the illustrated backup solution may be compatible with backup repositories that provide only whole-file read operations.
If additional chunks remain of the file that have not yet been retrieved, such determination is made at decision block 612 and, if chunks remain, the process 600 returns to block 608 and loops until all of the chunks associated with the requested file have been retrieved and appended to the temporary file. The hashable chunks of the file may be retrieved from the backup destination repository using the hash value of the chunk to determine the hashable chunk file name, wherein the file path is identified by the filename itself, as described above. At block 614, the process 600 involves providing the restored file to, for example, a host system. The file may be restored to a location and/or name provided by the user.
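A corresponding restore sketch for blocks 604-614 is shown below; it reuses the hypothetical chunk_file_path helper from the earlier sketch, and the temporary-file naming is an assumption for illustration.

```python
import shutil

def restore_file(backup_root, chunk_hashes, destination):
    """Rebuild a file from its chunk files, following blocks 604-614 of process 600."""
    temp_path = destination + ".restoring"                     # block 606: temporary file
    with open(temp_path, "wb") as temp:
        for digest in chunk_hashes:                            # blocks 608-612: chunk loop
            with open(chunk_file_path(backup_root, digest), "rb") as chunk_file:
                temp.write(chunk_file.read())                  # block 610: append chunk
    shutil.move(temp_path, destination)                        # block 614: provide restored file
```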
Embodiments disclosed herein may provide various benefits over certain existing backup solutions. For example, the disclosed backup solutions may be compatible with a substantial number of third-party data repositories that may be used by host-side backup clients. The disclosed solutions may also provide improved backup efficiency, which may be particularly evident when, for example, a single word in a relatively large document is modified, or a small number of pixels in a relatively large image file is adjusted. In such a scenario, according to certain embodiments disclosed herein, only the portion of the file that changed, and not the entire file, is saved, thereby saving time and/or resources.
Additional Embodiments
Those skilled in the art will appreciate that in some embodiments, other types of data backup systems can be implemented while remaining within the scope of the present disclosure. In addition, the actual steps taken in the processes discussed herein may differ from those described or shown in the figures. Depending on the embodiment, certain of the steps described above may be removed, and/or others may be added.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of protection. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms. Furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the protection. For example, the various components illustrated in the figures may be implemented as software and/or firmware on a processor, application-specific integrated circuit (ASIC), field-programmable gate array (FPGA), or dedicated hardware. Also, the features and attributes of the specific embodiments disclosed above may be combined in different ways to form additional embodiments, all of which fall within the scope of the present disclosure. Although the present disclosure provides certain preferred embodiments and applications, other embodiments that are apparent to those of ordinary skill in the art, including embodiments which do not provide all of the features and advantages set forth herein, are also within the scope of this disclosure. Accordingly, the scope of the present disclosure is intended to be defined only by reference to the appended claims.
All of the processes described above may be embodied in, and fully automated via, software code modules executed by one or more general purpose or special purpose computers or processors. The code modules may be stored on any type of computer-readable medium or other computer storage device or collection of storage devices. Some or all of the methods may alternatively be embodied in specialized computer hardware.