The present invention relates in general to computers, and more particularly to a method, system, and computer program product for preserving redundancy and other data security characteristics in computing environments in which data deduplication systems are incorporated.
Computers and computer systems are found in a variety of settings in today's society. Computing environments and networks may be found at home, at work, at school, in government, and in other settings. Computing environments increasingly store data in one or more storage environments, which in many cases are remote from the local interface presented to a user.
These computing storage environments may use many storage devices such as disk drives, often working in concert, to store, retrieve, and update a large body of data, which may then be provided to a host computer requesting or sending the data. In some cases, a number of data storage subsystems are collectively managed as a single data storage system. These subsystems may be managed by host “sysplex” (system complex) configurations that combine several processing units or clusters of processing units. In this way, multi-tiered/multi-system computing environments, often including a variety of types of storage devices, may be used to organize and process large quantities of data.
Many multi-tiered/multi-system computing environments implement data deduplication technologies to improve storage performance by reducing the amount of duplicated storage across storage devices. Data deduplication systems are increasingly utilized because they help reduce the total amount of physical storage that is required to store data. This reduction is accomplished by ensuring that duplicate data is not stored multiple times. Instead, for example, if a chunk of data matches with an already stored chunk of data, a pointer to the original data is stored in the virtual storage map instead of allocating new physical storage space for the new chunk of data.
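Purely for purposes of illustration, the pointer-based behavior just described may be sketched as follows (a minimal sketch assuming fixed-size chunks and a SHA-256 content hash; the class and method names are illustrative assumptions, not part of the disclosed embodiments):

```python
import hashlib

class DedupStore:
    """Minimal sketch of content-addressed deduplication (illustrative only)."""

    def __init__(self):
        self.physical = {}      # content hash -> physical chunk data
        self.virtual_map = {}   # virtual address -> content hash (the "pointer")

    def write(self, virtual_addr, chunk):
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in self.physical:
            # First time this content is seen: allocate "physical" space for it.
            self.physical[digest] = bytes(chunk)
        # Duplicate content only adds a pointer in the virtual storage map.
        self.virtual_map[virtual_addr] = digest

    def read(self, virtual_addr):
        return self.physical[self.virtual_map[virtual_addr]]
```

Writing identical chunks to many virtual addresses in such a scheme leaves only one physical copy behind, which is precisely the behavior the following paragraphs identify as a redundancy risk.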
In certain situations, however, the behavior of deduplication may conflict with the redundancy requirements of a hosted application, a storage policy, or other requirements. A need exists for a mechanism whereby identical data having redundancy requirements is safeguarded, yet the benefits of deduplication systems are not diminished, by allowing deduplication to occur for the remaining data not having such requirements.
In view of the foregoing, various embodiments for preserving data redundancy in data deduplication systems are disclosed. In one embodiment, by way of example only, a method for such preservation is disclosed. For a multi-device file system, at least one virtual device out of a volume set is designated as not subject to a deduplication operation.
In addition to the foregoing exemplary embodiment, various additional embodiments are provided and supply related advantages.
In order that the advantages of the invention will be readily understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments that are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings, in which:
Data deduplication in storage controllers typically works “behind the scenes” of an application, and may sometimes operate contrary to the application's interests when deduplication operations are performed against its needs. This situation may arise if an application writes multiple copies of the same data and intends to retain multiple physical copies, while the deduplication subsystem (deduplication engine) finds these matching copies and deduplicates them as the data is stored. This can be detrimental to the application, which expects to find multiple copies at various locations and is led by the storage subsystem to believe that it has done so, when in reality only a single copy of the data has been written.
Consider the following example. File systems usually prefer to write multiple physical copies of the “Superblock” (a segment of metadata describing the file system on a block-based storage device), or of other metadata information, on a virtual disk to ensure redundancy. Since the contents of these copies are the same, data deduplication would result in retaining a single physical copy of the Superblock and pointing multiple virtual addresses to the same physical block. This situation is highly inadvisable, because the loss of that single physical copy of the Superblock may render the file system totally unusable, as there are no redundant copies of the Superblock. Conventionally, there are no existing methodologies that directly address this problem in data deduplication systems.
Various indirect methodologies may be employed to attempt to address this problem. In one example, the storage pool from which the data deduplication subsystem carves out physical storage can be mirrored (i.e., contains two or three copies of the same data). Hence, multiple redundant copies can be created despite deduplication. However, this is inadequate protection for the application for the following reasons. First, the application may wish to keep, for example, ten (10) copies of the same data; if the storage pool is two-way mirrored, it may only retain a maximum of two (2) copies. Second, since data deduplication carves out physical storage pools that span large amounts of storage and multiple file systems, it is likely that multiple applications and file systems share the same physical storage pool. Hence it is possible that critical copies of data (like the Superblock) from multiple file systems are physically placed on the same disk. Since deduplication prevents multiple copies of the same data from being written to multiple physical locations, the number of copies of critical data is reduced, and the remaining copies of multiple file systems may be placed on the same physical disk. This increases the risk that a single failure becomes fatal.
The illustrated embodiments provide multiple mechanisms for addressing the issues discussed previously. One goal of these mechanisms is to ensure that the deduplication subsystem in the storage controller (or wherever it may be located) balances the benefits of reducing the number of copies of data against application requirements for physically allocating multiple copies of identical data that is critical. Each of the methodologies described in the following illustrated embodiments may be used in a variety of circumstances and may have attendant benefits specific to those circumstances.
In one such embodiment, for multi-device file systems, one or more of the virtual disks associated with such file systems may be designated as devices for which storage components (such as the storage controller) do not perform deduplication operations, for example by not deduplicating incoming write commands directed to these devices. The owning application may thereby allocate space on these specific virtual disks in order to store multiple physical copies of identical data.
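Purely as an illustration of such a designation, a hypothetical controller-side write path might branch as sketched below (the `no_dedup_devices` set, method names, and in-memory backend are illustrative stand-ins for the disclosed components, not the components themselves):

```python
class StorageControllerSketch:
    """Illustrative only: skip deduplication for designated virtual devices."""

    def __init__(self, dedup_store):
        self.dedup_store = dedup_store    # e.g., the DedupStore sketched earlier
        self.no_dedup_devices = set()     # virtual devices exempt from deduplication
        self.raw_storage = {}             # stand-in for non-deduplicated backend storage

    def designate_no_dedup(self, device_id):
        self.no_dedup_devices.add(device_id)

    def write(self, device_id, virtual_addr, chunk):
        if device_id in self.no_dedup_devices:
            # Keep a distinct physical copy; duplicates are never collapsed here.
            self.raw_storage.setdefault(device_id, {})[virtual_addr] = bytes(chunk)
        else:
            self.dedup_store.write(virtual_addr, chunk)
```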
In view of the described embodiment, allowing the application to allocate certain data to un-deduplicated storage, and thereby dictate whether a write is deduplicated, gives the application the flexibility to implement the storage policy associated with the data it generates. The application is in a better position than the deduplication system to determine whether selected data blocks, even though identical, must still be located in separate physical locations. In addition, the storage controller (or other storage management device) continues to perform its role of data reduction by deduplication, while at the same time giving the owning application enough control to rule out deduplication when required.
By allowing an owning application, in effect, to designate which data is to forgo data deduplication operations by specifically allocating it as such, very fine-grained control is provided to the application, allowing flexibility in implementation while still retaining the advantages of deduplication functionality and preserving redundancy for key data.
In the following description, reference is made to the accompanying drawings which form a part hereof and which illustrate several embodiments of the present invention. It is understood that other embodiments may be utilized and structural and operational changes may be made without departing from the scope of the present invention.
A number of virtual volumes 22, 24, and 26 are presented to the host systems 2a, b . . . n in lieu of presenting a number of physical or logical volumes (which may often be physically configured in a complex relationship). The host systems 2a, b . . . n may communicate with the storage controller 6 over a network 8, such as the Internet, a Storage Area Network (SAN), an Intranet, Local Area Network (LAN), Wide Area Network (WAN), etc., using multiple communication protocols such as TCP/IP, Fibre Channel, Ethernet, etc., at different layers in a protocol stack.
The storage controller 6 includes a processor 10 executing code 12 to perform storage controller operations. The storage controller 6 further includes a cache system 14 and non-volatile storage unit 16, such as a battery backed-up memory device. The storage controller 6 stores in cache 14 data updates received from the hosts 2a, b . . . n to write to the virtual storage volumes 22, 24, and 26 (and thereby to volumes 28, 30, and 32) as well as data read from the volumes 28, 30, and 32 to return to the hosts 2a, b . . . n. When operating in Fast Write mode, data updates received from the hosts 2a, b . . . n are copied to both cache 14 and the NVS 16. End status is returned to the host 2a, b . . . n sending the data update after the update is copied to both the cache 14 and NVS 16.
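The Fast Write sequence described above might be summarized as follows (a sketch only; the cache, NVS, and host interfaces are assumptions made for illustration and are not part of the disclosure):

```python
def fast_write(cache, nvs, host, update_id, update_data):
    """Sketch of Fast Write: mirror the update to cache and to the battery
    backed-up NVS, then return end status to the sending host (interfaces assumed)."""
    cache.store(update_id, update_data)
    nvs.store(update_id, update_data)   # non-volatile copy survives a power loss
    host.return_end_status(update_id, status="success")
```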
Storage controller 6 also includes a data deduplication engine 17 in communication with a storage management module 18 as will be further described. Data deduplication engine 17 is configured for performing, in conjunction with processor 10, data deduplication operations on write data passed through storage controller 6 to virtual volumes 20 and volumes 28, 30, and 32.
Cache system 14 may include a data frequency index map, or “storage map” for short, which is not shown for purposes of illustrative convenience. In one embodiment, cache system 14 accepts write data from hosts 2a, b . . . n or similar devices, that is then placed in cache memory. Data deduplication engine 17 then tests the write data for duplication in the cache memory and writes an index and frequency for such in the storage map.
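One possible shape for such a storage map is sketched below (the hash choice and field layout are assumptions made only for illustration):

```python
import hashlib

def record_write(storage_map, chunk):
    """Sketch: test write data for duplication and track an index and frequency
    for it in the storage map (layout is illustrative)."""
    digest = hashlib.sha256(chunk).hexdigest()
    entry = storage_map.get(digest)
    if entry is None:
        storage_map[digest] = {"index": len(storage_map), "frequency": 1}
        return False             # not a duplicate: the data must be stored
    entry["frequency"] += 1
    return True                  # duplicate detected: only a reference is needed

# Usage: storage_map = {}; is_duplicate = record_write(storage_map, b"block data")
```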
Turning to
The storage management operations further described may be executed on memory 206, located in system 200 or elsewhere. Memory device 206 may include such memory as electrically erasable programmable read only memory (EEPROM) or a host of related devices. Memory device 206 and mass storage device 204 are connected to CPU 202 via a signal-bearing medium. In addition, CPU 202 and overall host 2 are connected to communication network 8.
Memory 206 as shown includes an application 208, and an application 210, in which a file system 212 is operational. Application 208 and Application 210 may create, delete, or otherwise manage segments of data, such as data chunks or data blocks, which are physically stored in devices such as mass storage device 204, for example, in storage 28, 30, and 32 as shown in
A “volume manager” 211 such as a Logical Volume Manager (LVM) operational in Linux® architectures may constitute at least a portion of an application 210. The LVM manages disk drives and similar mass-storage devices (e.g., storage volumes 28, 30, and 32,
In one embodiment, application 208 may be an operating system (OS) 208, or application 210 may be an OS 210, and file system 212 retains a tight coupling between the OS 210 and the file system 212. File system 212 may provide mechanisms to control access to the data and metadata, and may contain mechanisms to ensure data reliability such as those necessary to further certain aspects of the present invention, as one of ordinary skill in the art will appreciate. File system 212 may provide a means for multiple application programs 208, 210 to update data in the same file at nearly the same time.
As previously described, the storage controller 6 (again,
In one exemplary embodiment, the computing administrator may create Logical Unit Numbers (LUNs) (virtual devices) on the storage controller 6 (again,
While discovering the virtual device 20 created by the storage controller 6 as described above, the respective device or devices not subject to deduplication may be conveyed to the owning application by means of a special Small Computer System Interface (SCSI) command, for example. In one embodiment, this command may be a Mode Sense command or a unique page of the Inquiry command. This information may be conveyed to the application in an out-of-band fashion as well.
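Purely by way of illustration, a host-side check of such a flag might look as follows (the page layout, byte offset, and bit position are hypothetical assumptions and are not an actual SCSI page definition):

```python
def device_exempt_from_dedup(mode_page_bytes, flag_byte_offset=2, flag_bit=0):
    """Hypothetical parser: return True if a vendor-specific mode page advertises
    that the virtual device is not subject to deduplication. The offset and bit
    position are illustrative assumptions only."""
    if len(mode_page_bytes) <= flag_byte_offset:
        return False
    return bool(mode_page_bytes[flag_byte_offset] & (1 << flag_bit))
```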
Once the application 208, 210 or file system 212 learns of the virtual device(s) described previously, the application 208, 210 or file system 212 may allocate space from these devices for those data segments for which deduplication is not desired (e.g., the Superblock). In this way, multiple copies of identical data blocks may be stored by the owning application when the owning application issues write commands to these specially designated virtual devices.
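For example, a file system could direct its redundant Superblock copies to such a device along the following lines (a sketch reusing the hypothetical controller object above; all names are illustrative):

```python
def write_superblock_copies(controller, no_dedup_device_id, superblock_bytes, addresses):
    """Sketch: place every redundant Superblock copy on a virtual device exempt
    from deduplication so that each copy remains physically distinct."""
    for addr in addresses:
        controller.write(no_dedup_device_id, addr, superblock_bytes)
```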
Turning now to
As one of ordinary skill in the art will appreciate, if the allocated volume (e.g., volume 308) is full, the owning application/multi-device file system/storage controller/volume manager may choose to allocate the data not subject to deduplication to another volume, such as volume 304 or volume 306, or may create/designate another volume (e.g., volume 310) in another volume set as not subject to deduplication. As one of ordinary skill in the art will appreciate, these processes may vary according to a particular implementation, the characteristics of the underlying physical storage, resource considerations (e.g., bandwidth and cost considerations), and the like.
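A hedged sketch of that fallback, assuming simple per-volume free-space bookkeeping (the volume records and their fields are illustrative assumptions):

```python
def allocate_for_redundant_data(volumes, size_bytes):
    """Sketch: prefer a volume already designated as exempt from deduplication;
    if none has room, designate another volume as exempt (fields are illustrative)."""
    for vol in volumes:
        if vol.get("no_dedup") and vol["free_bytes"] >= size_bytes:
            return vol
    for vol in volumes:
        if vol["free_bytes"] >= size_bytes:
            vol["no_dedup"] = True   # newly designate this volume as not subject to deduplication
            return vol
    raise RuntimeError("no volume has sufficient free space")
```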
In a following step, a write command is issued to the designated LUN (step 610). The write data bypasses the deduplication system and is written to the allocated storage in the LUNs/virtual volume(s) (step 612), and deduplication operations are withheld from being performed (step 614). The method 600 then ends (step 616).
As will be appreciated by one of ordinary skill in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module,” “process” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wired, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, or entirely on the remote computer or server. In the last scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the above figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
While one or more embodiments of the present invention have been illustrated in detail, one of ordinary skill in the art will appreciate that modifications and adaptations to those embodiments may be made without departing from the scope of the present invention as set forth in the following claims.
This application is a Continuation of U.S. patent application Ser. No. 13/453,270, filed on Apr. 23, 2012.