Full sweep disk synchronization in a storage system

Information

  • Patent Grant
  • Patent Number
    10,235,091
  • Date Filed
    Friday, September 23, 2016
  • Date Issued
    Tuesday, March 19, 2019
Abstract
Described embodiments provide systems and methods for synchronizing a production volume and a backup volume of a storage system. A first thin volume is created and associated with the production volume. A first replica of the production volume is generated by copying data from the production volume to a replica volume. During the copying, an I/O request to be written to the production volume may be received. Data from the I/O request is written to the first thin volume and data changed due to the I/O request is tracked in metadata associated with the production volume and the first thin volume. A size of the first thin volume is checked, and when the size of the first thin volume is below a threshold, changes from the first thin volume are applied asynchronously to the backup storage.
Description
BACKGROUND

A distributed storage system may include a plurality of storage devices (e.g., storage arrays) to provide data storage to a plurality of nodes. The plurality of storage devices and the plurality of nodes may be situated in the same physical location, or in one or more physically remote locations. A distributed storage system may include data protection systems that back up production site data by replicating production site data on a secondary backup storage system. The production site data may be replicated on a periodic basis and/or may be replicated as changes are made to the production site data. The backup storage system may be situated in the same physical location as the production storage system, or in a physically remote location.


SUMMARY

This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.


An embodiment may provide a method for synchronizing a production volume and a backup volume of a storage system. A first thin volume is created and associated with the production volume. A first replica of the production volume is generated by copying data from the production volume to a replica volume. During the copying, an I/O request to be written to the production volume may be received. Data from the I/O request is written to the first thin volume and data changed due to the I/O request is tracked in metadata associated with the production volume and the first thin volume. A size of the first thin volume is checked, and when the size of the first thin volume is below a threshold, changes from the first thin volume are applied asynchronously to the backup storage.


Another embodiment may provide a system including a processor and a memory storing computer program code that when executed on the processor causes the processor to operate a storage system. The system may be operable to create a first thin volume associated with the production volume. A first replica of the production volume is generated by copying data from the production volume to a replica volume. During the copying, an I/O request to be written to the production volume may be received. Data from the I/O request is written to the first thin volume and data changed due to the I/O request is tracked in metadata associated with the production volume and the first thin volume. A size of the first thin volume is checked, and when the size of the first thin volume is below a threshold, changes from the first thin volume are applied asynchronously to the backup storage.


Another embodiment may provide a computer program product including a non-transitory computer readable storage medium having computer program code encoded thereon that when executed on a processor of a computer causes the computer to operate a storage system. The computer program product may include computer program code to create a first thin volume associated with the production volume. A first replica of the production volume is generated by copying data from the production volume to a replica volume. During the copying, an I/O request to be written to the production volume may be received. Data from the I/O request is written to the first thin volume and data changed due to the I/O request is tracked in metadata associated with the production volume and the first thin volume. A size of the first thin volume is checked, and when the size of the first thin volume is below a threshold, changes from the first thin volume are applied asynchronously to the backup storage.





BRIEF DESCRIPTION OF THE DRAWING FIGURES

Objects, aspects, features, and advantages of embodiments disclosed herein will become more fully apparent from the following detailed description, the appended claims, and the accompanying drawings in which like reference numerals identify similar or identical elements. Reference numerals that are introduced in the specification in association with a drawing figure may be repeated in one or more subsequent figures without additional description in the specification in order to provide context for other features. For clarity, not every element may be labeled in every figure. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments, principles, and concepts. The drawings are not meant to limit the scope of the claims included herewith.



FIG. 1 is a block diagram of a data protection system, according to an illustrative embodiment of the instant disclosure;



FIG. 2 is a diagram illustrating a journal history of write transactions for the data protection system of FIG. 1, according to an illustrative embodiment of the instant disclosure;



FIG. 3 is a block diagram of an example of a data protection system to perform snapshot replication on a storage system not configured to support snapshot replication, according to an illustrative embodiment of the instant disclosure;



FIG. 4 is a flowchart of an example of a process to generate an initial snapshot into backup storage, according to an illustrative embodiment of the instant disclosure;



FIG. 5 is a flowchart of an example of a process to perform snapshot replication on a storage system not configured to support snapshot replication, according to an illustrative embodiment of the instant disclosure;



FIG. 6 is a flowchart of an example of a process to perform full sweep disk synchronization, according to an illustrative embodiment of the instant disclosure; and



FIG. 7 is a block diagram of an example of a hardware device that may perform at least a portion of the processes in FIGS. 4-6.





DETAILED DESCRIPTION

Before describing embodiments of the concepts, structures, and techniques sought to be protected herein, some terms are explained. In some embodiments, the term “I/O request” or simply “I/O” may be used to refer to an input or output request. In some embodiments, an I/O request may refer to a data read or data write request. In some embodiments, the term “storage system” may encompass physical computing systems, cloud or virtual computing systems, or a combination thereof. In some embodiments, the term “storage device” may refer to any non-volatile memory (NVM) device, including hard disk drives (HDDs), solid state drives (SSDs), flash devices (e.g., NAND flash devices), and similar devices that may be accessed locally and/or remotely (e.g., via a storage area network (SAN)). In some embodiments, the term “storage device” may also refer to a storage array including multiple storage devices.


Referring to the example embodiment shown in FIG. 1, a data protection system 100 may include two sites, Site I 100a and Site II 100b, which communicate via a wide area network (WAN) 128, such as the Internet. In some embodiments, under normal operation, Site I 100a may correspond to a source site (i.e., the transmitter within a data replication workflow) of system 100 and Site II 100b may be a target site (i.e., the receiver within a data replication workflow) of data protection system 100. Thus, in some embodiments, during normal operations, the direction of replicated data flow may be from Site I 100a to Site II 100b.


In certain embodiments, Site I 100a and Site II 100b may be remote from one another. In other embodiments, Site I 100a and Site II 100b may be local to one another and may be connected via a local area network (LAN). In some embodiments, local data protection may have the advantage of minimizing data lag between target and source, and remote data protection may have the advantage of being robust in the event that a disaster occurs at the source site.


In particular embodiments, data protection system 100 may include a failover mode of operation, wherein the direction of replicated data flow is reversed. In particular, in some embodiments, Site I 100a may behave as a target site and Site II 100b may behave as a source site. In some embodiments, failover may be triggered manually (e.g., by a user) or automatically. In many embodiments, failover may be performed in the event of a disaster at Site I 100a. In some embodiments, both Site I 100a and Site II 100b may behave as source site for some stored data and may behave simultaneously as a target site for other stored data. In certain embodiments, a portion of stored data may be replicated from one site to the other, and another portion may not be replicated.


In some embodiments, Site I 100a corresponds to a production site (e.g., a facility where one or more hosts run data processing applications that write data to a storage system and read data from the storage system) and Site II 100b corresponds to a backup or replica site (e.g., a facility where replicated production site data is stored). Thus, in some embodiments, Site II 100b may be responsible for replicating production site data and may enable rollback of data of Site I 100a to an earlier point in time. In some embodiments, rollback may be used in the event of data corruption or a disaster, or alternatively in order to view or to access data from an earlier point in time.


Described embodiments of Site I 100a may include a source host 104, a source storage system (or “storage array”) 108, and a source data protection appliance (DPA) 112 coupled via a first storage area network (SAN). Similarly, in some embodiments, Site II 100b may include a target host 116, a target storage system 120, and a target DPA 124 coupled via a second SAN. In some embodiments, each SAN may include one or more devices (or “nodes”) that may be designated an “initiator,” a “target”, or both. For example, in some embodiments, the first SAN may include a first fiber channel switch 148 and the second SAN may include a second fiber channel switch 168. In some embodiments, communication links between each host 104 and 116 and its corresponding storage system 108 and 120 may be any appropriate medium suitable for data transfer, such as fiber communication channel links. In many embodiments, a host communicates with its corresponding storage system over a communication link, such as an InfiniBand (IB) link or Fibre Channel (FC) link, and/or a network, such as an Ethernet or Internet (e.g., TCP/IP) network that may employ, for example, the iSCSI protocol.


In some embodiments, each storage system 108 and 120 may include storage devices for storing data, such as disks or arrays of disks. Typically, in such embodiments, storage systems 108 and 120 may be target nodes. In some embodiments, in order to enable initiators to send requests to storage system 108, storage system 108 may provide (e.g., expose) one or more logical units (LU) to which commands are issued. Thus, in some embodiments, storage systems 108 and 120 may be SAN entities that provide multiple logical units for access by multiple SAN initiators. In some embodiments, an LU is a logical entity provided by a storage system for accessing data stored therein. In some embodiments, a logical unit may be a physical logical unit or a virtual logical unit. In some embodiments, a logical unit may be identified by a unique logical unit number (LUN).


In the embodiment shown in FIG. 1, storage system 108 may expose logical unit 136, designated as LU A, and storage system 120 may expose logical unit 156, designated as LU B. LU B 156 may be used for replicating LU A 136. In such embodiments, LU B 156 may be generated as a copy of LU A 136. In one embodiment, LU B 156 may be configured so that its size is identical to the size of LU A 136.


As shown in FIG. 1, in some embodiments, source host 104 may generate a host device 140 (“Device A”) corresponding to LU A 136 and target host 116 may generate a host device 160 (“Device B”) corresponding to LU B 156. In some embodiments, a host device may be a logical entity within a host through which the host may access an LU. In some embodiments, an operating system of a host may generate a host device for each LU exposed by the storage system in the host SAN.


In some embodiments, source host 104 may act as a SAN initiator that issues I/O requests through host device 140 to LU A 136 using, for example, SCSI commands. In some embodiments, such requests may be transmitted to LU A 136 with an address that includes a specific device identifier, an offset within the device, and a data size.


In some embodiments, source DPA 112 and target DPA 124 may perform various data protection services, such as data replication of a storage system, and journaling of I/O requests issued by hosts 104 and/or 116. In some embodiments, when acting as a target DPA, a DPA may also enable rollback of data to an earlier point-in-time (PIT), and enable processing of rolled back data at the target site. In some embodiments, each DPA 112 and 124 may be a physical device, a virtual device, or may be a combination of a virtual and physical device.


In some embodiments, a DPA may be a cluster of computers. In some embodiments, use of a cluster may ensure that if a DPA computer is down, then the DPA functionality switches over to another computer. In some embodiments, the DPA computers within a DPA cluster may communicate with one another using at least one communication link suitable for data transfer, for example, an InfiniBand (IB) link, a Fibre Channel (FC) link, and/or a network link, such as an Ethernet or Internet (e.g., TCP/IP) link to transfer data via fiber channel or IP based protocols, or other such transfer protocols. In some embodiments, one computer from the DPA cluster may serve as the DPA leader. In some embodiments, the DPA cluster leader may coordinate between the computers in the cluster, and may also perform other tasks that require coordination between the computers, such as load balancing.


In certain embodiments, a DPA may be a standalone device integrated within a SAN. Alternatively, in some embodiments, a DPA may be integrated into a storage system. In some embodiments, the DPAs communicate with their respective hosts through communication links suitable for data transfer, for example, an InfiniBand (IB) link, a Fibre Channel (FC) link, and/or a network link, such as an Ethernet or Internet (e.g., TCP/IP) link, to transfer data via, for example, SCSI commands or any other protocol.


In various embodiments, the DPAs may act as initiators in the SAN. For example, in some embodiments, the DPAs may issue I/O requests using, for example, SCSI commands, to access LUs on their respective storage systems. In some embodiments, each DPA may also be configured with the necessary functionality to act as targets, e.g., to reply to I/O requests, such as SCSI commands, issued by other initiators in the SAN, including their respective hosts. In some embodiments, being target nodes, the DPAs may dynamically expose or remove one or more LUs. As described herein, in some embodiments, Site I 100a and Site II 100b may each behave simultaneously as a production site and a backup site for different logical units. As such, in some embodiments, DPA 112 and DPA 124 may each behave as a source DPA for some LUs and as a target DPA for other LUs, at the same time.


In the example embodiment shown in FIG. 1, hosts 104 and 116 include protection agents 144 and 164, respectively. In some embodiments, protection agents 144 and 164 may intercept commands (e.g., SCSI commands) issued by their respective hosts to LUs via host devices (e.g., host devices 140 and 160). In some embodiments, a protection agent may act on an intercepted SCSI command issued to a logical unit in one of the following ways: send the SCSI command to its intended LU; redirect the SCSI command to another LU; split the SCSI command by sending it first to the respective DPA and, after the DPA returns an acknowledgement, send the SCSI command to its intended LU; fail the SCSI command by returning an error return code; or delay the SCSI command by not returning an acknowledgement to the respective host. In some embodiments, protection agents 144 and 164 may handle different SCSI commands differently, according to the type of the command. For example, in some embodiments, a SCSI command inquiring about the size of a certain LU may be sent directly to that LU, whereas a SCSI write command may be split and sent first to a DPA within the host's site.
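
For illustration only, the following Python sketch models the five ways a protection agent may act on an intercepted command (send, redirect, split, fail, delay). It is a minimal sketch under assumed interfaces; the names (ProtectionAgent, Action, submit, ack, policy) are hypothetical and do not come from the disclosure.

    from enum import Enum, auto


    class Action(Enum):
        SEND = auto()      # pass the command straight to its intended LU
        REDIRECT = auto()  # send the command to a different LU
        SPLIT = auto()     # send to the DPA first, then to the intended LU
        FAIL = auto()      # return an error code to the issuing host
        DELAY = auto()     # withhold the acknowledgement


    class ProtectionAgent:
        def __init__(self, dpa, policy):
            self.dpa = dpa          # assumed object with ack(command) -> bool
            self.policy = policy    # callable mapping a command to an Action

        def intercept(self, command, lu, alternate_lu=None):
            action = self.policy(command)
            if action is Action.SEND:
                return lu.submit(command)
            if action is Action.REDIRECT:
                return alternate_lu.submit(command)
            if action is Action.SPLIT:
                # Forward to the DPA and wait for its acknowledgement before
                # letting the write reach the intended LU.
                if self.dpa.ack(command):
                    return lu.submit(command)
                return "error"
            if action is Action.FAIL:
                return "error"
            return None  # DELAY: no acknowledgement is returned to the host

A policy that sends size inquiries straight through but splits writes, as in the example above, would simply return Action.SEND or Action.SPLIT depending on the command type.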


In various embodiments, a protection agent may change its behavior for handling SCSI commands, for example as a result of an instruction received from the DPA. For example, in some embodiments, the behavior of a protection agent for a certain host device may depend on the behavior of its associated DPA with respect to the LU of the host device. In some embodiments, when a DPA behaves as a source site DPA for a certain LU, then during the normal course of operation, the associated protection agent may split I/O requests issued by a host to the host device corresponding to that LU. Similarly, in some embodiments, when a DPA behaves as a target DPA for a certain LU, then during the normal course of operation, the associated protection agent may fail I/O requests issued by the host to the host device corresponding to that LU.


In some embodiments, communication between protection agents 144 and 164 and a respective DPA 112 and 124 may use any protocol suitable for data transfer within a SAN, such as fiber channel, SCSI over fiber channel, or other protocols. In some embodiments, the communication may be direct, or via a logical unit exposed by the DPA.


In certain embodiments, protection agents may be drivers located in their respective hosts. Alternatively, in some embodiments, a protection agent may also be located in a fiber channel switch, or in any other device situated in a data path between a host and a storage system or on the storage system itself. In some embodiments, in a virtualized environment, the protection agent may run at the hypervisor layer or in a virtual machine providing a virtualization layer.


As shown in the example embodiment of FIG. 1, target storage system 120 may expose a journal LU 176 for maintaining a history of write transactions made to LU B 156, referred to herein as a “journal.” In some embodiments, a journal may be used to provide access to storage at specified points-in-time (PITs), as discussed in greater detail in regard to FIG. 2. In some embodiments, the journal may be stored across multiple LUs (e.g., using striping, etc.). In some embodiments, target DPA 124 may include a journal processor 180 for managing the journal within journal LU 176. Referring back to the example embodiment of FIG. 1, journal processor 180 may manage the journal entries of LU B 156. Specifically, in some embodiments, journal processor 180 may enter write transactions received by the target DPA 124 from the source DPA 112 into the journal by writing them into journal LU 176, read the undo information for the transaction from LU B 156, update the journal entries in journal LU 176 with undo information, apply the journal transactions to LU B 156, and remove already-applied transactions from the journal. In one embodiment, journal processor 180 may perform processing such as described in U.S. Pat. No. 7,516,287, titled “METHODS AND APPARATUS FOR OPTIMAL JOURNALING FOR CONTINUOUS DATA REPLICATION,” issued Apr. 7, 2009, which is hereby incorporated by reference.


Some embodiments of data protection system 100 may be provided as physical systems for the replication of physical LUs, or as virtual systems for the replication of virtual LUs. For example, in one embodiment, a hypervisor may consume LUs and may generate a distributed file system on the logical units, such as Virtual Machine File System (VMFS), that may generate files in the file system and expose the files as LUs to the virtual machines (each virtual machine disk is seen as a SCSI device by virtual hosts). In another embodiment, a hypervisor may consume a network based file system and expose files in the Network File System (NFS) as SCSI devices to virtual hosts.


In some embodiments, in normal operation (sometimes referred to as “production mode”), DPA 112 may act as a source DPA for LU A 136. Thus, in some embodiments, protection agent 144 may act as a source protection agent, specifically by splitting I/O requests to host device 140 (“Device A”). In some embodiments, protection agent 144 may send an I/O request to source DPA 112 and, after receiving an acknowledgement from source DPA 112, may send the I/O request to LU A 136. In some embodiments, after receiving an acknowledgement from storage system 108, host 104 may acknowledge that the I/O request has successfully completed.


In some embodiments, when source DPA 112 receives a replicated I/O request from protection agent 144, source DPA 112 may transmit certain I/O information characterizing the write request, packaged as a “write transaction”, over WAN 128 to target DPA 124 for journaling and for incorporation within target storage system 120. In some embodiments, when applying write operations to storage system 120, target DPA 124 may act as an initiator, and may send SCSI commands to LU B 156.


In some embodiments, source DPA 112 may send its write transactions to target DPA 124 using a variety of modes of transmission, including (i) a synchronous mode, (ii) an asynchronous mode, and (iii) a snapshot mode.


In some embodiments, in synchronous mode, source DPA 112 may send each write transaction to target DPA 124, may receive back an acknowledgement from the target DPA 124, and in turn may send an acknowledgement back to protection agent 144. In some embodiments, in synchronous mode, protection agent 144 may wait until receipt of such acknowledgement before sending the I/O request to LU A 136. In some embodiments, in asynchronous mode, source DPA 112 may send an acknowledgement to protection agent 144 upon receipt of each I/O request, before receiving an acknowledgement back from target DPA 124.


In some embodiments, in snapshot mode, source DPA 112 may receive several I/O requests and combine them into an aggregate “snapshot” or “batch” of write activity performed in the multiple I/O requests, and may send the snapshot to target DPA 124 for journaling and incorporation in target storage system 120. In some embodiments, in snapshot mode, source DPA 112 may send an acknowledgement to protection agent 144 upon receipt of each I/O request, before receiving an acknowledgement back from target DPA 124.
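
As a rough illustration of the three transmission modes, the sketch below shows where the acknowledgement to the protection agent is issued in each mode. SourceDPA, receive, receive_batch, and ship are assumed names; real transports, journaling, and failure handling are omitted.

    import collections


    class SourceDPA:
        """Toy model of where the write acknowledgement is issued in each mode."""
        def __init__(self, target, mode="asynchronous"):
            self.target = target                # assumed object with receive(txn) / receive_batch(txns)
            self.mode = mode                    # "synchronous", "asynchronous", or "snapshot"
            self.pending = collections.deque()  # transactions not yet shipped to the target

        def handle_write(self, txn):
            """Return True when the protection agent may be acknowledged."""
            if self.mode == "synchronous":
                self.target.receive(txn)        # wait for the target DPA before acking
                return True
            self.pending.append(txn)            # async and snapshot modes ack immediately
            return True

        def ship(self):
            """Background transmission: one-by-one (async) or as a batch (snapshot)."""
            if self.mode == "snapshot":
                if self.pending:
                    self.target.receive_batch(list(self.pending))
                    self.pending.clear()
            else:
                while self.pending:
                    self.target.receive(self.pending.popleft())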


In some embodiments, a snapshot replica may be a differential representation of a volume. For example, the snapshot may include pointers to the original volume, and may point to log volumes for locations of the original volume that store data changed by one or more I/O requests. In some embodiments, snapshots may be combined into a snapshot array, which may represent different images over a time period (e.g., for multiple PITs).
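
The differential nature of a snapshot replica can be pictured with the toy copy-on-write structure below. The patent does not prescribe this layout; the class, its fields, and the list-of-blocks volume are illustrative assumptions.

    class CowSnapshot:
        """Point-in-time image kept as pointers into the live volume plus a log."""
        def __init__(self, volume):
            self.volume = volume     # the live (original) volume, a list of blocks
            self.log = {}            # block index -> pre-change data ("log volume")

        def write(self, index, data):
            if index not in self.log:
                self.log[index] = self.volume[index]   # preserve the old block once
            self.volume[index] = data                  # the live volume moves on

        def read(self, index):
            # The snapshot resolves changed locations to the log volume and
            # unchanged locations directly to the original volume.
            return self.log.get(index, self.volume[index])


    volume = [b"A", b"B", b"C"]
    snap = CowSnapshot(volume)
    snap.write(1, b"B2")
    assert snap.read(1) == b"B" and volume[1] == b"B2"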


As described herein, in some embodiments, in normal operation, LU B 156 may be used as a backup of LU A 136. As such, in some embodiments, while data written to LU A 136 by host 104 is replicated from LU A 136 to LU B 156, target host 116 should not send I/O requests to LU B 156. In some embodiments, to prevent such I/O requests from being sent, protection agent 164 may act as a target site protection agent for host device B 160 and may fail I/O requests sent from host 116 to LU B 156 through host device B 160. In some embodiments, in a recovery mode, target DPA 124 may undo the write transactions in journal LU 176 so as to restore the target storage system 120 to an earlier state.


Referring to FIG. 2, in some described embodiments, a write transaction 200 may be included within a journal and stored within a journal LU. In some embodiments, write transaction 200 may include one or more identifiers; a time stamp indicating the date and time at which the transaction was received by the source DPA; a write size indicating the size of the data block; a location in the journal LU where the data is entered; a location in the target LU where the data is to be written; and the data itself.


Referring to both FIGS. 1 and 2, in some embodiments, transaction 200 may correspond to a transaction transmitted from source DPA 112 to target DPA 124. In some embodiments, target DPA 124 may record write transaction 200 in the journal that includes four streams. In some embodiments, a first stream, referred to as a “DO” stream, includes a copy of the new data for writing to LU B 156. In some embodiments, a second stream, referred to as a “DO METADATA” stream, includes metadata for the write transaction, such as an identifier, a date and time, a write size, the offset within LU B 156 where the new data is written, and a pointer to the offset in the DO stream where the corresponding data is located. In some embodiments, a third stream, referred to as an “UNDO” stream, includes a copy of the data being overwritten within LU B 156 (referred to herein as the “old” data). In some embodiments, a fourth stream, referred to as an “UNDO METADATA” stream, includes an identifier, a date and time, a write size, a beginning address in LU B 156 where data was (or will be) overwritten, and a pointer to the offset in the UNDO stream where the corresponding old data is located.
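
A compact way to see how the four streams cooperate is the in-memory model below. The stream layout, field names, and bytearray-backed volume are assumptions made for illustration; they are not the patent's on-disk format.

    class Journal:
        """Toy DO / DO METADATA / UNDO / UNDO METADATA streams for one replica."""
        def __init__(self):
            self.do, self.do_meta = [], []
            self.undo, self.undo_meta = [], []

        def record(self, replica, txn_id, timestamp, offset, data):
            """Enter a write transaction and apply it to the replica volume."""
            old = bytes(replica[offset:offset + len(data)])  # data being overwritten
            self.do.append(data)
            self.do_meta.append({"id": txn_id, "time": timestamp, "offset": offset,
                                 "size": len(data), "do_ptr": len(self.do) - 1})
            self.undo.append(old)
            self.undo_meta.append({"id": txn_id, "time": timestamp, "offset": offset,
                                   "size": len(old), "undo_ptr": len(self.undo) - 1})
            replica[offset:offset + len(data)] = data

        def rollback(self, replica, point_in_time):
            """Undo, newest first, every transaction received after point_in_time."""
            while self.undo_meta and self.undo_meta[-1]["time"] > point_in_time:
                meta = self.undo_meta.pop()
                old = self.undo.pop()
                replica[meta["offset"]:meta["offset"] + meta["size"]] = old
                self.do_meta.pop()
                self.do.pop()


    # Example: replicate two writes to a bytearray "LU B" and roll back to t=1.
    lu_b = bytearray(16)
    journal = Journal()
    journal.record(lu_b, txn_id=1, timestamp=1, offset=0, data=b"aaaa")
    journal.record(lu_b, txn_id=2, timestamp=2, offset=4, data=b"bbbb")
    journal.rollback(lu_b, point_in_time=1)   # lu_b retains only the first write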


Since the journal may contain the “undo” information necessary to rollback storage system 120, in some embodiments, data that was stored in specific memory locations at a specified point in time may be obtained by undoing write transactions that occurred subsequent to such point in time (PIT).


In some embodiments, each of the four streams may hold a plurality of write transaction data. In some embodiments, as write transactions are received dynamically by the target DPA, the write transactions may be recorded at the end of the DO stream and the end of the DO METADATA stream, prior to committing the transaction.


In some embodiments, a metadata stream (e.g., UNDO METADATA stream or the DO METADATA stream) and the corresponding data stream (e.g., UNDO stream or DO stream) may be kept in a single stream by interleaving metadata and data.


Some described embodiments may validate that point-in-time (PIT) data replicas (e.g., data replicated to LU B 156) are valid and usable, for example to verify that the data replicas are not corrupt due to a system error or inconsistent due to violation of write order fidelity. In some embodiments, validating data replicas can be important, for example, in data replication systems employing incremental backup where an undetected error in an earlier data replica may lead to corruption of future data replicas.


In conventional systems, validating data replicas can increase the journal lag, which may increase a recovery time objective (RTO) of a data protection system (e.g., an elapsed time between replicas or PITs). In such conventional systems, if the journal lag time is significant, the journal may become full and unable to account for data changes due to subsequent transactions. Further, in such conventional systems, validating data replicas may consume system resources (e.g., processor time, memory, communication link bandwidth, etc.), resulting in reduced performance for system tasks.


Referring to FIG. 3, in an illustrative embodiment, a data protection system 300 may include host 302a, host 302b, backup storage system 304 (e.g., a deduplicated storage system) and a datastore 306. In some embodiments, host 302a may include production virtual machine 310 and splitter 314 (e.g., data protection agent 144 of FIG. 1). In some embodiments, host 302a may be a hypervisor, and splitter 314 may operate either in the hypervisor kernel or in another layer in the hypervisor, which allows splitter 314 to intercept I/O requests sent from host 302a to one or more virtual machine disks (VMDKs) 342. In some embodiments, host 302b may include a virtual data protection appliance (e.g., DPA appliance 124 of FIG. 1) having snapshot replication module 320 and splitter 334 (e.g., data protection agent 164 of FIG. 1). In an embodiment, splitter 334 of host 302b enables protection of virtual machines on the host 302b. In some embodiments, splitter 334 of host 302b may also provide faster access to VMDKs 342 from virtual DPA (vDPA) 316.


In an embodiment, datastore 306 may include one or more differential virtual machine disks, shown as differential VMDKs 346. Some embodiments of datastore 306 may also include journal virtual machine disk 348. In some embodiments, differential VMDKs 346 and journal VMDK 348 may be stored in datastore 306, and one or more production virtual machine disks 342 may be stored in datastore 307. In some embodiments, datastore 306 and datastore 307 are separate physical devices so that access to differential VMDKs does not affect performance of production VMDKs. In some embodiments, the differential VMDKs 346 may be used to store differential snapshot data representative of changes that happened to data stored on production VMDK 342. In one example, a first differential VMDK 346 may include changes due to writes that occurred to production VMDK 342 from time t1 to time t2, a second differential VMDK 346 may include the changes due to writes that occurred to production VMDK 342 from time t2 to time t3, and so forth.


In some embodiments, differential VMDKs 346 may be thin provisioned. Thin provisioning allocates storage space flexibly among multiple volumes of a SAN, based on the minimum space required by each volume at any given time.
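
Thin provisioning can be sketched as a volume whose physical footprint grows only with the regions actually written. The class, block size, and zero-fill read behavior below are illustrative assumptions rather than a description of any particular product.

    BLOCK_SIZE = 4096


    class ThinVolume:
        def __init__(self, logical_blocks):
            self.logical_blocks = logical_blocks  # advertised (logical) capacity
            self.blocks = {}                      # block index -> allocated data

        def write(self, index, data):
            assert 0 <= index < self.logical_blocks
            self.blocks[index] = data             # physical space is consumed only here

        def read(self, index):
            # Unwritten regions read back as zeros without consuming space.
            return self.blocks.get(index, bytes(BLOCK_SIZE))

        def allocated_size(self):
            # The physical footprint grows with the number of written blocks,
            # which is the kind of quantity the threshold check in FIG. 6 would inspect.
            return len(self.blocks) * BLOCK_SIZE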


In some embodiments, the data protection system may include one or more consistency groups. In some embodiments, a consistency group may treat source volumes (e.g., production volumes) and target volumes (e.g., backup volumes) as a single logical entity for data replication and migration.


In some embodiments, journal 352 may be stored in journal VMDK 348. In some embodiments, journal 352 includes one or more delta marker streams (DMS) 362. In some embodiments, each DMS 362 may include metadata associated with data that may be different between one differential VMDK and another differential VMDK.


In one example, DMS 362 may include the metadata differences between a current copy of the production VMDK 342 and a copy currently stored in backup storage 304. In some embodiments, journal 352 does not include the actual data changes, but rather metadata associated with the changes. In some embodiments, the data of the changes may be stored in the differential VMDKs. Thus, some embodiments may operate employing thin volumes to perform data replication by tracking regions for replication with the thin devices, as described herein. Other embodiments may operate to replicate data directly (e.g., without employing thin devices) from a source storage to a target (or replica) storage.
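
The split between metadata in the journal and data in the differential disks might look like the following sketch. DeltaMarkerStream and DifferentialVMDK are hypothetical stand-ins for DMS 362 and differential VMDK 346, and the mapping-based storage is an assumption for brevity.

    class DeltaMarkerStream:
        """Holds only metadata: which regions changed, not the bytes themselves."""
        def __init__(self):
            self.entries = []                 # list of (offset, length) changed regions

        def mark(self, offset, length):
            self.entries.append((offset, length))


    class DifferentialVMDK:
        """Holds the data written since the last point-in-time."""
        def __init__(self):
            self.data = {}                    # offset -> bytes

        def write(self, offset, payload):
            self.data[offset] = payload


    def record_change(dms, diff_vmdk, offset, payload):
        # The journal tracks only where data changed; the bytes live in the VMDK.
        dms.mark(offset, len(payload))
        diff_vmdk.write(offset, payload)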


Although not shown in FIG. 3, in some embodiments, host 302b, datastore 306 and backup storage system 304 may be integrated into a single device, such as an integrated protection appliance to backup and protect production data.


Referring to FIG. 4, some described embodiments may employ illustrative process 400 to generate an initial snapshot into the backup storage. Process 400 starts at block 402. In some embodiments, at block 404, a VMDK is generated. For example, vDPA 316 of FIG. 3 may generate a first VMDK 346 that will include changes and a delta marker stream (DMS) 362 on journal 352. In an embodiment, VMDK 346 is thin provisioned. In some embodiments, at block 406, data is copied (e.g., by vDPA 316 of FIG. 3) from a first virtual machine (e.g., production virtual machine 310 of FIG. 3) to a backup storage (e.g., backup storage system 304 of FIG. 3). In other words, in some embodiments, data is read from a production volume and written to backup storage by vDPA 316.


In some embodiments, at block 408, changes to the production virtual machine may be written to the differential VMDK. For example, vDPA 316 may write changes to production data of production virtual machine 310 to first differential VMDK 346. In some embodiments, splitter 314 may intercept write I/O requests arriving at production VMDK 342 and send them to vDPA 316, which may mark the metadata of changed locations (e.g., an offset and volume of the writes) in DMS 362, and acknowledge the I/O requests. In some embodiments, splitter 314 may write data associated with the I/O request to production VMDK 342 and, asynchronously, vDPA 316 may write data arriving from splitter 314 to differential VMDK 346.
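
One possible shape of this write path (intercept, mark metadata, acknowledge, apply asynchronously) is sketched below with a worker thread and an in-memory queue. The classes, their names, and the dict-backed volumes are assumptions made for illustration, not the disclosed implementation.

    import queue
    import threading


    class VDPA:
        """Tracks changed-location metadata and applies data asynchronously."""
        def __init__(self, dms, differential_vmdk):
            self.dms = dms                        # list collecting (offset, size) entries
            self.diff = differential_vmdk         # dict: offset -> data
            self.pending = queue.Queue()
            threading.Thread(target=self._drain, daemon=True).start()

        def handle(self, offset, data):
            self.dms.append((offset, len(data)))  # mark the changed location in the DMS
            self.pending.put((offset, data))      # the data itself is applied later
            return True                           # acknowledge the splitter immediately

        def _drain(self):
            while True:
                offset, data = self.pending.get()
                self.diff[offset] = data          # asynchronous write to the differential VMDK


    class Splitter:
        """Intercepts writes, waits for the vDPA acknowledgement, then completes them."""
        def __init__(self, vdpa, production_vmdk):
            self.vdpa = vdpa
            self.prod = production_vmdk           # dict: offset -> data

        def intercept_write(self, offset, data):
            if self.vdpa.handle(offset, data):    # vDPA acknowledges the request
                self.prod[offset] = data          # then the production write proceeds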


In some embodiments, data stored on production volume 342 can change while data is being copied from it to backup storage (e.g., differential VMDK 346), for example if an I/O request is received while data is being copied. Thus, a snapshot replica may not be consistent with the data stored on the production volume. In some embodiments, at block 410, when a non-consistent copy for production VMDK 342 is generated on backup storage 304, process 400 may generate a new differential VMDK 346, which may also be a thin provisioned VMDK. In some embodiments, at block 412, mirrored I/O requests may be redirected from the production virtual machine to the new differential VMDK. For example, in some embodiments, splitter 334 may send I/O requests to vDPA 316, which may then asynchronously write data associated with the I/O requests to the new differential VMDK 346 and write metadata for the I/O request to the associated DMS 362.


In some embodiments, at block 414, data from the first differential VMDK 346 may be applied to a point-in-time in backup storage. For example, in some embodiments, vDPA 316 may apply data from the first differential VMDK 346 to a point-in-time replica in backup storage 304 (e.g., vDPA 316 reads the list of changed locations from a first DMS 362 and, for each changed location, vDPA 316 reads the changes from the associated differential VMDK 346 and writes the changes to backup storage system 304).
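
Block 414 can be pictured as the small replay loop below, which folds the regions recorded in a DMS from the differential disk into the backup image. The mapping-based volumes and names are illustrative assumptions.

    def apply_differential(dms_entries, differential_vmdk, backup):
        """dms_entries: iterable of (offset, length); differential_vmdk and
        backup are mappings from offset to bytes."""
        for offset, length in dms_entries:
            change = differential_vmdk[offset]       # read the changed data
            assert len(change) == length
            backup[offset] = change                  # write it to the backup replica


    # Example: one changed region at offset 8192 is folded into the backup image.
    dms = [(8192, 4)]
    diff_vmdk = {8192: b"new!"}
    backup_image = {0: b"old!", 8192: b"old!"}
    apply_differential(dms, diff_vmdk, backup_image)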


In some embodiments, at block 416, the first differential VMDK may be deleted. For example, in some embodiments, after backup storage 304 has a consistent point-in-time replica (e.g., as stored at block 414), vDPA 316 may delete the first differential VMDK. Process 400 completes at block 418.


Referring to FIG. 5, process 500 is an example of a process to perform snapshot replication on a storage system not configured to support snapshot replication, according to described embodiments. In some embodiments, process 500 may be performed by snapshot replication module 320. In some embodiments, at block 504, I/O requests from the virtual machine may be intercepted. For example, in some embodiments, I/O requests may be intercepted by splitter 314 and sent to vDPA 316. In some embodiments, at block 506, data associated with the I/O request may be buffered. For example, in some embodiments, vDPA 316 may buffer the data associated with the I/O request in memory of vDPA 316 and may send an acknowledgement that it received the I/O request to splitter 314. In some embodiments, splitter 314 may then write the data associated with the I/O request to production VMDK 342.


In some embodiments, at block 508, data associated with the I/O request may be written asynchronously to the differential VMDK, and, at block 510, metadata may be written to a delta marking stream (DMS) in an associated journal. For example, in some embodiments, vDPA 316 may write the data associated with the I/O request asynchronously to differential VMDK 346 and write the metadata associated with the I/O request to DMS 362. In some embodiments, journal metadata may also be stored at a predetermined portion of the associated VMDK (e.g., at the end of the VMDK).


In some embodiments, at block 512, a new differential VMDK and a new DMS may be generated. At block 514, system 300 may generate a snapshot replica corresponding to a point-in-time (PIT) to be stored in the backup storage. For example, in some embodiments, new differential VMDK 346 and new DMS 362 may be generated by vDPA 316 to track further changes (e.g., changes after the PIT). In some embodiments, system 300 may generate a point-in-time snapshot replica by having vDPA 316 generate a snapshot replica of the PIT copy in backup storage 304. In some embodiments, data differences from differential VMDK 346 may be applied to the new snapshot replica, so that backup storage 304 holds both a snapshot replica at the old PIT and a snapshot replica at the new PIT.
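
A much-simplified view of one cycle of blocks 512-516 is given below, assuming mapping-based replicas and a list-based DMS; the helper name take_pit and the containers are hypothetical and chosen only to keep the illustration short.

    import copy


    def take_pit(backup_replicas, current_diff, current_dms):
        """Roll to a new point-in-time: fold the tracked differences into a new
        backup replica, then start a fresh differential VMDK and DMS."""
        # Block 514: the new PIT starts as a copy of the latest replica ...
        new_pit = copy.deepcopy(backup_replicas[-1]) if backup_replicas else {}
        # ... with the tracked differences applied on top of it.
        for offset, _length in current_dms:
            new_pit[offset] = current_diff[offset]
        backup_replicas.append(new_pit)              # old and new PITs both retained
        # Blocks 512/516: further changes go to a new differential VMDK and DMS;
        # the previous differential VMDK is simply discarded by the caller.
        return {}, []


    replicas = [{0: b"base"}]
    diff, dms = {0: b"edit"}, [(0, 4)]
    diff, dms = take_pit(replicas, diff, dms)        # replicas now holds two PITs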


In some embodiments, at block 516, a previous differential VMDK may be deleted. For example, vDPA 316 may delete an earlier one of the differential virtual machine disks 346. In some embodiments, after block 516, process 500 may repeat by returning to block 504 to receive subsequent intercepted I/O requests.


Generating a full sweep disk synchronization (e.g., a full copy of the production volume) may be complicated, since the thin volumes contain data changes but not the complete data, and copying the entire production volume can be time consuming and resource intensive (e.g., if the production volume is large). Thus, some embodiments may perform a number of iterations of production volume synchronization to reduce an amount of time to perform the full sweep disk synchronization.


In some described embodiments, at the start of a copy process, a complete copy of the entire production volume may be created. However, for a large production volume, creating a complete copy could be time consuming. Further, during the time required to copy the entire production volume, additional I/O requests could be received, leading to changes of data on the production volume that are tracked in the VMDKs. In conventional systems, the VMDKs then need to be large to account for changes that occur during the production volume copying time, or risk becoming full and being unable to process additional I/O requests. Some described embodiments reduce the need to maintain large VMDKs, without the risk of a VMDK becoming full due to I/O requests received while the production volume is being copied.


Referring to FIG. 6, some described embodiments may employ illustrative process 600 to perform full sweep disk synchronization. At block 602, process 600 begins. In some embodiments, at block 604, a new thin volume is created (e.g., production VMDK 342). In some embodiments, at block 606, “dirty” data is copied from the production volume to a replica volume. In some embodiments, dirty data is data that has changed on the production volume (e.g., due to an I/O request) since the most recent replica was created. In some embodiments, the first time process 600 is performed (e.g., the first time a replica of the production volume is generated), all the data of the production volume may be considered dirty, since no data has yet been copied to a replica. In some embodiments, the copying of block 606 may continue to be performed (e.g., to copy a large amount of data) while process 600 continues to block 608 (and subsequent blocks). In other words, block 606 may be performed in parallel with one or more of blocks 608-622 of process 600.


In some embodiments, at block 608, I/O requests (e.g., write requests) received by the DPA are saved and sent asynchronously to the thin volume as dirty data. In some embodiments, at block 610, if the thin volume created at block 604 reaches or exceeds a threshold size, then at block 612, data protection system 100 enters a second mode of operation. In some embodiments, at block 612, if a region of the production volume associated with a given I/O request (e.g., one of the I/O requests saved at block 608) has already been copied to the thin volume, then at block 616, data in the thin volume is overwritten with data associated with the I/O request, which does not require additional space for the thin volume (e.g., does not track additional metadata). Process 600 continues to block 618.


In some embodiments, if, at block 612, the region of the production volume associated with a given I/O request (e.g., one of the I/O requests saved at block 608) has not yet been copied to the thin volume, then at block 614, metadata of data changes due to the I/O request is tracked in a DMS associated with the thin volume. As shown in FIG. 6, process 600 may continue to block 618.


In some embodiments, at block 618, if copying the production volume to the replica volume (e.g., block 606) has been completed, then at block 619, data from the thin volume may be copied to the replica volume. In some embodiments, while block 619 is occurring, any received I/O requests are tracked in the associated DMS (even if the I/O request is associated with a location already copied to the thin volume). In some embodiments, at block 620, the DPA sets dirty data indicators based upon metadata stored in the associated DMS (e.g., for received I/O requests), and process 600 returns to block 606 to copy the dirty data from the production volume to the replica volume. In some embodiments, if, at block 618, copying the production volume to the replica volume (e.g., block 606) has not yet been completed, then process 600 returns to block 612 to process a subsequent I/O request.


In some embodiments, at block 610, if the thin volume created at block 604 did not reach or exceed a threshold size, then at block 622, data protection system 100 stays in the first mode of operation (e.g., block 608) if copying the production volume to the replica volume (e.g., block 606) has not yet been completed. In some embodiments, if, at block 622, copying the production volume to the replica volume (e.g., block 606) has been completed, then at block 624, the full sweep disk synchronization is complete, and data protection system 100 operates, for example, as shown at block 410 of FIG. 4.
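
Tying FIG. 6 together, the condensed single-threaded sketch below walks the same decision points (blocks 604-624) on dictionary-backed volumes. The threshold value, the batching of incoming writes per iteration, and every name are assumptions chosen to keep the illustration short; it is not a description of the claimed process.

    THRESHOLD_BLOCKS = 2


    def full_sweep(production, replica, incoming_writes):
        """production, replica: dicts mapping offset -> data; incoming_writes: a
        list of batches, each batch being the (offset, data) writes that arrive
        during one pass of the production-to-replica copy."""
        dirty = set(production)                      # first sweep: all data is dirty
        for batch in incoming_writes:
            thin, dms = {}, set()                    # block 604: a new thin volume
            for offset in dirty:                     # block 606: copy dirty regions
                replica[offset] = production[offset]
            for offset, data in batch:               # writes received while copying
                production[offset] = data
                if len(thin) < THRESHOLD_BLOCKS:     # blocks 608/610: first mode
                    thin[offset] = data
                elif offset in thin:                 # blocks 612/616: overwrite in place
                    thin[offset] = data
                else:                                # block 614: metadata only
                    dms.add(offset)
            replica.update(thin)                     # blocks 618/619: fold in the thin volume
            dirty = dms                              # block 620: DMS marks the next dirty set
        for offset in dirty:                         # final pass once writes quiesce
            replica[offset] = production[offset]
        return replica                               # block 624: synchronization complete


    prod = {0: b"a", 4096: b"b"}
    rep = {}
    full_sweep(prod, rep, incoming_writes=[[(0, b"a2"), (8192, b"c")], []])
    assert rep == prod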


Referring back to FIGS. 1 and 3, as described herein, in some embodiments, protection agent 144 may intercept I/O requests for storage 108 and send the I/O request to DPA 112, such as described in regard to FIG. 1. In some embodiments, DPA 112 may be implemented as vDPA 316 of FIG. 3. In some embodiments, a thin volume is attached to vDPA 316, for example production VMDK 342. In some embodiments, the I/O request is provided by vDPA 316 to VMDK 342. In some embodiments, vDPA 316 tracks changes to data stored in VMDK 342 as metadata in DMS 362. In some embodiments, when the I/O request is processed, vDPA 316 may acknowledge the I/O request to protection agent 144.


Some described embodiments may generate a snapshot replica by creating a new thin volume (e.g., another VMDK), and attaching the new VMDK to vDPA 316. In some embodiments, vDPA 316 may maintain a list of all changed (dirty) addresses of each VMDK. In some embodiments, when the data protection system first begins operation (or after a snapshot has been generated), there may be no dirty data until I/O requests (e.g., writes) are received. In some embodiments, as I/O requests are received, the DO METADATA of FIG. 2 may be employed to track dirty data (e.g., data that was changed by an I/O request). In some embodiments, when the I/O request is intercepted by the protection agent, the I/O request is sent to the vDPA, which writes the I/O request to the first (e.g., production) VMDK, and tracks metadata associated with the I/O request in the UNDO METADATA of FIG. 2.


In some embodiments, in response to a snapshot request (e.g., at an interval determined by a desired RTO), I/O requests are written from vDPA 316 to the new VMDK. Some embodiments may then copy data from the first (e.g., production) VMDK to the new (e.g., replica) VMDK as a background process, based on data marked as changed (e.g., “dirty”) in DMS 362.


In some embodiments, if the size of the production VMDK becomes bigger than a threshold while data is copied from the production VMDK to the replica VMDK, a third VMDK may be created having a third DMS to track dirty data. In some embodiments, if a new I/O request arrives, the vDPA may determine if the address associated with the new I/O request is already written to the replica VMDK. In some embodiments, if the address is already copied from the production VMDK to the replica VMDK, then the data of the I/O request is written to the replica VMDK. Otherwise, in some embodiments, if the address has not yet been copied from the production VMDK to the replica VMDK, then the dirty data metadata is updated in the third DMS.


In some embodiments, when the data is entirely copied from the production VMDK to the replica VMDK (e.g., the background process is completed), the production VMDK is deleted, and the third VMDK is renamed to operate in the place of the production VMDK. In other words, in some embodiments, the DMS associated with the production VMDK and the DMS associated with the replica VMDK are deleted, and the DMS associated with the third VMDK is renamed to operate as the production DMS.


In some embodiments, if the size of the production VMDK remains smaller than a threshold while data is copied from the production VMDK to the replica VMDK, and if a new I/O request arrives, the data of the I/O request is written to the replica VMDK. In some embodiments, once the data is entirely copied from the production VMDK to the replica VMDK (e.g., the background process is completed), the production VMDK is deleted, and the replica VMDK is renamed to operate as the production VMDK.


In some described embodiments, hosts 104 and 116 of FIG. 1 may each correspond to one computer, a plurality of computers, or a network of distributed computers. For example, in some embodiments, host 104 and/or host 116 may be implemented as one or more computers such as shown in FIG. 7. As shown in FIG. 7, computer 700 may include processor 702, volatile memory 704 (e.g., RAM), non-volatile memory 706 (e.g., one or more hard disk drives (HDDs), one or more solid state drives (SSDs) such as a flash drive, one or more hybrid magnetic and solid state drives, and/or one or more virtual storage volumes, such as a cloud storage, or a combination of physical storage volumes and virtual storage volumes), graphical user interface (GUI) 708 (e.g., a touchscreen, a display, and so forth) and input/output (I/O) device 720 (e.g., a mouse, a keyboard, etc.). Non-volatile memory 706 stores computer instructions 712, an operating system 716 and data 718 such that, for example, the computer instructions 712 are executed by the processor 702 out of volatile memory 704 to perform at least a portion of the processes shown in FIGS. 4-6. Program code may be applied to data entered using an input device of GUI 708 or received from I/O device 720.


Processes 400, 500 and 600 (FIGS. 4-6) are not limited to use with the hardware and software of FIG. 7 and may find applicability in any computing or processing environment and with any type of machine or set of machines that may be capable of running a computer program. Processes 400, 500 and 600 (FIGS. 4-6) may be implemented in hardware, software, or a combination of the two.


The processes described herein are not limited to the specific embodiments described. For example, processes 400, 500 and 600 are not limited to the specific processing order shown in FIGS. 4-6. Rather, any of the blocks of processes 400, 500 and 600 may be re-ordered, combined or removed, performed in parallel or in serial, as necessary, to achieve the results set forth herein.


Processor 702 may be implemented by one or more programmable processors executing one or more computer programs to perform the functions of the system. As used herein, the term “processor” describes an electronic circuit that performs a function, an operation, or a sequence of operations. The function, operation, or sequence of operations may be hard coded into the electronic circuit or soft coded by way of instructions held in a memory device. A “processor” may perform the function, operation, or sequence of operations using digital values or using analog signals. In some embodiments, the “processor” can be embodied in an application specific integrated circuit (ASIC). In some embodiments, the “processor” may be embodied in a microprocessor with associated program memory. In some embodiments, the “processor” may be embodied in a discrete electronic circuit. The “processor” may be analog, digital or mixed-signal. In some embodiments, the “processor” may be one or more physical processors or one or more “virtual” (e.g., remotely located or “cloud”) processors.


Various functions of circuit elements may also be implemented as processing blocks in a software program. Such software may be employed in, for example, one or more digital signal processors, microcontrollers, or general purpose computers. Described embodiments may be implemented in hardware, a combination of hardware and software, software, or software in execution by one or more physical or virtual processors.


Some embodiments may be implemented in the form of methods and apparatuses for practicing those methods. Described embodiments may also be implemented in the form of program code, for example, stored in a storage medium, loaded into and/or executed by a machine, or transmitted over some transmission medium or carrier, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation. A non-transitory machine-readable medium may include but is not limited to tangible media, such as magnetic recording media including hard drives, floppy diskettes, and magnetic tape media, optical recording media including compact discs (CDs) and digital versatile discs (DVDs), solid state memory such as flash memory, hybrid magnetic and solid state memory, non-volatile memory, volatile memory, and so forth, but does not include a transitory signal per se. When embodied in a non-transitory machine-readable medium and the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the method.


When implemented on a processing device, the program code segments combine with the processor to provide a unique device that operates analogously to specific logic circuits. Such processing devices may include, for example, a general purpose microprocessor, a digital signal processor (DSP), a reduced instruction set computer (RISC), a complex instruction set computer (CISC), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a programmable logic array (PLA), a microcontroller, an embedded controller, a multi-core processor, and/or others, including combinations of one or more of the above. Described embodiments may also be implemented in the form of a bitstream or other sequence of signal values electrically or optically transmitted through a medium, stored as magnetic-field variations in a magnetic recording medium, etc., generated using a method and/or an apparatus as recited in the claims.


Various elements, which are described in the context of a single embodiment, may also be provided separately or in any suitable subcombination. It will be further understood that various changes in the details, materials, and arrangements of the parts that have been described and illustrated herein may be made by those skilled in the art without departing from the scope of the following claims.

Claims
  • 1. A method comprising: creating a first thin volume associated with a production volume;generating a first replica of the production volume by copying data from the production volume to a replica volume;during the copying: receiving an I/O request to be written to the production volume;writing data from the I/O request to the first thin volume and tracking data changed due to the I/O request in metadata associated with the production volume and the first thin volume;checking data size of the first thin volume;when the size of the first thin volume is below a threshold, asynchronously applying changes from the first thin volume to a backup storage;when the size of the first thin volume is at or above a threshold;determining whether a region of the production volume associated with the received I/O request has already been copied from the production volume to the first thin volume; andwhen the region is already copied, overwriting an associated region in the first thin volume, whereby an amount of metadata to track changed data is reduced.
  • 2. The method of claim 1, further comprising performing the copying as a background process while processing new received I/O requests.
  • 3. The method of claim 1, further comprising, when the size of the first thin volume is below the threshold and the region is already copied from the production volume to the replica volume: when the copying data from the production volume to the replica volume is complete, generating a second thin volume and writing data for new received I/O requests to the second thin volume.
  • 4. The method of claim 3, further comprising, when the size of the first thin volume is at or above the threshold and the region is already copied from the production volume to the replica volume: copying data from the first thin volume to the replica volume;during the copying from the first thin volume to the replica volume, tracking data changed due to a new I/O request in metadata associated with the first thin volume; andwhen the copying from the first thin volume to the replica volume is complete, setting, based upon the metadata, locations as dirty data to be copied from the production volume to the replica volume.
  • 5. The method of claim 3, wherein upon determining the data size of the first thin volume meets or exceeds the threshold while data is being copied from the production volume to the replica volume: creating a third thin volume having corresponding metadata;upon receiving a next I/O request, determining if an address associated with the next I/O request is already written to the replica volume;if the address is already copied to the replica volume, writing the data of the I/O request to the replica volume; andif the address has not been copied from the production volume to the replica volume, updating the corresponding metadata.
  • 6. The method of claim 1, further comprising, when the region of the production volume associated with the received I/O request has not yet been copied from the production volume to the replica volume, tracking data changed due to the I/O request in metadata associated with the first thin volume.
  • 7. A system comprising: a processor; andmemory storing computer program code that when executed on the processor causes the processor to operate a storage system, the storage system operable to perform the operations of: creating a first thin volume associated with a production volume;generating a first replica of the production volume by copying data from the production volume to a replica volume;during the copying: receiving an I/O request to be written to the production volume;writing data from the I/O request to the first thin volume and tracking data changed due to the I/O request in metadata associated with the production volume and the first thin volume;checking data size of the first thin volume;when the size of the first thin volume is below a threshold, asynchronously applying changes from the first thin volume to a backup storage;when the size of the first thin volume is at or above a threshold, the storage system is operable to perform the operations of:determining whether a region of the production volume associated with the received I/O request has already been copied from the production volume to the first thin volume; andwhen the region is already copied, overwriting an associated region in the first thin volume, whereby an amount of metadata to track changed data is reduced.
  • 8. The system of claim 7, wherein the storage system is operable to perform the operation of performing the copying as a background process while processing new received I/O requests.
  • 9. The system of claim 7, wherein, when the size of the first thin volume is below the threshold and when the region is already copied from the production volume to the replica volume, the storage system is operable to perform the operation of: when the copying data from the production volume to the replica volume is complete, generating a second thin volume and writing data for new received I/O requests to the second thin volume.
  • 10. The system of claim 9, wherein, when the size of the first thin volume is at or above the threshold and the region is already copied from the production volume to the replica volume, the storage system is operable to perform the operations of: copying data from the first thin volume to the replica volume; during the copying from the first thin volume to the replica volume, tracking data changed due to a new I/O request in metadata associated with the first thin volume; and when the copying from the first thin volume to the replica volume is complete, setting, based upon the metadata, locations as dirty data to be copied from the production volume to the replica volume.
  • 11. The system of claim 7, wherein, when the region of the production volume associated with the received I/O request has not yet been copied from the production volume to the replica volume, the storage system is operable to perform the operation of tracking data changed due to the I/O request in metadata associated with the first thin volume.
  • 12. A computer program product including a non-transitory computer readable storage medium having computer program code encoded thereon that when executed on a processor of a computer causes the computer to operate a storage system, the computer program product comprising: computer program code for creating a first thin volume associated with a production volume; computer program code for generating a first replica of the production volume by copying data from the production volume to a replica volume, wherein the copying is performed as a background process while processing new received I/O requests; during the copying: computer program code for receiving an I/O request to be written to the production volume; computer program code for writing data from the I/O request to the first thin volume and tracking data changed due to the I/O request in metadata associated with the production volume and the first thin volume; computer program code for checking data size of the first thin volume, and, when the size of the first thin volume is below a threshold, asynchronously applying changes from the first thin volume to a backup storage; when the size of the first thin volume is at or above a threshold: determining whether a region of the production volume associated with the received I/O request has already been copied from the production volume to the first thin volume; and when the region is already copied, overwriting an associated region in the first thin volume, whereby an amount of metadata to track changed data is reduced.
  • 13. The computer program product of claim 12, further comprising computer program code for performing the copying as a background process while processing new received I/O requests.
  • 14. The computer program product of claim 12, further comprising, when the size of the first thin volume is below the threshold and the region is already copied from the production volume to the replica volume: when the copying data from the production volume to the replica volume is complete, computer program code for generating a second thin volume and writing data for new received I/O requests to the second thin volume.
  • 15. The computer program product of claim 14, further comprising, when the size of the first thin volume is at or above the threshold and the region is already copied from the production volume to the replica volume: computer program code for copying data from the first thin volume to the replica volume; during the copying from the first thin volume to the replica volume, computer program code for tracking data changed due to a new I/O request in metadata associated with the first thin volume; and when the copying from the first thin volume to the replica volume is complete, computer program code for setting, based upon the metadata, locations as dirty data to be copied from the production volume to the replica volume.
  • 16. The computer program product of claim 12, further comprising, when the region of the production volume associated with the received I/O request has not yet been copied from the production volume to the replica volume, computer program code for tracking data changed due to the I/O request in metadata associated with the first thin volume.
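The following is a minimal, hypothetical Python sketch of the flow recited in claim 1: a background sweep generates the first replica while incoming writes are captured in a thin volume and tracked in metadata, with behavior changing once the thin volume reaches a size threshold. It is an illustration under stated assumptions, not the claimed implementation; the names ThinVolume, FullSweepSync, AsyncBackupStub, REGION_SIZE, SIZE_THRESHOLD, and apply_async are all illustrative.

# Hypothetical sketch of the claim-1 flow; region size, threshold, and the
# backup interface are assumptions, not part of the claimed implementation.

REGION_SIZE = 4096        # assumed region granularity, in bytes
SIZE_THRESHOLD = 1 << 20  # assumed size threshold for the first thin volume


class ThinVolume:
    """Sparse overlay volume: holds only regions written during the sweep."""

    def __init__(self):
        self.regions = {}  # region index -> data

    def write(self, region, data):
        self.regions[region] = data  # overwriting reuses the existing slot

    def size(self):
        return len(self.regions) * REGION_SIZE


class AsyncBackupStub:
    """Assumed backup interface: queues changes to be applied asynchronously."""

    def __init__(self):
        self.pending = []

    def apply_async(self, region, data):
        self.pending.append((region, data))


class FullSweepSync:
    def __init__(self, production, replica, backup):
        self.production = production  # list of region-sized buffers
        self.replica = replica
        self.backup = backup
        self.thin = ThinVolume()      # the first thin volume
        self.dirty = set()            # metadata: regions changed by I/O
        self.copied = set()           # regions already swept to the replica

    def background_copy(self):
        """Generate the first replica by sweeping the production volume."""
        for region, data in enumerate(self.production):
            self.replica[region] = data
            self.copied.add(region)
            yield region  # cooperative step so writes can interleave

    def on_write(self, region, data):
        """Handle an I/O request received while the sweep is running."""
        self.production[region] = data
        if self.thin.size() < SIZE_THRESHOLD:
            # Below the threshold: capture the write in the thin volume, track
            # the change in metadata, and apply it asynchronously to backup.
            self.thin.write(region, data)
            self.dirty.add(region)
            self.backup.apply_async(region, data)
        elif region in self.thin.regions:
            # At or above the threshold and the region was already captured:
            # overwrite it in place, so no additional metadata is needed.
            self.thin.write(region, data)
        else:
            # Region not yet captured: only the metadata records the change.
            self.dirty.add(region)


if __name__ == "__main__":
    production = [bytes(REGION_SIZE) for _ in range(8)]
    sync = FullSweepSync(production, [None] * 8, AsyncBackupStub())
    sweep = sync.background_copy()
    next(sweep)                               # sweep copies region 0
    sync.on_write(3, b"\x01" * REGION_SIZE)   # I/O arrives during the sweep
    for _ in sweep:                           # finish the background sweep
        pass

Overwriting a region the thin volume already holds (the second branch of on_write) is what allows change tracking to proceed without growing the metadata once the size threshold is reached.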
US Referenced Citations (262)
Number Name Date Kind
5170480 Mohan et al. Dec 1992 A
5249053 Jain Sep 1993 A
5388254 Betz et al. Feb 1995 A
5499367 Bamford et al. Mar 1996 A
5526397 Lohman Jun 1996 A
5864837 Maimone Jan 1999 A
5879459 Gadgil et al. Mar 1999 A
5990899 Whitten Nov 1999 A
6042652 Hyun et al. Mar 2000 A
6065018 Beier et al. May 2000 A
6143659 Leem Nov 2000 A
6148340 Bittinger et al. Nov 2000 A
6174377 Doering et al. Jan 2001 B1
6174809 Kang et al. Jan 2001 B1
6203613 Gates et al. Mar 2001 B1
6260125 McDowell Jul 2001 B1
6270572 Kim et al. Aug 2001 B1
6272534 Guha Aug 2001 B1
6287965 Kang et al. Sep 2001 B1
6467023 DeKoning et al. Oct 2002 B1
6574657 Dickinson Jun 2003 B1
6621493 Whitten Sep 2003 B1
6804676 Bains, II Oct 2004 B1
6947981 Lubbers et al. Sep 2005 B2
7043610 Horn et al. May 2006 B2
7051126 Franklin May 2006 B1
7076620 Takeda et al. Jul 2006 B2
7111197 Kingsbury et al. Sep 2006 B2
7117327 Hirakawa et al. Oct 2006 B2
7120768 Mizuno et al. Oct 2006 B2
7130975 Suishu et al. Oct 2006 B2
7139927 Park et al. Nov 2006 B2
7159088 Hirakawa et al. Jan 2007 B2
7167963 Hirakawa et al. Jan 2007 B2
7203741 Marco et al. Apr 2007 B2
7222136 Brown et al. May 2007 B1
7296008 Passerini et al. Nov 2007 B2
7328373 Kawamura et al. Feb 2008 B2
7353335 Kawamura Apr 2008 B2
7360113 Anderson et al. Apr 2008 B2
7426618 Vu et al. Sep 2008 B2
7519625 Honami et al. Apr 2009 B2
7519628 Leverett Apr 2009 B1
7546485 Cochran et al. Jun 2009 B2
7590887 Kano Sep 2009 B2
7606940 Yamagami Oct 2009 B2
7719443 Natanzon May 2010 B1
7757057 Sangapu et al. Jul 2010 B2
7840536 Ahal et al. Nov 2010 B1
7840662 Natanzon Nov 2010 B1
7844856 Ahal et al. Nov 2010 B1
7860836 Natanzon et al. Dec 2010 B1
7882286 Natanzon et al. Feb 2011 B1
7934262 Natanzon et al. Apr 2011 B1
7958372 Natanzon Jun 2011 B1
8037162 Marco et al. Oct 2011 B2
8041940 Natanzon et al. Oct 2011 B1
8060713 Natanzon Nov 2011 B1
8060714 Natanzon Nov 2011 B1
8103937 Natanzon et al. Jan 2012 B1
8108634 Natanzon et al. Jan 2012 B1
8205009 Heller et al. Jun 2012 B2
8214612 Natanzon Jul 2012 B1
8250149 Marco et al. Aug 2012 B2
8271441 Natanzon et al. Sep 2012 B1
8271447 Natanzon et al. Sep 2012 B1
8332687 Natanzon et al. Dec 2012 B1
8335761 Natanzon Dec 2012 B1
8335771 Natanzon et al. Dec 2012 B1
8341115 Natanzon et al. Dec 2012 B1
8370648 Natanzon Feb 2013 B1
8380885 Natanzon Feb 2013 B1
8392680 Natanzon et al. Mar 2013 B1
8429362 Natanzon et al. Apr 2013 B1
8433869 Natanzon et al. Apr 2013 B1
8438135 Natanzon et al. May 2013 B1
8464101 Natanzon et al. Jun 2013 B1
8478955 Natanzon et al. Jul 2013 B1
8495304 Natanzon et al. Jul 2013 B1
8510279 Natanzon et al. Aug 2013 B1
8521691 Natanzon Aug 2013 B1
8521694 Natanzon Aug 2013 B1
8543809 Natanzon Sep 2013 B2
8583885 Natanzon Nov 2013 B1
8600945 Natanzon et al. Dec 2013 B1
8601085 Ives et al. Dec 2013 B1
8627012 Derbeko et al. Jan 2014 B1
8683592 Dotan et al. Mar 2014 B1
8694700 Natanzon et al. Apr 2014 B1
8706700 Natanzon et al. Apr 2014 B1
8712962 Natanzon Apr 2014 B1
8719497 Don et al. May 2014 B1
8725691 Natanzon May 2014 B1
8725692 Natanzon et al. May 2014 B1
8726066 Natanzon et al. May 2014 B1
8738813 Natanzon et al. May 2014 B1
8745004 Natanzon et al. Jun 2014 B1
8751828 Raizen et al. Jun 2014 B1
8769336 Natanzon et al. Jul 2014 B1
8805786 Natanzon Aug 2014 B1
8806161 Natanzon Aug 2014 B1
8825848 Dotan et al. Sep 2014 B1
8832399 Natanzon et al. Sep 2014 B1
8850143 Natanzon Sep 2014 B1
8850144 Natanzon et al. Sep 2014 B1
8862546 Natanzon et al. Oct 2014 B1
8892835 Natanzon et al. Nov 2014 B1
8898112 Natanzon et al. Nov 2014 B1
8898409 Natanzon et al. Nov 2014 B1
8898515 Natanzon Nov 2014 B1
8898519 Natanzon et al. Nov 2014 B1
8914595 Natanzon Dec 2014 B1
8924668 Natanzon Dec 2014 B1
8930500 Marco et al. Jan 2015 B2
8930947 Derbeko et al. Jan 2015 B1
8935498 Natanzon Jan 2015 B1
8949180 Natanzon et al. Feb 2015 B1
8954673 Natanzon et al. Feb 2015 B1
8954796 Cohen et al. Feb 2015 B1
8959054 Natanzon Feb 2015 B1
8977593 Natanzon et al. Mar 2015 B1
8977826 Meiri et al. Mar 2015 B1
8996460 Frank et al. Mar 2015 B1
8996461 Natanzon et al. Mar 2015 B1
8996827 Natanzon Mar 2015 B1
9003138 Natanzon Apr 2015 B1
9026696 Natanzon et al. May 2015 B1
9031913 Natanzon May 2015 B1
9032160 Natanzon et al. May 2015 B1
9037818 Natanzon et al. May 2015 B1
9063994 Natanzon et al. Jun 2015 B1
9069479 Natanzon Jun 2015 B1
9069709 Natanzon et al. Jun 2015 B1
9081754 Natanzon et al. Jul 2015 B1
9081842 Natanzon et al. Jul 2015 B1
9087008 Natanzon Jul 2015 B1
9087112 Natanzon et al. Jul 2015 B1
9104529 Derbeko et al. Aug 2015 B1
9110914 Frank et al. Aug 2015 B1
9116811 Derbeko et al. Aug 2015 B1
9128628 Natanzon et al. Sep 2015 B1
9128855 Natanzon et al. Sep 2015 B1
9134914 Derbeko et al. Sep 2015 B1
9135119 Natanzon et al. Sep 2015 B1
9135120 Natanzon Sep 2015 B1
9146878 Cohen et al. Sep 2015 B1
9152339 Cohen et al. Oct 2015 B1
9152578 Saad et al. Oct 2015 B1
9152814 Natanzon Oct 2015 B1
9158578 Derbeko et al. Oct 2015 B1
9158630 Natanzon Oct 2015 B1
9160526 Raizen et al. Oct 2015 B1
9177670 Derbeko et al. Nov 2015 B1
9189339 Cohen et al. Nov 2015 B1
9189341 Natanzon et al. Nov 2015 B1
9201736 Moore et al. Dec 2015 B1
9223659 Natanzon et al. Dec 2015 B1
9225529 Natanzon et al. Dec 2015 B1
9235481 Natanzon et al. Jan 2016 B1
9235524 Derbeko et al. Jan 2016 B1
9235632 Natanzon Jan 2016 B1
9244997 Natanzon et al. Jan 2016 B1
9256605 Natanzon Feb 2016 B1
9274718 Natanzon et al. Mar 2016 B1
9275063 Natanzon Mar 2016 B1
9286052 Solan et al. Mar 2016 B1
9305009 Bono et al. Apr 2016 B1
9323750 Natanzon et al. Apr 2016 B2
9330155 Bono et al. May 2016 B1
9336094 Wolfson et al. May 2016 B1
9336230 Natanzon May 2016 B1
9367260 Natanzon Jun 2016 B1
9378096 Erel et al. Jun 2016 B1
9378219 Bono et al. Jun 2016 B1
9378261 Bono et al. Jun 2016 B1
9383937 Frank et al. Jul 2016 B1
9389800 Natanzon et al. Jul 2016 B1
9405481 Cohen et al. Aug 2016 B1
9405684 Derbeko et al. Aug 2016 B1
9405765 Natanzon Aug 2016 B1
9411535 Shemer et al. Aug 2016 B1
9459804 Natanzon et al. Oct 2016 B1
9460028 Raizen et al. Oct 2016 B1
9471579 Natanzon Oct 2016 B1
9477407 Marshak et al. Oct 2016 B1
9501542 Natanzon Nov 2016 B1
9507732 Natanzon et al. Nov 2016 B1
9507845 Natanzon et al. Nov 2016 B1
9514138 Natanzon et al. Dec 2016 B1
9524218 Veprinsky et al. Dec 2016 B1
9529885 Natanzon et al. Dec 2016 B1
9535800 Natanzon et al. Jan 2017 B1
9535801 Natanzon et al. Jan 2017 B1
9547459 BenHanokh et al. Jan 2017 B1
9547591 Natanzon et al. Jan 2017 B1
9552405 Moore et al. Jan 2017 B1
9557921 Cohen et al. Jan 2017 B1
9557925 Natanzon Jan 2017 B1
9563517 Natanzon et al. Feb 2017 B1
9563684 Natanzon et al. Feb 2017 B1
9575851 Natanzon et al. Feb 2017 B1
9575857 Natanzon Feb 2017 B1
9575894 Natanzon et al. Feb 2017 B1
9582382 Natanzon et al. Feb 2017 B1
9588703 Natanzon et al. Mar 2017 B1
9588847 Natanzon et al. Mar 2017 B1
9594822 Natanzon et al. Mar 2017 B1
9600377 Cohen et al. Mar 2017 B1
9619543 Natanzon et al. Apr 2017 B1
9632881 Natanzon Apr 2017 B1
9665305 Natanzon et al. May 2017 B1
9710177 Natanzon Jul 2017 B1
9720618 Panidis et al. Aug 2017 B1
9722788 Natanzon et al. Aug 2017 B1
9727429 Moore et al. Aug 2017 B1
9733969 Derbeko et al. Aug 2017 B2
9737111 Lustik Aug 2017 B2
9740572 Natanzon et al. Aug 2017 B1
9740573 Natanzon Aug 2017 B1
9740880 Natanzon et al. Aug 2017 B1
9749300 Cale et al. Aug 2017 B1
9772789 Natanzon et al. Sep 2017 B1
9798472 Natanzon et al. Oct 2017 B1
9798490 Natanzon Oct 2017 B1
9804934 Natanzon et al. Oct 2017 B1
9811431 Natanzon et al. Nov 2017 B1
9823865 Natanzon et al. Nov 2017 B1
9823973 Natanzon Nov 2017 B1
9832261 Don et al. Nov 2017 B2
9846698 Panidis et al. Dec 2017 B1
9875042 Natanzon et al. Jan 2018 B1
9875162 Panidis et al. Jan 2018 B1
20020129168 Kanai et al. Sep 2002 A1
20030048842 Fourquin et al. Mar 2003 A1
20030061637 Cha et al. Mar 2003 A1
20030110278 Anderson Jun 2003 A1
20030145317 Chamberlain Jul 2003 A1
20030196147 Hirata et al. Oct 2003 A1
20030217119 Raman Nov 2003 A1
20040024975 Morishita Feb 2004 A1
20040205092 Longo et al. Oct 2004 A1
20040250032 Ji et al. Dec 2004 A1
20040254964 Kodama et al. Dec 2004 A1
20050015663 Armangau et al. Jan 2005 A1
20050028022 Amano Feb 2005 A1
20050049924 DeBettencourt et al. Mar 2005 A1
20050172092 Lam et al. Aug 2005 A1
20050198083 Saika Sep 2005 A1
20050273655 Chow et al. Dec 2005 A1
20060031647 Hirakawa et al. Feb 2006 A1
20060047996 Anderson et al. Mar 2006 A1
20060064416 Sim-Tang Mar 2006 A1
20060107007 Hirakawa May 2006 A1
20060117211 Matsunami et al. Jun 2006 A1
20060161810 Bao Jul 2006 A1
20060179343 Kitamura Aug 2006 A1
20060195670 Iwamura et al. Aug 2006 A1
20070055833 Vu et al. Mar 2007 A1
20070180304 Kano Aug 2007 A1
20070198602 Ngo et al. Aug 2007 A1
20070198791 Iwamura et al. Aug 2007 A1
20130110966 Nagami May 2013 A1
Foreign Referenced Citations (2)
Number Date Country
1154356 Nov 2001 EP
WO 00 45581 Aug 2000 WO
Non-Patent Literature Citations (18)
Entry
U.S. Appl. No. 15/274,362, filed Sep. 23, 2016, Baruch et al.
U.S. Appl. No. 15/274,117, filed Sep. 23, 2016, Baruch.
U.S. Appl. No. 15/274,122, filed Sep. 23, 2016, Baruch et al.
U.S. Appl. No. 15/274,373, filed Sep. 23, 2016, Baruch et al.
U.S. Appl. No. 15/274,129, filed Sep. 23, 2016, Baruch et al.
U.S. Appl. No. 15/275,677, filed Sep. 23, 2016, Baruch et al.
Gibson, “Five Point Plan Lies at the Heart of Compression Technology;” Tech Talk; Apr. 29, 1991; 1 Page.
Soules et al., “Metadata Efficiency in Versioning File Systems;” 2nd USENIX Conference on File and Storage Technologies; Mar. 31, 2003-Apr. 2, 2003; 16 Pages.
AIX System Management Concepts: Operating Systems and Devices; Bull Electronics Angers; May 2000; 280 Pages.
Soules et al., “Metadata Efficiency in a Comprehensive Versioning File System;” May 2002; CMU-CS-02-145; School of Computer Science, Carnegie Mellon University; 33 Pages.
“Linux Filesystems,” Sams Publishing; 2002; Chapter 1: Introduction to Filesystems pp. 17-22 and Chapter 3: Overview of Journaling Filesystems pp. 67-71; 12 Pages.
Bunyan et al., “Multiplexing in a BrightStor® ARCserve® Backup Release 11;” Mar. 2004; 4 Pages.
Marks, “Network Computing, 33;” Cover Story; Feb. 2, 2006; 8 Pages.
Hill, “Network Computing, NA;” Cover Story; Jun. 8, 2006; 9 Pages.
Microsoft Computer Dictionary, Fifth Edition; 2002; 3 Pages.
Wikipedia; Retrieved on Mar. 29, 2011 from http://en.wikipedia.org/wiki/DEFLATE: Deflate; 6 Pages.
Wikipedia; Retrieved on Mar. 29, 2011 from http://en.wikipedia.org/wiki/Huffman_coding: Huffman Coding; 11 Pages.
Wikipedia; Retrieved on Mar. 29, 2011 from http://en.wikipedia.org/wiki/LZ77: LZ77 and LZ78; 2 Pages.