Computer data is vital to today's organizations and a significant part of protection against disasters is focused on data protection. Existing data protection systems may provide continuous data protection and snapshot-based replication.
A hypervisor is computer software, firmware, and/or hardware that creates and runs virtual machines (VMs). Hypervisors may provide the ability to generate snapshots of a VM's disks (VMDKs). Existing techniques for generating VMDK snapshots may degrade the performance of a production system.
Backup storage systems may include features such as de-duplication that increase the efficiency of storing VMDK snapshots. Some backup storage systems are limited in the number of VMDK snapshots that can be stored, or in how frequently new snapshots can be added.
Described herein are embodiments of systems and methods for generating snapshots of virtual machine disks (VMDK). In some embodiments, the VMDK snapshots can be generated while having little or no impact on production performance. In various embodiments, the rate at which VMDK snapshots can be generated exceeds the rate at which snapshots can be added to backup storage.
According to one aspect of the disclosure, a method comprises: generating a plurality of thin differential virtual machine disks (VMDKs) associated with a VMDK; receiving, during a first time period starting after a first point in time, one or more first I/Os to be written from a virtual machine (VM) to the VMDK; writing the first I/Os to a first one of the thin differential VMDKs; receiving, during a second time period starting after the first time period, one or more second I/Os to be written from the VM to the VMDK; writing the second I/Os to a second one of the thin differential VMDKs; and generating a second snapshot of the VMDK for a second point in time after the second time period by applying data written to the first and second thin differential VMDKs to a first snapshot of the VMDK for the first point in time.
In various embodiments, generating a second snapshot of the VMDK comprises: generating an aggregate differential VMDK using data written to the first and second thin differential VMDKs, and applying the aggregate differential VMDK to the first snapshot of the VMDK to generate the second snapshot of the VMDK. In one embodiment, generating a plurality of thin differential VMDKs comprises: generating a single VMDK file having each of the plurality of thin differential VMDKs at different offsets within the file. In some embodiments, generating a second snapshot of the VMDK comprises: applying some data written to the first thin differential VMDK to the first snapshot of the VMDK, and applying all data written to the second thin differential VMDK to the first snapshot of the VMDK.
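For illustration only, the merge-and-apply operation described above can be sketched by modeling each thin differential VMDK as a map from block offset to block data. The function names (`make_aggregate`, `apply_differential`) and the block granularity are assumptions for this sketch, not part of the disclosure:

```python
def make_aggregate(diff1: dict, diff2: dict) -> dict:
    """Merge two differentials; diff2 is newer, so its blocks win."""
    aggregate = dict(diff1)
    aggregate.update(diff2)  # second-period writes override first-period writes
    return aggregate

def apply_differential(snapshot: dict, diff: dict) -> dict:
    """Produce a new snapshot by overlaying changed blocks on the old one."""
    result = dict(snapshot)
    result.update(diff)
    return result

# First snapshot at a first point in time, plus changes from two periods.
snap_t1 = {0: b"AAAA", 1: b"BBBB", 2: b"CCCC"}
diff_a  = {1: b"bbbb"}               # writes during the first time period
diff_b  = {1: b"ZZZZ", 2: b"cccc"}   # writes during the second time period

snap_t2 = apply_differential(snap_t1, make_aggregate(diff_a, diff_b))
```

Because the second-period differential is applied last, a block written in both periods (block 1 here) keeps only its most recent value, which is why only some data from the first differential, but all data from the second, reaches the second snapshot.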
In certain embodiments, receiving the one or more first I/Os to be written from the VM to the VMDK comprises receiving the one or more first I/Os from a splitter. In one embodiment, the method further comprises storing the first snapshot in a backup storage system, and storing the second snapshot in a backup storage system. In various embodiments, generating the second snapshot of the VMDK comprises sending data written to the first and second thin differential VMDKs to the backup storage system. In certain embodiments, generating the plurality of thin differential VMDKs comprises thin provisioning a plurality of VMDKs.
In some embodiments, the method further comprises writing first metadata describing the first I/Os to a journal, writing second metadata describing the second I/Os to the journal, and determining the data written to the first and second thin differential VMDKs using the first and second metadata. In one embodiment, the method further comprises deleting the plurality of thin differential VMDKs after generating the second snapshot of the VMDK. In various embodiments, the method further comprises receiving a request to restore the VM to a point in time; determining a most recent VMDK snapshot prior to the requested point in time; determining if thin differential VMDKs are available covering a time period from the VMDK snapshot time to the restore time; if thin differential VMDKs are available, restoring the VM using the VMDK snapshot prior to the requested point in time and the available thin differential VMDKs; and if thin differential VMDKs are not available, restoring the VM using the most recent VMDK snapshot prior to the requested point in time.
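The restore decision described above can be sketched as follows, with times expressed as minutes and differential coverage as (start, end) windows. The function name `plan_restore` and the contiguity check are illustrative assumptions:

```python
def plan_restore(restore_time, snapshot_times, diff_windows):
    """Pick the newest snapshot at or before restore_time, then check whether
    thin differentials chain contiguously from that snapshot to restore_time.
    Times are minutes; diff_windows are (start, end) tuples."""
    base = max(t for t in snapshot_times if t <= restore_time)
    candidates = sorted((s, e) for s, e in diff_windows
                        if base <= s and e <= restore_time)
    covered = base
    chain = []
    for start, end in candidates:
        if start != covered:
            break                 # gap in coverage; cannot reach restore_time
        chain.append((start, end))
        covered = end
    if covered == restore_time:
        return base, chain        # restore from snapshot plus differentials
    return base, []               # fall back to the snapshot alone

# Snapshot at 13:00 (780); differentials covering 13:00-13:15 and 13:15-13:30.
base, diffs = plan_restore(810, [720, 780], [(780, 795), (795, 810)])
```

If any window in the chain is missing, the sketch falls back to the most recent snapshot alone, mirroring the two branches in the paragraph above.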
In some embodiments, the method further comprises receiving a request to restore the VM to a point in time; receiving an I/O request to read data from the VMDK; determining a time when the requested data was last changed prior to the requested point in time; determining if the requested data is available within a thin differential VMDK covering the time when the requested data was last changed prior to the requested point in time; if a thin differential VMDK is available, reading the requested data from the available thin differential VMDK; if a thin differential VMDK is not available, reading the requested data from a most recent VMDK snapshot prior to the last change time; and returning the requested data. In one embodiment, the method further comprises receiving an I/O request to write data to the VMDK, and writing the data to a thin differential VMDK.
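A minimal sketch of the read-resolution logic, searching thin differentials from newest to oldest and falling back to snapshot data (the representation of differentials as (start, end, blocks) tuples is an assumption of this sketch):

```python
def read_block(block, restore_time, diffs, snapshot):
    """Resolve a read against thin differentials, newest first.
    diffs: list of (start, end, {block: data}); snapshot: {block: data}."""
    for start, end, data in sorted(diffs, key=lambda d: d[1], reverse=True):
        if end <= restore_time and block in data:
            return data[block]    # most recent change before the restore point
    return snapshot[block]        # block unchanged since the snapshot

diffs = [(780, 795, {1: b"x"}), (795, 810, {1: b"y", 2: b"z"})]
snap = {1: b"a", 2: b"b", 3: b"c"}
```

For example, block 1 read at time 810 resolves to the second differential's value, while block 3, never rewritten, is served from the snapshot.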
According to another aspect of the disclosure, a system comprises one or more processors, a volatile memory, and a non-volatile memory storing computer program code that when executed on the one or more processors causes execution across the one or more processors of a process operable to perform embodiments of the method described hereinabove.
According to yet another aspect of the disclosure, a computer program product is tangibly embodied in a non-transitory computer-readable medium, the computer-readable medium storing program instructions that are executable to perform embodiments of the method described hereinabove.
The foregoing features may be more fully understood from the following description of the drawings in which:
The drawings are not necessarily to scale, or inclusive of all elements of a system, emphasis instead generally being placed upon illustrating the concepts, structures, and techniques sought to be protected herein.
Before describing embodiments of the concepts, structures, and techniques sought to be protected herein, some terms are explained. In some embodiments, the term “I/O request” or simply “I/O” may be used to refer to an input or output request. In some embodiments, an I/O request may refer to a data read or write request.
Referring to the embodiment of
In certain embodiments, Site I and Site II may be remote from one another. In other embodiments, the two sites may be local to one another. In particular embodiments, Site I and Site II may be connected via a local area network (LAN). In other embodiments, the two sites may be connected via a wide area network (WAN), such as the Internet.
In particular embodiments, the data protection system may include a failover mode of operation, wherein the direction of replicated data flow is reversed. In such embodiments, Site I may behave as a target side and Site II may behave as the source side. In some embodiments, failover may be triggered manually (e.g., by a user) or automatically. In many embodiments, failover may be performed in the event of a disaster at Site I. In some embodiments, both Site I and Site II may behave as source side for some stored data and may behave simultaneously as a target site for other stored data. In certain embodiments, a portion of stored data may be replicated from one site to the other, and another portion may not be replicated.
In some embodiments, Site I corresponds to a production site (e.g., a facility where one or more hosts run data processing applications that write data to a storage system and read data from the storage system) and Site II corresponds to a backup or replica site (e.g., a facility where replicated production site data is stored). In such embodiments, Site II may be responsible for replicating production site data and may enable rollback of Site I data to an earlier point in time. In many embodiments, rollback may be used in the event of data corruption or a disaster, or alternatively in order to view or to access data from an earlier point in time.
Referring again to
Referring again to
In the embodiment of
Referring back to
Referring again to
Referring back to
In some embodiments, a DPA may be a cluster of such computers. In many embodiments, a cluster may ensure that if a DPA computer is down, then the DPA functionality switches over to another computer. In some embodiments, computers within a DPA cluster may communicate with one another using at least one communication link suitable for data transfer via fiber channel or IP based protocols, or such other transfer protocol. In certain embodiments, one computer from the DPA cluster may serve as the DPA leader that coordinates other computers in the cluster, and may also perform other tasks that require coordination between the computers, such as load balancing.
In certain embodiments, a DPA may be a standalone device integrated within a SAN. In other embodiments, a DPA may be integrated into a storage system. In some embodiments, the DPAs communicate with their respective hosts through communication lines such as fiber channels using, for example, SCSI commands or any other protocol.
In various embodiments, the DPAs may be configured to act as initiators in the SAN. For example, the DPAs may issue I/O requests to access LUs on their respective storage systems. In some embodiments, each DPA may also be configured with the necessary functionality to act as targets, e.g., to reply to I/O requests, such as SCSI commands, issued by other initiators in the SAN, including their respective hosts. In certain embodiments, the DPAs, acting as target nodes, may dynamically expose or remove one or more LUs.
Referring again to
In the embodiment of
In various embodiments, a protection agent may change its behavior for handling SCSI commands, for example as a result of an instruction received from the DPA. In certain embodiments, the behavior of a protection agent for a certain host device may depend on the behavior of its associated DPA with respect to the LU of the host device. In some embodiments, when a DPA behaves as a source site DPA for a certain LU, then during normal course of operation, the associated protection agent may split I/O requests issued by a host to the host device corresponding to that LU. In particular embodiments, when a DPA behaves as a target side DPA for a certain LU, then during normal course of operation, the associated protection agent may fail I/O requests issued by the host to the host device corresponding to that LU.
Referring back to
In certain embodiments, protection agents may be drivers located in their respective hosts. In other embodiments, a protection agent may be located in a fiber channel switch or in any other device situated in a data path between a host and a storage system or on the storage system itself. In some embodiments, the protection agent may run at the hypervisor layer or in a virtual machine providing a virtualization layer.
Referring again to
In the embodiment of
In one embodiment, the journal processor may be configured to perform processing described in U.S. Pat. No. 7,516,287, titled “METHODS AND APPARATUS FOR OPTIMAL JOURNALING FOR CONTINUOUS DATA REPLICATION,” issued Apr. 7, 2009, which is hereby incorporated by reference.
Embodiments of the data replication system may be provided as physical systems for the replication of physical LUs, or as virtual systems for the replication of virtual LUs. In one embodiment, a hypervisor may consume LUs and may generate a distributed file system, such as VMFS, on the logical units; the hypervisor generates files in the file system and exposes the files as LUs to the virtual machines (each virtual machine disk is seen as a SCSI device by virtual hosts). In another embodiment, a hypervisor may consume a network based file system and expose files in the NFS as SCSI devices to virtual hosts.
Referring back to
When source DPA 112 receives a replicated I/O request from protection agent 144, source DPA 112 may transmit certain I/O information characterizing the write request, packaged as a “write transaction”, over WAN 128 to the target DPA 124 for journaling and for incorporation within target storage system 120. When applying write operations to storage system 120, the target DPA 124 may act as an initiator, and may send SCSI commands to LU 156 (“LU B”).
The source DPA 112 may send its write transactions to target DPA 124 using a variety of modes of transmission, including inter alia (i) a synchronous mode, (ii) an asynchronous mode, and (iii) a batch mode. In synchronous mode, the source DPA 112 may send each write transaction to the target DPA 124, may receive back an acknowledgement from the target DPA 124, and in turn may send an acknowledgement back to protection agent 144.
In synchronous mode, protection agent 144 may wait until receipt of such acknowledgement before sending the I/O request to LU 136. In asynchronous mode, the source DPA 112 may send an acknowledgement to protection agent 144 upon receipt of each I/O request, before receiving an acknowledgement back from target DPA 124.
In batch mode, the source DPA 112 may receive several I/O requests and combine them into an aggregate “batch” of write activity performed in the multiple I/O requests, and may send the batch to the target DPA 124, for journaling and for incorporation in target storage system 120. In batch mode, the source DPA 112 may send an acknowledgement to protection agent 144 upon receipt of each I/O request, before receiving an acknowledgement back from the target DPA 124.
As discussed above, in normal operation, LU B 156 may be used as a backup of LU A 136. As such, while data written to LU A by host 104 is replicated from LU A to LU B, the target host 116 should not send I/O requests to LU B. To prevent such I/O requests from being sent, protection agent 164 may act as a target side protection agent for host device B 160 and may fail I/O requests sent from host 116 to LU B 156 through host device B 160.
Still referring to
In various embodiments, the source storage array 108 may not have snapshot replication capability. In other embodiments, the source storage array 108 may have snapshot replication capability, however this feature may negatively affect production performance (e.g., I/O performance between the source host 104 and the source storage array 108). In particular embodiments, the data protection system 100 may utilize structures and techniques described below in conjunction with
Referring to the embodiment of
Referring briefly to both
Since the journal contains the “undo” information necessary to rollback storage system 120, data that was stored in specific memory locations at a specified point in time may be obtained by undoing write transactions that occurred subsequent to such point in time.
Each of the four streams may hold a plurality of write transaction data. As write transactions are received dynamically by the target DPA, the write transactions may be recorded at the end of the DO stream and the end of the DO METADATA stream, prior to committing the transaction.
In some embodiments, a metadata stream (e.g., UNDO METADATA stream or the DO METADATA stream) and the corresponding data stream (e.g., UNDO stream or DO stream) may be kept in a single stream by interleaving metadata and data.
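For illustration only, the interplay of the DO/UNDO data and metadata streams can be sketched as a toy in-memory journal. The class and method names are assumed, and a real journal would persist the streams on storage rather than in Python lists:

```python
class Journal:
    """Toy journal with DO/UNDO data and metadata streams."""
    def __init__(self, storage):
        self.storage = storage          # block -> data
        self.do, self.do_meta = [], []
        self.undo, self.undo_meta = [], []

    def record(self, txn_id, block, data):
        # New write transactions land at the end of the DO streams first.
        self.do.append(data)
        self.do_meta.append((txn_id, block))

    def commit(self):
        # Save the overwritten data to the UNDO streams, then apply the write.
        (txn_id, block), data = self.do_meta.pop(0), self.do.pop(0)
        self.undo.append(self.storage.get(block))   # previous contents
        self.undo_meta.append((txn_id, block))
        self.storage[block] = data

    def rollback_one(self):
        # Undo the most recent committed transaction.
        txn_id, block = self.undo_meta.pop()
        self.storage[block] = self.undo.pop()

j = Journal({5: "old"})
j.record("t1", 5, "new")
j.commit()          # storage now holds "new"; "old" saved in UNDO stream
j.rollback_one()    # storage restored to "old"
```

Rolling back repeatedly pops UNDO entries newest-first, which is how data at a specified earlier point in time can be obtained by undoing subsequent write transactions.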
Referring to
The production host 302 includes one or more virtual machines (VMs) 308, with two VMs 308a and 308b shown in
The IDPA 304 includes the DPA 316, a snapshot replication module 324, and a de-stage datastore 322. In some embodiments, the splitter may be the same as or similar to data protection agent 144 in
Referring again to
In certain embodiments, the backup storage system may be remote to the data protection system. In other embodiments, the backup storage system may be local to the data protection system. In some embodiments, the backup storage system is a de-duplicating storage system. In one embodiment, the backup storage system may be provided as EMC® DATA DOMAIN®.
In many embodiments, the IDPA de-stage datastore may be provided as flash-based storage, allowing fast random I/O access. In some embodiments, the backup storage system is provided as spindle-based storage or another type of storage that has relatively slow I/O access compared with flash storage.
In various embodiments, there may be a limit on how frequently full VMDK snapshots can be added to backup storage (e.g., due to limited network connectivity, bandwidth costs, and/or storage limits). In one embodiment, new snapshots can be added to backup storage at most once per hour. In some embodiments, in order to achieve more granular snapshots, the data protection system may utilize differential VMDK snapshots, as described herein below.
Referring back to
Referring again to
As illustrated by the embodiment of
In some embodiments, the IDPA stores multiple thin differential VMDKs within a single VMDK file, wherein each of the thin differential VMDKs is located at a known offset within the single VMDK file. In certain embodiments, this approach may be used to reduce the number of VMDK files created within the IDPA's storage.
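The known-offset layout can be sketched with fixed-size regions inside one file; the fixed per-differential capacity and the helper names are assumptions of this sketch, since the disclosure does not specify the layout arithmetic:

```python
import io

DIFF_SIZE = 16  # assumed fixed capacity per thin differential, in bytes

def diff_offset(index: int) -> int:
    """Byte offset where the index-th thin differential begins in the file."""
    return index * DIFF_SIZE

def write_to_diff(f, index: int, rel_offset: int, data: bytes) -> None:
    """Write data at rel_offset inside the index-th differential region."""
    f.seek(diff_offset(index) + rel_offset)
    f.write(data)

f = io.BytesIO(bytes(3 * DIFF_SIZE))   # one file holding three differentials
write_to_diff(f, 1, 4, b"abcd")        # lands at byte 20 of the single file
```

Because each differential sits at a deterministic offset, the IDPA needs only one file handle per VMDK rather than one file per differential, which is the file-count reduction the paragraph above describes.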
In the embodiment of
Referring back to
In the embodiment of
In some embodiments, the journal may be stored on separate physical and/or logical storage from the differential VMDKs.
Referring again to
In various embodiments, the DPA can operate in asynchronous mode. In such embodiments, when an I/O is received from the splitter, the DPA may buffer the I/O and send an acknowledgement back to the splitter; the DPA can then asynchronously process the buffered I/Os, applying changes to the active differential VMDK and DMS. In some embodiments, the DPA can operate in either asynchronous or synchronous mode.
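A minimal, single-threaded sketch of the asynchronous mode follows; a real DPA would drain the buffer on a background thread, and `AsyncDPA`, `on_io`, and `drain` are hypothetical names for this illustration:

```python
from collections import deque

class AsyncDPA:
    """Sketch of asynchronous I/O handling: acknowledge first, apply later."""
    def __init__(self, active_diff, dms):
        self.buffer = deque()
        self.active_diff = active_diff  # thin differential VMDK (block -> data)
        self.dms = dms                  # differential metadata stream (changed blocks)

    def on_io(self, block, data):
        self.buffer.append((block, data))
        return "ack"                    # splitter is unblocked immediately

    def drain(self):
        # Asynchronously apply buffered I/Os to the active differential
        # VMDK and record the changed locations in the DMS.
        while self.buffer:
            block, data = self.buffer.popleft()
            self.active_diff[block] = data
            self.dms.add(block)
```

The early acknowledgement is what keeps production I/O latency low: the splitter never waits for the differential VMDK or DMS writes to complete.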
In some embodiments, the de-stage datastore may be physically separate from the production datastore, so that writing to the differential VMDKs and journal will not affect production performance (e.g., I/O performance between the VMs 308 and the corresponding VMDKs 310).
Referring back to
The snapshot replication module 324 can send the aggregate differential VMDK 332 to the backup storage system 306, which generates a new full VMDK snapshot 326 based on the received differential VMDK and the previous snapshot 326. In some embodiments, the rate at which the snapshot replication module generates thin differential VMDKs may exceed the rate at which backup storage can process a differential VMDK and generate a new snapshot based thereon.
Referring to the embodiment of
In some embodiments, the snapshot replication module may delete thin differential VMDKs and/or DMS's after a new aggregate differential VMDK is generated (or after a new VMDK snapshot is generated in backup storage). In certain embodiments, the snapshot replication module may generate new thin differential VMDKs and/or DMS's after a new aggregate differential VMDK is generated (or after a new VMDK snapshot is generated in backup storage). In a certain embodiment, thin differential VMDKs may be retained after a corresponding aggregate differential VMDK is generated and/or after a corresponding VMDK snapshot is generated. In some embodiments, this may increase backup granularity by allowing the VM to be restored to points in time between consecutive VMDK snapshots. For example, if full VMDK snapshots are taken every hour, the system may delete thin differential VMDKs every other hour, so that if there are VMDK snapshots at 13:00 and 14:00 in backup storage, and differential VMDKs from 13:00-13:15, 13:15-13:30, 13:30-13:45, and 13:45-14:00 in the IDPA, then those differential VMDKs will not be deleted until time 15:00. In some embodiments, the techniques described herein allow for point-in-time recovery every fifteen minutes going back at least one hour.
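The retention rule in the example above can be sketched as: a thin differential becomes deletable once a later full snapshot covers it and that snapshot has itself aged past a retention window. The one-hour retention value and function name are assumptions chosen to match the 13:00/15:00 example:

```python
def deletable_diffs(diff_windows, snapshot_times, now, retention=60):
    """Return the thin differential windows that may be deleted at `now`.
    A differential is deletable once a later full snapshot covers it and
    that snapshot is at least `retention` minutes old (times in minutes)."""
    out = []
    for start, end in diff_windows:
        covering = [t for t in snapshot_times if t >= end]
        if covering and now - min(covering) >= retention:
            out.append((start, end))
    return out

# Differentials 13:00-14:00 (780-840), snapshots at 13:00 and 14:00.
diffs = [(780, 795), (795, 810), (810, 825), (825, 840)]
snaps = [780, 840]
```

At 15:00 (900) every differential's covering snapshot (14:00) is an hour old, so all four may be deleted; at 14:30 (870) none may be, matching the example.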
Alternatively, the processing and decision blocks may represent steps performed by functionally equivalent circuits such as a digital signal processor (DSP) circuit or an application specific integrated circuit (ASIC). The flow diagrams do not depict the syntax of any particular programming language but rather illustrate the functional information one of ordinary skill in the art requires to fabricate circuits or to generate computer software to perform the processing required of the particular apparatus. It should be noted that many routine program elements, such as initialization of loops and variables and the use of temporary variables may be omitted for clarity. The particular sequence of blocks described is illustrative only and can be varied without departing from the spirit of the concepts, structures, and techniques sought to be protected herein. Thus, unless otherwise stated, the blocks described below are unordered meaning that, when possible, the functions represented by the blocks can be performed in any convenient or desirable order.
Referring to
Referring back to
Referring again to
Referring back to
In some embodiments, the written-to thin differential VMDK may be a thin differential VMDK that stores all changes for the first time period. In certain embodiments, a DPA sends an acknowledgement to a splitter before writing an I/O to the thin differential VMDK (i.e., the DPA may process writes asynchronously).
Referring again to
Referring back to
At block 414, the changes made to the VMDK between the first point in time and a second point in time after the second time period may be determined. This determination may utilize the first and second metadata generated at blocks 408 and 412, respectively. In some embodiments, determining the changes made to the VMDK includes consolidating the list of changed locations within a plurality of DMS's.
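The consolidation step can be sketched by unioning the changed-block sets recorded in each DMS and coalescing adjacent blocks into contiguous extents; the set-of-blocks representation and extent output format are assumptions of this sketch:

```python
def consolidate_dms(dms_list):
    """Union the changed-block sets from several DMS's and merge adjacent
    blocks into contiguous (start, length) extents."""
    blocks = sorted(set().union(*dms_list))
    extents, start, prev = [], None, None
    for b in blocks:
        if start is None:
            start = prev = b            # open the first extent
        elif b == prev + 1:
            prev = b                    # extend the current extent
        else:
            extents.append((start, prev - start + 1))
            start = prev = b            # start a new extent
    if start is not None:
        extents.append((start, prev - start + 1))
    return extents
```

For instance, DMS's recording blocks {1, 2, 5} and {3, 9} consolidate to extents (1, 3), (5, 1), and (9, 1), so each changed region is read once when building the aggregate differential.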
Referring again to
In some embodiments, the first and second thin differential VMDKs may be deleted after the second snapshot is generated. In other embodiments, they may be retained to provide granular VM data recovery for points in time before the second snapshot was taken. In certain embodiments, a new plurality of thin differential VMDKs may be generated after the second snapshot is generated (e.g., the method 400 in
The embodiment of the method shown in
Referring to
At block 434, the most recent VMDK snapshot prior to the restore time (or, in some embodiments, prior to or equal to the restore time) within the backup storage is determined. For example, if the requested restore time is 13:30 on a given date and VMDK snapshots are generated hourly, the snapshot at 13:00 for that date may be determined.
At block 436, a determination may be made as to whether thin differential VMDKs are available for the time period that covers the restore time. Continuing the example above, if thin differential VMDKs are generated for fifteen (15) minute periods, then the thin differential VMDKs for the periods from 13:00-13:15 and 13:15-13:30 may be determined.
Referring back to
In some embodiments, restoring the VM may include overwriting data within the VM's existing VMDK. In other embodiments, a new VMDK may be generated and swapped in for use by the VM.
In some embodiments, thin differential VMDKs may be retained so as to provide a desired level of data recovery granularity.
Referring to
At block 464, an I/O request may be received to read data. In some embodiments, the I/O request is intercepted by a protection agent within a production host (e.g., protection agent 314 in
Referring back to
Referring again to
If the requested data is not available within a thin differential VMDK, then the requested data may be read from the most recent VMDK snapshot prior to the last change time and returned (blocks 472 and 474). In some embodiments, this includes reading snapshot data from backup storage.
In some embodiments, the method 460 can also handle I/O writes. For example, if an I/O write is received while the VM is in recovery mode (but without the VMDK being restored to primary storage), then the write may be applied to a new thin differential VMDK.
Processing may be implemented in hardware, software, or a combination of the two. In various embodiments, processing is provided by computer programs executing on programmable computers/machines that each includes a processor, a storage medium or other article of manufacture that is readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and one or more output devices. Program code may be applied to data entered using an input device to perform processing and to generate output information.
The system can perform processing, at least in part, via a computer program product, (e.g., in a machine-readable storage device), for execution by, or to control the operation of, data processing apparatus (e.g., a programmable processor, a computer, or multiple computers). Each such program may be implemented in a high level procedural or object-oriented programming language to communicate with a computer system. However, the programs may be implemented in assembly or machine language. The language may be a compiled or an interpreted language and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network. A computer program may be stored on a storage medium or device (e.g., CD-ROM, hard disk, or magnetic diskette) that is readable by a general or special purpose programmable computer for configuring and operating the computer when the storage medium or device is read by the computer. Processing may also be implemented as a machine-readable storage medium, configured with a computer program, where upon execution, instructions in the computer program cause the computer to operate. The program logic may be run on a physical or virtual processor. The program logic may be run across one or more physical or virtual processors.
Processing may be performed by one or more programmable processors executing one or more computer programs to perform the functions of the system. All or part of the system may be implemented as special purpose logic circuitry (e.g., an FPGA (field programmable gate array) and/or an ASIC (application-specific integrated circuit)).
All references cited herein are hereby incorporated herein by reference in their entirety.
Having described certain embodiments, which serve to illustrate various concepts, structures, and techniques sought to be protected herein, it will be apparent to those of ordinary skill in the art that other embodiments incorporating these concepts, structures, and techniques may be used. Elements of different embodiments described hereinabove may be combined to form other embodiments not specifically set forth above and, further, elements described in the context of a single embodiment may be provided separately or in any suitable sub-combination. Accordingly, it is submitted that the scope of protection sought herein should not be limited to the described embodiments but rather should be limited only by the spirit and scope of the following claims.
| 9535801 | Natanzon et al. | Jan 2017 | B1 |
| 9547459 | BenHanokh et al. | Jan 2017 | B1 |
| 9547591 | Natanzon et al. | Jan 2017 | B1 |
| 9552405 | Moore et al. | Jan 2017 | B1 |
| 9557921 | Cohen et al. | Jan 2017 | B1 |
| 9557925 | Natanzon | Jan 2017 | B1 |
| 9563517 | Natanzon et al. | Feb 2017 | B1 |
| 9563684 | Natanzon et al. | Feb 2017 | B1 |
| 9575851 | Natanzon et al. | Feb 2017 | B1 |
| 9575857 | Natanzon | Feb 2017 | B1 |
| 9575894 | Natanzon et al. | Feb 2017 | B1 |
| 9582382 | Natanzon et al. | Feb 2017 | B1 |
| 9588703 | Natanzon et al. | Mar 2017 | B1 |
| 9588847 | Natanzon et al. | Mar 2017 | B1 |
| 9594822 | Natanzon et al. | Mar 2017 | B1 |
| 9600377 | Cohen et al. | Mar 2017 | B1 |
| 9619543 | Natanzon et al. | Apr 2017 | B1 |
| 9632881 | Natanzon | Apr 2017 | B1 |
| 9665305 | Natanzon et al. | May 2017 | B1 |
| 9699252 | Antony | Jul 2017 | B2 |
| 9710177 | Natanzon | Jul 2017 | B1 |
| 9720618 | Panidis et al. | Aug 2017 | B1 |
| 9722788 | Natanzon et al. | Aug 2017 | B1 |
| 9727429 | Moore et al. | Aug 2017 | B1 |
| 9733969 | Derbeko et al. | Aug 2017 | B2 |
| 9737111 | Lustik | Aug 2017 | B2 |
| 9740572 | Natanzon et al. | Aug 2017 | B1 |
| 9740573 | Natanzon | Aug 2017 | B1 |
| 9740880 | Natanzon et al. | Aug 2017 | B1 |
| 9749300 | Cale et al. | Aug 2017 | B1 |
| 9772789 | Natanzon et al. | Sep 2017 | B1 |
| 9798472 | Natanzon et al. | Oct 2017 | B1 |
| 9798490 | Natanzon | Oct 2017 | B1 |
| 9804934 | Natanzon et al. | Oct 2017 | B1 |
| 9811431 | Natanzon et al. | Nov 2017 | B1 |
| 9823865 | Natanzon et al. | Nov 2017 | B1 |
| 9823973 | Natanzon | Nov 2017 | B1 |
| 9832261 | Don et al. | Nov 2017 | B2 |
| 9846698 | Panidis et al. | Dec 2017 | B1 |
| 9875042 | Natanzon et al. | Jan 2018 | B1 |
| 9875162 | Panidis et al. | Jan 2018 | B1 |
| 9880777 | Bono et al. | Jan 2018 | B1 |
| 9881014 | Bono et al. | Jan 2018 | B1 |
| 9910620 | Veprinsky et al. | Mar 2018 | B1 |
| 9910621 | Golan et al. | Mar 2018 | B1 |
| 9910735 | Natanzon | Mar 2018 | B1 |
| 9910739 | Natanzon et al. | Mar 2018 | B1 |
| 9917854 | Natanzon et al. | Mar 2018 | B2 |
| 9921955 | Derbeko et al. | Mar 2018 | B1 |
| 9933957 | Cohen et al. | Apr 2018 | B1 |
| 9934302 | Cohen et al. | Apr 2018 | B1 |
| 9940205 | Natanzon | Apr 2018 | B2 |
| 9940460 | Derbeko et al. | Apr 2018 | B1 |
| 9946649 | Natanzon et al. | Apr 2018 | B1 |
| 9959061 | Natanzon et al. | May 2018 | B1 |
| 9965306 | Natanzon et al. | May 2018 | B1 |
| 9990256 | Natanzon | Jun 2018 | B1 |
| 9996539 | Natanzon | Jun 2018 | B1 |
| 10007626 | Saad et al. | Jun 2018 | B1 |
| 10019194 | Baruch et al. | Jul 2018 | B1 |
| 10025931 | Natanzon et al. | Jul 2018 | B1 |
| 10031675 | Veprinsky et al. | Jul 2018 | B1 |
| 10031690 | Panidis et al. | Jul 2018 | B1 |
| 10031692 | Elron et al. | Jul 2018 | B2 |
| 10031703 | Natanzon et al. | Jul 2018 | B1 |
| 10037251 | Bono et al. | Jul 2018 | B1 |
| 10042579 | Natanzon | Aug 2018 | B1 |
| 10042751 | Veprinsky et al. | Aug 2018 | B1 |
| 10055146 | Natanzon et al. | Aug 2018 | B1 |
| 10055148 | Natanzon et al. | Aug 2018 | B1 |
| 10061666 | Natanzon et al. | Aug 2018 | B1 |
| 10067694 | Natanzon et al. | Sep 2018 | B1 |
| 10067837 | Natanzon et al. | Sep 2018 | B1 |
| 10078459 | Natanzon et al. | Sep 2018 | B1 |
| 10082980 | Cohen et al. | Sep 2018 | B1 |
| 10083093 | Natanzon et al. | Sep 2018 | B1 |
| 10095489 | Lieberman et al. | Oct 2018 | B1 |
| 10101943 | Ayzenberg et al. | Oct 2018 | B1 |
| 20060069865 | Kawamura et al. | Mar 2006 | A1 |
| 20070113004 | Sugimoto et al. | May 2007 | A1 |
| 20070156984 | Ebata | Jul 2007 | A1 |
| 20070180208 | Yamasaki | Aug 2007 | A1 |
| 20100058011 | Satoyama | Mar 2010 | A1 |
| 20100250880 | Mimatsu | Sep 2010 | A1 |
| 20100299368 | Hutchins et al. | Nov 2010 | A1 |
| 20140095823 | Shaikh et al. | Apr 2014 | A1 |
| 20150373102 | Antony | Dec 2015 | A1 |
| 20150378636 | Yadav et al. | Dec 2015 | A1 |
| 20160378527 | Zamir | Dec 2016 | A1 |
| 20160378528 | Zamir | Dec 2016 | A1 |
| Other Publications |
|---|
| U.S. Appl. No. 14/979,897, filed Dec. 28, 2015, Natanzon et al. |
| EMC Corporation, “EMC Recoverpoint/Ex;” Applied Technology; White Paper; Apr. 2012; 17 Pages. |
| U.S. Non-Final Office Action dated Mar. 14, 2018 for U.S. Appl. No. 14/979,897; 33 Pages. |
| Notice of Allowance dated Aug. 8, 2018 for U.S. Appl. No. 14/979,897; 7 Pages. |
| Response to U.S. Non-Final Office Action dated Mar. 14, 2018 for U.S. Appl. No. 14/979,897; Response filed Jun. 13, 2018; 13 Pages. |