Compressing memory snapshots

Abstract
A computer program product, system, and method for compressing memory snapshots comprises receiving a request to generate a memory snapshot for a virtual machine (VM), copying the VM's memory to generate a memory snapshot, obtaining information about cache structures within the memory snapshot, invalidating one or more of the cache structures and zeroing out corresponding cache data within the memory snapshot, and storing the memory snapshot to storage.
Description
BACKGROUND

A hypervisor is computer software, firmware, and/or hardware that creates and runs virtual machines (VMs). Hypervisors may support two different types of virtual machine snapshots: with memory and without memory. A snapshot with memory (a “memory snapshot”) includes both a snapshot of a VM's storage and a snapshot of the VM's memory at a given point in time. A snapshot without memory (a “nonmemory snapshot”) includes VM storage but not memory. Snapshots with memory can be used to restore the state of a VM faster than snapshots without memory, as they allow the VM's guest operating system (OS) to resume without having to perform its normal boot process. Snapshots with memory may reduce startup time by several minutes, particularly for virtualized servers. Existing memory snapshots may be quite large (e.g., 128-1024 GB or larger).


SUMMARY

Described herein are embodiments of systems and methods for decreasing the size of VM memory snapshots. In some embodiments, the described systems and methods can significantly decrease the size of memory snapshots while incurring only a slight performance penalty. In various embodiments, the tradeoff between memory snapshot size and performance is configurable.


According to one aspect of the disclosure, a method comprises: receiving a request to generate a memory snapshot for a virtual machine (VM); copying the VM's memory to generate a memory snapshot; obtaining information about cache structures within the memory snapshot; invalidating one or more of the cache structures and zeroing out corresponding cache data within the memory snapshot; and storing the memory snapshot to storage.


In various embodiments, obtaining information about cache structures within the memory snapshot includes obtaining information about pages used by a filesystem cache or a buffer cache. In certain embodiments, invalidating the one or more cache structures comprises erasing the cache structures from cache. In some embodiments, erasing the cache structures from cache comprises setting an invalid bit within each of the one or more cache structures.


In certain embodiments, obtaining information about cache structures within the memory snapshot includes obtaining information about cache structures used by application processes running within the VM. In one embodiment, invalidating the one or more cache structures comprises selecting the one or more cache structures using a least-recently used (LRU) heuristic. In various embodiments, obtaining information about the cache structures within the memory snapshot comprises using a driver specific to a guest operating system (OS) of the VM.


In some embodiments, the method further comprises compressing the memory snapshot after invalidating one or more of the cache structures and zeroing out corresponding cache data within the memory snapshot, wherein storing the memory snapshot to storage comprises storing the compressed memory snapshot to storage. In one embodiment, storing the memory snapshot to storage comprises storing the compressed memory snapshot to a deduplicated storage system. In certain embodiments, the method further comprises retrieving the memory snapshot from storage, and restoring the VM using the retrieved memory snapshot.


According to another aspect of the disclosure, a system comprises one or more processors; a volatile memory; and a non-volatile memory storing computer program code that when executed on the processor causes execution across the one or more processors of a process operable to perform embodiments of the method described hereinabove.


According to yet another aspect of the disclosure, a computer program product tangibly embodied in a non-transitory computer-readable medium, the computer-readable medium storing program instructions that are executable to perform embodiments of the method described hereinabove.





BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing features may be more fully understood from the following description of the drawings in which:



FIG. 1 is a block diagram of a system for compressing memory snapshots, according to an embodiment of the disclosure;



FIG. 2 is a diagram of an illustrative memory snapshot, according to an embodiment of the disclosure;



FIG. 3 is a flow diagram of a method for compressing memory snapshots, according to an embodiment of the disclosure; and



FIG. 4 is a block diagram of a computer on which the method of FIG. 3 may be implemented, according to an embodiment of the disclosure.





The drawings are not necessarily to scale, or inclusive of all elements of a system, emphasis instead generally being placed upon illustrating the concepts, structures, and techniques sought to be protected herein.


DETAILED DESCRIPTION

Before describing embodiments of the concepts, structures, and techniques sought to be protected herein, some terms are explained. In some embodiments, the term “I/O request” or simply “I/O” may be used to refer to an input or output request. In some embodiments, an I/O request may refer to a data read or write request.


Referring to the embodiment of FIG. 1, a system 300 for compressing memory snapshots includes a host 302, and primary 304 and secondary storage systems 305 coupled thereto. The host 302 includes one or more virtual machines (VMs) 306 managed by a hypervisor 308. Each VM 306 includes a guest operating system (OS) 310, one or more applications 312 that can run on the guest OS 310, and memory 314 that may be used by the guest OS 310 and the applications 312. VM memory 314 may correspond to physical host memory resources (e.g., RAM and/or disk-backed virtual memory) allocated to the VM by the hypervisor 308.


The system 300 includes a VM file system (VMFS) 316, managed by the hypervisor 308, that stores files within the primary storage system 304. In the embodiment of FIG. 1, the VMFS 316 may be stored within one or more LUs (e.g., LU 304a) within primary storage 304 and can include one or more VM disks (VMDKs) 320. In some embodiments, a VMDK is a file within the VMFS used by a corresponding VM to store data used by its OS and applications. In many embodiments, a VM may have multiple VMDKs, one for each disk used by the VM.


Referring again to FIG. 1, the secondary storage system 305 may store one or more VM snapshots 322. A VM snapshot 322 may include one or more files that represent the data and/or state of a VM at a specific point in time. A given VM snapshot 322 may be a memory snapshot (i.e., disk and memory) or a nonmemory snapshot (i.e., disk only). In some embodiments, a VM snapshot is implemented as multiple files within the VMFS: (1) a collection of VMDK (or delta VMDK) files for the virtual disks connected to the VM at the time of the snapshot; (2) a database of the VM's snapshot information (e.g., a file having line entries which define the relationships between snapshots as well as the child disks for each snapshot); and (3) a file that includes the current configuration and, in the case of a memory snapshot, the active state of the VM (e.g., a copy of the VM's memory).


In some embodiments, the primary storage system may be a storage array having one or more logical units (LUs) (e.g., LU 304a). In certain embodiments, the primary storage system may correspond to a disk (or a disk array) directly attached to the host. In other embodiments, the primary storage system may be coupled to the host via a storage area network (SAN). In certain embodiments, the primary storage system may be an EMC® VMAX® system. In particular embodiments, the secondary storage system may be a deduplicated storage system, such as an EMC® DATADOMAIN® system.


Referring again to FIG. 1, the hypervisor 308 includes a snapshot manager 318 configured to generate VM snapshots 322, including memory snapshots and nonmemory snapshots. In some embodiments, memory snapshots may have OS-specific formatting and the snapshot manager may include one or more OS-specific drivers to perform at least a portion of the post-processing; the appropriate driver may be selected based on the VM's guest OS. In certain embodiments, the snapshot manager may be configured to perform at least some of the processing described further below in conjunction with FIGS. 2 and 3.


Referring to FIG. 2, a VM memory snapshot 400 includes state 402 and cache 404, according to an embodiment. State 402 corresponds to memory contents utilized by the guest OS, system processes, and application processes during normal operation. State includes memory contents required to restore the state of the VM to the point in time when a snapshot was generated. In one embodiment, state may include text and data pages used by the VM's guest OS and processes. Referring again to FIG. 2, cache 404 corresponds to memory used by various caching subsystems within the VM. In some embodiments, cache includes pages maintained by a filesystem cache within the guest OS. In many embodiments, cache can improve system performance (e.g., by reducing I/O), but is not relied upon by system- or application-level processes to provide correct operation. For example, filesystem cache data can be readily invalidated without causing processes that use the filesystem to operate incorrectly. In many embodiments, particularly where the VM is configured as a server, the cache portion of a memory snapshot may be significantly larger than the state portion.


In various embodiments, the size of the memory snapshot can be reduced by overwriting at least a portion of cache with zeros (or another constant value), invalidating corresponding cache structures, and then compressing the memory snapshot.
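As a concrete illustration of the zero-filling step, the following Python sketch overwrites known cache regions of a raw snapshot buffer with zeros. The `(offset, length)` region list and the function name are illustrative assumptions for this sketch, not part of any hypervisor's actual interface.

```python
# Hypothetical sketch: zero out cache regions of a raw memory snapshot,
# assuming the regions have already been located as (offset, length) pairs.

def zero_cache_regions(snapshot: bytearray, cache_regions) -> bytearray:
    """Overwrite each cache region with zeros, leaving state pages intact."""
    for offset, length in cache_regions:
        snapshot[offset:offset + length] = bytes(length)  # constant (zero) fill
    return snapshot

# Tiny synthetic snapshot: state bytes surrounding a 9-byte cache region.
snapshot = bytearray(b"STATE---" + b"CACHEDATA" + b"STATE---")
zero_cache_regions(snapshot, [(8, 9)])
assert snapshot == bytearray(b"STATE---" + b"\x00" * 9 + b"STATE---")
```

In practice the region list would come from the cache-structure analysis described below, and the zero runs are what make the subsequent compression step effective.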


In the embodiment of FIG. 2, the memory snapshot 400 is generated with the cache 404 intact. The memory snapshot 400 may then be post-processed to zero out portions of the cache 404 and to invalidate corresponding cache structures, resulting in a modified version of the snapshot referred to herein as the “cache-invalidated snapshot” 400′. The cache-invalidated snapshot 400′ can then be compressed, resulting in a compressed snapshot 400″ that may be significantly smaller than the original snapshot 400. In some embodiments, the cache-invalidated snapshot can be compressed explicitly using a lossless compression technique such as DEFLATE, LZ77, Huffman coding, LZW, Burrows-Wheeler, etc. In other embodiments, the cache-invalidated snapshot may be stored within a deduplicated storage system (e.g., EMC DATA DOMAIN®) that automatically compresses data by storing only unique data chunks. Referring to FIG. 2, in either case, the compressed snapshot 400″ can be decompressed to recover the cache-invalidated memory snapshot 400′, which can then be used to restore the VM.
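The benefit of zero-filling before compression can be demonstrated with any lossless compressor. The sketch below uses Python's `zlib` (a DEFLATE implementation) on synthetic random data standing in for state and cache pages; the 16/48-page split is an arbitrary assumption for illustration.

```python
import os
import zlib

PAGE = 4096
state = os.urandom(16 * PAGE)   # stand-in for state pages (incompressible)
cache = os.urandom(48 * PAGE)   # stand-in for cached file data

original = state + cache                        # snapshot 400
cache_invalidated = state + bytes(len(cache))   # snapshot 400' (cache zeroed)

# The long zero runs in the cache-invalidated snapshot compress to almost
# nothing, so the compressed snapshot 400'' is much smaller.
assert len(zlib.compress(cache_invalidated)) < len(zlib.compress(original))

# Decompression recovers the cache-invalidated snapshot exactly.
assert zlib.decompress(zlib.compress(cache_invalidated)) == cache_invalidated
```

The same effect holds for any of the lossless techniques named above, since all of them encode long constant runs compactly.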


In some embodiments, the memory snapshot may be generated by the hypervisor's snapshot manager (e.g., snapshot manager 318 of FIG. 1). In other embodiments, the snapshot manager may generate the memory snapshot and a separate snapshot post-processor (e.g., snapshot post-processor 330 of FIG. 1) may perform the post-processing thereon.


In some embodiments, a memory analysis tool, such as The Volatility Framework, can be used to obtain information about cache structures within a memory snapshot. Such information may include the location of cache structures within the memory snapshot, in addition to the structure and contents of cache data. In various embodiments, OS-level and/or system-level cache structures may be zeroed out. In the case of a VM running WINDOWS or Linux as the guest OS, the cache data structures used by those OSs are well known. In some embodiments, application-level cache structures may be zeroed out and corresponding internal application cache data structures may be invalidated. Information about cache data structures used by open-source applications can be obtained from the application source code. Information about cache structures used in closed-source applications may be obtained using available documentation and/or by reverse engineering specific versions of those applications.


In some embodiments, the memory snapshot includes one or more pages used by the guest OS's filesystem cache. Each page may have a flag (e.g., a bit) indicating if the contents of that page are valid or invalid at any given time. The filesystem cache does not return cached data to a user/process if the corresponding page is marked as invalid. Instead, it will issue a read request to storage as needed. Other types of caches (e.g., application-specific caches) may use different types of cache structures that can likewise be invalidated.
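A minimal model of such a page-level validity flag is sketched below. The field and function names are hypothetical, not those of any particular guest OS's filesystem cache; the point is only that an invalidated page forces a read from storage rather than returning zeroed data.

```python
from dataclasses import dataclass, field

PAGE_SIZE = 4096

# Hypothetical cache page carrying a validity flag (the "invalid bit").
@dataclass
class CachePage:
    data: bytearray = field(default_factory=lambda: bytearray(PAGE_SIZE))
    valid: bool = True

def invalidate(page: CachePage) -> None:
    """Mark the page invalid and zero its contents, as in the post-processing step."""
    page.valid = False
    page.data[:] = bytes(PAGE_SIZE)

def read(page: CachePage, fetch_from_storage):
    """Return cached data only if the page is valid; otherwise go to storage."""
    return bytes(page.data) if page.valid else fetch_from_storage()

page = CachePage(data=bytearray(b"x" * PAGE_SIZE))
invalidate(page)
# After restore, the cache subsystem falls back to a storage read.
assert read(page, lambda: b"from-storage") == b"from-storage"
```

This mirrors the behavior described above: when the VM is restored, the caching subsystem never serves the zeroed contents because the flag marks them invalid.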


Referring back to FIG. 2, once information about cache 404 structures has been obtained, the cache data stored therein may be zeroed out and the cache structures marked as invalid. When the VM is restored using the memory snapshot, the corresponding caching subsystems will be signaled that the cache data is invalid and should not be used.


In many embodiments, there is a tradeoff between VM performance following a restore and the size of memory snapshots. For example, in the case of a filesystem cache, the more cache pages that are invalidated, the more data the VM may need to fetch from storage when it is restored. In some embodiments, the desired memory snapshot size can be configured (e.g., by an administrator) on a per-VM basis. In one embodiment, a configuration setting can be used to select the cache structures to be invalidated when the memory snapshot is post-processed. In certain embodiments, a least-recently used (LRU) heuristic may be used to select the cache structures (e.g., pages of filesystem cache) to be invalidated.
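One way to realize the LRU heuristic is sketched below, under the assumption that a last-access timestamp is available per page (real systems may instead approximate LRU from page reference bits); the budget parameter stands in for the configurable per-VM size target.

```python
# Hypothetical sketch: pick the least-recently-used cache pages to
# invalidate, up to a configurable budget derived from the desired
# snapshot-size setting.

def select_lru_victims(last_access: dict, budget: int) -> list:
    """Return the ids of the `budget` least-recently-used pages."""
    by_age = sorted(last_access, key=last_access.get)  # oldest access first
    return by_age[:budget]

# Pages p1 and p3 were touched longest ago, so they are invalidated first.
last_access = {"p0": 100, "p1": 5, "p2": 42, "p3": 7}
assert select_lru_victims(last_access, 2) == ["p1", "p3"]
```

Invalidating cold pages first minimizes the post-restore penalty, since recently used (and thus likely-to-be-reused) cache contents survive in the snapshot.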



FIG. 3 is a flow diagram showing illustrative processing that can be implemented within a system to compress memory snapshots (e.g., system 300 of FIG. 1). In certain embodiments, at least a portion of the processing described herein may be implemented within a hypervisor (e.g., hypervisor 308 of FIG. 1). In one embodiment, at least a portion of the processing described herein may be implemented within a snapshot manager (e.g., snapshot manager 318 of FIG. 1). In a particular embodiment, at least a portion of the processing described herein may be implemented within a snapshot post-processor (e.g., snapshot post-processor 330 of FIG. 1).


Rectangular elements (typified by element 502 in FIG. 3), herein denoted “processing blocks,” represent computer software instructions or groups of instructions. Alternatively, the processing blocks may represent steps performed by functionally equivalent circuits such as a digital signal processor (DSP) circuit or an application specific integrated circuit (ASIC). The flow diagrams do not depict the syntax of any particular programming language but rather illustrate the functional information one of ordinary skill in the art requires to fabricate circuits or to generate computer software to perform the processing required of the particular apparatus. It should be noted that many routine program elements, such as initialization of loops and variables and the use of temporary variables, may be omitted for clarity. The particular sequence of blocks described is illustrative only and can be varied without departing from the spirit of the concepts, structures, and techniques sought to be protected herein. Thus, unless otherwise stated, the blocks described below are unordered, meaning that, when possible, the functions represented by the blocks can be performed in any convenient or desirable order.


Referring to FIG. 3, a method 500 can be used to generate a VM memory snapshot, according to an embodiment of the disclosure. At block 502, a request is received to generate a memory snapshot for a VM. At block 504, the contents of the VM's memory are copied (or “dumped”) to generate a memory snapshot. The memory snapshot may include state and cache, as discussed above in conjunction with FIG. 2.


At block 506, information about cache structures within the memory snapshot is obtained. In certain embodiments, the cache structures may include pages used by a filesystem cache or a buffer cache (e.g., the buffer cache used within Linux systems). In some embodiments, an OS-specific driver may be used to obtain information about cache structures within the memory snapshot. In various embodiments, the cache structures may include cache structures used by application processes running in the VM. In this case, the application cache may be different from the OS filesystem cache. For example, a database application may include its own cache structures.


Referring back to FIG. 3, at block 508, at least one of the cache structures is invalidated and the corresponding cache data overwritten by zeros (or another constant value). In some embodiments, the method may use a LRU heuristic to select the cache structures to be invalidated.


Referring again to FIG. 3, at block 510, the cache-invalidated memory snapshot is stored. In some embodiments, the cache-invalidated memory snapshot may be stored to a deduplicated storage system. In other embodiments, the memory snapshot may be compressed using a compression algorithm before it is stored.
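To illustrate why a cache-invalidated snapshot consumes little space in a deduplicated store, here is a toy fixed-size-chunk store; real deduplicating systems typically use variable-size (content-defined) chunking, so this is only a sketch with hypothetical names.

```python
import hashlib

CHUNK = 4096

class DedupStore:
    """Toy deduplicating store: each unique chunk is kept exactly once."""
    def __init__(self):
        self.chunks = {}  # sha256 digest -> chunk bytes

    def put(self, blob: bytes) -> list:
        """Store a blob; return the digest recipe needed to reassemble it."""
        recipe = []
        for i in range(0, len(blob), CHUNK):
            chunk = blob[i:i + CHUNK]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # dedup: store once
            recipe.append(digest)
        return recipe

    def get(self, recipe: list) -> bytes:
        return b"".join(self.chunks[d] for d in recipe)

store = DedupStore()
# One distinct "state" chunk followed by 63 zeroed "cache" chunks.
snapshot = b"\x01" * CHUNK + bytes(CHUNK * 63)
recipe = store.put(snapshot)
assert store.get(recipe) == snapshot
assert len(store.chunks) == 2  # all 63 zero chunks collapse to one entry
```

Because every zeroed cache chunk hashes to the same digest, the store keeps a single copy, achieving an effect comparable to explicit compression.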


The compressed memory snapshot may be decompressed and used to restore the VM.



FIG. 4 shows a computer 600 that can perform at least part of the processing described herein, according to one embodiment. The computer 600 may include a processor 602, a volatile memory 604, a non-volatile memory 606 (e.g., hard disk), an output device 608, and a graphical user interface (GUI) 610 (e.g., a mouse, a keyboard, and a display), each of which is coupled together by a bus 618. The non-volatile memory 606 may be configured to store computer instructions 612, an operating system 614, and data 616. In one example, the computer instructions 612 are executed by the processor 602 out of volatile memory 604. In one embodiment, an article 620 comprises non-transitory computer-readable instructions. In some embodiments, the computer 600 corresponds to a virtual machine (VM). In other embodiments, the computer 600 corresponds to a physical computer.


Processing may be implemented in hardware, software, or a combination of the two. In various embodiments, processing is provided by computer programs executing on programmable computers/machines that each includes a processor, a storage medium or other article of manufacture that is readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and one or more output devices. Program code may be applied to data entered using an input device to perform processing and to generate output information.


The system can perform processing, at least in part, via a computer program product, (e.g., in a machine-readable storage device), for execution by, or to control the operation of, data processing apparatus (e.g., a programmable processor, a computer, or multiple computers). Each such program may be implemented in a high level procedural or object-oriented programming language to communicate with a computer system. However, the programs may be implemented in assembly or machine language. The language may be a compiled or an interpreted language and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network. A computer program may be stored on a storage medium or device (e.g., CD-ROM, hard disk, or magnetic diskette) that is readable by a general or special purpose programmable computer for configuring and operating the computer when the storage medium or device is read by the computer. Processing may also be implemented as a machine-readable storage medium, configured with a computer program, where upon execution, instructions in the computer program cause the computer to operate. The program logic may be run on a physical or virtual processor. The program logic may be run across one or more physical or virtual processors.


Processing may be performed by one or more programmable processors executing one or more computer programs to perform the functions of the system. All or part of the system may be implemented as special purpose logic circuitry (e.g., an FPGA (field programmable gate array) and/or an ASIC (application-specific integrated circuit)).


All references cited herein are hereby incorporated herein by reference in their entirety.


Having described certain embodiments, which serve to illustrate various concepts, structures, and techniques sought to be protected herein, it will be apparent to those of ordinary skill in the art that other embodiments incorporating these concepts, structures, and techniques may be used. Elements of different embodiments described hereinabove may be combined to form other embodiments not specifically set forth above and, further, elements described in the context of a single embodiment may be provided separately or in any suitable sub-combination. Accordingly, it is submitted that the scope of protection sought herein should not be limited to the described embodiments but rather should be limited only by the spirit and scope of the following claims.

Claims
  • 1. A method comprising: receiving a request to generate a memory snapshot for a virtual machine (VM); generating a memory snapshot of the VM's memory, the memory snapshot including a first portion that represents a state of the VM's memory and a second portion that represents a state of a cache that is associated with the VM's memory; identifying a configuration setting that specifies a cache management policy for invalidating contents of the cache that is associated with the VM's memory; post-processing the memory snapshot based on the configuration setting, the post-processing including zeroing out cache data within the second portion of the memory snapshot based on the configuration setting; and compressing the memory snapshot after zeroing out corresponding cache data within the second portion of the memory snapshot, and storing the memory snapshot to storage.
  • 2. The method of claim 1 further comprising invalidating one or more cache structures in the cache that is associated with the VM's memory.
  • 3. The method of claim 2 wherein invalidating the one or more cache structures comprises erasing the one or more cache structures.
  • 4. The method of claim 3 wherein erasing the one or more cache structures from cache comprises setting an invalid bit within each of the one or more cache structures.
  • 5. The method of claim 2 further comprising obtaining information about the one or more cache structures, wherein obtaining information about the one or more cache structures includes obtaining information about cache structures used by application processes running within the VM.
  • 6. The method of claim 2 wherein invalidating the one or more cache structures comprises selecting the one or more cache structures using a least-recently used (LRU) heuristic.
  • 7. The method of claim 1 wherein at least a portion of the post processing is performed using a driver specific to a guest operating system (OS) of the VM.
  • 8. The method of claim 1 wherein storing the memory snapshot to storage comprises storing the compressed memory snapshot to a deduplicated storage system.
  • 9. The method of claim 1 further comprising: retrieving the memory snapshot from storage; and restoring the VM using the retrieved memory snapshot.
  • 10. A system comprising: one or more processors; a volatile memory; and a non-volatile memory storing computer program code that when executed on the processor causes execution across the one or more processors of a process operable to perform the operations of: receiving a request to generate a memory snapshot for a virtual machine (VM); generating a memory snapshot of the VM's memory, the memory snapshot including a first portion that represents a state of the VM's memory and a second portion that represents a state of a cache that is associated with the VM's memory; identifying a configuration setting that specifies a cache management policy for invalidating contents of the cache that is associated with the VM's memory; post-processing the memory snapshot based on the configuration setting, the post-processing including zeroing out corresponding cache data within the second portion of the memory snapshot; and compressing the memory snapshot after zeroing out corresponding cache data within the second portion of the memory snapshot, and storing the memory snapshot to storage.
  • 11. The system of claim 10 wherein the one or more processors are further operable to perform the operation of invalidating one or more cache structures in the cache that is associated with the VM's memory.
  • 12. The system of claim 11 wherein invalidating the one or more cache structures comprises erasing the one or more cache structures.
  • 13. The system of claim 12 wherein erasing the one or more cache structures from cache comprises setting an invalid bit within each of the one or more cache structures.
  • 14. The system of claim 10 wherein: the one or more processors are further operable to perform the operation of obtaining information about one or more cache structures, and obtaining information about the one or more cache structures within the memory snapshot includes obtaining information about cache structures used by application processes running within the VM.
  • 15. The system of claim 11 wherein invalidating the one or more cache structures comprises selecting the one or more cache structures using a least-recently used (LRU) heuristic.
  • 16. The system of claim 10 wherein at least a portion of the post-processing is performed using a driver specific to a guest operating system (OS) of the VM.
  • 17. The system of claim 10 wherein storing the memory snapshot to storage comprises storing the compressed memory snapshot to a deduplicated storage system.
  • 18. A computer program product tangibly embodied in a non-transitory computer-readable medium, the computer-readable medium storing program instructions that are executable to: receive a request to generate a memory snapshot for a virtual machine (VM); generate a memory snapshot of the VM's memory, the memory snapshot including a first portion that represents a state of the VM's memory and a second portion that represents a state of a cache that is associated with the VM's memory; identify a configuration setting that specifies a cache management policy for invalidating contents of the cache that is associated with the VM's memory; post-process the memory snapshot based on the configuration setting, the post-processing including zeroing out corresponding cache data within the second portion of the memory snapshot; and compress the memory snapshot after zeroing out corresponding cache data within the memory snapshot and store the memory snapshot to storage.