The present invention relates to data protection and recovery.
Data is at the heart of every enterprise, and is a core component of data center infrastructure. As data applications become increasingly critical, there is a growing need for disaster recovery systems that support application deployment and provide complete business continuity.
Disaster recovery systems are responsible for data protection and application recovery. Some disaster recovery systems provide continuous data protection, and allow recovery to any point-in-time.
Some disaster recovery systems provide built-in test capabilities, which enable an administrator to test recovery to a previous point in time. When a previous point in time is selected for testing by a disaster recovery system, a disk image is presented to the enterprise data applications, as the disk image existed at the previous point in time. All reads from the disk are directed to the disaster recovery system, which determines where the data for the previous point in time is located—on a replica, or on a redo journal. All writes to the disk are recorded in a separate redo log, to be able to erase them after the test is complete.
There are many advantages to testing a previous point-in-time image, including ensuring that a replica is usable, and finding a point-in-time for recovery prior to a disaster. In a case where data became corrupted at an unknown time, it is of advantage to find a previous point in time as close as possible to the time of corruption, at which the disk image was uncorrupted, in order to minimize loss of data after recovery.
Objectives of disaster recovery plans are generally formulated in terms of a recovery time objective (RTO). RTO is the time it takes to get a non-functional system back on-line, and indicates how fast the organization will be up and running after a disaster. Specifically, RTO is the duration of time within which a business process must be restored after a disaster, in order to avoid unacceptable consequences associated with a break in business continuity. Searching for an appropriate point-in-time prior to failover generally requires testing multiple disk images at different points-in-time, which itself requires a long time to complete and significantly increases the RTO.
In addition, testing multiple disk images generally requires a complete copy of the data. As such, if a disk image is 2 TB and three points in time are to be tested, the storage consumption is at least 8 TB, corresponding to three tests and the replica's gold copy. This drawback makes it costly and impractical to test multiple disk copies in parallel.
It would thus be of advantage to expose multiple disk images at different points in time, as offsets from a gold image, to enable testing in parallel and then selecting a disk image for failover without duplication of data, to support the enterprise RTO.
Aspects of the present invention provide systems and methods to expose multiple disk images at different points in time, thereby enabling testing in parallel and then selecting a disk image for failover.
Aspects of the present invention relate to a dedicated virtual data services appliance (VDSA) within a hypervisor that provides a variety of data services. Data services provided by the VDSA include inter alia replication, monitoring and quality of service. The VDSA is fully application-aware.
In an embodiment of the present invention, a tapping filter driver is installed within the hypervisor kernel. The tapping driver has visibility to I/O requests made by virtual servers running on the hypervisor.
A VDSA runs on each physical hypervisor. The VDSA is a dedicated virtual server that provides data services; however, the VDSA does not necessarily reside in the actual I/O data path. When a data service processes I/O asynchronously, the VDSA receives the data outside the data path.
Whenever a virtual server performs I/O to a virtual disk, the tapping driver identifies the I/O requests to the virtual disk. The tapping driver copies the I/O requests, forwards one copy to the hypervisor's backend, and forwards another copy to the VDSA.
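By way of illustration, the following Python sketch shows one way such tap-and-copy forwarding might look. It is a minimal sketch only; the names IOTap and on_io_request are hypothetical, and the actual tapping driver runs inside the hypervisor kernel rather than in user-space Python.

```python
import queue

class IOTap:
    """Minimal sketch of a tapping filter: every I/O request proceeds to the
    hypervisor's I/O backend unchanged, while a copy is queued for
    asynchronous consumption by the VDSA, outside the I/O data path."""

    def __init__(self, backend, vdsa_queue: queue.Queue):
        self.backend = backend        # handles the real I/O
        self.vdsa_queue = vdsa_queue  # drained asynchronously by the VDSA

    def on_io_request(self, request: dict):
        self.vdsa_queue.put(dict(request))   # copy for the VDSA
        return self.backend.handle(request)  # original request unchanged
```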
Upon receiving an I/O request, the VDSA performs a set of actions to enable various data services. A first action is data analysis, to analyze the data content of the I/O request and to infer information regarding the virtual server's data state. E.g., the VDSA may infer the operating system level and the status of the virtual server. This information is subsequently used for reporting and policy purposes.
A second action, optionally performed by the VDSA, is to store each I/O write request in a dedicated virtual disk for journaling. Since all I/O write requests are journaled on this virtual disk, the virtual disk enables recovery data services for the virtual server, such as restoring the virtual server to an historical image.
A third action, optionally performed by the VDSA, is to send I/O write requests to different VDSAs, residing on hypervisors located at different locations, thus enabling disaster recovery data services.
The hypervisor architecture of the present invention scales to multiple host sites, each of which hosts multiple hypervisors. The scaling flexibly allows for different numbers of hypervisors at different sites, and different numbers of virtual servers and virtual disks within different hypervisors. Each hypervisor includes a VDSA, and each site includes a data services manager to coordinate the VDSAs at the site, and across other sites.
Embodiments of the present invention enable flexibly designating one or more virtual servers within one or more hypervisors at a site as being a virtual protection group, and flexibly designating one or more hypervisors, or alternatively one or more virtual servers within one or more hypervisors at another site as being a replication target for the virtual protection group. Write order fidelity is maintained for virtual protection groups. A site may comprise any number of source and target virtual protection groups. A virtual protection group may have more than one replication target. The number of hypervisors and virtual servers within a virtual protection group and its replication target are not required to be the same.
The hypervisor architecture of the present invention may be used to provide cloud-based hypervisor level data services to multiple enterprises on a shared physical infrastructure, while maintaining control and data path separation between enterprises for security.
The present invention provides bi-directional cloud-based data replication services; i.e., from the enterprise to the cloud, and from the cloud to the enterprise. Moreover, replication targets may be assigned to a pool of resources that do not expose the enterprise infrastructure, thus providing an additional layer of security and privacy between enterprises that share a target physical infrastructure.
The cloud-based data replication services of the present invention support enforcement of data export regulations. As such, data transfer between a source and a destination is automatically restricted if data export regulations restrict data transfer between the corresponding jurisdictions of the source and the destination.
There is thus provided in accordance with an embodiment of the present invention an enterprise disaster recovery system, including at least one data disk, a processor for running at least one data application that reads data from the at least one data disk and writes data to the at least one data disk over a period of time, and a recovery test engine that (i) generates in parallel a plurality of processing stacks corresponding to a respective plurality of previous points in time within the period of time, each stack operative to process a command to read data at a designated address from a designated one of the at least one data disk and to return data at the designated address in an image of the designated data disk at the previous point in time corresponding to the stack, and (ii) generates in parallel a plurality of logs of commands issued by the at least one data application to write data into designated addresses of designated ones of the at least one data disk, each log corresponding to a respective previous point in time, wherein the plurality of previous points in time within the period of time are specified arbitrarily by a user of the system.
There is additionally provided in accordance with an embodiment of the present invention a method for testing enterprise disaster recovery, including receiving an arbitrarily designated plurality of points in time for conducting data recovery tests in parallel, generating in parallel a plurality of processing stacks, each stack corresponding to one of the designated points in time, and each stack operative to receive a command issued by at least one data application to read data at a designated address from a designated data disk and to return data at the designated address in an image of the designated data disk at the designated point in time corresponding to the stack, further receiving in parallel a plurality of write commands issued by the at least one data application to write data into designated addresses of designated data disks, and logging the write commands in a plurality of logs, each log corresponding to one of the designated points in time.
The present invention will be more fully understood and appreciated from the following detailed description, taken in conjunction with the drawings and the appendices, in which:
Appendix I is an application programming interface for virtual replication site controller web services, in accordance with an embodiment of the present invention;
Appendix II is an application programming interface for virtual replication host controller web services, in accordance with an embodiment of the present invention;
Appendix III is an application programming interface for virtual replication protection group controller web services, in accordance with an embodiment of the present invention;
Appendix IV is an application programming interface for virtual replication command tracker web services, in accordance with an embodiment of the present invention; and
Appendix V is an application programming interface for virtual replication log collector web services, in accordance with an embodiment of the present invention.
Aspects of the present invention relate to a dedicated virtual data services appliance (VDSA) within a hypervisor, which is used to provide a variety of hypervisor data services. Data services provided by a VDSA include inter alia replication, monitoring and quality of service.
Reference is now made to an example hypervisor architecture, in accordance with an embodiment of the present invention. A hypervisor 100 hosts virtual servers 110 and virtual disks 120, and includes an I/O backend 130, which reads from and writes to a physical disk 140.
Hypervisor 100 also includes a tapping driver 150 installed within the hypervisor kernel. Tapping driver 150 has visibility to I/O requests made by virtual servers 110 running on the hypervisor.
Hypervisor 100 also includes a VDSA 160. In accordance with an embodiment of the present invention, a VDSA 160 runs on a separate virtual server within each physical hypervisor. VDSA 160 is a dedicated virtual server that provides data services via one or more data services engines 170. However, VDSA 160 does not reside in the actual I/O data path between I/O backend 130 and physical disk 140. Instead, VDSA 160 resides in a virtual I/O data path.
Whenever a virtual server 110 performs I/O on a virtual disk 120, tapping driver 150 identifies the I/O requests that the virtual server makes. Tapping driver 150 copies the I/O requests, forwards one copy via the conventional path to I/O backend 130, and forwards another copy to VDSA 160. In turn, VDSA 160 enables the one or more data services engines 170 to provide data services based on these I/O requests.
VDSA 160 includes a hash generator 220, a TCP transmitter 230, a data analyzer and reporter 240, and a journal manager 250. Upon receiving an I/O request from tapping driver 150, VDSA 160 makes up to three copies of the request, which are processed as follows.
A first copy is stored in persistent storage, and used to provide continuous data protection. Specifically, VDSA 160 sends the first copy to journal manager 250, for storage in a dedicated virtual disk 270. Since all I/O requests are journaled on virtual disk 270, journal manager 250 provides recovery data services for virtual servers 110, such as restoring virtual servers 110 to an historical image. In order to conserve disk space, hash generator 220 derives a one-way hash from the I/O requests. Use of a hash ensures that only a single copy of any I/O request data is stored on disk.
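A minimal sketch of such hash-based journaling follows, assuming SHA-256 as the one-way hash; the class name DedupJournal is hypothetical and not part of the embodiments described herein.

```python
import hashlib

class DedupJournal:
    """Sketch of journaling with one-way hashes: identical I/O payloads are
    stored once, and each journal entry references its payload by hash."""

    def __init__(self):
        self.blocks = {}    # hash -> payload, a single copy per unique payload
        self.entries = []   # ordered journal of (address, hash) pairs

    def journal_write(self, address: int, payload: bytes) -> None:
        digest = hashlib.sha256(payload).hexdigest()
        self.blocks.setdefault(digest, payload)  # store the payload only once
        self.entries.append((address, digest))
```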
An optional second copy is used for disaster recovery. It is sent via TCP transmitter 230 to remote VDSA 260. As such, access to all data is ensured even when the production hardware is not available, thus enabling disaster recovery data services.
An optional third copy is sent to data analyzer and reporter 240, which generates a report with information about the content of the data. Data analyzer and reporter 240 analyzes data content of the I/O requests and infers information regarding the data state of virtual servers 110. E.g., data analyzer and reporter 240 may infer the operating system level and the status of a virtual server 110.
Replication from a protected site, Site A, to a recovery site, Site B, proceeds as follows.
In accordance with an embodiment of the present invention, every write command from a protected virtual server in hypervisor 100A is intercepted by tapping driver 150 and sent asynchronously by VDSA 160A at Site A to VDSA 160B at Site B.
At Site B, the write command is passed to a journal manager 250, for writing into the Site B journal.
In addition to write commands being written to the Site B journal, mirrors 110B-1, 110B-2 and 110B-3 of the respective protected virtual servers 110A-1, 110A-2 and 110A-3 at Site A are created at Site B. The mirrors at Site B are updated at each checkpoint, so that they are mirrors of the corresponding virtual servers at Site A at the point of the last checkpoint. During a failover, an administrator can specify recovery of the virtual servers using the latest data sent from Site A. Alternatively, the administrator can specify an earlier checkpoint, in which case the mirrors on the virtual servers 110B-1, 110B-2 and 110B-3 are rolled back to the earlier checkpoint, and the virtual servers are then recovered at Site B. As such, the administrator can recover the environment to the point before any corruption, such as a crash or a virus, occurred, and ignore the write commands in the journal that were corrupted.
VDSAs 160A and 160B ensure write order fidelity; i.e., data at Site B is maintained in the same sequence as it was written at Site A. Write commands are kept in sequence by assigning a timestamp or a sequence number to each write at Site A. The write commands are sequenced at Site A, then transmitted to Site B asynchronously, then reordered at Site B to the proper time sequence, and then written to the Site B journal.
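The sequencing and reordering described above may be sketched as follows. This is an illustrative sketch with hypothetical names, in which writes are sequenced at the source site and restored to sequence at the target site before journaling.

```python
import heapq

class OrderedReplicator:
    """Sketch of write order fidelity: writes receive sequence numbers at the
    source, may arrive at the target out of order, and are restored to the
    proper sequence before being written to the target journal."""

    def __init__(self):
        self.next_seq = 0   # assigned at the source site
        self.expected = 0   # next sequence number the target may journal
        self.pending = []   # min-heap of out-of-order arrivals at the target
        self.journal = []   # target journal, always in source write order

    def sequence(self, write):                 # source side
        seq, self.next_seq = self.next_seq, self.next_seq + 1
        return seq, write

    def receive(self, seq, write):             # target side, any arrival order
        heapq.heappush(self.pending, (seq, write))
        while self.pending and self.pending[0][0] == self.expected:
            self.journal.append(heapq.heappop(self.pending)[1])
            self.expected += 1
```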
The journal file is cyclic; i.e., after a pre-designated time period, the earliest entries in the journal are overwritten by the newest entries.
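A cyclic journal of this kind can be sketched with a time-based eviction rule; the name CyclicJournal is hypothetical, and retention_seconds stands in for the pre-designated time period.

```python
import collections
import time

class CyclicJournal:
    """Sketch of a cyclic journal: as new entries arrive, entries older than
    the pre-designated retention period are discarded."""

    def __init__(self, retention_seconds: float):
        self.retention = retention_seconds
        self.entries = collections.deque()    # (timestamp, write) pairs

    def append(self, write) -> None:
        now = time.time()
        self.entries.append((now, write))
        while self.entries and now - self.entries[0][0] > self.retention:
            self.entries.popleft()            # earliest entries give way to newest
```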
It will be appreciated by those skilled in the art that the virtual replication appliance of the present invention operates at the hypervisor level, and thus obviates the need to consider physical disks. In distinction, conventional replication systems operate at the physical disk level. Embodiments of the present invention recover write commands at the application level, whereas conventional replication systems recover write commands at the SCSI level. As such, conventional replication systems are not fully application-aware, whereas embodiments of the present invention are fully application-aware, and replicate write commands from an application in a consistent manner.
The present invention offers many advantages.
As indicated hereinabove, the hypervisor architecture of the present invention scales to multiple host sites, each of which hosts multiple hypervisors. An example system 300 includes three sites, Site A, Site B and Site C.
Each hypervisor in system 300 includes a respective VDSA 160A/1, 160A/2, . . . ; the other components of the hypervisors, such as the virtual servers 110 and virtual disks 120, are not described here, for the sake of clarity.
The sites include respective data services managers 310A, 310B and 310C that coordinate hypervisors in the sites, and coordinate hypervisors across the sites.
System 300 may be used for data replication, whereby data at one site is replicated at one or more other sites, for protection.
Data services managers 310A, 310B and 310C are control elements. The data services managers at each site communicate with one another to coordinate state and instructions. The data services managers track the hypervisors in the environment, and track health and status of the VDSAs 160A/1, 160A/2, . . . .
It will be appreciated by those skilled in the art that the environment of system 300 is exemplary; the present invention applies to environments with any number of sites, and with any numbers of hypervisors and virtual servers at each site.
In accordance with an embodiment of the present invention, the data services managers enable designating groups of specific virtual servers 110, referred to as virtual protection groups, to be protected. For virtual protection groups, write order fidelity is maintained. The data services managers enable designating a replication target for each virtual protection group; i.e., one or more sites, and one or more hypervisors in the one or more sites, at which the virtual protection group is replicated. A virtual protection group may have more than one replication target. The number of hypervisors and virtual servers within a virtual protection group and its replication target are not required to be the same.
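The relationships described above can be summarized in a small data-structure sketch; the class, field and site names below are illustrative assumptions only.

```python
from dataclasses import dataclass, field

@dataclass
class ReplicationTarget:
    site: str
    hypervisors: list[str]      # one or more hypervisors at the target site

@dataclass
class VirtualProtectionGroup:
    name: str
    members: list[str]          # virtual servers protected as a single unit
    # A group may have more than one replication target.
    targets: list[ReplicationTarget] = field(default_factory=list)

# Example: three virtual servers protected together, replicated to one site.
vpg = VirtualProtectionGroup(
    name="VPG1",
    members=["app-server", "web-server", "db-server"],
    targets=[ReplicationTarget(site="SiteB", hypervisors=["hyp-B1", "hyp-B2"])],
)
```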
Reference is now made to user interface screens via which an administrator designates a virtual protection group and its replication target, including a recovery host and a recovery datastore.
More generally, the recovery host may be assigned to a cluster, instead of to a single hypervisor, and the recovery datastore may be assigned to a pool of resources, instead of to a single datastore. Such assignments are of particular advantage when different enterprises share the same physical infrastructure for target replication, as such assignments mask the virtual infrastructure between the different enterprises.
The data services managers synchronize site topology information. As such, a target site's hypervisors and datastores may be configured from a source site.
Virtual protection groups enable protection of applications that run on multiple virtual servers and disks as a single unit. E.g., an application may require a web server and a database, each of which runs on a different virtual server than the virtual server that runs the application. These virtual servers may be bundled together using a virtual protection group.
Referring back to system 300, replication of a virtual protection group proceeds as follows.
For each virtual server 110 and its target host, each VDSA 160A/1, 160A/2, . . . replicates I/Os to its corresponding replication target. The VDSA can replicate all virtual servers to the same hypervisor, or to different hypervisors. Each VDSA maintains write order fidelity for the I/Os passing through it, and the data services manager coordinates the writes among the VDSAs.
Since the replication target hypervisor for each virtual server 110 in a virtual protection group may be specified arbitrarily, all virtual servers 110 in the virtual protection group may be replicated at a single hypervisor, or at multiple hypervisors. Moreover, the virtual servers 110 in the source site may migrate across hosts during replication, and the data services manager tracks the migration and accounts for it seamlessly.
Reference is now made to an example replication topology for system 300, in which a plurality of virtual protection groups, inter alia a virtual protection group VPG1 comprising virtual servers at Site A, are replicated across the sites.
As such, it will be appreciated by those skilled in the art that the hypervisor architecture of the present invention scales flexibly to multiple sites and multiple hypervisors, with different numbers of hypervisors, virtual servers and virtual disks at different sites.
The scaling flexibility of the present invention also allows extension to cloud-based data services provided by a cloud provider on a shared infrastructure, as explained hereinbelow.
Cloud-based data services enable data center providers to service multiple enterprises at data centers that are remote from the enterprises. Cloud-based data services offer many advantages. Enterprises that use cloud-based data services obviate the need for servers, SAN/NAS, networks, communication lines, installation, configuration and ongoing maintenance of information technology systems, and for overhead expenses for electricity, cooling and space. However, conventional cloud-based data services suffer from weak security, due to multiple enterprises sharing the same physical infrastructure, and due to multiple enterprises using the same networks and IPs for their services.
Cloud-based systems of the present invention overcome these weaknesses. In an example cloud-based system 500, a cloud provider delivers hypervisor level data services to multiple enterprises on a shared physical infrastructure, while maintaining control and data path separation between enterprises for security.
System 500 has many advantages over conventional data service systems. Inter alia, system 500 enables protection of heterogeneous environments, remote control of enterprise sites, economies of scale, complete workload mobility, a complete web services API for seamless integration, and integration with other cloud-based management systems.
Reference is now made to an example deployment of a cloud-based facility 490 that services two enterprises, Enterprise A and Enterprise B.
Cloud-based facility 490 infrastructure includes two hypervisors 400/1 and 400/2, and four physical disks 420-1, 420-2, 420-3 and 420-4. Hypervisor 400/1 includes six virtual servers 410/1-1, 410/1-2, 410/1-3, 410/1-4, 410/1-5 and 410/1-6; and hypervisor 400/2 includes two virtual servers 410/2-1 and 410/2-2. Hypervisor 400/1 services Enterprises A and B, and hypervisor 400/2 services Enterprise B. As such, the infrastructure of cloud-based facility 490 is shared between Enterprises A and B.
A variety of architectures may be used to provide cloud-based data services from cloud-based facility 490. In one such architecture, each enterprise is provided with its own data services manager on the cloud side; in another, additional cloud connectors are deployed on the enterprise side. The different architectures vary in the components deployed on the enterprise side and on the cloud side, but each maintains separation between the enterprises that share the cloud infrastructure.
As such, it will be appreciated by those skilled in the art that the cloud-based hypervisor level data services systems of the present invention enable multi-tenancy and multi-site services; i.e., multiple enterprises and multiple sites may be serviced by the same physical infrastructure, including inter alia the same hypervisors and storage, with minimized footprint on the cloud side, allowing for centralized cloud management. By providing each enterprise with its own data services manager on the cloud side, the control paths of the different enterprises are kept separate, for security.
By deploying additional cloud connectors on the enterprise side, connectivity between enterprise sites and the shared cloud infrastructure is provided without exposing the enterprise infrastructure.
The systems of the present invention provide bi-directional cloud-based data replication services; i.e., from an enterprise to the cloud, and from the cloud to an enterprise, for the same enterprise or for different enterprises, simultaneously using the same shared infrastructure. Moreover, replication targets may be set as resources that do not expose the enterprise infrastructure, thus providing an additional layer of security and privacy between enterprises.
It will be appreciated by those skilled in the art that systems of the present invention may be used to enforce jurisdictional data export regulations. Specifically, cloud-based facility 490 infrastructure is partitioned according to jurisdictions, and data recovery and failover for an enterprise is limited to one or more specific partitions according to jurisdictional regulations.
Reference is now made to an example system 600, in which a cloud provider operates four data centers, Data Centers 1, 2, 3 and 4, located in different jurisdictions, and services three enterprises, Enterprises A, B and C.
Privacy and data security regulations prevent data from being exported from one jurisdiction to another. In order to enforce these regulations, system 600 includes a rights manager 610 that blocks access to a data center by an enterprise if data export regulations restrict data transfer between their respective jurisdictions. Thus rights manager 610 blocks access by Enterprise A to Data Centers 3 and 4, blocks access by Enterprise B to Data Centers 1, 2 and 4, and blocks access by Enterprise C to Data Centers 1, 2 and 3. Enterprises A, B and C may be commonly owned, but access to the data centers by the enterprises is nevertheless blocked, in order to comply with data export regulations.
In accordance with an embodiment of the present invention, when configuring a virtual protection group, an administrator may set a territory/data center restriction. When the administrator subsequently selects a destination resource for data replication for a virtual protection group, system 600 verifies that the resource is located in a geography that does not violate a territory/data center restriction.
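Such a verification step might be sketched as follows. The jurisdictions, data center names and allowed-flow table below are illustrative assumptions, not actual regulatory data.

```python
# Illustrative export-regulation table: source jurisdiction -> permitted targets.
EXPORT_ALLOWED = {
    "US": {"US"},
    "EU": {"EU"},
}

# Illustrative mapping of data centers to their jurisdictions.
DATA_CENTERS = {"DC1": "US", "DC2": "US", "DC3": "EU", "DC4": "EU"}

def verify_replication_target(enterprise_jurisdiction: str, data_center: str) -> bool:
    """Return True only if export regulations permit data transfer from the
    enterprise's jurisdiction to the data center's jurisdiction."""
    target_jurisdiction = DATA_CENTERS[data_center]
    return target_jurisdiction in EXPORT_ALLOWED.get(enterprise_jurisdiction, set())
```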
The present invention provides built-in test capabilities, which enable an administrator to run multiples tests in parallel, to test recovery of data to multiple points in time. When a desired previous point in time is selected for testing by a disaster recovery system, each disk image is presented to the enterprise data applications, as the disk's data existed at the desired point in time. The data in the disk image corresponding to the desired point in time is generally determined from a replica disk and from an undo log of write commands. The replica disk generally corresponds to a disk image at a time later than the desired point in time. Some of the data in the replica disk may have been written prior to the desired point in time and some of the data may have been written subsequent to the desired point in time. For addresses to which data was written subsequent to the desired point in time, the undo journal may be used to undo the writes from the replica disk back to the desired point in time, to determine the disk image at the desired point in time. For addresses to which data was not written subsequent to the desired point in time, the data from the replica disk is used to determine the disk image at the desired point in time.
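The read-resolution logic just described may be sketched as follows, modeling the replica as an address-to-data mapping and the undo journal as records of the prior image that each write overwrote; the function name is hypothetical.

```python
def read_at_point_in_time(address, replica, undo_log, t):
    """Sketch: resolve a read against a disk image as it existed at time t.

    replica  -- mapping of address -> data, as of a time later than t
    undo_log -- iterable of (timestamp, address, prior_data) records, where
                prior_data is the data that the journaled write overwrote
    """
    # The prior image of the earliest write after t to this address is the
    # data the address held at time t.
    later_writes = [(ts, prior) for ts, addr, prior in undo_log
                    if addr == address and ts > t]
    if later_writes:
        return min(later_writes)[1]
    return replica[address]   # not written after t: the replica data applies
```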
During recovery testing, all reads from a disk are directed to the disaster recovery system, which responds to the reads by providing the data for the disk image corresponding to the desired point in time. All writes to disks are recorded in a separate write log, so as to be able to erase them after the test is complete, thereby ensuring that production data is not affected by the recovery test.
There are many advantages to testing a previous point in time disk image, including ensuring that a replica is usable, and finding a safe point in time for recovery prior to a disaster.
The present invention enables running multiple recovery tests in parallel, at multiple points in time. When multiple points in time are selected for multiple tests, each test is redirected through a different processing stack, which reads data according to the appropriate point in time. Each test has its own write log. Each test may be stopped independently of the other tests. When a test is stopped, the test ends and is summarized and marked as pass or fail.
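One way to model these per-test processing stacks and write logs is sketched below, reusing the read_at_point_in_time helper from the sketch above; the class name and fields are hypothetical.

```python
class RecoveryTest:
    """Sketch of one parallel recovery test: reads are served from the test's
    own write log when the test has written the address, and otherwise from
    the point-in-time image; the write log is discarded when the test stops,
    so production data is never affected."""

    def __init__(self, point_in_time, replica, undo_log):
        self.t = point_in_time
        self.replica = replica
        self.undo_log = undo_log
        self.write_log = {}                  # writes private to this test

    def read(self, address):
        if address in self.write_log:        # the test's own writes win
            return self.write_log[address]
        return read_at_point_in_time(address, self.replica, self.undo_log, self.t)

    def write(self, address, data):
        self.write_log[address] = data       # logged, never applied to production

    def stop(self):
        self.write_log.clear()               # erase the test's writes at test end
```

Several such instances, one per selected point in time, can run concurrently against the same replica and undo journal, mirroring the parallel testing described above.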
Reference is now made to a series of user interface screens of the recovery test engine, via which an administrator configures and monitors recovery tests at multiple points in time. Inter alia, clicking on the “+Add” control opens a window via which the administrator adds a point in time to be tested.
Reference is now made to an example disaster recovery configuration, in which enterprise data applications read data from and write data to data disks 750 and 760.
A data recovery system 770 includes a recovery test engine 780, which enables simultaneous recovery testing of images of disks 750 and 760 at multiple points in time. Recovery test engine 780 generates in parallel a plurality of processing stacks 781, 782 and 783, each stack corresponding to a respective point in time being tested.
Processing stacks 781, 782 and 783 are each operative to receive a read command for a data address in one of the disks 750 and 760 issued by a data application, and to return data for the data address in the disk image as it existed at the point in time corresponding to the stack.
Recovery test engine 780 is operative to receive a write command for a data address in one of the disks 750 and 760, and log the write command in a temporary write journal 791, 792 or 793 corresponding to the point in time being tested. The write journals 791, 792 and 793 are generally discarded at the end of the recovery tests, thus ensuring that the recovery tests do not affect production data.
Reference is now made to a method for testing enterprise disaster recovery, in accordance with an embodiment of the present invention. At operation 1010, an arbitrarily designated plurality of points in time, for conducting data recovery tests in parallel, is received.
At operation 1020, a plurality of processing stacks, such as processing stacks 781, 782 and 783, are generated in parallel, each stack corresponding to one of the designated points in time.
At operation 1030, a read command to read data at a designated address from a designated data disk is received from a data processing application for one of the recovery tests. At operation 1040, the processing stack corresponding to the recovery test returns the data at the designated address in the image of the designated disk at the point in time being tested.
At operation 1050, a write command to write data at a designated address of a designated data disk is received from a data processing application for one of the recovery tests. At operation 1060, the write command is logged into a write journal used specifically for that recovery test, such as one of the write journals 791, 792 and 793.
At operation 1070, a determination is made whether an instruction to stop one of the recovery tests has been received. If not, then the method returns to operation 1030, to continue processing read and write commands. If so, then the processing stack for the recovery test is stopped at operation 1080, thereby ending the test, and a summary of test results is generated. In one embodiment of the present invention, the summary is provided through the FailoverTestInfo data object listed in Appendix III.
At operation 1090, a determination is made whether any recovery tests are still running. If so, the method returns to operation 1030, to continue processing read and write commands for the remaining recovery tests. If not, then all tests have been stopped and the method ends.
It will thus be appreciated that the present invention enables parallel recovery testing of disk images at multiple points in time, thereby saving time and resources in performing multiple recovery tests vis-à-vis conventional recovery systems.
The present invention may be implemented through an application programming interface (API), exposed as web service operations. Reference is made to Appendices I-V, which define an API for virtual replication web services, in accordance with an embodiment of the present invention. The API for recovery tests for virtual protection groups is provided in Appendix III.
It will thus be appreciated that the present invention provides many advantages, including inter alia parallel recovery testing of disk images at multiple points in time, without duplication of data.
In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made to the specific exemplary embodiments without departing from the broader spirit and scope of the invention as set forth in the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
Site Controller Web Services
These web services include methods and properties for pairing and un-pairing sites, and for managing site details.
Properties
This property is a globally unique identifier for the peer site of a given site.
This property includes parameters to access a site.
This property is a globally unique identifier for a site.
This property includes a name, location and contact information for a site.
This property indicates the globally unique identifier for the local site.
Methods
This method retrieves the IP address and port of the site paired with a given site.
This method retrieves site details, including inter alia the IP address and port of a designated server.
This method retrieves the name, location and contact information specified by an administrator for a designated site.
This method retrieves the identifiers for a site and for its paired site.
This method retrieves the TCP port to access the virtual data services application for a designated site.
This method retrieves the username for a hypervisor.
This method retrieves the IP address or a hostname for a hypervisor.
This method returns true if the local site is paired with another site. Otherwise, it returns false.
This method pairs a local site with another site.
This method reconfigures hypervisor information.
This method sets the TCP port used to access the virtual data services appliances at a designated site.
This method un-pairs a local and remote site.
Host Controller Web Services
These web services include methods and properties to identify hypervisors, and to deploy virtual data services appliances on hypervisors.
Properties
This property identifies a hypervisor.
Methods
This method retrieves a list of hypervisors where a virtual data services appliance is in the process of being installed, at a designated site.
This method retrieves a list of hypervisors where a virtual data services appliance is in the process of being un-deployed, at a designated site.
This method retrieves a list of hypervisors where a virtual data services appliance is installed, at a designated site.
This method retrieves a list of hypervisors where a virtual data services appliance is not installed, at a designated site.
This method deploys a virtual data services appliance on a specified hypervisor at a designated site, in accordance with a specified datastore, a specified type of network, and access details including inter alia an IP address, a subnet mask and a gateway for the VDSA.
This method un-deploys a virtual data services appliance from a specified hypervisor, at a designated site.
Protection Group Controller Web Services
These web services include methods and properties to manage virtual protection groups.
Properties
This property identifies a checkpoint by an unsigned integer.
This property includes information returned from a failover test.
This property is a globally unique identifier for a virtual protection group.
This property defines settings for a protection group.
This property defines settings for a virtual protection group.
This property indicates the status of a virtual protection group, from among Protecting, NeedReverseConfiguration, Promoting, PromotingAndNeedReverseConfiguration, Test, Failover, PromotionCompleteMirrorsNotYetActivated, MissingConfiguration, PromotingAndMissingConfiguration, RemoveInProgress.
This property indicates settings for a virtual application.
This property indicates settings for a virtual server.
Methods
This method returns true if checkpoints exist for a designated virtual protection group.
This method removes the virtual protection groups defined at a designated site.
This method creates a virtual protection group at a designated site.
This method performs a failover of the virtual servers in a designated virtual protection group, to a designated checkpoint instance or to the latest checkpoint.
This method performs a failover of the virtual servers in a designated virtual protection group, to a designated checkpoint or to the latest checkpoint, without creating reverse replication and without stopping protection of the virtual servers in the designated virtual protection group.
This method removes a virtual protection group irrespective of the state of the group. This method is used if the RemoveProtectionGroup method is unable to complete successfully.
This method updates virtual protection group settings, including removal of virtual servers and disks that should have been removed using the RemoveProtectionGroup method. This method is used if the UpdateProtectionGroup method is unable to complete successfully.
This method retrieves a list of checkpoints for a specified virtual protection group.
This method retrieves information about failover tests for a specified virtual protection group.
This method retrieves the virtual protection group settings for a specified virtual protection group, for use as default values for reverse replication.
This method retrieves the settings for a designated virtual protection group.
This method retrieves a list of virtual protection groups.
This method retrieves the state of a specified virtual protection group, the state being “protected” or “recovered”. If the group is protected, 0 is returned; and if the group is recovered, 1 is returned.
This method retrieves the status of a specified virtual protection group, the status being inter alia “protecting”, “testing” or “promoting”.
This method inserts a named checkpoint for a designated virtual protection group. The method returns immediately, without verifying whether or not the checkpoint was successfully written to the journal in the peer site.
This method returns true if the connection to the paired site is up.
This method migrates a specified virtual protection group to the peer site.
This method adds a designated virtual server to a virtual protection group, in accordance with designated settings.
This method removes a virtual protection group, unless the group is being replicated during a test failover or an actual failover, and unless the group is being migrated to the peer site. If this method does not return a Success completion code, the ForceRemoveProtectionGroup method may be used to force removal of the group.
This method stops a failover test, and removes the test virtual servers from the peer site.
This method discards information about a specified number of old failover tests for a designated virtual protection group, from the oldest test to the most recent test.
This method removes a designated virtual server from a designated virtual protection group.
This method updates settings of a specified virtual protection group. If the method does not return a Success completion code, the ForceUpdateProtectionGroup method can be used to force the update.
This method waits for a checkpoint to be written to the journal on the peer site after the checkpoint has been inserted, or times out after a specified period.
Command Tracker Web Services
These web services include methods and properties to monitor procedures being executed.
Properties
This property includes information about a tracked task.
Methods
This method retrieves a list of all tasks that are currently active.
This method returns the completion code of a specified task. Completion codes include Success, Aborted, Failed or HadException. If the task is still running, NotAvailable is returned.
This method returns the command type, the completion code, the input parameters, and the virtual protection group identifier of a designated task.
This method returns the string associated with an exception, for a designated task that had an exception. The method GetCompletionCode returns HadException if a task had an exception.
This method returns the progress of a specified task, as an integer percentage of the whole task.
This method returns the identifier of a task currently being performed on a designated protection group.
This method returns the identifier of a task currently being performed on a specified protection group at a local site.
This method returns the result for a designated task.
This method retrieves the current status of a specified task. The status may be Active, Running, Aborted or Completed.
This method waits for a specified task to complete, by polling the task at specified time intervals, until a specified time out.
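A client-side polling loop of the kind this method describes might look as follows. The client object and its method names are hypothetical stand-ins for the command tracker web services, while the status and completion-code values are those listed above.

```python
import time

def wait_for_task(client, task_id, poll_interval=5.0, timeout=600.0):
    """Sketch: poll a task at fixed intervals until it completes or times out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = client.get_task_status(task_id)   # Active, Running, Aborted, Completed
        if status in ("Aborted", "Completed"):
            # Success, Aborted, Failed or HadException, per GetCompletionCode
            return client.get_completion_code(task_id)
        time.sleep(poll_interval)
    raise TimeoutError(f"task {task_id} did not complete within {timeout} seconds")
```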
Log Collector Web Services
These web services include methods and properties to retrieve information for troubleshooting.
Properties
This property indicates details of a log request, including the level of detail of the information, whether information about a virtual data services appliance and core information should be included, and start and end times for the requested information.
Methods
This method initiates a log request.
This method retrieves results of a log request.
This application is a continuation-in-part of U.S. application Ser. No. 13/175,898 entitled METHODS AND APPARATUS FOR PROVIDING HYPERVISOR LEVEL DATA SERVICES FOR SERVER VIRTUALIZATION, filed on Jul. 4, 2011 by inventors Ziv Kedem, Gil Levonai, Yair Kuszpet and Chen Burshan, which is a continuation-in-part of U.S. application Ser. No. 13/039,446, entitled METHODS AND APPARATUS FOR PROVIDING HYPERVISOR LEVEL DATA SERVICES FOR SERVER VIRTUALIZATION, filed on Mar. 3, 2011 by inventor Ziv Kedem, which claims priority benefit of U.S. Provisional Application No. 61/314,589, entitled METHODS AND APPARATUS FOR PROVIDING HYPERVISOR LEVEL DATA SERVICES FOR SERVER VIRTUALIZATION, filed on Mar. 17, 2010 by inventor Ziv Kedem.