1. Technical Field
This application relates to the field of storage devices, and more particularly to the field of migrating virtual machines between storage devices.
2. Description of Related Art
It is desirable to be able to move Virtual Machines (VMs) from one data site to another for a number of reasons, including, for example, disaster avoidance, testing, load balancing, and following the sun/moon. The VMs may be organized in multi-volume repositories, where each volume may include multiple VMs. One way to do this would be to freeze all VMs on a particular volume at a first data site and then copy the volume from the first site to a second site using any appropriate technique, such as synchronous or asynchronous volume replication. Once the entire volume has been copied, the VMs may then be restarted at the second data site.
However, such a straightforward solution, while relatively simple to implement, may be unsatisfactory for a number of reasons. For instance, it may be desirable to migrate only some of the VMs of a volume rather than all of them (i.e., to provide granularity), which the approach above does not allow. In addition, it is desirable to make the operation as seamless as possible: while it may be acceptable to freeze and restart a particular VM for a short period of time, it may not be acceptable to do so for as long as it takes to replicate an entire volume. Furthermore, it is desirable that the VMs be protected at all times so that data is not lost (or at least remains consistent) even if one of the sites stops working. It is also desirable that, if a link and/or a site stops working during a transition, the system does not crash.
Accordingly, it is desirable to provide a system that can migrate VMs in a way that addresses the issues set forth above.
According to the system described herein, migrating an active VM from a first data center to a second data center having a passive counterpart of the active VM includes freezing the active VM at the first data center, creating an active VM at the second data center that corresponds to the passive counterpart, and restarting the active VM at the second data center. Migrating an active VM from a first data center to a second data center may also include waiting for the passive counterpart to be synchronized with the active VM at the first data center prior to creating the active VM at the second data center. Migrating an active VM from a first data center to a second data center may also include creating on the first data center a passive counterpart to the active VM on the second data center. Migrating an active VM from a first data center to a second data center may also include waiting for the passive counterpart on the first data center to be synchronized with the active VM on the second data center prior to restarting the active VM on the second data center. Creating the active VM at the second data center may include providing a snapshot of a volume containing the passive VM. Migrating an active VM from a first data center to a second data center may also include, following providing the snapshot, copying data from the snapshot at the second data center to the active VM at the second data center. Migrating an active VM from a first data center to a second data center may also include converting the active VM at the first data center to a passive VM at the first data center. The passive VM at the first data center may be a passive counterpart to the active VM at the second data center.
According further to the system described herein, computer software, provided in a non-transitory computer-readable medium, migrates an active VM from a first data center to a second data center having a passive counterpart of the active VM. The software includes executable code that creates an active VM at the second data center, following the active VM at the first data center being frozen, where the active VM at the second data center corresponds to the passive counterpart, and executable code that restarts the active VM at the second data center. The software may also include executable code that waits for the passive counterpart to be synchronized with the active VM at the first data center prior to creating the active VM at the second data center. The software may also include executable code that creates on the first data center a passive counterpart to the active VM on the second data center. The software may also include executable code that waits for the passive counterpart on the first data center to be synchronized with the active VM on the second data center prior to restarting the active VM on the second data center. Executable code that creates the active VM at the second data center may provide a snapshot of a volume containing the passive VM. The software may also include executable code that copies data from the snapshot at the second data center to the active VM at the second data center following providing the snapshot. The software may also include executable code that converts the active VM at the first data center to a passive VM at the first data center. The passive VM at the first data center may be a passive counterpart to the active VM at the second data center.
Referring to the figure, the data centers 32, 34 may contain any number of processors and storage devices that are configured to provide the functionality described herein. In an embodiment herein, the storage devices may be Symmetrix storage arrays provided by EMC Corporation of Hopkinton, Mass. Other types/models of storage devices may be used in addition to, or instead of, the Symmetrix storage arrays. Different types of storage devices and different types of processing devices may be used. The data centers 32, 34 may be configured similarly to each other or may be configured differently.
The network 36 may be any network or similar mechanism allowing data communication between the data centers 32, 34. In an embodiment herein, the network 36 may be the public Internet, in which case each of the data centers 32, 34 are coupled thereto using any appropriate mechanism. In other embodiments, the network 36 may represent a direct connection (e.g., a physical connection) between the data centers 32, 34.
Referring to the figure, the active VMs 42-44 may be provided in an active repository 62, which includes one or more logical storage volumes (not shown in the figure).
Referring to the figure, the volumes of the active and passive repositories may be provided as mirrored pairs, with a source (R1) volume at one data center remotely mirrored to a corresponding target (R2) volume at the other data center. In an embodiment herein, the active VMs may be provided on the R1 volumes while the passive VMs are provided on the R2 volumes. Thus, an already-existing mirroring/copying mechanism may be used to maintain the passive VMs rather than needing to create an entirely new mechanism for this. As the active VMs are modified during running, the passive VMs are continuously updated by, for example, the SRDF mechanism that mirrors data between the data centers 32, 34.
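The R1/R2 arrangement described above may be sketched as follows. This is an illustrative model only; the `Volume` class and `mirror` function are hypothetical stand-ins and do not correspond to an actual SRDF interface.

```python
from dataclasses import dataclass, field

@dataclass
class Volume:
    name: str          # e.g., an R1 (writeable, source) or R2 (target) volume
    writeable: bool
    vms: dict = field(default_factory=dict)  # VM name -> image/state

def mirror(r1: Volume, r2: Volume) -> None:
    """Propagate changes from the R1 volume to its R2 counterpart, as the
    SRDF-style replication described above would do continuously."""
    r2.vms = dict(r1.vms)

# Active VMs live on the writeable R1 volume at one data center; their
# passive counterparts are maintained on the R2 volume at the other.
r1 = Volume("R1-at-datacenter-32", writeable=True, vms={"vm42": "image-v1"})
r2 = Volume("R2-at-datacenter-34", writeable=False)
mirror(r1, r2)

r1.vms["vm42"] = "image-v2"   # the active VM runs and is modified...
mirror(r1, r2)                # ...and the passive copy is kept up to date
```

As the sketch suggests, no VM-specific copying mechanism is needed: maintaining the passive VMs falls out of the volume-level mirroring already in place.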
It is desirable to be able to efficiently migrate one or more VMs from a source one of the data centers 32, 34 to a target one of the data centers 32, 34. For example, initially, one or more particular VMs may be active at the data center 32 and passive at the data center 34. It is desirable to perform a migration so that the one or more particular VMs end up active at the data center 34 and passive at the data center 32. This is described in detail below.
Referring to the flow chart 100, processing begins at a step 102 where the VM being migrated is frozen (suspended). In some embodiments, the VM being migrated may be shut down completely. Following the step 102 is a step 104 where the system waits for the corresponding passive VM to be synchronized with the active VM using any appropriate data synchronization technique (e.g., SRDF/A). As discussed elsewhere herein, there may be a delay from when data changes on the active VM to when the change is reflected on the corresponding passive VM. Note, however, that freezing the VM at the step 102 prevents the active VM from changing further so that, irrespective of the delay, it is possible for the active and passive VMs to have the same data after the step 104. Waiting at the step 104 allows the passive VM at the target data center to be synchronized with the active VM at the source data center.
Following the step 104 is a step 106 where the passive VM at the target data center is moved/copied to a writeable volume by, for example, copying data for the passive VM from the R2 device at the target data center to a writeable volume at the target data center. However, as described in more detail elsewhere herein, there are other ways to instantiate the VM on a writeable volume without necessarily copying all of the data. The step 106 represents creating an active VM at the target data center that corresponds to the passive VM at the target data center.
Following the step 106 is a step 108 where a new corresponding passive VM is set up (e.g., on the old source). This may be performed, for example, in instances where the active VM is provided at an R1 device at the target data center. Following the step 108 is a step 112 where the system waits for all of the data to be copied to the new passive VM (i.e., for the active and passive VMs to be synchronized). Waiting at the step 112 provides failover for the new active VM. Following the step 112 is a step 114 where the active VM is removed from the R1 volume of the old source. Following the step 114 is a step 116 where the VM is restarted at the target. Following the step 116, processing is complete.
In some embodiments, it is possible to remove the VM at the R1 of the old source and restart the VM at the target prior to the VM image being completely copied to the passive VM at the source data center. This is illustrated by an alternative path 118 where the step 112 is skipped. Of course, doing this means that the active VM may be running at the target data center prior to there being a passive VM to be used in case of a failover.
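The flow of the steps 102-116, including the alternative path 118, may be sketched as follows. This is a simplified, hypothetical model; the `VM` class and `migrate` function are illustrative and omit the actual storage-array operations.

```python
class VM:
    """Minimal stand-in for a VM image residing at a data center."""
    def __init__(self, site, data, active):
        self.site, self.data, self.active = site, data, active
        self.frozen = False
        self.removed = False
        self.running = active

    def freeze(self):               # step 102: suspend the VM
        self.frozen = True
        self.running = False

    def sync_from(self, other):     # steps 104/112: wait for synchronization
        self.data = other.data

def migrate(active_src, passive_tgt, wait_for_protection=True):
    active_src.freeze()                       # step 102
    passive_tgt.sync_from(active_src)         # step 104: passive catches up
    active_tgt = VM(passive_tgt.site, passive_tgt.data, active=True)  # step 106
    passive_src = VM(active_src.site, None, active=False)             # step 108
    if wait_for_protection:                   # alternative path 118 skips this
        passive_src.sync_from(active_tgt)     # step 112: failover protection
    active_src.removed = True                 # step 114: remove VM at old source
    active_tgt.running = True                 # step 116: restart at the target
    return active_tgt, passive_src

src = VM("site-A", "vm-image-v7", active=True)
tgt = VM("site-B", "vm-image-v7", active=False)
new_active, new_passive = migrate(src, tgt)
```

Passing `wait_for_protection=False` corresponds to the path 118: the VM restarts at the target sooner, at the cost of running briefly without a synchronized passive counterpart for failover.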
In some cases, it may be desirable to create a writeable version of the active VM at the target more efficiently than by copying all of the data for the passive VM from the R2 volume to a writeable volume, such as an R1 volume. One way to do this is to use a snapshot of the R2 image. This is discussed in more detail below.
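The snapshot approach may be illustrated with a simple copy-on-write model. This is a hypothetical sketch; an actual snapshot facility operates at the storage-array level rather than on Python dictionaries.

```python
class Snapshot:
    """Writeable view over a read-only base image: reads fall through to the
    base until a block is written, so the VM can start running without a
    full up-front copy of its data."""
    def __init__(self, base_blocks):
        self.base = base_blocks        # the R2 (passive) image, left read-only
        self.delta = {}                # only blocks written since the snapshot

    def read(self, block):
        return self.delta.get(block, self.base[block])

    def write(self, block, value):
        self.delta[block] = value      # copy-on-write: the base is untouched

r2_image = {0: "boot", 1: "data"}     # passive VM image on the R2 volume
vm_disk = Snapshot(r2_image)          # writeable version, created instantly
vm_disk.write(1, "data-v2")           # the active VM modifies a block
```

The active VM can be restarted as soon as the snapshot exists; data may then be copied from the snapshot to the active VM's own volume in the background.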
Referring to the figures, a snapshot of the R2 image may be used to provide a writeable version of the active VM at the target data center without first copying all of the data from the R2 volume.
The steps 108, 112, 114 of the flowchart 100, discussed above, represent copying data for the VM from the target center back to the source data center to establish the passive VM. However, in some cases, it may be desirable to use the data for the VM that is already present at the source data center rather than copy the same data from the target data center back to the source data center.
Referring to the flow chart, at a step 152, the formerly active VM at the source data center is converted to a passive VM that is a counterpart to the active VM now at the target data center. Following the step 152 is a step 154 where any differences between the formerly active VM at the source data center and the now passive VM at the source data center are copied from the target data center. Note that, in some instances, there may be slight differences between the active VM as it existed at the source data center and the active VM as it now exists at the target data center. For example, local IP addresses may change and/or the identification of local resources (e.g., printers) may change. The target data center may handle those differences in connection with transitioning the VM from a passive VM to an active VM. At the step 154, any differences are transmitted from the target data center back to the source data center so that the passive VM contains the same data as the active VM. Following the step 154, processing is complete.
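The differences-only transfer of the step 154 may be sketched as follows. The `diff` and `establish_passive` helpers are hypothetical; the point is that only the blocks that changed at the target need to cross the link back to the source.

```python
def diff(target_image, source_image):
    """Blocks that changed at the target (e.g., local IP addresses or local
    resource identifiers) relative to the copy already at the source."""
    return {k: v for k, v in target_image.items() if source_image.get(k) != v}

def establish_passive(source_image, target_image):
    """Step 154: bring the image left at the source up to date by sending
    only the deltas from the target, then use it as the passive VM."""
    changes = diff(target_image, source_image)
    source_image.update(changes)
    return len(changes)   # how few blocks actually had to be transmitted

source = {"os": "img", "net": "10.0.0.5"}   # data already at the source
target = {"os": "img", "net": "10.1.0.7"}   # local IP changed at the target
sent = establish_passive(source, target)
```

Since most of the VM image is unchanged, this avoids copying the same data back across the link that was just used to migrate it.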
Note that additional storage space may be used as VMs are migrated from one of the data centers 32, 34 to the other one of the data centers 32, 34. For example, during migration of an active VM from the data center 32 to the data center 34, there may be two versions of the VM (active and passive) on the data center 32 and possibly also on the data center 34. In an embodiment herein, the data centers 32, 34 may be provisioned with additional available storage (e.g., an additional ten percent of total capacity) for migrating the VMs. In addition, one or more of the data centers 32, 34 may use thin provisioning with a mechanism for reclaiming released storage space. Such a mechanism is disclosed in U.S. patent application Ser. No. 12/924,388, filed on Sep. 27, 2010 titled: “STORAGE SPACE RECLAIMING FOR VIRTUAL PROVISIONING”, which is incorporated by reference herein and is assigned to the assignee of the present application. Users who wish to move (rather than copy) the VMs from one data center to another may delete the source VMs once the copy operation is completed, thus allowing the space to be reclaimed. In such a case, no additional storage is used irrespective of the number of VMs that are moved.
Note that, in some instances, the order of steps in the flowcharts may be modified, where appropriate. The system described herein may be implemented using the hardware described herein, variations thereof, or any other appropriate hardware capable of providing the functionality described herein. Thus, for example, one or more storage devices having components as described herein may, alone or in combination with other devices, provide an appropriate platform that executes any of the steps described herein. The system described herein includes computer software, in a non-transitory computer readable medium, that executes any of the steps described herein.
While the invention has been disclosed in connection with various embodiments, modifications thereto will be readily apparent to those skilled in the art. Accordingly, the spirit and scope of the invention are set forth in the following claims.
Number | Name | Date | Kind |
---|---|---|---|
6625751 | Starovic et al. | Sep 2003 | B1 |
7313637 | Tanaka et al. | Dec 2007 | B2 |
7340489 | Vishlitzky et al. | Mar 2008 | B2 |
7370164 | Nagarkar et al. | May 2008 | B1 |
7383405 | Vega et al. | Jun 2008 | B2 |
7484208 | Nelson | Jan 2009 | B1 |
7577722 | Khandekar et al. | Aug 2009 | B1 |
7603670 | van Rietschote | Oct 2009 | B1 |
7680919 | Nelson | Mar 2010 | B2 |
8135930 | Mattox et al. | Mar 2012 | B1 |
8234469 | Ranade | Jul 2012 | B2 |
8407518 | Nelson et al. | Mar 2013 | B2 |
8413145 | Chou et al. | Apr 2013 | B2 |
8549515 | Bobroff et al. | Oct 2013 | B2 |
8621274 | Forgette et al. | Dec 2013 | B1 |
8656388 | Chou et al. | Feb 2014 | B2 |
20040010787 | Traut et al. | Jan 2004 | A1 |
20070094659 | Singh et al. | Apr 2007 | A1 |
20080163239 | Sugumar et al. | Jul 2008 | A1 |
20080184225 | Fitzgerald et al. | Jul 2008 | A1 |
20080201711 | Amir Husain | Aug 2008 | A1 |
20090037680 | Colbert et al. | Feb 2009 | A1 |
20090113109 | Nelson et al. | Apr 2009 | A1 |
20090198817 | Sundaram et al. | Aug 2009 | A1 |
20100106885 | Gao et al. | Apr 2010 | A1 |
20100107158 | Chen et al. | Apr 2010 | A1 |
20100191845 | Ginzton | Jul 2010 | A1 |
20100250824 | Belay | Sep 2010 | A1 |
20110072430 | Mani | Mar 2011 | A1 |
20110208908 | Chou et al. | Aug 2011 | A1 |
20110289345 | Agesen et al. | Nov 2011 | A1 |
20110314155 | Narayanaswamy et al. | Dec 2011 | A1 |
20110314470 | Elyashev et al. | Dec 2011 | A1 |
20120084445 | Brock et al. | Apr 2012 | A1 |
20120084782 | Chou et al. | Apr 2012 | A1 |
20120096458 | Huang et al. | Apr 2012 | A1 |
20120102135 | Srinivasan et al. | Apr 2012 | A1 |
20120110574 | Kumar | May 2012 | A1 |
20140201574 | Manchek et al. | Jul 2014 | A1 |
Entry |
---|
“VMware Virtual Machine File System: Technical Overview and Best Practices,” VMware Technical White Paper, Version 1.0, WP-022-PRD-01-01, 2007, 19 pp. |
“Implementing Virtual Provisioning on EMC Symmetrix DMX with VMware Virtual Infrastructure,” VMWare, EMC Corporation, White paper, 2008, 30 pp. |
U.S. Appl. No. 12/924,388, filed Sep. 27, 2010, Balcha et al. |