The present invention relates to the field of gang migration, i.e. the simultaneous live migration of multiple virtual machines that run on multiple physical machines in a cluster.
Live migration of a virtual machine (VM) refers to the transfer of a running VM over the network from one physical machine to another. Within a local area network (LAN), live VM migration mainly involves the transfer of the VM's CPU and memory state, assuming that the VM uses network attached storage, which does not require migration. Key metrics for the performance of VM migration include the total migration time, downtime, application degradation, and network traffic overhead.
The present invention relates to gang migration [8], i.e. the simultaneous live migration of multiple VMs that run on multiple physical machines in a cluster. The cluster, for example, may be assumed to have a high-bandwidth low-delay interconnect such as Gigabit Ethernet [10], 10GigE [9], or Infiniband [15], or the like. Datacenter administrators may need to perform gang migration to handle resource re-allocation for peak workloads, imminent failures, cluster maintenance, or powering down of several physical machines to save energy.
The present technology specifically focuses on reducing the network traffic overhead due to gang migration. Users and service providers of a virtualized infrastructure have many reasons to perform live VM migration such as routine maintenance, load balancing, scaling to meet performance demands during peak hours, and consolidation to save energy during non-peak hours by using fewer servers. Since gang migration can transfer hundreds of Gigabytes of data over the network, it can overload the core links and switches of the datacenter network. Gang migration can also adversely affect the performance at the network edges where the migration traffic competes with the bandwidth requirements of applications within the VMs. Reducing the network traffic overhead can also indirectly reduce the total time for migrating multiple VMs and the application degradation, depending upon how the traffic reduction is achieved.
The development of new techniques to improve the performance, robustness, and security of live migration of virtual machines (VMs) [100] has become critical, as live migration has emerged as one of the key building blocks of modern cloud infrastructures due to cost savings, elasticity, and ease of administration. Virtualization technologies [118, 58, 79] have been rapidly adopted in large Infrastructure-as-a-Service (IaaS) platforms [46, 107, 111, 112] that offer cloud computing services on a utility-like model. Live migration of VMs [116, 5, 13] is a key feature and selling point for virtualization technologies.
Live VM migration mechanisms must move active VMs as quickly as possible and with minimal impact on the applications and the cluster infrastructure. These requirements translate into reducing the total migration time, downtime, application degradation, and cluster resource overheads such as network traffic, computation, memory, and storage overheads. Even though a large body of work in both industry and academia has advanced these goals, several challenges related to performance, robustness, and security remain to be addressed.
First, while the migration of a single VM has been well studied [74, 5, 18, 58, 129], the simultaneous migration of multiple VMs has not been thoroughly investigated. Second, the failure of participating nodes during live VM migration, and the resulting loss of VM state, has not been investigated, even though high-availability solutions [130, 108] exist for steady-state VM operation.
Prior efforts to reduce the data transmitted during VM migration have focused on the live and non-live migration of a single VM [74, 5, 13, 133, 58, 129, 134, 95, 81, 123, 122, 135, 92, 94], live migration of multiple VMs running on the same physical machine [8], live migration of a virtual cluster across a wide-area network (WAN) [91], or non-live migration of multiple VM images across a WAN [57]. Numerous cluster job schedulers exist such as [136, 107, 137, 138, 139, 63, 109], among many others, as well as virtual machine management systems, such as VMWare's DRS [117], XenEnterprise [140], Usher [68], Virtual Machine Management Pack [141], and CoD [142] that let administrators control jobs/VM placement based on cluster load or specific policies such as affinity or anti-affinity rules.
The present technology seeks to focus on reducing the network traffic overhead due to gang migration. The approach to reducing this overhead exploits the following observation. See, Deshpande, Umesh, et al. “Gang Migration of Virtual Machines using Cluster-wide Deduplication.” Cluster, Cloud and Grid Computing (CCGrid), 2013 13th IEEE/ACM International Symposium on. IEEE, 2013. (Applicant's prior work), expressly incorporated herein by reference.
VMs within a cluster often have similar memory content, given that they may execute the same operating system, libraries, and applications. Hence, a significant number of their memory pages may be identical [26], [30]. One can reduce the network overhead of gang migration using deduplication, i.e. by avoiding the transmission of duplicate copies of identical pages. We present an approach called gang migration using global (cluster-wide) deduplication (GMGD). During normal execution, a duplicate tracking mechanism keeps track of identical pages across different VMs in the cluster. During gang migration, a distributed coordination mechanism suppresses the retransmission of identical pages over the core links. Specifically, only one copy of each identical page is transferred to a target rack (i.e., the rack where a recipient physical machine for a VM resides). Thereupon, the machines within each target rack coordinate the exchange of necessary pages. In contrast to GMGD, gang migration using local deduplication (GMLD) [8] suppresses the retransmission of identical pages from among VMs within a single host.
The present technology therefore seeks to identify and track identical memory pages across VMs running on different physical machines in a cluster, including non-migrating VMs running on the target machines. These identical pages are deduplicated during gang migration, while keeping the coordination overhead low.
A prototype implementation of GMGD was created on the QEMU/KVM [18] platform, and evaluated on a 30-node cluster testbed having three switches, 10GigE core links and 1 Gbps edge links. GMGD was compared against two techniques—the QEMU/KVM's default live migration technique, called online compression (OC), and GMLD.
Prior efforts to reduce the data transmitted during VM migration have focused on live migration of a single VM [5], [20], [13], [16], live migration of multiple VMs running on the same physical machine (GMLD) [8], live migration of a virtual cluster across a wide-area network (WAN) [22], or non-live migration of multiple VM images across a WAN [17].
Compared to GMLD, GMGD faces the additional challenge of ensuring that the cost of global deduplication does not exceed the benefit of network traffic reduction during live migration. In contrast to migration over a WAN, which has high-bandwidth high-delay links, a datacenter LAN has high-bandwidth low-delay links. This difference is important because hash computations, which are used to identify and deduplicate identical memory pages, are CPU-intensive operations. When migrating over a LAN, hash computations become a serious bottleneck if performed online during migration, whereas over a WAN, the large round-trip latency can mask the online hash computation overhead.
Two lines of research are related to the present technologies—content deduplication among VMs and optimization of VM migration. Deduplication has been used to reduce the memory footprint of VMs in [3], [26], [19], [1], [29] and [11]. These techniques use deduplication to reduce memory consumption either within a single VM or between multiple co-located VMs. In contrast, the present technology uses cluster-wide deduplication across multiple physical machines to reduce the network traffic overhead when simultaneously migrating multiple VMs.
Non-live migration of a single VM can be sped up by using content hashing to detect blocks within the VM image that are already present at the destination [23]. VMFlock [17] speeds up the non-live migration of a group of VM images over a high-bandwidth high-delay wide-area network by deduplicating blocks across the VM images. In contrast, the present technology focuses on reducing the network performance impact of the live and simultaneous migration of the memories of multiple VMs within a high-bandwidth low-delay datacenter network. Cloudnet [28] optimizes the live migration of a single VM over a wide-area network. It reduces the number of pre-copy iterations by starting the downtime based on the page dirtying rate and the page transfer rate. [31] and [28] further use page-level deduplication along with the transfer of differences between dirtied and original pages, eliminating the need to retransmit the entire dirtied page. [16] uses an adaptive page compression technique to optimize the live migration of a single VM. Post-copy [13] transfers every page to the destination only once, as opposed to the iterative pre-copy [20], [5], which transfers dirtied pages multiple times. [14] employs low-overhead RDMA over Infiniband to speed up the transfer of a single VM. [21] excludes the memory pages of processes communicating over the network from being transferred during the initial rounds of migration, thus limiting the total migration time. [30] shows the opportunity for, and feasibility of, exploiting large amounts of content sharing with certain high performance computing benchmarks.
In the context of live migration of multiple VMs, prior work of the inventors on GMLD [8] deduplicates the transmission of identical memory content among VMs co-located within a single host. It also exploits sub-page level deduplication, page similarity, and delta difference for dirtied pages, all of which can be integrated into GMGD. Shrinker [22] migrates virtual clusters over high-delay links of WAN. It uses an online hashing mechanism in which hash computation for identifying duplicate pages (a CPU-intensive operation) is performed during the migration. The large round-trip latency of the WAN link masks the hash computation overhead during migration. A preferred embodiment employs offline hashing, rather than online hashing, because it was found that online hashing is impractical over low-delay links such as those in a Gigabit Ethernet LAN. In addition, issues such as desynchronizing page transfers, downtime synchronization, and target-to-target transfers need special consideration in a low-delay network. Further, when migrating a VM between datacenters over WAN, the internal topology of the datacenters may not be relevant. However, when migrating within a datacenter (as with GMGD), the datacenter switching topology and rack-level placement of nodes play important roles in reducing the traffic on core links. Preliminary results on this topic were published in a workshop paper [7] that focused upon the migration of multiple VMs between two racks.
The present technology therefore presents the comprehensive design, implementation, and evaluation of GMGD for a general cluster topology and also includes additional optimizations such as better downtime synchronization, improved target-to-target transfer, greater concurrency within the deduplication servers and per-node controllers, and more in-depth evaluations on a larger 30-node testbed.
In order to improve the performance, robustness, and security of VM migration beyond their current levels, one cannot simply treat each VM in isolation. Rather, the relationships between multiple VMs as well as their interaction with cluster-wide resources must be taken into account.
Simultaneous live migration of multiple VMs (gang migration) is a resource-intensive operation that can adversely impact the entire cluster. Distributed deduplication may be used to reduce the network traffic overhead of migration and the total migration time on the core links of the datacenter LAN.
A distributed duplicate tracking phase identifies and tracks identical memory content across VMs running on the same or different physical machines in a cluster, including non-migrating VMs running on the target machines. A distributed indexing mechanism computes content hashes on the VMs' memory content on different machines and allows individual nodes to efficiently query and locate identical pages. A distributed hash table or a centralized indexing server may be provided, each having relative merits and drawbacks: the former avoids a single point of bottleneck or failure, whereas the latter simplifies indexing and lookup at runtime. Distributed deduplication during the migration phase may also be provided, i.e., avoiding, during the simultaneous live migration of multiple VMs, the re-transmission of the identical memory content identified in the first step. The goal is to reduce the network traffic generated by the migration of multiple VMs by eliminating the re-transmission of identical pages from different VMs. Note that the deduplication operation itself introduces control traffic to identify which identical pages have already been transferred from the source to the target racks. One of the key challenges is to keep this control traffic overhead low, in terms of both the additional bandwidth and the latency introduced by synchronization.
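For illustration only, the following Python sketch shows a minimal centralized-indexing variant of the duplicate tracking described above. The DedupIndex class, its method names, and the 4 KB page size are assumptions chosen for this example, not elements of the claimed implementation.

```python
# Hypothetical sketch of a centralized index mapping 160-bit SHA1 page
# digests to the hosts holding that content; names are illustrative only.
import hashlib
from collections import defaultdict

PAGE_SIZE = 4096   # assumed page size

class DedupIndex:
    def __init__(self):
        self.locations = defaultdict(set)           # digest -> {host, ...}

    def register(self, host, page_bytes):
        digest = hashlib.sha1(page_bytes).digest()  # 20-byte content hash
        self.locations[digest].add(host)
        return digest

    def lookup(self, digest):
        # Hosts already holding an identical page, or an empty set.
        return self.locations.get(digest, set())

index = DedupIndex()
d = index.register("hostA", b"\x00" * PAGE_SIZE)
assert index.lookup(d) == {"hostA"}
```

A distributed-hash-table variant would shard the same digest-to-hosts mapping across nodes rather than holding it at a single server, avoiding the single point of bottleneck or failure noted above.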
An important consideration in live VM migration is the robustness of the migration mechanism itself. Specifically, either the source or destination node can fail during migration. The key concern is whether the VM itself can be recovered after a failure of the source/destination nodes or any other component participating in the migration. Existing research has focused on high-availability solutions that focus on providing a hot-standby copy of a VM in execution. For instance, solutions such as [130, 108] perform high-frequency incremental checkpointing of a VM over the network using a technique similar to iterative pre-copy migration. However, the problem of recovering a VM after a failure during live migration has not been investigated. This problem is important because a VM is particularly vulnerable to failure during live migration. VM migration may last anywhere from a few seconds to several minutes, depending on a number of factors such as VM size and load on the cluster. During this time, a VM's state at the source and the destination nodes may be inconsistent, its state may be distributed across multiple nodes, and the software stack of a VM, including its virtual disk contents, may be in different stages of migration.
It is therefore an object to provide a system and method of tracking duplication of memory content in a plurality of servers, comprising: computing a hash value for each of a plurality of memory pages or sub-pages in each server; communicating the hash values to a deduplication server process executing on a server in the same rack; communicating from each respective deduplication server process of multiple racks to the respective deduplication server processes of other racks; and comparing the hash values at a deduplication server process to determine duplication of the memory pages or sub-pages. The plurality of memory pages or sub-pages may comprise a plurality of sub-pages each having a predetermined size.
It is a further object to provide a method of tracking duplication of memory content in a plurality of servers, each server having a memory pool comprising a plurality of memory pages and together residing in a common rack, comprising: computing a hash value for each of the plurality of memory pages or sub-pages in each server; communicating the hash values to a deduplication server process executing on a server in the common rack; receiving communications from respective deduplication server processes of multiple racks comprising respective hash values, to the deduplication server process executing in the server of the common rack; and comparing the respective hash values with the deduplication server process executing on the server in the common rack to determine duplication of the memory pages or sub-pages between the plurality of servers in the common rack and the multiple racks.
It is also an object to provide a system and method for gang migration of a plurality of servers to a server rack having a network link external to the server rack and an internal data distribution system for communicating within the server rack, comprising: determining the content redundancy in the memory across a plurality of servers to be gang migrated; initiating a gang migration, wherein only a single copy of each unique memory page is transferred to the server rack during the gang migration, with a reference to the unique memory page for servers that require, but do not receive, a copy of the unique memory page; and after receipt of a unique memory page within the server rack, communicating the unique memory page to each server that requires but did not receive the copy of the unique memory page.
It is a still further object to provide a method for transfer of information to a plurality of servers in a server rack, comprising: determining the content redundancy in the memory across the plurality of servers; transferring a copy of each unique memory page or sub-page to the server rack; determining which of the plurality of servers in the server rack require the unique memory page or sub-page; and duplicating the unique memory page or sub-page within the server rack for each server that requires, but did not receive, the copy of the unique memory page or sub-page.
A single copy of each unique memory page or sub-page may be transferred to the server rack.
The copy of a respective unique memory page may be transferred to a respective server in the server rack, and the respective server may execute a process to copy the respective unique memory page for other servers within the server rack that require the respective unique memory page.
Each respective unique memory page may be associated with a hash that has a low probability of collision with hashes of distinct memory pages, and occupies less storage than the respective unique memory page itself, such that a respective unique memory page may be reliably identified by a correspondence of a hash of the respective unique memory page with an entry in a hash table.
The plurality of servers may be involved in a gang migration of a plurality of servers not in the server rack to the plurality of servers in the server rack. The live gang migration may comprise a simultaneous or concurrent migration of a plurality of live servers not in the rack whose live functioning may be assumed by the plurality of servers in the server rack, each live server having at least an associated central processing unit state and a memory state which may be transferred to a respective server in the server rack. The plurality of servers may host a plurality of virtual machines, each virtual machine having an associated memory space comprising memory pages. At least one virtual machine may use network attached storage.
The server rack may communicate with the plurality of servers not in the rack through a local area network.
The plurality of servers may be organized in a cluster, running a plurality of virtual machines, which communicate with each other using a communication medium selected from the group consisting of Gigabit Ethernet, 10GigE, and Infiniband.
The plurality of servers may implement a plurality of virtual machines, and the determination of the content redundancy in the memory across the plurality of servers may comprise determining, for each virtual machine, a hash for each memory page or sub-page used by the respective virtual machine.
The plurality of servers in the server rack may implement a plurality of virtual machines before the transferring, and suppress transmission of memory pages or sub-pages already available in the server rack during a gang migration.
The transferring may comprise selectively suppressing a transfer of memory pages or sub-pages already stored in the rack by a process comprising: computing in real time hashes of the memory pages or sub-pages in the rack; storing the hashes in a hash table; receiving a hash representing a memory page or sub-page of a virtual machine to be migrated to the server rack; comparing the received hash to the hashes in the hash table; if the received hash does not correspond to a hash in the hash table, adding the received hash to the hash table and transferring the copy of the memory page or sub-page of the virtual machine to be migrated to the server rack; and if the received hash corresponds to a hash in the hash table, duplicating within the server rack the unique memory page or sub-page associated with the entry in the hash table and suppressing the transfer of the copy of the memory page or sub-page of the virtual machine to be migrated to the server rack. (An illustrative sketch of this selective suppression follows this summary of objects.)
The transferring may be prioritized with respect to a memory page or sub-page dirtying rate.
The transferring may comprise a delta difference for dirtied memory pages or sub-pages.
The determination of the content redundancy in the memory across the plurality of servers may comprise a distributed indexing mechanism which computes content hashes on a plurality of respective virtual machine's memory content, and responds to a query with a location of identical memory content.
The distributed indexing mechanism may comprise a distributed hash table.
The distributed indexing mechanism may comprise a centralized indexing server.
A distributed deduplication process may be employed.
Each memory page or sub-page may have a unique identifier comprising a respective identification of an associated virtual machine, an identification of a target server in the server rack, a page or sub-page offset, and a content hash.
The method may further comprise maintaining a copy of a respective virtual machine outside the server rack until at least a live migration of the virtual machine is completed.
The determination of which of the plurality of servers in the server rack require the unique memory page or sub-page comprises determining an SHA1 hash of each memory page, and storing the hash in a hash table along with a list of duplicate pages.
The information for transfer may be initially stored in at least one source server rack having a plurality of servers, wherein each source server rack comprises a deduplication server which determines a hash of each memory page in the respective source server rack, stores the hashes of the memory pages in a hash table along with a list of duplicate pages, and controls deduplication of the memory pages or sub-pages within the source server rack before the transfer to the server rack. The deduplication server at a source server rack may receive from the server rack a list of servers in the server rack that require a copy of a respective memory page or sub-page. A server in the server rack may receive from the server rack a list of servers in the server rack that require a copy of a respective memory page or sub-page, retrieve a copy of the respective memory page or sub-page, send a copy of the retrieved memory page or sub-page to each server in the server rack that requires a copy of the memory page or sub-page, and mark the page as having been sent in the hash table. The list of servers may be sorted in order of most recently changed memory page, and after a memory page or sub-page is marked as having been sent, references to earlier versions of that memory page or sub-page are removed from the list without overwriting the more recent copy of the memory page or sub-page.
The transfer of information may be part of a live gang migration of virtual machines, executing on at least one source rack, wherein a virtual machine executing on the at least one source rack remains operational until at least one version of each memory page of the virtual machine is transferred to the server rack, the virtual machine is then inactivated, subsequently changed versions of memory pages or sub-pages are transferred, and the corresponding virtual machine on the server rack is then activated.
The server rack may employ memory deduplication for the plurality of servers during operation.
Each of a plurality of virtual machines may transfer memory pages or sub-pages to the server rack in a desynchronized manner to avoid a race condition wherein different copies of the same page from different virtual machines are sent to the server rack concurrently.
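The selective suppression referenced above may be illustrated by the following minimal Python sketch, assuming an in-memory rack-wide table; admit_page, fetch_from_source, and copy_within_rack are hypothetical stand-ins for the actual transfer machinery, not the claimed implementation.

```python
# Illustrative receive-side test: a page whose hash is already in the rack's
# table is duplicated locally; otherwise one copy is fetched over the core link.
def admit_page(digest, rack_table, fetch_from_source, copy_within_rack):
    if digest in rack_table:
        copy_within_rack(rack_table[digest])   # duplicate: reuse rack-local copy
        return "suppressed"
    page = fetch_from_source()                 # unique: one copy crosses core links
    rack_table[digest] = page
    return "transferred"

table = {}
first = admit_page(b"h", table, lambda: b"P" * 4096, lambda p: None)
second = admit_page(b"h", table, lambda: b"P" * 4096, lambda p: None)
assert (first, second) == ("transferred", "suppressed")
```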
Architecture of GMGD
VMs are live migrated from one rack of machines to another rack using GMGD. For each VM being migrated, the target physical machine is provided as an input to GMGD. The target mapping of VMs could be provided by a VM placement algorithm that maximizes some optimization criterion, such as reducing inter-VM communication overhead [27] or maximizing the memory sharing potential [29]. GMGD does not address the VM placement problem, nor does it assume the lack or presence of any inter-VM dependencies.
As shown in
Migrating VMs from one rack to another increases the network traffic overhead on the core links. To reduce this overhead, GMGD employs a cluster-wide deduplication mechanism to identify and track identical pages across VMs running on different machines. As illustrated in
As shown in
In the prototype, GMGD was implemented within the default pre-copy mechanism in QEMU/KVM. The pre-copy [5] VM migration technique transfers the memory of a running VM over the network by performing iterative passes over its memory. Each successive round transfers the pages that were dirtied by the VM in the previous iteration. Such iterations are carried out until a very small number of dirty pages are left to be transferred. Given the throughput of the network, if the time required to transfer the remaining pages is smaller than a pre-determined threshold, the VM is paused and its CPU state and the remaining dirty pages are transferred. Upon completion of this final phase, the VM is resumed at the target. For GMGD each VM is migrated independently with the pre-copy migration technique. Although the GMGD prototype is based on pre-copy VM migration, nothing in its architecture prevents GMGD from working with other live VM migration techniques such as post-copy [13].
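For illustration, a simplified version of the pre-copy loop described above is sketched below in Python. MockVM, its dirtying model, and the downtime threshold are invented for the example and do not reflect QEMU/KVM internals.

```python
# Sketch of iterative pre-copy: send all pages, then repeatedly re-send pages
# dirtied in the previous round until the estimated remaining transfer time
# falls below a threshold, at which point the VM is paused for the final copy.
PAGE_SIZE = 4096
DOWNTIME_THRESHOLD_S = 0.03        # assumed threshold

class MockVM:
    def __init__(self, n_pages):
        self.n_pages = n_pages
        self._dirty = n_pages
    def dirtied_since_last_round(self):
        self._dirty //= 4          # pretend dirtying slows each round
        return set(range(self._dirty))

def precopy_migrate(vm, send_page, link_bps=1e9):
    to_send = set(range(vm.n_pages))            # round 1: every page
    while True:
        for p in to_send:
            send_page(p)
        to_send = vm.dirtied_since_last_round()
        est_s = len(to_send) * PAGE_SIZE * 8 / link_bps
        if est_s < DOWNTIME_THRESHOLD_S:        # few enough pages: stop the VM
            break
    for p in to_send:                           # downtime: final dirty pages
        send_page(p)

sent = []
precopy_migrate(MockVM(1 << 16), sent.append)
```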
Two phases of GMGD are now described, namely duplicate tracking and live migration.
A. Duplicate Tracking Phase
This phase is carried out during the normal execution of VMs at the source machines before the migration begins. Its purpose is to identify all duplicate memory content (presently at the page-level) across all VMs residing on different machines. We use content hashing to detect identical pages. The pages having the same content yield the same hash value. When the hashing is performed using a standard 160-bit SHA1 hash [12], the probability of collision is less than the probability of an error in memory or in a TCP connection [4].
In each machine, a per-node controller process coordinates the tracking of identical pages among all VMs in the machine. The per-node controller instructs a user-level QEMU/KVM process associated with each VM to scan the VM's memory image, perform content-based hashing, and record identical pages. Since each VM is constantly executing, some of the identical pages may be modified (dirtied) by the VM, either during the hashing or after its completion. To identify these dirtied pages, the controller uses the dirty logging mode of QEMU/KVM. In this mode, all VM pages are marked as read-only in the shadow page table maintained by the hypervisor. The first write attempt to any read-only page results in a trap into the hypervisor, which marks the faulted page as dirty in its dirty bitmap and allows the write access to proceed. The QEMU/KVM process uses a hypercall to extract the dirty bitmap from KVM to identify the modified pages.
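The tracking step may be sketched as follows, assuming the set of dirtied page numbers has already been extracted from the hypervisor's dirty bitmap; track_duplicates is a hypothetical name used only for this example.

```python
# Hash every page of a memory image, skipping pages flagged dirty, and group
# page numbers by digest so that identical pages can be recorded.
import hashlib

PAGE_SIZE = 4096

def track_duplicates(memory, dirty_pages):
    table = {}                                  # digest -> [page numbers]
    for page_no in range(len(memory) // PAGE_SIZE):
        if page_no in dirty_pages:
            continue                            # modified since hashing began
        off = page_no * PAGE_SIZE
        digest = hashlib.sha1(memory[off:off + PAGE_SIZE]).digest()
        table.setdefault(digest, []).append(page_no)
    return table

mem = (b"A" * PAGE_SIZE) * 3 + (b"B" * PAGE_SIZE)
dups = track_duplicates(mem, dirty_pages={1})
assert max(len(v) for v in dups.values()) == 2   # pages 0 and 2 match
```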
The per-rack deduplication servers maintain a hash table, which is populated rack-wide with the 160-bit hash values pre-computed by the per-node controllers. Each hash is also associated with a list of hosts in the rack containing the corresponding pages. Before migration, all deduplication servers exchange their hash values and host lists with the other deduplication servers.
B. Migration Phase
In the migration phase, all VMs are migrated in parallel to their destination machines. The pre-computed hashing information is used to perform the deduplication of the transferred pages at both the host and the rack levels. QEMU/KVM queries the deduplication server for its rack to acquire the status of each page. If the page has not already been transferred by another VM, then its status is changed to sent and the page is transferred to the target QEMU/KVM. For subsequent instances of the same page from any other VM migrating to the same rack, QEMU/KVM transfers only the page identifier. Deduplication servers also periodically exchange information about the pages marked as sent, which allows the VMs in one rack to avoid retransmission of pages that have already been sent by the VMs from another rack.
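A minimal sketch of this per-page decision follows, with the rack-wide status maintained by the deduplication server modeled as a shared dictionary; the names and message formats are illustrative assumptions.

```python
# First sender of a given content transfers the full page; later senders of
# the same content to the same rack transmit only the page identifier.
def send_one_page(digest, page, rack_status, wire):
    if rack_status.get(digest) != "sent":
        rack_status[digest] = "sent"       # first copy to the target rack
        wire.append(("PAGE", digest, page))
    else:
        wire.append(("IDENT", digest))     # duplicate: identifier only

wire, status = [], {}
send_one_page(b"h1", b"payload", status, wire)
send_one_page(b"h1", b"payload", status, wire)
assert [m[0] for m in wire] == ["PAGE", "IDENT"]
```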
C. Target-Side VM Deduplication
The racks used as targets for VM migration are often not empty. They may host VMs containing pages that are identical to the ones being migrated into the rack. Instead of transferring such pages from the source racks via the core links, they are forwarded within the target rack from the hosts running the VMs to the hosts receiving the migrating VMs. The deduplication server at the target rack monitors the pages within hosted VMs and synchronizes this information with other deduplication servers. Per-node controllers perform this forwarding of identical pages among hosts in the target rack.
D. Reliability
When a source host fails during migration, the reliability of GMGD is no worse than that of single-VM pre-copy, in that only the VMs running on the failed source host will be lost, whereas other VMs can continue migrating successfully. However, when a target host fails during migration, or if a subset of its pages is corrupted during migration, then GMGD has an additional point of potential failure arising from deduplication. Specifically, more VMs may suffer collateral damage using GMGD than using single-VM pre-copy. This is because each deduplicated page temporarily resides at an intermediate node in the target rack until it is pushed to all the VMs that need that identical page. If the intermediate node fails, then all the deduplicated pages it holds are lost and, consequently, all the VMs that need those pages will fail. Since each deduplicated page, by definition, is needed by multiple VMs, the magnitude of failure will be far greater than without deduplication. Three solutions are available for this problem. (a) Replication: host each deduplicated page at two (or more) distinct nodes in the target rack. Alternatively, to conserve memory, the deduplicated page could be asynchronously replicated to a network-attached storage server, if the server offers enough bandwidth to keep up. (b) Parity: maintain parity information for stripes of deduplicated pages, in much the same way that a RAID system computes parity across disk blocks on multiple disks. (c) Retransmission: the source hosts can resend copies of the lost pages when an intermediate host fails.
Implementation Details
A prototype of GMGD was implemented in the QEMU/KVM virtualization environment. The implementation is completely transparent to the users of the VMs. With QEMU/KVM, each VM is spawned as a process on a host machine. A part of the virtual address space of the QEMU/KVM process is exported to the VM as its physical memory.
A. Per-Node Controllers
Per-node controllers are responsible for managing the deduplication of outgoing and incoming VMs. We call the controller component managing the outgoing VMs the source side and the component managing the incoming VMs the target side. The controller sets up a shared memory region that is accessible only by other QEMU/KVM processes. The shared memory contains a hash table which is used for tracking identical pages. Note that the shared memory poses no security vulnerability because it is outside the physical memory region of the VM in the QEMU/KVM process' address space and is not accessible by the VM itself.
The source side of the per-node controller coordinates the local deduplication of memory among co-located VMs. Each QEMU/KVM process scans its VM's memory and calculates a 160-bit SHA1 hash for each page. These hash values are stored in the hash table, where they are compared against each other. A match of two hash values indicates the existence of two identical pages. Scanning is performed by a low priority thread to minimize interference with the VMs' execution.
The target side of the per-node controller receives incoming identical pages from other controllers in the rack. It also forwards the identical pages received on behalf of other machines in the rack to their respective controllers. Upon reception of an identical page, the controller copies the page into the shared memory region, so that it becomes available to incoming VMs. The shared memory region is freed once the migration is complete.
B. Deduplication Server
Deduplication servers are to per-node controllers what per-node controllers are to VMs. Each rack contains a deduplication server that tracks the status of identical pages among VMs that are migrating to the same target rack and the VMs already at the target rack. Deduplication servers maintain a content hash table to store this information. Upon reception of a 160-bit hash value from the controllers, the last 32-bits of the 160-bit hash are used to find a bucket in the hash table. In the bucket, the 160-bit hash entry is compared against the other entries present. If no matching entry is found, a new entry is created.
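The bucket lookup described above may be sketched as follows; the container layout and function names are assumptions made for this example.

```python
# The last 32 bits of the 160-bit digest select a bucket; entries within a
# bucket are then disambiguated by comparing the full digest.
import hashlib
from collections import defaultdict

buckets = defaultdict(dict)                     # bucket index -> {digest: entry}

def bucket_of(digest):
    return int.from_bytes(digest[-4:], "big")   # last 32 bits of the hash

def find_or_insert(digest, make_entry):
    bucket = buckets[bucket_of(digest)]
    if digest not in bucket:                    # full 160-bit comparison in bucket
        bucket[digest] = make_entry()
    return bucket[digest]

entry = find_or_insert(hashlib.sha1(b"page contents").digest(), dict)
```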
Each deduplication server can currently process up to 200,000 queries per second over a 1 Gbps link. This rate can potentially handle simultaneous VM migrations from up to 180 physical hosts. For context, common 19-inch racks can hold 44 servers of 1 U (1 rack unit) height [25]. A certain level of scalability is built into the deduplication server, by using multiple threads for query processing, fine-grained reader/writer locks, and batching queries from VMs to reduce the frequency of communication with the deduplication server. Finally, the deduplication server does not need to be a separate server per rack. It can potentially run as a background process within one of the machines in the rack that also runs VMs provided that a few spare CPU cores are available for processing during migration.
Dirty pages and unique pages that have no match with other VMs are transferred in their entirety to the destination.
C. Operations at the Source Machine
Upon initiating simultaneous migration of VMs, the controllers instruct the individual QEMU/KVM processes to begin the migration. From this point onward, the QEMU/KVM processes communicate directly with the deduplication servers, without any involvement from the controllers. After commencing the migration, each QEMU/KVM process starts transmitting every page of its respective VM. For each page, it checks in the local hash table whether the page has already been transferred. Each migration process periodically queries its deduplication server for the status of the next few pages it is about to transfer. The responses from the deduplication server are stored in the hash table, in order to be accessible to the other co-located VMs. If the QEMU/KVM process discovers that a page has not been transferred, then it transmits the entire page to its peer QEMU/KVM process at the target machine along with the page's unique identifier. QEMU/KVM at the source also retrieves from the deduplication server a list of other machines in the target rack that need an identical page. This list is also sent to the target machine's controller, which then retrieves the page and sends it to the machines in the list. Upon transfer, the page is marked as sent in the source controller's hash table.
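The periodic batched queries mentioned above may be sketched as follows; BATCH and query_server are hypothetical names, and the batch size is an assumed value.

```python
# Look ahead at the next digests to be sent and fetch any unknown statuses
# from the deduplication server in one batched request.
BATCH = 64                                 # assumed batch size

def prefetch_statuses(upcoming_digests, local_table, query_server):
    unknown = [d for d in upcoming_digests[:BATCH] if d not in local_table]
    if unknown:
        local_table.update(query_server(unknown))   # digest -> 'sent'/'unsent'

cache = {}
prefetch_statuses([b"a", b"b"], cache, lambda ds: {d: "unsent" for d in ds})
assert cache == {b"a": "unsent", b"b": "unsent"}
```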
The QEMU/KVM process periodically updates its deduplication server with the status of the sent pages. The deduplication server also periodically updates other deduplication servers with a list of identical pages marked as sent by hosts other than the source host. Handling of such pages, known as remote pages, is discussed below.
D. Operations at the Target Machine
On the target machine, each QEMU/KVM process allocates a memory region for its respective VM where incoming pages are copied. Upon reception of an identical page, the target QEMU/KVM process copies it into the VM's memory and inserts it into the target hash table according to its identifier.
If only an identifier is received, a page corresponding to the identifier is retrieved from the target hash table, and copied into the VM's memory. Unique and dirty pages are directly copied into the VM's memory space. They are not copied to the target shared memory.
E. Remote Pages
Remote pages are deduplicated pages that were transferred by hosts other than the source host. Identifiers of such pages are accompanied by a remote flag. Such pages become available to the waiting hosts in the target rack only after the carrying host forwards them. Therefore, instead of searching for such remote pages in the target hash table immediately upon reception of an identifier, the identifier and the address of the page are inserted into a per-host waiting list.
A per-QEMU/KVM process thread, called a remote thread, periodically traverses the list, and checks for each entry whether the page corresponding to the identifier has been added into the target shared memory. The received pages are copied into the memory of the respective VMs after removing the entry from the list. Upon reception of a more recent dirtied copy of the page whose entry happens to be on the waiting list, the corresponding entry is removed from the list to prevent the thread from over-writing the page with its stale copy.
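The remote thread's periodic scan may be sketched as follows, with the target shared memory region modeled as a dictionary; all names are illustrative assumptions.

```python
# Satisfy waiting-list entries whose pages have arrived in the shared region;
# entries whose carrier host has not yet forwarded the page remain queued.
def drain_waiting_list(waiting, shared_pages, vm_memory):
    still_waiting = []
    for ident, page_no in waiting:
        page = shared_pages.get(ident)
        if page is None:
            still_waiting.append((ident, page_no))  # not yet forwarded
        else:
            vm_memory[page_no] = page               # copy into the VM's memory
    return still_waiting

shared = {"id1": b"data"}
mem = {}
left = drain_waiting_list([("id1", 7), ("id2", 9)], shared, mem)
assert mem == {7: b"data"} and left == [("id2", 9)]
```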
The identical pages already present at the target rack before the migration are also treated as remote pages. The per-node controllers in the target rack forward such pages to the listed target hosts. This avoids their transmission over the core network links from the source racks. However, pages dirtied by VMs running in the target rack are not forwarded to other hosts; instead, they are requested by the corresponding hosts from their respective source hosts.
F. Coordinated Downtime Start
A VM cannot be resumed at the target unless all of its pages have been received. Therefore, initiating the VM's downtime before completing target-to-target transfers can increase its downtime. However, in the default QEMU/KVM migration technique, downtime is started at the source's discretion, and the decision is made solely on the basis of the number of pages remaining to be transferred and the perceived link bandwidth at the source. Therefore, to avoid overlap between the downtime and target-to-target transfers, a coordination mechanism is implemented between the source and the target of each QEMU/KVM process. The source QEMU/KVM process is prevented from starting the VM downtime, and is kept in the live pre-copy iteration mode, until all of the VM's pages have been retrieved at the target and copied into memory. Thereafter, the source is instructed by the target to initiate the downtime. This reduces downtime, as only the remaining dirty pages at the source are transferred during the downtime. While the source side waits for permission to initiate the downtime, the VM may dirty more pages. Hence, depending on the dirtying rate, the transfer of additional dirty pages may increase the amount of data transferred and hence the total migration time.
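The handshake may be sketched as follows; the Event-based signaling is an assumption for this example and stands in for the prototype's actual source-target messages.

```python
# The source keeps iterating live pre-copy rounds until the target signals
# that all pages (including target-to-target transfers) are in place.
import threading

target_ready = threading.Event()   # set by the target when all pages are in

def source_side(do_precopy_round, enter_downtime):
    while not target_ready.is_set():
        do_precopy_round()          # stay live; keep sending dirtied pages
    enter_downtime()                # pause the VM, send remaining dirty pages

rounds = []
target_ready.set()                  # pretend the target has finished
source_side(lambda: rounds.append("round"), lambda: rounds.append("down"))
assert rounds == ["down"]
```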
It is noted that, although not implemented in the prototype, memory pages for the rack (or data center) may be stored in a deduplicated virtual memory environment, such that redundant memory pages are not duplicated after receipt within the rack except at a cache memory level, but rather the memory pages are retrieved from a memory server, such as MemX, when needed. See, each of which is expressly incorporated herein by reference in its entirety: [7, 8, 19, 22, 30, 31, 75, 121, 145, 165, 174-187].
In some cases, the system may be implemented to segregate information stored in memory as either implementation-specific pages, in which the likelihood of duplication between VMs is high, or data-specific pages, which are unlikely to contain duplicate information. In this way, pages which are hybrid or heterogeneous are avoided, thus increasing the efficiency of the virtual memory traffic usage. Likewise, in a transaction processing system, the data-specific pages are likely to be short-lived, and therefore greater efficiency may be achieved by avoiding virtual memory overhead by removing this data from local memory storage, and allowing these pages to expire or be purged in local memory. On the other hand, pages that are common to multiple servers, but rarely used, may be efficiently and effectively stored remotely, and, as discussed above, gang migrated without massive redundant data transfers.
G. Desynchronizing Page Transfers
An optimization was also implemented to improve the efficiency of deduplication. There is a small time lag between the transfer of an identical page by a VM and the status of the page being reflected at the deduplication server. This lag can result in the duplicate transfer of some identical pages if two largely identical VMs start migration at the same time and transfer their respective memory pages in the same order of page offsets. To reduce the likelihood of such duplicate transfers, each VM transfers pages in a different order depending upon its assigned VM number. With desynchronization, identical memory regions from different VMs are transferred at different times, allowing each QEMU/KVM process enough time to update the deduplication servers about the sent pages before other VMs transfer the same pages.
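One possible desynchronization scheme is sketched below: each VM rotates its scan start by an offset derived from its VM number. The rotation and stride are illustrative choices; the description above does not fix a particular ordering.

```python
# Every VM still covers all pages exactly once, but different VMs visit
# identical regions at different times, giving the deduplication server
# time to record pages as sent before duplicates are attempted.
def transfer_order(n_pages, vm_number, stride=1024):
    start = (vm_number * stride) % n_pages     # per-VM starting offset
    return [(start + i) % n_pages for i in range(n_pages)]

assert sorted(transfer_order(10, 3, stride=2)) == list(range(10))   # full cover
assert transfer_order(10, 0)[0] != transfer_order(10, 3, stride=2)[0]
```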
Evaluation
The GMGD implementation was evaluated in a 30-node cluster testbed having high-bandwidth low-delay Gigabit Ethernet. Each physical host has two quad-core 2 GHz CPUs, 16 GB of memory, and 1 Gbps network card.
GMGD was compared against the following VM migration techniques:
(1) Online Compression (OC): This is the default VM migration technique used by QEMU/KVM. Before transmission, it compresses pages that are filled with uniform content (primarily pages filled with zeros) by representing the entire page with just one byte. At the target, such pages are reconstructed by filling an entire page with the same byte. Other pages are transmitted in their entirety to the destination.
(2) Gang Migration with Local Deduplication (GMLD): [8] This technique uses content hashing to deduplicate the pages across VMs co-located on the same host. Only one copy of identical pages is transferred from the source host.
In initial implementations of GMGD, the use of online hashing was considered, in which hash computation and deduplication are performed during migration (as opposed to before migration). Hash computation is a CPU-intensive operation. In evaluations, it was found that the online hashing variant performed very poorly, in terms of total migration time, on high-bandwidth low-delay Gigabit Ethernet. For example, online hashing takes 7.3 seconds to migrate a 1 GB VM and 18.9 seconds to migrate a 4 GB VM, whereas offline hashing takes only 3.5 seconds and 4.5 seconds respectively. CPU-heavy online hash computation became a performance bottleneck and, in fact, yielded worse total migration times than even the simple OC technique described above. Given that the total migration time of the online hashing variant is considerably worse than that of offline hashing, while achieving only comparable savings in network traffic, the results for online hashing are omitted from the experiments reported below.
A. Network Load Reduction
1) Idle VMs: An equal number of VMs is migrated from each of the two source racks; for example, in the 12×4 configuration, 4 VMs are migrated from each of the 6 hosts on each source rack.
2) Busy VMs: To evaluate the effect of busy VMs on the amount of data transferred during their migration, Dbench [6], a filesystem benchmark, is run inside VMs. Dbench performs file I/O on a network attached storage. It provides an adversarial workload for GMGD because it uses the network interface for communication and DRAM as a buffer. The execution of Dbench is initiated after the deduplication phase of GMGD to ensure that the memory consumed by Dbench is not deduplicated. The VMs are migrated while execution of Dbench is in progress.
B. Total Migration Time
1) Idle VMs: To measure the total migration time of different migration techniques, the end-to-end (E2E) total migration time is measured, i.e. the time taken from the start of the migration of the first VM to the end of the migration of the last VM. Cluster administrators may be concerned with E2E total migration time of groups of VMs since it measures the time for which the migration traffic occupies the core links.
The idle VM section of Table I shows the total migration time for each migration technique with an increasing number of hosts containing idle VMs. Note that even with the maximum number of hosts (i.e. 12, with 6 from each source rack), the core optical link remains unsaturated. Therefore, for each migration technique, a nearly constant total migration time is observed, irrespective of the number of hosts. Further, among all three techniques, OC has the highest total migration time for any number of hosts, which is proportional to the amount of data it transfers. GMGD's total migration time is slightly higher than that of GMLD, approximately 5% higher for 12 hosts. The difference between the total migration times of GMGD and GMLD can be attributed to the overhead GMGD incurs in performing deduplication across hosts: while the migration is in progress, it queries the deduplication server to read or update the status of deduplicated pages, and such requests need to be sent frequently for effective deduplication.
2) Busy VMs: Table I shows that Dbench increases the total migration time of all the VM migration techniques compared to their idle VM migration times. Since the Dbench traffic competes with the migration traffic for the source network interface card (NIC), the total migration time of each technique is proportional to the amount of data it transfers. Therefore GMGD's total migration time is slightly lower than that of GMLD.
C. Downtime
D. Background Traffic
In datacenters, the switches along the migration path of VMs may experience network traffic other than the VM migration traffic. In overloaded switches, the VM migration traffic may impact the performance of applications running across the datacenter, and vice versa. First, the effect of background network traffic on the different migration techniques is compared. Conversely, the effect of the different migration techniques on other network-bound applications in the cluster is also compared. For this experiment, the 10GigE core link between the switches is saturated with VM migration traffic and background network traffic: 8 Gbps of background Netperf [2] UDP traffic is transmitted between the two source racks such that it competes with the VM migration traffic on the core link.
E. Application Degradation
Table II compares the degradation of applications running inside the VMs during migration using 12×4 configuration.
Sysbench: Here, the impact of migration on the performance of I/O operations from VMs in the above scenario is evaluated. A Sysbench [24] database is hosted on a machine located outside the source racks and connected to the switch with a 1 Gbps Ethernet link. Each VM performs transactions on the database over the network. The VMs are migrated while the benchmark is in progress to observe the effect of migration on the performance of the benchmark. Table II shows the average transaction rate per VM for Sysbench.
TCP_RR: The Netperf TCP_RR VM workload is used to analyze the effect of VM migration on inter-VM communication. TCP_RR is a synchronous TCP request-response test. 24 VMs from 6 hosts are used as senders, and 24 VMs from the other 6 hosts as receivers. The VMs are migrated while the test is in progress and the performance of TCP_RR is measured. Table II shows the average transaction rate per sender VM. Due to the lower amount of data transferred through the source NICs, GMGD keeps the NICs available for the inter-VM TCP_RR traffic. Consequently, it affects the performance of TCP_RR the least and gives the highest number of transactions per second among the three techniques.
Sum of Subsets: This is a CPU-intensive workload that, given a set of integers and an integer k, finds a non-empty subset that sums to k. This program is run in the VMs during their migration to measure the average per-VM completion time of the program. Due to the CPU-intensive nature of the workload, the difference in the completion time of the application under the three migration techniques is insignificant.
F. Performance Overheads
Duplicate Tracking: Low priority threads perform hash computation and dirty-page logging in the background. With 4 VMs and 8 cores per machine, a CPU-intensive workload (sum of subsets) experienced 0.4% overhead and a write-intensive workload (random writes to memory) experienced 2% overhead. With 8 VMs per machine, the overheads were 6% and 4% respectively due to CPU contention.
Worst-case workload: To evaluate the VM migration techniques against a worst-case workload, a write-intensive workload is run inside the VMs that reduces the likelihood of deduplication by modifying twice as much data as the size of each VM. GMGD did not introduce any observed additional overhead compared to OC and GMLD.
Space overhead: At the source side, the shared memory region for local deduplication contains a 160-bit hash value for each VM page. In the worst case when all VM pages are unique, the source side space consumption is around 4% of the aggregate memory of VMs. At the target side, the worst-case space overhead in the shared memory could be 100% of the aggregate memory of VMs when each page has exactly one identical counterpart on another host. However, target shared memory only contains identical pages. Unique pages are directly copied into VMs' memories, so they do not incur any space overhead. Further, both the source and the target shared memory areas are used only during the migration and are freed after the migration completes.
Hardware Overview
The computer system may include a graphics processing unit (GPU), which, for example, provides a parallel processing system architected as a single instruction-multiple data (SIMD) processor. Such a GPU may be used to efficiently compute transforms and other readily parallelized operations that are processed according to mainly consecutive, unbranched instruction codes.
Computer system 400 may be coupled via bus 402 to a display 412, such as a liquid crystal display (LCD), for displaying information to a computer user. An input device 414, including alphanumeric and other keys, is coupled to bus 402 for communicating information and command selections to processor 404. Another type of user input device is cursor control 416, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 404 and for controlling cursor movement on display 412. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.
According to one embodiment of the invention, those techniques are performed by computer system 400 in response to processor 404 executing one or more sequences of one or more instructions contained in main memory 406. Such instructions may be read into main memory 406 from another machine-readable medium, such as storage device 410. Execution of the sequences of instructions contained in main memory 406 causes processor 404 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software.
The term “machine-readable medium” as used herein refers to any medium that participates in providing data that causes a machine to operate in a specific fashion. In an embodiment implemented using computer system 400, various machine-readable media are involved, for example, in providing instructions to processor 404 for execution. Such a medium may take many forms, including but not limited to non-volatile media and volatile media. Non-volatile media include, for example, semiconductor devices and optical or magnetic disks, such as storage device 410. Volatile media include dynamic memory, such as main memory 406. All such media are tangible so that the instructions carried by the media can be detected by a physical mechanism that reads the instructions into a machine. Common forms of machine-readable media include, for example, a hard disk (or other magnetic medium), CD-ROM, DVD-ROM (or other optical or magneto-optical medium), semiconductor memory such as RAM, PROM, EPROM, or FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read. Various forms of machine-readable media may be involved in carrying one or more sequences of one or more instructions to processor 404 for execution.
For example, the instructions may initially be carried on a magnetic disk of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over the Internet through an automated computer communication network. An interface local to computer system 400, such as an Internet router, can receive the data and communicate using an Ethernet protocol (e.g., IEEE-802.X) to a compatible receiver, and place the data on bus 402. Bus 402 carries the data to main memory 406, from which processor 404 retrieves and executes the instructions. The instructions received by main memory 406 may optionally be stored on storage device 410 either before or after execution by processor 404.
Computer system 400 also includes a communication interface 418 coupled to bus 402. Communication interface 418 provides a two-way data communication coupling to a network link 420 that is connected to a local network 422. For example, communication interface 418 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. In any such implementation, communication interface 418 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
Network link 420 typically provides data communication through one or more networks to other data devices. For example, network link 420 may provide a connection through local network 422 to a host computer 424 or to data equipment operated by an Internet Service Provider (ISP) 426. ISP 426 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 428. Local network 422 and Internet 428 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 420 and through communication interface 418, which carry the digital data to and from computer system 400, are exemplary forms of carrier waves transporting the information.
Computer system 400 can send messages and receive data, including memory pages, memory sub-pages, and program code, through the network(s), network link 420 and communication interface 418. In the Internet example, a server 430 might transmit a requested code for an application program through Internet 428, ISP 426, local network 422 and communication interface 418. The received code may be executed by processor 404 as it is received, and/or stored in storage device 410, or other non-volatile storage for later execution.
Gang migration with global deduplication (GMGD) is presented: a solution that reduces the network load resulting from the simultaneous live migration of multiple VMs within a datacenter that has a high-bandwidth, low-delay interconnect. The present solution employs cluster-wide deduplication to identify, track, and avoid the retransmission of identical pages over core network links. Evaluations on a 30-node testbed show that, compared to online compression, GMGD reduces the amount of data transferred over the core links during migration by up to 65% and the total migration time by up to 42%.
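For illustration only, the following is a minimal sketch of how cluster-wide deduplication might identify duplicate pages and suppress their retransmission over core links. It is not the claimed implementation; the names PageTracker and migrate_page, the FULL/REF message tags, and the use of SHA-1 content hashes are assumptions made for this sketch.

```python
import hashlib

PAGE_SIZE = 4096  # typical x86 page size (assumption for this sketch)

class PageTracker:
    """Cluster-wide table of page-content hashes already sent over core links."""
    def __init__(self):
        self.sent = {}  # content hash -> (source host, page id) of the first copy

    def lookup(self, digest):
        return self.sent.get(digest)

    def record(self, digest, host, page_id):
        self.sent[digest] = (host, page_id)

def migrate_page(tracker, host, page_id, page_bytes):
    """Send a page's full contents only once; identical pages seen later
    anywhere in the cluster go out as short references to the first copy."""
    digest = hashlib.sha1(page_bytes).digest()
    if tracker.lookup(digest) is None:
        tracker.record(digest, host, page_id)
        return ("FULL", page_id, page_bytes)   # first occurrence: full transfer
    return ("REF", page_id, digest)            # duplicate: fixed-size reference
```

In such a scheme, each duplicate page costs only a fixed-size hash on the wire rather than a full page of data, which is how deduplication can reduce both core-link traffic and total migration time.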
In this description, several preferred embodiments were discussed. Persons skilled in the art will undoubtedly have other ideas as to how the systems and methods described herein may be used. It is understood that this broad invention is not limited to the embodiments discussed herein; rather, the invention is limited only by the following claims. The various embodiments and sub-embodiments may be combined in various consistent combinations, sub-combinations, and permutations without departing from the spirit of this disclosure.
The present application is a Continuation of U.S. patent application Ser. No. 14/709,957, filed May 12, 2015, now U.S. Pat. No. 9,823,842, issued Nov. 21, 2017, which is a non-provisional of U.S. Provisional Patent Application No. 61/992,037, filed May 12, 2014, each of which is expressly incorporated herein by reference in its entirety.
This invention was made with government support under CNS-0845832 and CNS-0855204 awarded by the National Science Foundation. The government has certain rights in the invention.
Prior Publication Data:

Number | Date | Country
---|---|---
20180113610 A1 | Apr 2018 | US

Provisional Application:

Number | Date | Country
---|---|---
61992037 | May 2014 | US

Related Parent/Child Application Data:

 | Number | Date | Country
---|---|---|---
Parent | 14709957 | May 2015 | US
Child | 15818163 | | US