The presently disclosed subject matter relates to the field of computing, and more particularly, to computer virtualization, although virtualization is merely an exemplary and non-limiting field.
A virtual machine monitor (VMM), such as a hypervisor, is a program that creates virtual machines, each with virtualized hardware resources which may be backed by underlying physical hardware resources. To virtualize memory, the VMM can implement virtual translation look-aside buffers (TLBs) that cache address translations from page tables specified by guest operating systems, much like a TLB in a physical processor. However, some operations associated with such virtual TLBs may be costly, since virtualization may entail several layers of translations between virtual memories (such as guest and hypervisor virtual memories) and physical memories (such as guest and system physical memories). Furthermore, virtual TLBs may consist of a large number of shadow page tables, so it may be impractical to implement one TLB for each virtual processor. Thus, it would be advantageous to provide mechanisms that could cope with virtual machines that have multi-processor architectures and share a virtual TLB between more than one virtual processor in an efficient and scalable manner.
Various mechanisms are disclosed herein for improvement of scalability of virtual translation look-aside buffers (TLBs) in multi-processor virtual machines. These mechanisms can be manifested in the form of operations to be performed in any virtual machine running in a virtual environment. By way of example and not limitation, in one operation the virtual machine monitor (VMM) can implicitly lock shadow page tables (SPTs) using per-processor generation counters; using another operation, the VMM can wait for pending fills on other virtual processors to complete before servicing a guest virtual address (GVA) invalidation using the per-processor generation counters; using yet another operation, the VMM can write-protect or unmap guest pages in a deferred two-stage process; and, in a similar vein, the VMM can reclaim SPTs in a deferred two-stage process.
The VMM can also use additional optimization operations, such as: periodically coalescing two SPTs that shadow the same guest page table (GPT) with the same attributes; sharing SPTs between two shadow address spaces (SASes) only at a specified level in a shadow page table tree (SPTT); and flushing the entire virtual TLB using a generation counter. Furthermore, in combination with all these operations (or separately for that matter), the following operations can be performed: the virtual TLB can allocate a SPT for a GPT from the non-uniform memory access (NUMA) node on which the GPT resides; the virtual TLB can have an instance for each NUMA node on which a virtual machine runs; and, lastly, the virtual TLB can correctly handle the serializing instructions executed by a guest in a virtual machine with more than one virtual processor sharing the virtual TLB.
It should be noted that this Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
The foregoing Summary, as well as the following Detailed Description, is better understood when read in conjunction with the appended drawings. In order to illustrate the present disclosure, various aspects of the disclosure are illustrated. However, the disclosure is not limited to the specific aspects shown.
The various aspects of the presently disclosed subject matter are divided into the following sections: (1) virtual machines in general terms; (2) virtual machine translations and caches; and (3) improvement of scalability of virtual TLBs in multi-processor virtual machines. Each of these sections is meant to be read in light of the remaining sections. The present disclosure is not limited to any one of these aforementioned sections or the aspects disclosed therein.
Virtual Machines in General Terms
All of these variations for implementing the above mentioned partitions are just exemplary implementations, and nothing herein should be interpreted as limiting the disclosure to any particular virtualization aspect.
Virtual Machine Translations and Caches
As was mentioned above, a virtual machine monitor (VMM), such as a hypervisor, is a program that creates virtual machines, each with virtualized hardware resources which may be backed by underlying physical hardware resources. The operating system that runs within a virtual machine can be referred to as a guest. Each page of guest memory may be backed by a page of physical memory, but the physical address exposed to the guest is typically not the same as the actual physical address on the physical machine. In addition, the guest typically cannot access physical memory that has not been allocated to the virtual machine.
Many processor architectures can enforce a translation from virtual addresses (VA) to physical addresses (PA), specified by the operating system using data structures such as page tables. An address space can comprise a tree of page tables, which may correspond to a sparse map from VAs to PAs. Programs running on the operating system access memory via virtual addresses, which enables operating systems to virtualize their memory and control their access to memory. The VMM can make an additional translation from guest physical addresses (GPA) to system physical addresses (SPA) to virtualize guest memory.
The guest operating system maintains guest page tables (GPTs) that specify GVA-to-GPA translations. The VMM enforces GPA-to-SPA translations and maintains shadow page tables (SPTs) that specify GVA-to-SPA translations, caching the GVA-to-GPA translations from the guest page tables. The VMM points the physical processor to the SPTs so the guest software gets the correct system physical page when accessing a GVA.
Many processor architectures have a translation lookaside buffer (TLB) to cache VA-to-PA translations to avoid having to walk the page tables on every memory access, which is expensive. When the accessed VA is not cached in the TLB, which is known as a TLB miss, the processor's memory management unit (MMU) must walk the page tables starting from the base of the page table tree specified by the operating system, or the VMM in this case. The MMU can then add the VA-to-PA translation to the TLB, known as a TLB fill.
Some processor architectures define the TLB as a non-coherent cache of the page tables. The operating system or the VMM is responsible for notifying the processor of changes to the translations in its page tables to ensure the TLB does not have inconsistent or stale translations. Those processor architectures provide instructions to invalidate cached translations at a few granularities, such as invalidating a single translation and invalidating all translations. Architectures such as x86 and x86-64 invalidate all (non-global) cached translations when the register that points to the base of the page table tree is modified to switch between address spaces. The shadow page tables cache GVA-to-GPA translations in the guest page tables, effectively acting as a virtual TLB.
In contrast to this physical machine 400 architecture, a virtual machine 410 architecture that is built on top of the physical machine 400 has more complex layers of page tables, namely, GPTs and SPTs.
The VMM builds up a cache of translations in the virtual TLB on demand as the guest accesses memory. The virtual TLB initially may not cache any translations. When the guest accesses a GVA for the first time, the processor generates a page fault exception and notifies the VMM of the virtual TLB miss, since there was no translation for that GVA in the SPT tree. The miss handler performs a virtual TLB fill at that GVA by walking the GPT tree to that GVA, reading the GVA-to-GPA translation, translating the GPA to an SPA, and filling the SPT entry with the newly cached GVA-to-SPA translation.
For example, the miss handler could read entry "50" in GPT 2 506 and translate this guest physical address to a system physical address, say, "150". This latter value, then, is filled into the corresponding shadow page table (acting as a virtual TLB), namely, SPT 2 516. Specifically, the entry "150" is placed in the slot of SPT 2 516 that corresponds to the slot holding entry "50" in GPT 2 506. Other values are similarly synchronized between the guest page tables 500 and the shadow page tables 510.
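By way of illustration only, the following sketch in C mirrors the fill just described with toy data structures; the flat table layout and the GPA-to-SPA mapping are hypothetical stand-ins rather than the actual page-table format used by a VMM.

```c
#include <stdint.h>
#include <stdio.h>

#define ENTRIES 512

static uint64_t gpt[ENTRIES];   /* guest page table slot  -> guest physical address  */
static uint64_t spt[ENTRIES];   /* shadow page table slot -> system physical address */

/* Toy stand-in for the VMM's GPA-to-SPA mapping. */
static uint64_t gpa_to_spa(uint64_t gpa) { return gpa + 100; }

/* A virtual TLB miss at a given slot: read the guest's GVA-to-GPA translation
 * from the GPT and cache the composed GVA-to-SPA translation in the same slot
 * of the corresponding SPT. */
static void virtual_tlb_fill(unsigned slot)
{
    spt[slot] = gpa_to_spa(gpt[slot]);
}

int main(void)
{
    gpt[7] = 50;                      /* guest maps this slot to GPA 50 */
    virtual_tlb_fill(7);
    printf("SPT slot 7 caches SPA %llu\n", (unsigned long long)spt[7]);  /* 150 */
    return 0;
}
```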
On the other hand, if a guest invalidates GVAs, the VMM must remove the GVA-to-SPA translations from the SPTs and the underlying hardware TLBs. It is expensive to flush virtual TLBs whenever the guest switches between address spaces. Thus, as will be shown next, in other aspects of the presently disclosed subject matter, performance and scalability of guest memory virtualization algorithms can be improved by building upon other related and commonly assigned subject matter disclosed in U.S. patent application Ser. No. 11/128,982, entitled "Method and system for caching address translations from multiple address spaces in virtual machines" (disclosing algorithms implementing tagged TLBs in software, which cache and retain translations from multiple address spaces at a time, maintaining multiple shadow address spaces, each of which is a tree of shadow page tables, and caching translations from a guest address space), and U.S. patent application Ser. No. 11/274,907, entitled "Efficient operating system operation on a hypervisor" (describing how the VMM can expose a set of APIs known as hypercalls, some of which perform virtual TLB operations; those operations enable an enlightened guest to provide hints and use less expensive virtual TLB operations).
Virtualization in Multi-Processor Virtual Machines
In one aspect of the presently disclosed subject matter, the virtual machine monitor (VMM) implicitly locks shadow page tables (SPTs) using per-processor generation counters. The VMM has to prevent a SPT from being reclaimed and freed while a virtual processor (VP) is accessing it. However, locking and unlocking a SPT upon each access is expensive, especially on critical paths such as the virtual translation look-aside buffer (TLB) miss handler, which may access four or more SPTs.
Each SPT has a reference count indicating how many entries in higher-level SPTs point to it. In the case of a top-level SPT, the reference count indicates the number of VPs running in that shadow address space. When its reference count drops to zero, the SPT cannot be freed immediately since a VP may have read the reference that was just removed and may still be accessing the SPT.
This technique allows a VP servicing a fill or invalidation in the virtual TLB to access a SPT without locking it first, provided that the VP reaches the table via an existing reference. Even if that reference goes away after it is read, the VP's walk generation counter holds an implicit lock on the SPT that prevents the SPT from being repurposed (reclaimed to shadow a different GPT). This technique eliminates explicit locking on critical paths, where such locking would negatively impact performance and scalability.
In one non-limiting aspect, the generation counters 606, 608, 610 can change in value only when their corresponding VPs 600, 602, 604 are not accessing any SPTs, such as SPTs 612, 614, 616, respectively, that reside in a virtual TLB 630. In another non-limiting aspect, an odd value can indicate that the VP is not accessing SPTs and an even value can indicate that the VP is accessing SPTs. Once a SPT has no references and is locked exclusive to prevent new references, it may be reclaimed and used to shadow other GPTs only after the most recent state of the generation counters indicates that any VP that may have been accessing the SPT is no longer doing so. Taking a snapshot of the counters after the SPT is locked exclusive and then comparing that snapshot against their most recent state determines whether any VP may still be accessing the SPT being reclaimed.
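A minimal sketch of this implicit-locking scheme is shown below, assuming C11 atomics, a fixed number of VPs, and the odd/even convention described above; the function names are illustrative and do not represent the disclosed implementation.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define NUM_VPS 4

/* Per-VP walk generation counter: odd = not walking SPTs, even = mid-walk. */
static _Atomic uint64_t walk_gen[NUM_VPS];

static void walk_gen_init(void)
{
    for (int vp = 0; vp < NUM_VPS; vp++)
        atomic_store(&walk_gen[vp], 1);              /* start odd: not walking */
}

/* A VP brackets any span in which it dereferences SPTs with these calls. */
static void vp_begin_spt_walk(int vp) { atomic_fetch_add(&walk_gen[vp], 1); } /* odd -> even */
static void vp_end_spt_walk(int vp)   { atomic_fetch_add(&walk_gen[vp], 1); } /* even -> odd */

/* Taken after the SPT is locked exclusive with a zero reference count. */
static void snapshot_walk_gens(uint64_t snap[NUM_VPS])
{
    for (int vp = 0; vp < NUM_VPS; vp++)
        snap[vp] = atomic_load(&walk_gen[vp]);
}

/* The SPT may be repurposed only once every VP that might have been walking it
 * at snapshot time (even counter) has since ended that walk; a VP that was not
 * walking cannot reach the SPT any more, because it is already unreferenced. */
static bool spt_safe_to_repurpose(const uint64_t snap[NUM_VPS])
{
    for (int vp = 0; vp < NUM_VPS; vp++) {
        bool was_walking = (snap[vp] % 2 == 0);
        if (was_walking && atomic_load(&walk_gen[vp]) == snap[vp])
            return false;                            /* implicit lock still held */
    }
    return true;
}
```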
In another aspect of the presently disclosed subject matter, the VMM waits for pending fills on other virtual processors to complete before servicing a guest virtual address (GVA) invalidation, using the same per-processor walk generation counters to determine when those fills have finished. This prevents a fill that was already in flight on another VP from re-caching a translation concurrently with the invalidation.
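The following C sketch illustrates one way such a wait could look, reusing the per-VP walk generation counters from the previous sketch; the helper that actually removes the translations is assumed rather than shown.

```c
#include <stdatomic.h>
#include <stdint.h>

#define NUM_VPS 4

extern _Atomic uint64_t walk_gen[NUM_VPS];    /* per-VP counters, as in the previous sketch */

void remove_gva_translations(uint64_t gva);   /* assumed: edits the SPTs and hardware TLBs */

/* Before removing a GVA's translations, wait for any SPT walk (e.g. a fill)
 * that was already in progress on another VP to finish, so that it cannot
 * re-insert the translation being invalidated. */
void service_gva_invalidation(uint64_t gva)
{
    uint64_t snap[NUM_VPS];

    for (int vp = 0; vp < NUM_VPS; vp++)
        snap[vp] = atomic_load(&walk_gen[vp]);

    for (int vp = 0; vp < NUM_VPS; vp++)
        if (snap[vp] % 2 == 0)                           /* VP was mid-walk at the snapshot */
            while (atomic_load(&walk_gen[vp]) == snap[vp])
                ;                                        /* spin until that walk ends */

    remove_gva_translations(gva);
}
```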
In another aspect of the presently disclosed subject matter, the VMM write-protects or un-maps guest pages in a deferred two-stage process. Removing the translations for a guest page ultimately requires flushing the GVA-to-SPA translations from the hardware TLB of every physical processor, so performing that flush for each page individually would generate frequent inter-processor interrupts and hardware TLB flushes.
Thus, in this aspect, the solution is to defer the flush of the hardware TLB, effectively batching the write-protection or un-mapping of multiple guest pages to reduce the frequency of such flushes. In addition, there are times when the VMM must flush the hardware TLB in response to a GVA invalidation request by the guest, so the flush required to write-protect or un-map a guest page comes for free. This requires the VMM to write-protect or un-map using a two-stage pipeline. The first stage is to eliminate the translations in the virtual TLB. The second stage is to eliminate the translations in the hardware TLB.
In other words, this aspect reduces the rate of inter-processor interrupts and hardware TLB flushes, while still permitting the VMM to write-protect and un-map guest pages, but with a slightly higher latency. In one exemplary embodiment, a guest page whose translations have been removed from the virtual TLB enters an intermediate state 902, and the VMM records a snapshot of a hardware TLB flush generation counter at that point.
The hardware TLB flush generation counter is incremented when the hardware TLB of every physical processor has been flushed of GVA-to-SPA translations for the given virtual machine. The intermediate state 902 transitions into the un-mapped state 904 once the hardware TLB flush generation counter has increased past the snapshot taken when the page first entered state 902. In the un-mapped state 904, the page's translations have been flushed in batch from the physical TLBs.
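One possible shape of this deferred pipeline is sketched below in C; the state names, the counter, and the helper for editing the SPTs are illustrative assumptions, not the disclosed implementation.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Incremented each time the hardware TLB of every physical processor has been
 * flushed of the virtual machine's GVA-to-SPA translations. */
static _Atomic uint64_t hw_tlb_flush_gen;

typedef enum { GP_MAPPED, GP_UNMAP_PENDING, GP_UNMAPPED } guest_page_state_t;

typedef struct {
    guest_page_state_t state;
    uint64_t           flush_gen_snapshot;  /* taken on entering GP_UNMAP_PENDING */
} guest_page_t;

/* Stage 1: remove the page's translations from the virtual TLB (the SPTs)
 * only, snapshot the flush generation, and defer the hardware TLB flush. */
static void unmap_stage1(guest_page_t *gp)
{
    /* remove_spt_translations(gp);  -- assumed helper that edits the SPTs */
    gp->flush_gen_snapshot = atomic_load(&hw_tlb_flush_gen);
    gp->state = GP_UNMAP_PENDING;
}

/* Stage 2: the page is fully un-mapped once a later, batched flush of every
 * physical TLB has occurred, i.e. the counter has moved past the snapshot. */
static bool unmap_stage2_try_complete(guest_page_t *gp)
{
    if (gp->state == GP_UNMAP_PENDING &&
        atomic_load(&hw_tlb_flush_gen) > gp->flush_gen_snapshot)
        gp->state = GP_UNMAPPED;

    return gp->state == GP_UNMAPPED;
}
```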
In another aspect of the presently disclosed subject matter, the VMM can immediately reclaim and free a SPT by locking it exclusive and waiting for every VP to increment its walk generation counter, but the cost of the wait may be high. Instead, the VMM may use a two-stage pipeline in which the first stage is to lock SPTs exclusive and place them on a flush list, and the second stage is to free the SPTs on the flush list. The second stage is deferred until the per-VP walk generation counters have incremented, so no processor has to explicitly wait on the counters (as described in the first aspect). The flush list keeps a snapshot of the counters, whether one snapshot for the entire list or a snapshot for one or more SPTs in the list, so the VMM can determine whether every counter has incremented since each SPT was pushed onto the list. Thus, this aspect enables the VMM to lazily reclaim shadow page tables in a pipelined fashion and to ensure that there are almost always free SPTs, which helps the VPs avoid having to wait on the walk generation counters.
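A minimal sketch of such a flush list follows, assuming the per-VP walk generation counters introduced earlier and a single reclaim lock protecting the list; the types and helper names are hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

#define NUM_VPS 4

struct spt;                                  /* a shadow page table */

struct flush_entry {
    struct spt         *spt;                 /* locked exclusive, no references */
    uint64_t            walk_gen_snapshot[NUM_VPS];
    struct flush_entry *next;
};

static struct flush_entry *flush_list;       /* protected by an assumed reclaim lock */

/* Assumed helpers: the snapshot/compare logic from the implicit-locking sketch,
 * and the routine that returns an SPT to the free pool. */
void snapshot_walk_gens(uint64_t snap[NUM_VPS]);
bool every_vp_left_walk_since(const uint64_t snap[NUM_VPS]);
void free_spt(struct spt *spt);

/* Stage 1: retire the SPT without waiting on any VP. */
void reclaim_stage1(struct spt *spt, struct flush_entry *e)
{
    e->spt = spt;
    snapshot_walk_gens(e->walk_gen_snapshot);
    e->next = flush_list;
    flush_list = e;
}

/* Stage 2, run lazily: free every entry whose snapshot shows that each VP that
 * may have been walking the SPT has since bumped its walk generation counter. */
void reclaim_stage2(void)
{
    struct flush_entry **pp = &flush_list;
    while (*pp != NULL) {
        struct flush_entry *e = *pp;
        if (every_vp_left_walk_since(e->walk_gen_snapshot)) {
            *pp = e->next;
            free_spt(e->spt);
        } else {
            pp = &e->next;
        }
    }
}
```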
In another aspect of the presently disclosed subject matter, the virtual TLB allocates the SPT that shadows a guest page table from the non-uniform memory access (NUMA) node on which that guest page table resides.
The virtual TLB improves scalability by taking the NUMA node of a GPT into consideration when allocating a SPT to cache translations from that guest page table. Doing so increases the likelihood that the SPT is on the same NUMA node as the processor that walks it.
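By way of a hedged illustration, the following C fragment shows the allocation decision; the NUMA query and the node-local allocator are hypothetical helpers standing in for whatever facilities the host provides.

```c
#include <stdint.h>

typedef uint64_t spa_t;
struct spt;

/* Assumed host facilities: query which NUMA node backs a system physical page,
 * and allocate a page-table page from a given node's memory. */
int numa_node_of_spa(spa_t spa);
struct spt *alloc_spt_on_node(int node);

/* Allocate the shadow page table from the NUMA node that holds the guest page
 * table it will shadow, so a VP walking the shadow tree on that node is likely
 * to touch node-local memory. */
struct spt *alloc_spt_for_gpt(spa_t gpt_spa)
{
    return alloc_spt_on_node(numa_node_of_spa(gpt_spa));
}
```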
In another aspect of the presently disclosed subject matter, the virtual TLB has a separate instance for each NUMA node on which the virtual machine runs.
Thus, at block 1200, virtual TLBs are created for their respective NUMA nodes. Then, at block 1202, memory is allocated for each of the virtual TLBs from its NUMA node. If, on the one hand, page table edit detection is not required at block 1204, as in the case of enlightened guests, each NUMA node can have its own instance of a virtual TLB with no data structures shared between NUMA nodes (where it is understood that "page table edit detection" refers to logic that detects when a writable translation has been created in the virtual TLB, such that the guest is able to modify a guest page table, which means a shadow page table may have stale translations cached). When a guest makes a GVA invalidation request that must be made effective on all VPs, the VMM forwards the request onto each virtual TLB instance using a mechanism such as a synchronous inter-processor interrupt, as is shown at block 1206. On the other hand, if page table edit detection is required at block 1204, the VMM can share the data structures for the detection between the virtual TLB instances (where these data structures should store state that is relevant to the entire VM, not just a specific instance of the virtual TLB), as is shown at block 1208. In this case, each virtual TLB instance consults and updates those shared data structures.
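A compact sketch of the forwarding path at block 1206 might look as follows in C, with the per-instance invalidation routine assumed; whether the page-table-edit-detection state is shared between instances is decided as described above.

```c
#include <stdint.h>

#define MAX_NUMA_NODES 8

struct vtlb;                                 /* one virtual TLB instance per NUMA node */

struct vm {
    int          node_count;
    struct vtlb *vtlb[MAX_NUMA_NODES];
};

/* Assumed per-instance routine that removes the GVA's translations from that
 * instance's shadow page tables. */
void vtlb_invalidate_gva(struct vtlb *instance, uint64_t gva);

/* A GVA invalidation that must be effective on all VPs is forwarded to every
 * per-node virtual TLB instance (block 1206). */
void vm_invalidate_gva_all_nodes(struct vm *vm, uint64_t gva)
{
    for (int n = 0; n < vm->node_count; n++)
        vtlb_invalidate_gva(vm->vtlb[n], gva);
}
```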
In another aspect of the presently disclosed subject matter, the VMM makes the guest page table walk performed during a virtual TLB fill atomic with respect to guest operations, using stale generation counters on the shadow page tables.
This aspect enables the guest page table walk to be atomic with respect to guest operations. The VMM is able to intercept the creation of writable GVA-to-SPA translations to guest page tables during a virtual TLB fill, so it can detect when a guest modifies a guest page table, which makes its corresponding shadow page tables stale. The VMM maintains a stale generation counter for each SPT, which is updated when the shadowed guest page table is mapped writable. When servicing a virtual TLB fill, the VMM makes sure every non-terminal SPT along the walk is not stale and takes a snapshot of the stale generation counters of every non-terminal SPT along the walk. After it reads the terminal GPT entry, it can determine whether any of the non-terminal GPT entries have been modified by comparing the counters against the snapshot. If the guest did indeed modify one of the non-terminal GPTs, it must restart the fill in order to ensure the walk is atomic. If the guest did not, the walk was indeed atomic and the VMM can fill in the terminal SPT entry.
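The snapshot-and-compare structure of this fill can be sketched as follows in C; the walk helpers, the number of levels, and the naming are illustrative assumptions rather than the disclosed code.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define WALK_LEVELS 3                    /* non-terminal levels in this sketch */

struct spt {
    _Atomic uint64_t stale_gen;          /* bumped when the shadowed GPT is mapped writable */
};

/* Assumed helpers: walk and validate the non-terminal shadow levels, recording
 * each level's stale generation counter as it is traversed; read the terminal
 * guest page table entry; and write the terminal shadow entry. */
void walk_nonterminal(uint64_t gva, struct spt *levels[WALK_LEVELS],
                      uint64_t snap[WALK_LEVELS]);
uint64_t read_terminal_gpt_entry(uint64_t gva);
void fill_terminal_spt_entry(uint64_t gva, uint64_t gpt_entry);

void virtual_tlb_fill(uint64_t gva)
{
    for (;;) {
        struct spt *levels[WALK_LEVELS];
        uint64_t snap[WALK_LEVELS];

        walk_nonterminal(gva, levels, snap);           /* non-stale levels + snapshot */

        uint64_t gpte = read_terminal_gpt_entry(gva);  /* terminal GVA-to-GPA read */

        /* If any non-terminal GPT may have been modified since its counter was
         * snapshotted, the walk was not atomic: restart the fill. */
        bool raced = false;
        for (int i = 0; i < WALK_LEVELS; i++)
            if (atomic_load(&levels[i]->stale_gen) != snap[i])
                raced = true;

        if (!raced) {
            fill_terminal_spt_entry(gva, gpte);        /* walk was atomic: commit */
            return;
        }
    }
}
```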
In another aspect of the presently disclosed subject matter, when a guest page table needs to be shadowed again, zeroing and reusing an existing SPT would discard the translations it has already cached and force them to be re-filled.
Instead, the VMM can allocate and link in a new SPT for the time being, and it can periodically coalesce two SPTs that shadow the same guest page table (GPT) with the same attributes. The latter reduces the memory footprint of the virtual TLB, and the former eliminates the redundant fills for the same GVA-to-SPA translations that would be caused by zeroing the existing SPT. An advantageous time to coalesce SPTs is after the virtual TLB has been flushed, since the virtual TLB is caching few translations at that point. The first time a guest page table is shadowed after a virtual TLB flush, the VMM coalesces the duplicate shadows.
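As a hedged illustration, the following C fragment shows the coalescing decision on the shadowing path; the lookup keyed by guest page table and attributes, and the allocation helper, are assumptions.

```c
#include <stdint.h>
#include <stddef.h>

typedef uint64_t gpa_t;
struct spt;

/* Assumed helpers: look up an existing shadow of this guest page table with
 * identical attributes, and allocate-and-link a fresh SPT when there is none. */
struct spt *find_existing_shadow(gpa_t gpt_gpa, uint32_t attrs);
struct spt *alloc_and_link_spt(gpa_t gpt_gpa, uint32_t attrs);

/* When a guest page table must be shadowed (for example, the first time after
 * a virtual TLB flush), reuse an existing shadow with the same attributes
 * instead of keeping a duplicate; otherwise link in a fresh SPT rather than
 * zeroing one that is already populated. */
struct spt *shadow_gpt(gpa_t gpt_gpa, uint32_t attrs)
{
    struct spt *existing = find_existing_shadow(gpt_gpa, attrs);
    if (existing != NULL)
        return existing;                 /* coalesce: share the shadow */
    return alloc_and_link_spt(gpt_gpa, attrs);
}
```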
In another aspect of the presently disclosed subject matter, the VMM shares shadow page tables between two shadow address spaces (SASes) only at a specified level in the shadow page table tree (SPTT).
Finally, in another aspect of the presently disclosed subject matter, the VMM flushes the entire virtual TLB using a virtual TLB generation counter.
Thus, as is shown at block 1600, this aspect maintains a virtual TLB generation counter for the virtual machine, and at block 1602, increments the virtual TLB generation counter prior to starting a reset of the virtual TLB (e.g. the counter can be odd numbered). At block 1604, every VP is forced to switch to a new shadow address space to make the reset (flush) of the virtual TLB effective, and then at block 1606, the virtual TLB generation counter is incremented after completing the reset (e.g. the counter would then be even numbered, per the current example). At this point, the virtual TLB is in a new generation (as opposed to at block 1600, when it was in a previous generation). The period between the two increments, while a flush is in progress, is treated as belonging to both the previous and the subsequent generation (see block 1608).
In this aspect, blocks 1610-1614 can be prerequisites for blocks 1600-1606. For example, at block 1610, shadow page tables can be tagged upon allocation with a snapshot of the virtual TLB generation counter, so that the VMM knows which SPTs belong to which generation (in case, for example, a SPT from generation X is found in the address space of generation Y). Another enhancement is shown at block 1612, where the information on whether a guest page is mapped (which can include data or associated metadata) is tagged with a snapshot of the virtual TLB generation counter. Such tags can help the VMM determine the generational status of a SPT. Lastly, at block 1614, only SPTs that belong to the current generation are used. Thus, if a virtual processor is in generation X, only SPTs from that generation (X) will be used, and not SPTs from other generations. When the virtual processor is between generations X and Y, only SPTs from either of those two generations will be used.
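The generation test at block 1614 might be sketched as follows in C; the odd/even convention follows the flow above, and the simple "within one increment" check is one illustrative reading of how a transitional period is shared between adjacent generations, not the claimed method.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Virtual TLB generation counter: incremented just before a reset starts (odd
 * while the reset is in progress) and again once the reset completes (even). */
static _Atomic uint64_t vtlb_gen = 2;

struct spt {
    uint64_t alloc_gen;                  /* tag recorded at allocation (block 1610) */
};

static void tag_spt_on_alloc(struct spt *spt)
{
    spt->alloc_gen = atomic_load(&vtlb_gen);
}

/* Block 1614: use an SPT only if it belongs to the current generation. Since
 * the counter only moves forward and consecutive values straddle a single
 * reset, a difference of at most one means the SPT and the VP are either in
 * the same generation or share the transitional period around that reset. */
static bool spt_belongs_to_current_generation(const struct spt *spt)
{
    uint64_t now = atomic_load(&vtlb_gen);
    return (now - spt->alloc_gen) <= 1;
}
```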
The methods, systems, and apparatuses of the presently disclosed subject matter may also be embodied in the form of program code (such as computer readable instructions) that is transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via any other form of transmission, wherein, when the program code is received (and/or stored on computer readable media) and loaded into and executed by a machine, such as an EPROM, a gate array, a programmable logic device (PLD), a client computer, such as that shown in the figure below, a video recorder or the like, the machine becomes an apparatus for practicing the present subject matter. When implemented on a general-purpose processor, the program code combines with the processor to provide a unique apparatus that operates to perform the saving and restoring functionality of the present subject matter.
Lastly, while the present disclosure has been described in connection with the preferred aspects, as illustrated in the various figures, it is understood that other similar aspects may be used or modifications and additions may be made to the described aspects for performing the same function of the present disclosure without deviating therefrom. For example, in various aspects of the disclosure, mechanisms were disclosed for coping with virtual machine architectures with multi-processors. However, other equivalent mechanisms to these described aspects are also contemplated by the teachings herein. Therefore, the present disclosure should not be limited to any single aspect, but rather construed in breadth and scope in accordance with the appended claims.