A virtual machine monitor (“VMM”) creates an environment that allows multiple operating systems to run simultaneously on the same computer hardware. In such an environment, applications written for different operating systems (e.g., WINDOWS® operating system, LINUX® operating system) can be run simultaneously on the same hardware.
When an operating system (“OS”) is run on a VMM, unprivileged instructions of the operating system execute on the hardware at full hardware speed. However, most or all instructions that access a privileged hardware state trap to the VMM. The VMM simulates the execution of those instructions as needed to maintain the illusion that the operating system has sole control over the hardware on which it runs.
I/O handling involves two levels of device drivers for each device: one maintained by the VMM, and the other maintained by the operating system. When an application requests the operating system to perform an I/O function, the operating system invokes a device driver. That device driver then invokes the corresponding device driver maintained by the VMM to perform the I/O function. Similarly, when an I/O interrupt comes in, a VMM device driver handles the incoming interrupt and may deliver it to the corresponding device driver maintained by the operating system.
The VMM typically handles memory by managing memory translation in order to translate between the OS's use of physical memory, and the real “machine” memory present in hardware.
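By way of illustration only, the following sketch shows one way a VMM might record such a translation: a per-OS table maps each page the OS believes is "physical" to the machine page actually backing it. The structure and names are assumptions made for the example, not a description of any particular VMM.

```c
/* Illustrative sketch only: a per-OS table mapping the pages the OS
 * believes are "physical" to the machine pages actually backing them. */
#include <stddef.h>
#include <stdint.h>

#define INVALID_MPN ((uint64_t)-1)

struct vm_memory_map {
    uint64_t *phys_to_machine;   /* indexed by guest-physical page number */
    size_t    num_pages;         /* pages the OS believes it owns         */
};

/* Translate a guest-physical page number to a machine page number. */
static uint64_t translate_page(const struct vm_memory_map *map, uint64_t gpn)
{
    if (gpn >= map->num_pages)
        return INVALID_MPN;      /* the OS touched memory it does not own */
    return map->phys_to_machine[gpn];
}
```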
The VMM adds overhead to the computer. Layering the VMM's memory management on top of the OS's own memory management slows memory access. Overhead is also added by constantly trapping and simulating privileged instructions, and by forcing I/O requests and interrupts through two levels of device drivers, which increases the amount of software that processes each request and interrupt. This overhead can slow interrupt handling, increase the fraction of CPU bandwidth lost to software overhead, increase response time, and decrease perceived performance.
The VMM is loaded during bootup of the computer and receives control of the hardware at boot time. The VMM maintains hardware control until the computer is shut down.
Since the VMM has hardware control from bootup to shutdown, overhead is incurred even when the VMM is not needed (for example, when only a single OS instance is running on the hardware). Thus the VMM can add unnecessary overhead to the computer.
It would be desirable to reduce the unnecessary overhead.
According to one aspect of the present invention, a virtual machine monitor is interposed between computer hardware and an operating system at runtime. According to another aspect of the invention, at least some of the hardware is devirtualized at runtime. Other aspects and advantages of the present invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, illustrating by way of example the principles of the present invention.
FIGS. 2a and 2b are illustrations of methods of using a virtual machine monitor in accordance with different embodiments of the present invention.
As shown in the drawings for purposes of illustration, the present invention is embodied in a computer that can run a virtual machine monitor. The computer is not limited to any particular type. The computer can be, for example, a file server, web server, workstation, mainframe, personal computer, personal digital assistant (PDA), print server, or network appliance. The computer can be contained in a single box, or distributed among several boxes.
Reference is made to
Additional reference is made to
The virtual machine monitor 116 is interposed between the hardware layer 110 and the operating system 112 at runtime (212). Runtime is the period of normal execution of the operating system after boot and before shutdown. Interposing the VMM 116 gives the VMM 116 direct control of at least one of the CPU, the memory and the I/O devices. While the VMM 116 has direct control of the CPU, the CPU is said to be “virtualized.” Similarly, those portions of physical memory directly controlled by the VMM 116 are said to be virtualized, and those I/O devices directly controlled by the VMM 116 are said to be virtualized. After being interposed, the VMM 116 can then be used for its intended purpose, such as running additional operating system instances. These operating system instances can share the virtualized hardware.
The VMM 116 can be interposed only when needed, which can occur long after the computer has booted. As a result, overhead is reduced, since the VMM 116 does not add to the overhead between computer bootup and actual use.
Once interposed, however, the virtual machine monitor adds overhead to the computer. This overhead becomes unnecessary once the VMM 116 is no longer used for its intended purpose.
This unnecessary overhead can be reduced by devirtualizing the hardware layer 110 (214). Devirtualizing the hardware 110 gives the operating system 112 direct hardware control over at least one of the CPU, the physical memory, and the I/O. The hardware layer 110 may be partially devirtualized, whereby the memory alone is devirtualized, or the I/O alone is devirtualized, or the memory and I/O are devirtualized. If the hardware 110 is fully devirtualized (that is, the operating system 112 is given direct control over the CPU, the physical memory, and the I/O), the VMM can be unloaded from the computer 100.
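Purely as an illustration, the VMM might track which resources remain virtualized with bookkeeping along the following lines; the flag names and the unload test are assumptions made for the example.

```c
/* Hypothetical bookkeeping sketch: which resources the VMM still controls. */
#include <stdbool.h>

struct vmm_state {
    bool cpu_virtualized;
    bool memory_virtualized;
    bool io_virtualized;
};

/* The VMM can be unloaded only when the hardware is fully devirtualized,
 * i.e. the OS again has direct control of the CPU, memory, and I/O. */
static bool vmm_can_unload(const struct vmm_state *s)
{
    return !s->cpu_virtualized && !s->memory_virtualized && !s->io_virtualized;
}
```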
Reference is made to
The steps in
Reference is now made to
If necessary, an OS kernel module can be used to invoke an initialization routine in the VMM (312). This routine may go through resource discovery to discover the hardware that is installed in the computer. The initialization routine may also initialize internal VMM data structures and device drivers, and may carry out at least one of the following steps (314-324).
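The following sketch suggests, by way of example only, the general shape such an initialization routine might take; the helper names are placeholders for the work described above (resource discovery, data-structure setup, driver setup).

```c
/* Sketch only: helper names are placeholders, not an actual VMM interface. */
int  vmm_discover_hardware(void);     /* enumerate CPUs, memory, I/O devices */
void vmm_init_data_structures(void);  /* page maps, trap tables, etc.        */
void vmm_init_device_drivers(void);   /* VMM-side native device drivers      */

int vmm_init(void)
{
    if (vmm_discover_hardware() != 0)
        return -1;                    /* abort interposition on failure      */
    vmm_init_data_structures();
    vmm_init_device_drivers();
    return 0;                         /* ready to proceed with steps 314-324 */
}
```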
Next, interrupts are disabled (314). If the VMM has sufficient privilege, it can disable the interrupts. For example, the computer 100 might allow the VMM to disable the interrupts, or the VMM might have been previously booted/devirtualized and retained sufficient privilege to disable the interrupts. If the VMM cannot disable the interrupts, an OS kernel module can be invoked to perform this step.
Interrupts are redirected to handlers in the VMM so that the VMM gets control on interrupts and traps (316). This step will likely involve modifying the interrupt vector table to invoke VMM handlers rather than OS handlers. The interrupts handled for the CPU typically include NMI, machine check, timer, interprocessor, and cycle counter interrupts.
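A simplified sketch of this redirection, assuming a flat array of interrupt vectors (real architectures use their own table formats and privileged registers to install handlers), is given below. The saved OS handlers allow the VMM to forward an interrupt to the OS when appropriate.

```c
/* Minimal sketch, assuming a flat, writable vector table; the table and
 * handler arrays are hypothetical. */
#define NUM_VECTORS 256

typedef void (*isr_t)(void);

static isr_t os_vectors[NUM_VECTORS];       /* saved OS handlers           */
extern isr_t interrupt_table[NUM_VECTORS];  /* hypothetical live table     */
extern isr_t vmm_handlers[NUM_VECTORS];     /* VMM handlers that get control */

static void redirect_interrupts_to_vmm(void)
{
    for (int v = 0; v < NUM_VECTORS; v++) {
        os_vectors[v] = interrupt_table[v]; /* remember where to forward   */
        interrupt_table[v] = vmm_handlers[v];
    }
}
```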
After the interrupts have been redirected, interrupts are re-enabled (318). After that, the direct addressability of physical memory is disabled (320). Completing this latter step (320) allows the VMM to manage (e.g., trap or map) accesses to physical addresses.
Next, privileged instructions are caused to trap to the VMM (322). On some architectures, causing the privileged instructions to trap to the VMM may involve reducing the current processor mode of the CPU (i.e. its privilege level). On other architectures, it may not be necessary to reduce the CPU's privilege level much or at all. For example, on the HP Alpha architecture, the OS usually runs in a fairly unprivileged mode relative to the PALcode (a privileged library between the hardware and operating system). If the VMM is implemented using the privilege afforded the PALcode, then it is not necessary to further reduce the privilege of the OS. For architectures such as Intel x86 and IA-64, causing the OS's privileged instructions to trap to the VMM may involve modifying the OS's executable image in memory. For example, the VMM replaces some instructions that can reveal privileged state without trapping to the VMM. The replacement instructions may instead invoke a routine in the VMM. For some architectures, the VMM may also modify aspects of the OS code to optimize the OS performance.
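The image-rewriting idea can be sketched, in a highly simplified and hypothetical form, as follows. The sketch assumes some decoder has already located the sensitive, non-trapping instructions; the patch opcode, sizes, and structure are purely illustrative.

```c
/* Highly simplified sketch of rewriting the OS image so that sensitive,
 * non-trapping instructions enter the VMM instead. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct patch_site {
    uint8_t *addr;       /* sensitive instruction in the OS image          */
    size_t   len;        /* its length in bytes (assumed <= 16 here)       */
    uint8_t  saved[16];  /* original bytes, kept for devirtualization      */
};

#define TRAP_OPCODE 0xCC /* e.g. an x86 breakpoint; purely illustrative    */

static void patch_sensitive_instructions(struct patch_site *sites, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        memcpy(sites[i].saved, sites[i].addr, sites[i].len); /* keep original */
        memset(sites[i].addr, TRAP_OPCODE, sites[i].len);    /* force a trap  */
    }
}
```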
Control is returned to the OS at this reduced privilege level (324). If the CPU alone is virtualized, the operating system will still have direct control over the physical memory and the I/O devices.
If the VMM is loaded into memory at boot time, firmware or a boot loader can set aside sufficient physical memory for the VMM before the OS boots, and then load the VMM into that range of memory. The firmware or boot loader should shield the booting OS from discovering the range of memory set aside for the VMM, for example, by modifying the table passed to the OS describing the physical memory available for its use.
Reference is made to
The VMM may handle memory as follows. When an operating system boots on the VMM (see, e.g.,
To partition memory for later use by the VMM and operating systems, the boot loader, firmware, or the VMM (if the VMM gets control before the first OS to boot) can modify the table passed to the operating system describing the memory that the OS can use. The table may be modified to expose to the booting OS one partition of memory. The table also may be modified to hide from that OS the memory dedicated to the VMM, and each partition of memory that will be provided to another OS instance.
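For example, the table modification might resemble the following sketch, which assumes a simple firmware-style table of address ranges and omits details such as splitting ranges that only partly overlap a partition; the structure and field names are illustrative.

```c
/* Sketch: build the memory table handed to a booting OS, exposing only
 * that OS's partition and hiding VMM memory and other partitions. */
#include <stddef.h>
#include <stdint.h>

struct mem_range {
    uint64_t base;   /* start physical address */
    uint64_t len;    /* length in bytes        */
};

/* Copy only the ranges that lie entirely inside this OS's partition. */
static size_t build_os_memory_table(const struct mem_range *all, size_t n,
                                    struct mem_range *out,
                                    uint64_t part_base, uint64_t part_len)
{
    size_t m = 0;
    for (size_t i = 0; i < n; i++) {
        if (all[i].base >= part_base &&
            all[i].base + all[i].len <= part_base + part_len)
            out[m++] = all[i];   /* visible to the booting OS              */
        /* ranges outside the partition (VMM memory, other OS partitions)
         * are hidden simply by being omitted from the table */
    }
    return m;
}
```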
If an OS boots on the hardware, and claims all the memory in the hardware (see, e.g.,
As a second example of gaining control over the memory, the VMM may use a device driver or kernel module in the OS to “borrow” memory from the running OS for use by other OS instances in a manner similar to the one depicted in
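A sketch of this borrowing idea is given below, with hypothetical helper routines standing in for the OS allocator and the VMM interface; none of the names are taken from an actual kernel API.

```c
/* Sketch: a kernel module allocates pages from the running OS (so the OS
 * will not use them) and hands them to the VMM for other OS instances. */
#include <stddef.h>
#include <stdint.h>

void    *os_alloc_page(void);                 /* hypothetical OS allocator  */
void     vmm_donate_page(uint64_t machine_addr); /* hypothetical VMM entry  */
uint64_t virt_to_machine(void *p);            /* hypothetical translation   */

static size_t borrow_pages(size_t wanted)
{
    size_t got = 0;
    while (got < wanted) {
        void *page = os_alloc_page();
        if (page == NULL)
            break;                            /* OS has no spare memory     */
        vmm_donate_page(virt_to_machine(page));
        got++;
    }
    return got;                               /* pages now held by the VMM  */
}
```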
The VMM can also perform steps to gain control of the I/O. As a first example, the VMM can virtualize I/O devices at runtime by commencing I/O emulation, as described in U.S. patent application Ser. No. 10/676,922, filed Oct. 1, 2003 (U.S. Patent Publication No. 2005/0076155), which is incorporated herein by reference. Because the CPU is already virtualized, the VMM already has sufficient control over the hardware to perform the method disclosed therein.
As a second example of interposing the VMM on I/O, the operating system is provided with “dual-mode” drivers. The dual-mode drivers perform direct hardware control in “native” mode and communicate with device drivers of the VMM in “virtual” mode.
For example, consider a dual-mode network card driver whose “send” routine is called. If the mode bit is set to “native”, that driver would enqueue the message on its queue of outgoing packets, and eventually issue direct I/O instructions to hand the packet off to the network card for sending. If the mode bit is set to “virtual” the driver would instead pack up the message and invoke the corresponding device driver maintained by the VMM. The corresponding VMM device driver would call its own send routine to send the message. The VMM send routine would then enqueue the message and eventually perform the I/O instructions needed to send the message. Each native device driver in the VMM has a routine for importing the state of the corresponding driver maintained by the operating system, and exporting its state to the corresponding OS driver. When interposing a VMM, the state maintained by the OS's device driver (if any) would be handed off to the VMM's driver via one of these routines.
If the dual-mode network card driver receives a “switch mode to virtual” call, it could delay the processing of new messages while finishing I/Os that have already been enqueued (if draining the queue simplifies the mode switch). Then, if needed, the dual-mode driver could call a routine in the corresponding VMM driver to export to that driver any state the OS driver was maintaining (for some devices or drivers, there may not be any such state). The dual-mode driver could then set its virtual/native mode bit to “virtual”, and resume processing messages by forwarding new requests to the appropriate routines in the VMM's native device driver.
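The dual-mode behavior described above might be sketched as follows; the driver structure, queue handling, and VMM entry points are assumptions made for the example rather than an actual driver interface.

```c
/* Illustrative dual-mode "send" path and mode switch for a network driver. */
#include <stddef.h>

enum drv_mode { MODE_NATIVE, MODE_VIRTUAL };

struct packet;                                 /* opaque outgoing message    */

void hw_enqueue_and_send(struct packet *p);    /* hypothetical: direct I/O   */
void vmm_netdrv_send(struct packet *p);        /* hypothetical: VMM driver   */
void vmm_netdrv_import_state(void *os_state);  /* hypothetical state import  */
void drain_outstanding_io(void);               /* hypothetical: finish queue */

static enum drv_mode mode = MODE_NATIVE;

/* Called by the OS networking stack to transmit a packet. */
static void dualmode_send(struct packet *p)
{
    if (mode == MODE_NATIVE)
        hw_enqueue_and_send(p);      /* issue direct I/O to the network card */
    else
        vmm_netdrv_send(p);          /* forward to the VMM's native driver   */
}

/* Called when the VMM is interposed on this device. */
static void dualmode_switch_to_virtual(void *os_state)
{
    drain_outstanding_io();            /* finish I/Os already enqueued       */
    vmm_netdrv_import_state(os_state); /* hand any driver state to the VMM   */
    mode = MODE_VIRTUAL;               /* new requests now go through the VMM */
}
```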
Reference is made to
Devirtualization will now be discussed. The VMM can devirtualize one or more of the CPU, memory, and I/O devices. If the CPU is devirtualized, then both memory and I/O are also devirtualized. If the CPU remains virtualized, the memory alone can be devirtualized, or I/O alone can be devirtualized, or both memory and I/O can be devirtualized.
Reference is now made to
If dual-mode drivers are not used, the VMM can devirtualize the device by ceasing emulation of the I/O device at runtime as disclosed in U.S. Patent Publication No. 2005/0076155.
The VMM may devirtualize memory as disclosed in U.S. Patent Publication No. 2005/0076156. The VMM should return control of devirtualized memory to the OS.
If the VMM took control of memory using a special kernel module that could borrow blocks of memory, by the time only one OS runs, all memory used by other OS instances should have been returned by the VMM to the one remaining OS. Similarly, if memory was partitioned at boot time for this OS, the OS already controls its memory partition; thus there is no memory for the VMM to return.
Reference is now made to
Next, privileged instructions are caused not to trap to the VMM (720), and control is returned to the OS (722). On some architectures, causing the privileged instructions not to trap to the VMM may involve restoring the "normal" processor mode (privilege level) of the CPU for the OS. In typical virtual machine systems, the VMM runs in the most privileged processor mode, and it makes the OS run in a less privileged processor mode. By restoring the normal processor mode of the OS (720), the VMM allows the OS to execute without its privileged instructions trapping to handlers in the VMM. In such systems, completing these last two steps (720-722) can involve merely returning control to the OS while leaving the CPU in the processor mode normally reserved for the VMM. In other systems, the VMM may need to set the CPU to a different processor mode before returning to the OS. In systems where the OS runs at a reduced privilege level even when not on a VMM (such as systems based on the HP Alpha architecture), the VMM may not have to restore the normal processor mode. In certain systems, the VMM may have modified the OS's executable image in memory during interposition (step 322), for example by modifying some of the OS instructions as an optimization or to cause them to trap to the VMM. In these systems, the VMM may restore the normal executable image of the OS during step 720 of devirtualization.
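By way of example, the CPU devirtualization path might mirror the interposition steps as in the following sketch, in which every helper name is a placeholder for work described above.

```c
/* Sketch of CPU devirtualization, mirroring the interposition sequence;
 * all helper names are placeholders. */
void disable_interrupts(void);
void enable_interrupts(void);
void restore_os_interrupt_vectors(void);  /* undo the redirection of step 316 */
void restore_os_executable_image(void);   /* undo any patching of the OS image */
void set_processor_mode_for_os(void);     /* privileged instructions no longer
                                             trap to the VMM (720)            */
void return_control_to_os(void);          /* resume the OS (722)              */

static void devirtualize_cpu(void)
{
    disable_interrupts();
    restore_os_interrupt_vectors();
    restore_os_executable_image();
    enable_interrupts();
    set_processor_mode_for_os();
    return_control_to_os();
}
```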
Reference is now made to
The present invention is not limited to the specific embodiments described and illustrated above. Instead, the present invention is construed according to the claims that follow.
Number | Name | Date | Kind |
---|---|---|---|
4253145 | Goldberg | Feb 1981 | A |
4843541 | Bean | Jun 1989 | A |
5437033 | Inoue et al. | Jul 1995 | A |
5522075 | Robinson et al. | May 1996 | A |
5896141 | Blaho et al. | Apr 1999 | A |
5991893 | Snider | Nov 1999 | A |
6075938 | Bugnion et al. | Jun 2000 | A |
6256657 | Chu | Jul 2001 | B1 |
6397242 | Devine et al. | May 2002 | B1 |
6496847 | Bugnion et al. | Dec 2002 | B1 |
6785886 | Lim et al. | Aug 2004 | B1 |
6789156 | Waldspurger | Sep 2004 | B1 |
6795966 | Lim et al. | Sep 2004 | B1 |
6832270 | Das Sharma et al. | Dec 2004 | B2 |
6961941 | Nelson et al. | Nov 2005 | B1 |
6978018 | Zimmer | Dec 2005 | B2 |
7225441 | Kozuch | May 2007 | B2 |
7272799 | Imada et al. | Sep 2007 | B2 |
7370324 | Goud et al. | May 2008 | B2 |
20040103299 | Zimmer et al. | May 2004 | A1 |
20040117532 | Bennett | Jun 2004 | A1 |
20040128670 | Robinson et al. | Jul 2004 | A1 |
20040230794 | England et al. | Nov 2004 | A1 |
20050076155 | Lowell | Apr 2005 | A1 |
20050076156 | Lowell | Apr 2005 | A1 |
20050076324 | Lowell et al. | Apr 2005 | A1 |
20050081212 | Goud et al. | Apr 2005 | A1 |
20050091354 | Lowell et al. | Apr 2005 | A1 |