The present disclosure relates in general to virtual machines, and, in particular, to methods and apparatus for interleaving priorities of a plurality of virtual processors.
A hypervisor is a software interface between the physical hardware of a computing device, such as a wireless telephone or vehicle user interface system, and multiple operating systems. Each operating system managed by the hypervisor is associated with a different virtual machine, and each operating system appears to have exclusive access to the underlying hardware, such as processors, user interface devices, and memory. However, the hardware is a shared resource, and the hypervisor controls all hardware access (e.g., via prioritized time sharing).
In order to give each virtual machine the appearance of exclusive access to one or more physical processors, the hypervisor schedules one or more virtual processors to execute on one or more physical processors based on a priority associated with each virtual processor. In one example, if two virtual processors share one physical processor, and one of the virtual processors has a priority level twice that of the other, the hypervisor may schedule the higher priority virtual processor to execute on the physical processor twice as often as the lower priority virtual processor. In another example, if one virtual processor has a higher priority than another virtual processor, and both virtual processors are available to run, the hypervisor may schedule the virtual processor with the higher priority to execute on the physical processor every time. This strict-priority scheduling supports real-time processing.
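By way of a non-limiting illustration, the strict-priority case described above can be sketched in a few lines of Python; the names VirtualCPU and pick_next are hypothetical and the priority values are arbitrary.

```python
# Illustrative sketch only; VirtualCPU and pick_next are hypothetical names,
# not part of any disclosed implementation.
from dataclasses import dataclass

@dataclass
class VirtualCPU:
    name: str
    priority: int        # higher value means higher priority
    runnable: bool = True

def pick_next(vcpus):
    """Return the runnable virtual processor with the highest priority, if any."""
    candidates = [v for v in vcpus if v.runnable]
    return max(candidates, key=lambda v: v.priority) if candidates else None

vcpus = [VirtualCPU("modem_vcpu", 20), VirtualCPU("apps_vcpu", 15)]
print(pick_next(vcpus).name)  # prints "modem_vcpu" whenever both are runnable
```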
However, each virtual processor may be executing a plurality of different threads, some of which may be more time critical than others. Because the hypervisor schedules virtual processors rather than individual threads, threads within a virtual processor execute only when that virtual processor is scheduled, and their thread priorities are not considered by the hypervisor. This may lead to low priority threads on a high priority virtual processor being scheduled in preference to high priority threads on a medium priority virtual processor.
Briefly, methods and apparatus for interleaving priorities of a plurality of virtual processors are disclosed. In an embodiment, a hypervisor assigns a base priority to each virtual processor and schedules one or more virtual processors to execute on one or more physical processors based on the current priority associated with each virtual processor. When the hypervisor receives an indication from one of the virtual processors that its current priority may be temporarily reduced, the hypervisor lowers the current priority of that virtual processor. The hypervisor may then schedule another virtual processor to execute on the physical processor instead of the virtual processor with the temporarily reduced priority. When the hypervisor receives an interrupt for the virtual processor with the lowered priority, the hypervisor raises the priority of that virtual processor and schedules the virtual processor based on the restored priority. This may cause the virtual processor to immediately execute on a physical processor so the virtual processor can handle the interrupt with reduced latency and at the virtual processor's base priority level. Among other features, the methods and apparatus disclosed herein facilitate interleaving priorities of threads within virtual processors despite the fact that the hypervisor is unaware of individual threads and the priorities associated with individual threads.
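As a rough, non-authoritative sketch of the bookkeeping summarized above, a hypervisor might track a base priority and a current priority per virtual processor and reschedule whenever either the priority-reduction indication or an interrupt arrives; the class and method names below (VCPU, Hypervisor, reduce_priority, deliver_interrupt) are assumptions for illustration only.

```python
# Minimal sketch of the summarized mechanism; all names are hypothetical and
# the priority values are illustrative only.
class VCPU:
    def __init__(self, name, base_priority):
        self.name = name
        self.base_priority = base_priority      # assigned once by the hypervisor
        self.current_priority = base_priority   # may be temporarily reduced

class Hypervisor:
    def __init__(self, vcpus):
        self.vcpus = vcpus

    def reduce_priority(self, vcpu, requested):
        # Indication from the guest that its current work needs only `requested`.
        vcpu.current_priority = min(requested, vcpu.base_priority)
        self.schedule()

    def deliver_interrupt(self, vcpu):
        # An interrupt restores the base priority before scheduling, so the
        # interrupt is handled with reduced latency at the base priority level.
        vcpu.current_priority = vcpu.base_priority
        self.schedule()

    def schedule(self):
        chosen = max(self.vcpus, key=lambda v: v.current_priority)
        print(f"scheduling {chosen.name} at priority {chosen.current_priority}")

hv = Hypervisor([VCPU("rt_vcpu", 20), VCPU("ui_vcpu", 15)])
hv.schedule()                            # rt_vcpu runs at 20
hv.reduce_priority(hv.vcpus[0], 1)       # rt_vcpu has only low priority work; ui_vcpu runs
hv.deliver_interrupt(hv.vcpus[0])        # interrupt restores rt_vcpu to 20
```

In this sketch the reduction is capped at the base priority, matching the notion of a temporary reduction that an interrupt later restores.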
The present system may be used in a network communications system. A block diagram of certain elements of an example network communications system 100 is illustrated in FIG. 1.
The web server 106 stores a plurality of files, programs, and/or web pages in one or more databases 108 for use by the client devices 102 as described in detail below. The database 108 may be connected directly to the web server 106 and/or via one or more network connections. The database 108 stores data as described in detail below.
One web server 106 may interact with a large number of client devices 102. Accordingly, each server 106 is typically a high-end computer with a large storage capacity, one or more fast microprocessors, and one or more high speed network connections. Conversely, relative to a typical server 106, each client device 102 typically includes less storage capacity, fewer and/or lower power microprocessors, and a single network connection.
Each of the devices illustrated in FIG. 1 may include certain elements of the example electrical device 200 described below.
The example electrical device 200 includes a main unit 202 which may include, if desired, one or more physical processors 204 electrically coupled by an address/data bus 206 to one or more memories 208, other computer circuitry 210, and one or more interface circuits 212. The processor 204 may be any suitable processor or plurality of processors. For example, the electrical device 200 may include a central processing unit (CPU) and/or a graphics processing unit (GPU). The memory 208 may include various types of non-transitory memory, including volatile memory and/or non-volatile memory such as, but not limited to, distributed memory, read-only memory (ROM), random access memory (RAM), etc. The memory 208 typically stores a software program that interacts with the other devices in the system as described herein. This program may be executed by the processor 204 in any suitable manner. The memory 208 may also store digital data indicative of documents, files, programs, web pages, etc. retrieved from a server and/or loaded via an input device 214.
The interface circuit 212 may be implemented using any suitable interface standard, such as an Ethernet interface and/or a Universal Serial Bus (USB) interface. One or more input devices 214 may be connected to the interface circuit 212 for entering data and commands into the main unit 202. For example, the input device 214 may be a keyboard, mouse, touch screen, track pad, isopoint, camera and/or a voice recognition system.
One or more displays, printers, speakers, monitors, televisions, high definition televisions, and/or other suitable output devices 216 may also be connected to the main unit 202 via the interface circuit 212. The display 216 may be a cathode ray tube (CRT), a liquid crystal display (LCD), or any other suitable type of display. The display 216 generates visual displays of data generated during operation of the device 200. For example, the display 216 may be used to display web pages and/or other content received from a server. The visual displays may include prompts for human input, run time statistics, calculated values, data, etc.
One or more storage devices 218 may also be connected to the main unit 202 via the interface circuit 212. For example, a hard drive, CD drive, DVD drive, and/or other storage devices may be connected to the main unit 202. The storage devices 218 may store any type of data used by the device 200.
The electrical device 200 may also exchange data with other network devices 222 via a connection to a network. The network connection may be any type of network connection, such as an Ethernet connection, digital subscriber line (DSL), telephone line, coaxial cable, etc. Users of the system may be required to register with a server. In such an instance, each user may choose a user identifier (e.g., e-mail address) and a password which may be required for the activation of services. The user identifier and password may be passed across the network using encryption built into the user's browser. Alternatively, the user identifier and/or password may be assigned by the server.
In some embodiments, the device 200 may be a wireless device. In such an instance, the device 200 may include one or more antennas 224 connected to one or more radio frequency (RF) transceivers 226. The transceiver 226 may include one or more receivers and one or more transmitters. For example, the transceiver 226 may be a cellular transceiver. The transceiver 226 allows the device 200 to exchange signals, such as voice, video and data, with other wireless devices 228, such as a phone, camera, monitor, television, and/or high definition television. For example, the device may send and receive wireless telephone signals, text messages, audio signals and/or video signals.
A block diagram of certain elements of an example wireless device 102 for sharing memory between multiple processes of a virtual machine is illustrated in FIG. 3.
In this example, the wireless device 102 includes a plurality of antennas 302 operatively coupled to one or more radio frequency (RF) receivers 304. The receiver 304 is also operatively coupled to one or more baseband processors 306. The receiver 304 tunes to one or more radio frequencies to receive one or more radio signals 308, which are passed to the baseband processor 306 in a well known manner. The baseband processor 306 is operatively coupled to one or more controllers 310. The baseband processor 306 passes data 312 to the controller 310. A memory 316 operatively coupled to the controller 310 may store the data 312.
A block diagram of certain elements of yet another example electronic device is illustrated in FIG. 4.
A plurality of virtual machines 402 execute within the physical machine 102. Each virtual machine 402 is a software implementation of a computer and the operating system associated with that computer. Different virtual machines 402 within the same physical machine 102 may use different operating systems. For example, a mobile communication device may include three virtual machines 402 where two of the virtual machines 402 are executing the Android operating system and one of the virtual machines 402 is executing a different Linux operating system.
Each virtual machine 402 includes one or more virtual processors 404 and associated virtual memory 410. Each virtual processor 404 executes one or more processes 406 using one or more of the physical processors 204. Similarly, the contents of each virtual memory 410 are stored in the physical memory 208.
A hypervisor 400 controls access by the virtual machines 402 to the physical processors 204 and the physical memory 208. More specifically, the hypervisor 400 schedules each virtual processor 404 to execute one or more processes 406 on one or more physical processors 204 according to the relative priorities associated with the virtual machines 402. Once the hypervisor 400 schedules a process 406 to execute on a physical processor 204, the process 406 typically advances to a progress point 408 unless suspended by the hypervisor 400.
Each virtual processor 404 typically executes a plurality of different processes and/or threads within one or more processes. Some of these threads are time critical in nature (e.g., sending and receiving wireless data). Other threads are less time critical (e.g., decoding audio), and some threads are not time critical at all (e.g., miscellaneous memory clean up). Accordingly, using the methods and apparatus described herein, priorities of different threads can be interleaved, even when the threads are run on different virtual processors.
A block diagram showing one example of how priorities of a plurality of virtual processors may be interleaved is illustrated in FIG. 5.
In this example, the first virtual processor 404 includes a hard real-time thread 502, a soft real-time thread 504, and a non-real-time thread 506. For example, the hard real-time thread 502 may be a time-critical thread such as sending and receiving Global System for Mobile Communications (GSM) data. If this thread 502 is not executed above some threshold frequency, users may experience dropped calls and/or a loss of data. Accordingly, the first virtual processor 404 is given high priority (e.g., 20) to support this requirement. In addition, using the methods and apparatus for interleaving priorities described herein, the hard real-time thread 502 is also given a high effective priority 508 (e.g., 20).
The soft real-time thread 504 may be a global positioning system (GPS) thread. If this thread 504 is not executed above some threshold frequency, users may experience less than ideal positioning on a GPS map. In other words, this thread 504 includes real-time aspects. However, the real-time aspects of the thread 504 are not considered critical. If this thread 504 is given a high priority (e.g., 20) due to being associated with the same virtual processor 404 as the hard real-time thread 502 (e.g., GSM thread), processor resources may be wasted, because this thread 504 may execute properly at some lower priority (e.g., 10). Accordingly, using the methods and apparatus for interleaving priorities described herein, the soft real-time thread 504 is given a midlevel effective priority 510 (e.g., 10).
The non-real-time thread 506 may be some miscellaneous thread that is not time critical. If this thread 506 is not executed above some threshold frequency, users may experience no perceivable detriment. If this thread 506 is given a high priority (e.g., 20) due to being associated with the same virtual processor 404 as the hard real-time thread 502 (e.g., GSM thread), processor resources may be wasted, because this thread 506 may execute properly at a much lower priority (e.g., 1). Accordingly, using the methods and apparatus for interleaving priorities described herein, the non-real-time thread 506 is given a low effective priority 512 (e.g., 1).
In this example, the second virtual processor 404 also includes a hard real-time thread 514 and a soft real-time thread 516. For example, the hard real-time thread 514 may be a time-critical thread such as handling a touch screen user interface. If this thread 514 is not executed above some threshold frequency, users may experience sluggish performance. Avoiding sluggish performance may be important, but not as important as avoiding dropped calls. Accordingly, the second virtual processor 404 is given a midlevel priority (e.g., 15) to support this requirement.
The soft real-time thread 516 may be an audio player thread. If this thread 516 is not executed above some threshold frequency, users may experience less than ideal audio playback. In other words, this thread 516 includes real-time aspects. However, the real-time aspects of the thread 516 are not considered critical. If this thread 516 is given a midlevel priority (e.g., 10) due to being associated with the same virtual processor 404 as the hard real-time thread 514 (e.g., touch screen UI thread), processor resources may be wasted, because this thread 516 may execute properly at some lower priority (e.g., 5). Accordingly, using the methods and apparatus for interleaving priorities described herein, the soft real-time thread 516 is given a lower level effective priority 520 (e.g., 5).
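For illustration, the two virtual processors and their threads can be expressed as data, with the interleaved ordering that results from the effective priorities; the effective priority of the touch screen thread (15, matching the second virtual processor's base priority) is an assumption, since the text states only the virtual processor's priority.

```python
# The two-virtual-processor example expressed as data; the priorities are the
# illustrative values from the text, and the effective priority of the
# touch screen thread (15) is an assumption matching its virtual processor.
threads = [
    # (virtual processor, base priority, thread, effective priority)
    ("first vCPU",  20, "GSM (hard real-time)",            20),
    ("first vCPU",  20, "GPS (soft real-time)",            10),
    ("first vCPU",  20, "memory cleanup (non-real-time)",   1),
    ("second vCPU", 15, "touch screen UI (hard real-time)", 15),
    ("second vCPU", 15, "audio player (soft real-time)",     5),
]

# Interleaved order across both virtual processors, highest effective priority first.
for vcpu, base, name, effective in sorted(threads, key=lambda t: t[3], reverse=True):
    print(f"{effective:>2}  {name}  (on {vcpu}, base priority {base})")
```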
A flowchart of an example process 600 for interleaving priorities of a plurality of virtual processors is illustrated in FIGS. 6 and 7.
In general, a hypervisor 400 assigns a base priority to each virtual processor 404 and schedules one or more virtual processors 404 to execute on one or more physical processors 204 based on the current priority associated with each virtual processor 404. When the hypervisor 400 receives an indication from one of the virtual processors 404 that its current priority may be temporarily reduced, the hypervisor 400 lowers the current priority of that virtual processor 404. The hypervisor 400 then schedules another virtual processor 404 to execute on a physical processor 204 instead of the virtual processor 404 with the temporarily reduced priority. When the hypervisor 400 receives an interrupt for the virtual processor 404 with the lowered priority, the hypervisor 400 raises the priority of that virtual processor 404. If no higher priority virtual processor is already scheduled, the hypervisor 400 schedules the virtual processor 404 with the restored priority to execute on a physical processor 204 so that virtual processor 404 can handle the interrupt. In a virtual machine 402 containing a plurality of virtual processors 404, threads may or may not be bound to a single virtual processor 404. However, at any point in time, when a thread is scheduled to execute on a virtual processor 404, the virtual processor 404 can take on the priority of that thread.
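One way to realize the last point is sketched below from the guest's side: on each context switch, the guest scheduler reports the priority of the thread it is about to run, so the virtual processor takes on that thread's priority. The hypercall name hypervisor_set_vcpu_priority is a hypothetical stand-in; the text does not specify the actual interface.

```python
# Guest-side sketch: on each context switch the virtual processor takes on the
# priority of the thread it is about to run. hypervisor_set_vcpu_priority is a
# hypothetical stand-in for whatever interface the hypervisor actually exposes.
from dataclasses import dataclass

@dataclass
class GuestThread:
    name: str
    priority: int

    def run(self):
        print(f"running guest thread {self.name}")

def hypervisor_set_vcpu_priority(priority: int) -> None:
    print(f"hypercall: set this vCPU's current priority to {priority}")

def context_switch(next_thread: GuestThread) -> None:
    hypervisor_set_vcpu_priority(next_thread.priority)  # vCPU takes on the thread's priority
    next_thread.run()

context_switch(GuestThread("audio player", 5))
context_switch(GuestThread("touch screen UI", 15))
```

Because the hypervisor sees only the reported priority, the guest's own thread priorities remain invisible to it while still influencing scheduling, which is the interleaving effect described above.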
More specifically, the example process 600 begins when a hypervisor 400 assigns a base priority to each virtualized entity (block 602). For example, the hypervisor 400 may assign a higher priority to real-time virtual machines than to non-real-time virtual machines. The hypervisor 400 then schedules the virtualized entities to execute on physical processor(s) based on the current priority associated with each virtualized entity (block 604). For example, the hypervisor 400 may schedule a higher priority real-time virtual machine in preference to a lower priority non-real-time virtual machine.
The hypervisor 400 then receives an indication from one of the virtualized entities that its current priority may be temporarily reduced (block 606). For example, a real-time virtual machine may be currently running a low priority and/or non-real-time process. The hypervisor 400 then lowers the current priority of that virtualized entity (block 608). For example, the hypervisor 400 may lower the priority of a real-time virtual machine 402 that is currently running a low priority and/or non-real-time process, so that other virtual machines 402, which are now higher priority, can be scheduled. The hypervisor 400 may then schedule another virtualized entity to execute on a physical processor 204 instead of the virtualized entity with the temporarily reduced priority (block 702). For example, the hypervisor 400 may schedule a non-real-time virtual machine instead of the real-time virtual machine.
The hypervisor 400 then receives an interrupt for the virtualized entity with the lowered priority (block 704). For example, the hypervisor 400 may receive a device interrupt associated with the real-time reception of data. The hypervisor 400 then raises the priority of the virtualized entity with the lowered priority back to its base priority (block 706). For example, the hypervisor 400 may raise the priority of a real-time virtual machine back to a high priority so that the real-time virtual machine can receive data. The hypervisor 400 then schedules the virtualized entity with the restored priority to execute on a physical processor 204 so it can handle the interrupt, which may cause a wakeup of its high priority thread (block 708). Upon handling the interrupt, the virtualized entity may determine a desired priority level and may send another indication to the hypervisor 400 that its priority may be temporarily reduced.
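The sequence of blocks 602 through 708 may be walked through with a small self-contained sketch of one real-time and one non-real-time virtualized entity; the Entity class, the schedule function, and the priority values are illustrative assumptions rather than the disclosed implementation.

```python
# Illustrative walk through blocks 602-708 with assumed priority values.
class Entity:
    def __init__(self, name, base_priority):
        self.name = name
        self.base = base_priority
        self.current = base_priority

def schedule(entities):                      # used at blocks 604, 702, and 708
    chosen = max(entities, key=lambda e: e.current)
    print(f"schedule {chosen.name} (current priority {chosen.current})")

rt = Entity("real-time VM", 20)              # block 602: assign base priorities
nrt = Entity("non-real-time VM", 5)
entities = [rt, nrt]

schedule(entities)    # block 604: real-time VM runs at its base priority

rt.current = 1        # blocks 606-608: indication received, priority lowered
schedule(entities)    # block 702: non-real-time VM runs instead

rt.current = rt.base  # blocks 704-706: interrupt received, base priority restored
schedule(entities)    # block 708: real-time VM runs and handles the interrupt

rt.current = 1        # after handling, the guest may again request a reduced priority
schedule(entities)
```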
In summary, persons of ordinary skill in the art will readily appreciate that methods and apparatus for interleaving priorities of a plurality of virtual processors have been provided. Among other features, the methods and apparatus disclosed herein facilitate interleaving priorities of virtual processors despite the fact that the hypervisor is unaware of individual priorities associated with individual threads being executed within each virtual processor.
The foregoing description has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the exemplary embodiments disclosed. Many modifications and variations are possible in light of the above teachings. It is intended that the scope of the invention be limited not by this detailed description of examples, but rather by the claims appended hereto.