1. Field of the Invention
The present invention is related to allocation of computer resources, and in particular to sharing computer resources based in part on detecting an idle condition.
2. Description of the Related Art
Many conventional general-purpose computers, such as personal computers, spend a significant portion of their time executing an idle loop because these conventional computers are typically capable of executing instructions at a much faster rate than required by the software that they run. Even a program which is running without waiting for user interaction will often end up giving CPU cycles to the idle loop because the program is waiting for disk I/O operations, such as data transfers to or from a hard disk drive, to complete. For example, if a single disk head seek takes 10 milliseconds, a computer processor clocked at 2 gigahertz can execute 20 million idle instruction cycles during those 10 milliseconds. Each such seek which occurs per second will cause the CPU to spend an additional 1% of its time in the idle loop.
In many conventional modern operating systems, multiple programs may be run at once. A distinction is often made between the program the user is interacting with or waiting for, called the foreground process, and one or more programs that are running tasks less important to the user at the moment, called background processes. Background processes may run at a lower scheduling priority than the foreground process, so that the background processes will not unduly slow down the primary computational task. Some such background processes may voluntarily set their own priorities to a very low level, sometimes referred to as “idle priority”, in an attempt to have little or no impact on the apparent speed of the computer from the user's perspective. With such a low priority, many conventional operating systems will allow these processes to run only when the system would otherwise be running its idle loop.
Such a scheme may work adequately if a background process is only going to utilize processor computation resources. However, if the background process accesses the disk at the same time that the foreground process does, system performance may rapidly deteriorate. Because a single disk head seek can take approximately 10 milliseconds, only 100 such seeks may be performed every second. If the background process causes the disk heads to move away from the area of the disk that the foreground process was about to access next, the foreground process may spend 20 milliseconds waiting for the disk head to move to the location the background process is accessing and back again.
When the foreground process is performing tasks whose completion time is bound by disk I/O speed, the disk head seeks resulting from sharing disk access with the background process can cause the overall task to take many times longer to complete than it would if the foreground process had exclusive use of the disk. This is because the disk may be performing many seeks where few or none would be needed if only one process were accessing the disk.
By way of example, one background process could be an indexing process. The indexing process may perform many disk I/O operations, such as when indexing the contents of the user's hard disk to allow the user to rapidly find files which contain certain words or phrases. Such a background process, even if set to run at “idle priority”, may greatly slow down the apparent speed of a foreground process that performs disk I/O operations because, while running, the indexing process is constantly reading from and writing to the user's hard disk.
As discussed below, embodiments of the present invention enable the efficient use of shared resources by different processes, such as background and foreground processes sharing a mass storage device. Thus, disk intensive operations, such as file indexing, do not unduly interfere with higher priority processes.
One embodiment provides a method of determining when to perform a computer background process, the method comprising: allowing the computer background process to access a computer resource for a first predetermined time period; after the first predetermined time period has elapsed, inhibiting the computer background process from accessing the computer resource for a second predetermined time period; after the second predetermined time period has elapsed, determining if the computer resource is being used by another process, wherein if the computer resource is being used by another process, waiting for a third predetermined time period and again determining if the computer resource is being used by another process, and if the computer resource is not being used by another process, allowing the computer background process to access the computer resource again.
Another embodiment provides a computer system that detects a computer resource idle condition, the computer system comprising: a processor; memory coupled to the processor; a computer resource; and program instructions stored in computer readable memory configured to: enable a computer background process to access the computer resource for a first time period; after the first time period has elapsed, prevent the computer background process from accessing the computer resource for a second time period; determine if the computer resource is idle; allow the computer background process to access the computer resource again if the computer resource is idle; prevent the computer background process from accessing the computer resource for a third time period if the computer resource is not idle; and, after the third time period, again determine if the computer resource is being used by another process.
Still another embodiment provides a method of allocating access to a computer resource, the method comprising: permitting a first process to access a computer resource for a first time period; after the first time period has elapsed, inhibiting the first process from accessing the computer resource for a second time period; after the second time period has elapsed, determining if the computer resource is idle based at least in part on a computer resource performance indicator, wherein if the computer resource is not idle, waiting for a third predetermined time period and again determining if the computer resource is idle, and if the computer resource is idle, allowing the first process to access the computer resource again.
Yet another embodiment provides a system that allocates access to a computer resource, the system comprising: computer readable memory; and instructions stored in the computer readable memory configured to: permit a first process to access a computer resource for a first time period; after the first time period has elapsed, inhibit the first process from accessing the computer resource for a second time period; after the second time period has elapsed, determine if the computer resource is idle based at least in part on a computer resource performance indicator, wherein if the computer resource is not idle, cause the first process to wait for a third predetermined time period and again determine if the computer resource is idle, and if the computer resource is idle, allow the first process to access the computer resource again.
Embodiments of the present invention determine when a computer and/or resource therein is idle. The determination can take into account the processor or central processing unit (CPU) load, as measured by the time spent in the idle loop, as well as the load on other shared system resources, such as disk drives. Based on such determination, a background process is selectively provided access to the shared resource.
Unless otherwise indicated, the functions described herein are preferably performed by executable code and instructions running on one or more general-purpose computers, terminals, personal digital assistants, other processor-based systems, or the like. However, the present invention can also be implemented using special purpose computers, state machines, and/or hardwired electronic circuits. The example processes described herein do not necessarily have to be performed in the described sequence, and not all states have to be reached or performed.
Embodiments of the present invention can be used with numerous different operating systems, including by way of example and not limitation, Microsoft's Windows operating systems, Sun Solaris operating systems, Linux operating systems, Unix operating systems, Apple operating systems, as well as other operating systems.
By way of example, with respect to operating systems based on Microsoft Windows NT (including without limitation Windows 2000, Windows Server 2003, and Windows XP), the operating system provides a mechanism whereby the various subsystems, such as the CPU, network hardware, disk drives, and other mass storage devices, can include “performance counters” which are used to record statistics regarding their operation. For example, a network interface might provide information about the number of packets the network interface has received, the number of packets waiting to be sent, and other values that would allow a program to analyze or display the current load and performance of the network hardware.
For a disk drive, the disk-related statistics can include the percentage of time the disk is idle, the average number of bytes or other data amount per read, the number of writes per second, and many other similar values. These values are made available to running programs through a variety of mechanisms, including the system registry, the “Performance Data Helper” library, and/or Windows Management Instrumentation (WMI). Some of these values are averages or occurrences over time (for example, bytes per second), and some values, such as “current disk queue length,” give a program access to what is happening at substantially the current moment.
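By way of illustration, a user-mode program can sample such a counter through the Performance Data Helper library. The following is a minimal sketch, assuming the standard PDH calls and the English counter path for the aggregate current disk queue length; the sampling loop and one-second interval are illustrative only.

// Sketch: sampling "Current Disk Queue Length" via the Performance Data Helper
// (PDH) library. Link with pdh.lib. On non-English versions of Windows the
// localized counter name would be needed.
#include <windows.h>
#include <pdh.h>
#include <stdio.h>

int main() {
    PDH_HQUERY query = NULL;
    PDH_HCOUNTER counter = NULL;

    if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS)
        return 1;

    // The "_Total" instance aggregates the queue lengths of all physical disks.
    if (PdhAddCounter(query,
                      TEXT("\\PhysicalDisk(_Total)\\Current Disk Queue Length"),
                      0, &counter) != ERROR_SUCCESS) {
        PdhCloseQuery(query);
        return 1;
    }

    for (int i = 0; i < 10; ++i) {
        if (PdhCollectQueryData(query) == ERROR_SUCCESS) {
            PDH_FMT_COUNTERVALUE value;
            if (PdhGetFormattedCounterValue(counter, PDH_FMT_LONG, NULL,
                                            &value) == ERROR_SUCCESS) {
                printf("Current disk queue length: %ld\n", value.longValue);
            }
        }
        Sleep(1000);  // sample once per second for this illustration
    }

    PdhCloseQuery(query);
    return 0;
}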
By way of further example, Windows Server 2003 provides the performance counters described in Table 1.
In an example embodiment, a background process running at idle priority uses performance counters, optionally including one or more of the counters discussed above, and/or other mechanisms to determine the immediate load on a resource it wishes to use, such as a magnetic or optical mass storage device. The background process can then determine when idle cycles are being allocated to the background process because another process, such as a foreground process, is waiting for an operation on that same resource to complete. In such cases, the background process optionally refrains from imposing an additional load on the resource, so that the other process can run without delay. The background process can periodically check the idle cycle allocation and selectively determine when to access the resource so as not to unduly inhibit foreground processes' access to the resource. This allows the system to run at substantially full speed, because the background process is only using idle CPU cycles to wait for the resource to become available and is not using the resource itself.
An embodiment optionally utilizes a background process which, under Windows NT-based operating systems, performs indexing of the contents of a user's hard disk without impacting system performance to an extent that would be readily noticeable by a user. The indexing process performs many disk I/O operations when indexing the contents of the user's hard disk to allow the user to rapidly find files which contain certain words, phrases, or strings.
By way of example, a search application can be stored on and executed by a user or host terminal. The search application can provide user interfaces for searching email, files, Web sites, cached Web pages, databases and/or the like. In addition, the search application can include a local index engine that indexes email, files, cached Web pages, databases and the like, stored in a data repository or database. For example, Web pages previously viewed in the search application's view pane or area, and optionally, stored Web pages previously viewed using other user browsers, or otherwise stored locally can be indexed. Separate indexes can be used for the email, files, cached Web pages, databases and the like, or a single index can be used for the foregoing.
The index engine can further include code configured as a scan engine or module that is used to determine whether a file is to be indexed. Thus, the index engine can also scan files to identify new targets, such as email, document files, Web pages, database entries, and the like, that have not yet been indexed, or targets that have previously been indexed but have since been modified. Optionally, rather than re-index all corresponding targets each time an index operation is performed, the index engine can incrementally index just the new or modified targets or documents. In addition, the index engine can refrain from indexing until it determines that the mass storage device, which stores the data or files to be indexed, is not being utilized by a higher priority or foreground process.
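For illustration, such a scan module might decide whether a target needs indexing or re-indexing by comparing the target's last-write time against the write time recorded when it was last indexed. The sketch below assumes a simple in-memory table of previously indexed write times; the names and the storage scheme are hypothetical.

// Sketch (C++17): deciding whether a file needs indexing or re-indexing by
// comparing its last-write time with the write time recorded at index time.
// The in-memory table stands in for the index's own metadata store.
#include <filesystem>
#include <map>

namespace fs = std::filesystem;

// Hypothetical index metadata: path -> file write time recorded when indexed.
static std::map<fs::path, fs::file_time_type> g_indexedTimes;

bool NeedsIndexing(const fs::path& path) {
    std::error_code ec;
    const auto current = fs::last_write_time(path, ec);
    if (ec) return false;                           // unreadable targets are skipped here

    const auto it = g_indexedTimes.find(path);
    if (it == g_indexedTimes.end()) return true;    // new target, never indexed
    return current != it->second;                   // modified since it was last indexed
}

void MarkIndexed(const fs::path& path) {
    std::error_code ec;
    const auto t = fs::last_write_time(path, ec);
    if (!ec) g_indexedTimes[path] = t;
}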
The index engine can utilize one or more indexing algorithms to create an index, such as a reverse or inverted index. The index includes a data structure that associates character strings with files, documents, and the like. In one example embodiment, for each word or character string found within a file or document, the index stores which fields of which documents or files contain that word or character string.
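A minimal sketch of such an inverted index follows; the type names and fields are illustrative only.

// Sketch of a minimal inverted index: each word or character string maps to the
// list of documents (and fields) that contain it.
#include <map>
#include <string>
#include <vector>

struct Posting {
    int documentId;      // which document or file contains the word
    std::string field;   // which field of the document (e.g., "title" or "body")
};

class InvertedIndex {
public:
    void Add(const std::string& word, int documentId, const std::string& field) {
        index_[word].push_back(Posting{documentId, field});
    }

    // Returns every (document, field) pair containing the word; empty if none.
    const std::vector<Posting>& Lookup(const std::string& word) const {
        static const std::vector<Posting> empty;
        const auto it = index_.find(word);
        return it != index_.end() ? it->second : empty;
    }

private:
    std::map<std::string, std::vector<Posting>> index_;
};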
By way of example, the background process checks a performance counter, such as the counter named “\\PhysicalDisk\Current Disk Queue Length” for the specific disk drive instance it wishes to read from or write to. Alternatively or in addition, the background process can access the aggregate total value of the current disk queue lengths for all of the physical disk drives, whose instance is known as “_Total”. Advantageously, this is easier than keeping track of which disk drive the process is about to access and checking only that one drive's queue length.
However, because Windows NT-based operating systems perform many disk and mass storage I/O operations asynchronously to and from a system cache, the background process can mistake disk I/O being performed on its own behalf as disk I/O from another process. For example, when a process writes to the disk, the data is typically written to a memory based disk cache, and then written out to the disk at a later time, allowing the process to continue operations without waiting for the disk write to complete. Thus, a check of the “current disk queue length” performance counter may not be, on its own, adequate or sufficient to allow a background process to determine whether or not another process is using the disk drive, because a queued operation might be on behalf of the background process itself. If the background process were to give up the idle CPU cycles being offered under these circumstances, the background process would “err on the side of caution” and not affect the speed of foreground processes, but the background process also would not make full use of the available processor and disk bandwidth.
In one embodiment, this problem is solved by optionally allowing the background process to use idle cycles for a certain accumulated amount of time, such as 90 milliseconds or other designated time period, to perform disk intensive operations. The background process then waits a given amount of time, such as, by way of example, 10 milliseconds, and checks for pending disk or mass storage I/O by checking the “current disk queue length” counter, or other appropriate performance indicator. If the counter value is 0, or less than a specified threshold, the background process takes another time period, such as a 90 millisecond slice of idle time, and can utilize the disk. When the counter value is non-zero, or greater than a designated threshold, the background process waits a designated amount of time, such as 10 milliseconds, before checking again. This 90/10 procedure allows the background process to use 90 percent, or other designated portion, of the idle time to perform computations and to access the disk.
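A rough sketch of this back-off loop is shown below. The 90 millisecond and 10 millisecond figures are the example values above, and the two callbacks are assumed helpers: one performs a small unit of the background work, and the other returns the current disk queue length (for example, via the performance counter read illustrated earlier).

// Sketch of the 90/10 idle-sharing loop: perform disk-intensive work for up to
// about 90 ms of idle time, then pause and resume only once the disk queue
// appears empty. The slice lengths and helper callbacks are illustrative.
#include <windows.h>
#include <functional>

void RunBackgroundWork(const std::function<bool()>& doSomeWork,        // one small unit of work;
                                                                       // returns false when done
                       const std::function<long()>& diskQueueLength)   // e.g., the PDH read above
{
    const DWORD WORK_SLICE_MS = 90;
    const DWORD WAIT_MS = 10;

    for (;;) {
        // Spend roughly one work slice of idle time on disk-intensive work.
        const DWORD start = GetTickCount();
        while (GetTickCount() - start < WORK_SLICE_MS) {
            if (!doSomeWork())
                return;                          // nothing left to do
        }

        // Back off: wait, and continue only once no other process appears to be
        // queued on the disk.
        do {
            Sleep(WAIT_MS);
        } while (diskQueueLength() > 0);
    }
}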
In another embodiment, the process gates only the disk operations performed in the background process, so that the background process can still use CPU or other resources, even as the other processes use the disk. For example, during the 10 millisecond waiting period, the background process may choose to perform other tasks, such as computation, that do not use mass storage I/O without affecting the speed of the foreground process.
When the disk is being used heavily by a foreground process, it is most likely that the disk queue length will be greater than zero when the background process checks. The fact that the “idle priority” background process is being given CPU cycles indicates that other processes on the system are waiting for something. If the other processes are waiting for the disk, then the “current disk queue length” counter will be non-zero. Despite being given the idle CPU cycles, the background process will therefore not impose an additional load on the shared resource at this time, though it may optionally choose to perform purely computational tasks or tasks that access other shared resources not currently in use.
By way of another example, Microsoft Windows operating systems based on Windows 95, including Windows 98, Windows ME, and other Windows 9x variants, provide a similar performance counter mechanism. Unfortunately, by default, these operating systems provide no counters for the disk subsystem which measure the immediate load, such as the “current disk queue length” counter provided in NT-based systems. Thus, under a Windows 95-based operating system, such a counter needs to be provided. Windows 95 and its variants provide a mechanism called “Virtual Device Drivers,” also known as VxDs, which can be inserted into the system I/O chain dynamically by a running application. Under Windows 9x operating systems, such drivers can monitor disk access operations by installing themselves via the IFS (installable file system) manager service, IFSMgr_InstallFileSystemApiHook( ), a system entry point available to VxDs.
During device initialization, a VxD calls this IFS service, passing the address of its hook procedure as an argument. The IFS manager service, IFSMgr_InstallFileSystemApiHook, inserts that address into an API function hook list. When IFS manager API functions are called, each installed hook procedure is invoked with parameters indicating the type of request, the drive being accessed, a pointer to an IFS I/O request structure, and other information. Thus, via the hook, the VxD can monitor file operations that result in disk activity, such as open, read, or write operations.
In an embodiment, a background process running under Windows 95 and its successors dynamically loads a VxD which uses this entry point to insert code of its own which is called whenever a file I/O operation occurs. The VxD maintains a count of the number of threads, wherein each process contains one or more simultaneously executing threads, which have called into the file system but not returned. This mechanism provides a value similar to the NT-based operating system's “current disk queue length” counter, and the background process can obtain this counter value by numerous methods, such as, by way of example, a DeviceIoControl( ) system call, which sends a control code to a specified device driver, causing the corresponding device to perform the corresponding operation.
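The general shape of such a hook is sketched below. The type and service names approximate the installable file system interface from the Windows 9x DDK, and the VxD declarations, calling conventions, and build details are omitted, so this should be read as a conceptual sketch rather than buildable driver code.

// Conceptual sketch of a Windows 9x IFS API hook that counts file-system calls
// in flight. The types pIFSFunc, pioreq, and ppIFSFileHookFunc approximate those
// declared in the 9x DDK's ifs.h; VxD boilerplate is omitted.
static long g_callsInFileSystem = 0;      // analogue of "current disk queue length"
static ppIFSFileHookFunc g_prevHook = 0;  // previous hook in the chain

int MyFileSystemApiHook(pIFSFunc pfn, int fn, int Drive,
                        int ResType, int CodePage, pioreq pir)
{
    ++g_callsInFileSystem;                 // a thread has entered the file system
    int result = (*g_prevHook)(pfn, fn, Drive, ResType, CodePage, pir);   // chain onward
    --g_callsInFileSystem;                 // the call has returned
    return result;
}

// During VxD initialization:
//   g_prevHook = IFSMgr_InstallFileSystemApiHook(MyFileSystemApiHook);
// The current value of g_callsInFileSystem can then be returned to the
// background process in response to a DeviceIoControl( ) request.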
One embodiment, which can be used in systems utilizing Windows 95-based operating systems or the like, has the VxD increment a counter each time a file or disk I/O operation is initiated. The background process checks the value of this counter before and after an interval, such as the 10 millisecond wait interval described above. If the value has changed, the background process uses this as an indication that another process has used the disk in the interim and is possibly still using the disk, and so backs off and waits for an additional period or periods of time, such as additional 10 millisecond intervals, until the counter value stops changing.
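The user-mode side of such a check might look like the following sketch. The device name and the control code passed to DeviceIoControl( ) are hypothetical placeholders for whatever the monitoring VxD actually exposes.

// Sketch: back-off loop that reads an operation counter from a monitoring
// VxD/driver via DeviceIoControl(). The device name "\\.\DISKMON" and the
// control code are hypothetical.
#include <windows.h>

const DWORD IOCTL_GET_FILEOP_COUNT = 0x800;       // hypothetical private control code

DWORD ReadFileOpCounter(HANDLE device) {
    DWORD count = 0, bytesReturned = 0;
    DeviceIoControl(device, IOCTL_GET_FILEOP_COUNT,
                    NULL, 0,                      // no input buffer
                    &count, sizeof(count),        // counter value is returned here
                    &bytesReturned, NULL);
    return count;
}

// Wait until no other process appears to have touched the disk during an interval.
// The device handle would typically be obtained with something like
//   CreateFileA("\\\\.\\DISKMON", 0, 0, NULL, OPEN_EXISTING,
//               FILE_FLAG_DELETE_ON_CLOSE, NULL);
void WaitForQuietDisk(HANDLE device, DWORD intervalMs) {
    for (;;) {
        const DWORD before = ReadFileOpCounter(device);
        Sleep(intervalMs);                        // e.g., the 10 millisecond interval above
        const DWORD after = ReadFileOpCounter(device);
        if (after == before)
            return;                               // counter stopped changing: disk looks idle
        // Otherwise another process used the disk in the interim; keep backing off.
    }
}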
While the foregoing example utilizes a 90/10 cycle ratio, other ratios, or combinations of ratios, can be used, including other integer ratios and non-integer ratios. For example, rather than jumping into a small period of nonuse by a foreground process and immediately taking another 90 milliseconds of disk use, the process can drop to an 18/7 ratio once disk activity is detected and ramp back up through 30/10, 50/10, 70/10, and 82/11 to 90/10, or approximations thereof. Still other ratios can be used, such as approximately 10:1, 8:1, 6:1, 4:1, 2:1, 1:1, 20:1, and so on. Thus, once disk activity has been detected, the process optionally ramps up slowly to full speed to make sure that the background process does not slow the system down too much by prematurely designating a disk as idle when the foreground process was actually just about to continue its I/O after a very brief pause for computation.
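One way to express such a ramp is sketched below; the (work, wait) pairs mirror the example ratios above and are not fixed values.

// Sketch: ramping the work/wait duty cycle back up after disk activity has been
// detected, instead of immediately resuming full 90/10 use.
#include <cstddef>

struct DutyCycle { unsigned workMs; unsigned waitMs; };

static const DutyCycle kRampSchedule[] = {
    {18, 7}, {30, 10}, {50, 10}, {70, 10}, {82, 11}, {90, 10},
};
static const std::size_t kRampSteps = sizeof(kRampSchedule) / sizeof(kRampSchedule[0]);

class RampController {
public:
    // Call when the shared resource was found busy: restart at the gentlest ratio.
    void OnResourceBusy() { step_ = 0; }

    // Call after a slice completed without detecting foreground activity:
    // move one step closer to the full 90/10 cycle.
    void OnResourceIdle() { if (step_ + 1 < kRampSteps) ++step_; }

    DutyCycle Current() const { return kRampSchedule[step_]; }

private:
    std::size_t step_ = 0;
};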
The foregoing are example techniques for determining when a shared resource is being used by foreground processes. Other techniques can be used for determining when other applications are accessing any resource with significant context-switching time, such as a hard disk, an optical drive, a tape drive, a floppy disk drive, or an external coprocessor, such as a graphics coprocessor.
By way of example, a filter driver can be installed ahead of a system device driver associated with the mass storage device. The filter driver monitors each call into the device driver, and keeps track of one or more of the following:
a. how many calls are actively in the device driver at any given time
b. when the last call was made into the device driver
c. how many total calls have been made into the device driver
Based on one or more of the foregoing, the filter driver, or a module in communication with the filter driver, determines whether the mass storage device is being accessed by a foreground or other process.
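The following fragment sketches how such a filter driver's dispatch path might maintain these statistics. Driver setup, such as attaching to the device stack in AddDevice and keeping state in a device extension, is omitted, and the names are illustrative rather than a definitive implementation.

// Conceptual sketch of a pass-through filter dispatch routine that records
// statistics about calls into the underlying disk driver. Real filter drivers
// keep this state in a device extension; globals are used here for brevity.
#include <ntddk.h>

static volatile LONG  g_ActiveCalls = 0;    // (a) calls currently in the lower driver
static LARGE_INTEGER  g_LastCallTime;       // (b) when the last call was made
static volatile LONG  g_TotalCalls  = 0;    // (c) total calls made
static PDEVICE_OBJECT g_LowerDevice;        // set when the filter attaches (AddDevice)

static NTSTATUS FilterCompletion(PDEVICE_OBJECT DeviceObject, PIRP Irp, PVOID Context)
{
    UNREFERENCED_PARAMETER(DeviceObject);
    UNREFERENCED_PARAMETER(Context);
    InterlockedDecrement(&g_ActiveCalls);   // the request has left the lower driver
    if (Irp->PendingReturned)
        IoMarkIrpPending(Irp);
    return STATUS_SUCCESS;
}

NTSTATUS FilterDispatch(PDEVICE_OBJECT DeviceObject, PIRP Irp)
{
    UNREFERENCED_PARAMETER(DeviceObject);

    InterlockedIncrement(&g_TotalCalls);
    InterlockedIncrement(&g_ActiveCalls);
    KeQuerySystemTime(&g_LastCallTime);
    // PsGetCurrentProcessId() could also be recorded here to attribute the
    // request to a particular process, as discussed further below.

    IoCopyCurrentIrpStackLocationToNext(Irp);
    IoSetCompletionRoutine(Irp, FilterCompletion, NULL, TRUE, TRUE, TRUE);
    return IoCallDriver(g_LowerDevice, Irp);
}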
Similarly, virtual device drivers can be dynamically installed that intercept file or disk I/O calls and thereby determine which applications or processes are accessing the mass storage device.
Still another technique utilizes a program that monitors system calls that can result in file and/or mass storage I/O operations being performed. The monitor program installs “thunks”, wherein the addresses of the system entry points are replaced with addresses that point to a stub that increments a counter and then calls the original system entry point.
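A conceptual sketch of such a stub appears below. A real monitor program would patch the import address table or system service table so that existing callers reach the stub; for simplicity this fragment only interposes on a plain function pointer.

// Sketch of a "thunk": the recorded entry point is replaced with a stub that
// increments a counter and then calls the original system entry point.
#include <windows.h>

static volatile LONG g_ioCalls = 0;

// Pointer through which callers reach "WriteFile" after the thunk is installed.
typedef BOOL (WINAPI *WriteFileFn)(HANDLE, LPCVOID, DWORD, LPDWORD, LPOVERLAPPED);
static WriteFileFn g_realWriteFile = WriteFile;    // original system entry point

static BOOL WINAPI WriteFileThunk(HANDLE h, LPCVOID buf, DWORD len,
                                  LPDWORD written, LPOVERLAPPED ov)
{
    InterlockedIncrement(&g_ioCalls);                  // record that an I/O call occurred
    return g_realWriteFile(h, buf, len, written, ov);  // forward to the original
}

// Installing the thunk consists of replacing every stored address of the system
// entry point with &WriteFileThunk, so callers pass through the stub.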
Yet another technique utilizes performance-monitoring facilities, such as SNMP (Simple Network Management Protocol), WMI (Windows Management Instrumentation), and the like.
Another technique is performed by timing mass storage operations in the background process and detecting when they take more than a certain or predetermined amount of time. For example, if a disk drive has been determined to take an average of 10 milliseconds per seek (based on historical readings), a read or write operation that takes more than 2×, or other selected multiple of, that time is a sign that the disk is busy. Thus, if a read, write, or other disk operation takes more than a predetermined amount of time, a determination is made that the disk is not idle, and the background process will not attempt to access the disk at this time.
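By way of a sketch, such timing can be performed with the high-resolution performance counter. The 2x multiple mirrors the example above, and maintaining the historical average is assumed to be done elsewhere.

// Sketch: timing a disk read and comparing it against a historical average to
// decide whether the disk appears busy.
#include <windows.h>

// Returns the elapsed time of one read, in milliseconds, or -1.0 on failure.
double TimedRead(HANDLE file, void* buffer, DWORD bytesToRead) {
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);

    DWORD bytesRead = 0;
    const BOOL ok = ReadFile(file, buffer, bytesToRead, &bytesRead, NULL);

    QueryPerformanceCounter(&end);
    if (!ok) return -1.0;
    return 1000.0 * (double)(end.QuadPart - start.QuadPart) / (double)freq.QuadPart;
}

// An operation taking more than `multiple` times the historical average
// suggests that another process is contending for the disk.
bool DiskLooksBusy(double elapsedMs, double historicalAverageMs, double multiple = 2.0) {
    return elapsedMs > historicalAverageMs * multiple;
}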
Optionally, rather than utilize the 90/10 cycle described above, one embodiment associates disk operations with their processes. Thus, rather than just relying on the performance counters, an NT file system filter driver is used that determines which process has performed the disk I/O and can “ignore” changes in the counter or items in an I/O queue when those items are known to be from the background process, either by their process ID or by the name of the file being accessed.
The use of the foregoing techniques is not limited to shared mass storage devices; they can be similarly applied to other shared resources. For example, similar techniques can be used with process switching, to run a background task less often if the foreground task keeps reloading data into the processor cache. Another application of the foregoing techniques is with respect to the use of a wireless card that has to be switched between two networks, where it takes a significant amount of time to make the switch. Similarly, the above techniques can be applied to a shared network with limited bandwidth. For example, there may be multiple processes trying to access the Internet, and use of the foregoing techniques avoids having a background process slow down a transfer being made by a foreground process.
It should be understood that certain variations and modifications of this invention will suggest themselves to one of ordinary skill in the art. The scope of the present invention is not intended to be limited by the illustrations or the foregoing descriptions thereof.
This application claims the benefit under 35 U.S.C. 119(e) of U.S. Provisional Application No. 60/528,787, filed Dec. 10, 2003, the contents of which are incorporated herein in their entirety.
Provisional Application No. 60/528,787, filed Dec. 2003, US.