1. Technical Field
This invention relates to operating systems management. In particular, this invention relates to adaptive partitioning for operating systems.
2. Related Art
Fair-share scheduling is a scheduling strategy, known in the art, in which CPU usage is equally distributed among system users or groups, as opposed to being equally distributed among processes. For example, if four users (A, B, C, D) are each concurrently executing one process, the scheduler logically divides the available CPU cycles such that each user gets 25% of the whole (100% / 4 = 25%). If user B starts a second process, each user still receives 25% of the total cycles, but each of user B's two processes now uses 12.5%. On the other hand, if a new user starts a process on the system, the scheduler reapportions the available CPU cycles such that each user gets 20% of the whole (100% / 5 = 20%). Other scheduling methods, such as last-in-first-out (LIFO) scheduling, round-robin scheduling, rate-monotonic scheduling, and earliest-deadline-first scheduling, are also known.
In a conventional fair-share scheduling system, a high-priority workload can achieve a low response time only because some lower-priority workload's response time is allowed to be high. Low-priority processes can tax a microprocessor's resources by consuming large quantities of CPU budget, which may leave little available CPU budget for processes that must run immediately but execute infrequently. In addition, an untrusted application may gain access to a CPU resource and enter an infinite loop, starving other legitimate processes of their required CPU budgets. Therefore, a need exists for a scheduling strategy that allows critical processes adequate access to system resources when needed.
An adaptive partition system provides a method for scheduling in an operating system in which the system creates one or more adaptive partitions, each comprising one or more threads or groups of threads. The operating system specifies one or more adaptive partition parameters, and the scheduler may designate one or more critical threads. The scheduler may assign each adaptive partition a CPU time budget as a percentage of the overall system budget and apply one or more minimum CPU percentage execution time guarantees to the threads or groups of threads when the operating system is overloaded. The operating system may execute the threads if there is CPU budget available for use by the partition. The system calculates the CPU budget consumed by each adaptive partition over a sliding time window. The system allows critical threads to use an additional amount of CPU budget even when the adaptive partition containing the critical thread has exhausted its CPU budget, and it deducts the amounts of CPU budget consumed by each adaptive partition in a process known as microbilling.
The adaptive partition scheduler also provides a method for scheduling which may include determining whether any of the threads in a number of adaptive partitions is critical. The operating system may evaluate an ordering function for the adaptive partitions and execute the thread from the adaptive partition that has the highest value of the ordering function. The system applies the time spent executing the thread against the CPU budget of the thread's adaptive partition (microbilling), and it applies that time against the critical time budget of the adaptive partition if and only if the thread is critical and would not have been scheduled had it not been critical.
The adaptive partition scheduler provides a method for transferring status of a critical thread in a message-passing operating system, which may include sending a message from a thread in an adaptive partition to a server, where the adaptive partition has a CPU budget and a critical time budget. The server receives the message and may assign the priority level of the sending thread to the receiving thread. To avoid priority inversion, the scheduler may join the receiving thread to the adaptive partition of the sending thread and may begin billing the execution time of the receiving thread to the partition of the sending thread.
The adaptive partition scheduler also provides a method of prioritizing access to a mutex in a message-passing operating system, which may include determining, from a list of waiting threads, the waiting thread that is most likely to run next after the current thread, where the current thread is holding a mutex and the waiting threads are waiting for that mutex. The scheduler may raise the priority level of the current thread to the priority level of the waiting thread most likely to run next. The system calculates the CPU waiting time incurred while the current thread holds the mutex and charges it against the current thread's assigned CPU budget until that budget reaches zero. If the current thread's CPU budget reaches zero, the system charges the remaining CPU waiting time, i.e., the difference between the CPU waiting time and the amount already charged to the current thread, against the waiting thread's assigned CPU budget.
Other systems, methods, features and advantages of the invention will be, or will become, apparent to one with skill in the art upon examination of the following figures and detailed description. It is intended that all such additional systems, methods, features and advantages be included within this description, be within the scope of the invention, and be protected by the following claims.
The invention can be better understood with reference to the following drawings and description. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. Moreover, in the figures, like reference numerals designate corresponding parts throughout the different views.
As shown at block 110, one or more adaptive partitions may be created, each comprising one or more threads or groups of threads in an operating system. The maximum number of partitions may be dynamically configured; typically, the scheduler may be configured for a maximum of 16 adaptive partitions, though more partitions may be created as needed. One adaptive partition must be the system adaptive partition, in which at least the idle thread and the process manager's threads exist. At boot time, the system designer may specify the properties of the adaptive partitions (block 115), including: the list of processes that are initially part of each adaptive partition; the length of the averaging window, in milliseconds; the guaranteed CPU utilization (in percent) allocated to the adaptive partition; the maximum critical time budget of the adaptive partition; and the policy for how the system should respond should the adaptive partition exhaust its critical time budget. The averaging window is the time over which the scheduler will try to keep adaptive partitions at their guaranteed CPU percentages when the system is overloaded. A typical length is 100 milliseconds, though the designer may choose an appropriate averaging window as needed. The window size is specified only at boot time and is the same for all adaptive partitions.
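These boot-time properties might be represented as in the following minimal C sketch; all names here (adaptive_partition_t, budget_percent, and so on) are illustrative assumptions rather than an actual API.

```c
#include <stdint.h>

#define MAX_PARTITIONS 16  /* typical configured maximum, per the text */

/* Per-partition parameters fixed at boot time (hypothetical layout). */
typedef struct {
    const char *name;               /* e.g. "System" for the system partition */
    uint32_t    budget_percent;     /* guaranteed CPU share; all must sum to 100 */
    uint32_t    critical_budget_ms; /* critical time allowed per averaging window */
    int         bankruptcy_policy;  /* response if the critical budget is exhausted */
} adaptive_partition_t;

/* One global averaging window, fixed at boot and shared by all partitions;
   100 ms is a typical value. */
static const uint32_t windowsize_ms = 100;

static adaptive_partition_t partitions[MAX_PARTITIONS];
```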
A parent process in an adaptive partition may spawn, or generate, other processes or threads during operation. Child processes and threads inherit the adaptive partition of their parent process automatically. However, an API may be provided that will allow spawning threads into other adaptive partitions. This API may be made available only to code with sufficient privilege. For example, a system application launcher may have such privilege.
The size of the averaging window (“windowsize”) is specified in clock ticks, which are converted internally to milliseconds. A clock tick is the interval at which the clock interrupt (the system timer) fires. Windowsize can range from a minimum of two ticks (typically 2 ms) to 255 ticks. Guaranteed CPU budgets and critical time budgets are averaged over the same window size.
At block 120, critical threads may be designated. Critical threads provide the ability for real-time behavior within their partitions: designating a thread as critical gives it the ability to run immediately even if its adaptive partition's budget is exceeded. When this occurs, the adaptive partition is said to have gone into short-term debt. At boot time, the system designer may label selected adaptive partitions as critical and may give each a critical time budget, specified in time units (for example, milliseconds). This is the amount of time all critical threads may use, above the partition's normal budget, during an averaging window. A critical thread will run even if its adaptive partition is out of budget, as long as the partition still has critical budget. A minimal sketch of this eligibility rule follows.
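In the sketch below, the field and function names are assumed for illustration: a thread may run when its partition has ordinary budget left, or when the thread is critical and critical budget remains.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-window accounting for one partition. */
typedef struct {
    uint64_t budget_us;          /* guaranteed share of the averaging window */
    uint64_t used_us;            /* time microbilled so far this window */
    uint64_t critical_budget_us; /* extra time reserved for critical threads */
    uint64_t critical_used_us;   /* critical time consumed this window */
} ap_account_t;

/* A critical thread runs even when ordinary budget is gone, as long as
   critical budget remains -- the short-term debt described above. */
static bool may_run_now(const ap_account_t *ap, bool thread_is_critical)
{
    if (ap->used_us < ap->budget_us)
        return true; /* ordinary budget remains */
    return thread_is_critical
        && ap->critical_used_us < ap->critical_budget_us;
}
```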
Next, an adaptive partition may be assigned a CPU budget as a percentage of the total system budget (block 125). The sum of all adaptive partitions' CPU percentages must always be 100%. For the purpose of assigning CPU shares, an SMP machine, regardless of the number of processors, is considered to be a single computer with 100% CPU to share out among its adaptive partitions; therefore, engineering the parameters for adaptive partition scheduling is no different for an SMP machine than for one with a single processor. Adaptive partition parameters may typically be set at boot time, but the ability to modify them at run time may also be available. This includes redistributing the guaranteed CPU utilization between adaptive partitions, as well as modifying the critical time budget. The ability to modify adaptive partition parameters may be made available only to code with sufficient privilege.
As an added convenience, the designer may define a set of operating modes, where a mode is a set of adaptive partition parameters. Modes are typically specified at boot time, and a run-time API allows the user to switch modes as needed. For example, a separate set of CPU percentages may be needed during startup versus normal operations. These two modes would be set up, and the system may switch from one mode (the startup mode) to the next (the normal-operation mode) when initialization is complete. When the parameters of the set of adaptive partitions change at run time, for example because of a mode switch, the change may take up to one averaging window to take full effect.
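A mode switch might be sketched as follows; switch_mode and the mode layout are assumptions for illustration, and, as noted above, a real parameter change may take up to one averaging window to take full effect.

```c
#include <stdint.h>

#define MAX_PARTITIONS 16

/* A mode is simply a complete set of adaptive partition percentages. */
typedef struct {
    uint32_t budget_percent[MAX_PARTITIONS];
} sched_mode_t;

/* Hypothetical privileged call: install a new set of percentages, e.g.
   when switching from a startup mode to a normal-operation mode. */
static void switch_mode(uint32_t current[MAX_PARTITIONS],
                        const sched_mode_t *mode)
{
    for (int i = 0; i < MAX_PARTITIONS; i++)
        current[i] = mode->budget_percent[i];
}
```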
At block 130, one or more minimum CPU percentage execution time guarantees may be applied to the threads or groups of threads when the operating system is overloaded. This guarantees that the system will allocate a guaranteed CPU percentage as a resource for threads that are designated as critical. As a system design parameter, this may necessarily starve other threads for a period of time while the critical threads are executing.
If there is available CPU budget during normal load operations (i.e., there is no lack of system resources for allocation), then the operating system will execute the threads assigned to a partition (block 135). Under normal load, the system may run a hard real-time scheduler: the highest-priority thread in the system runs immediately when it becomes ready (usually via an interrupt event). There is no delay imposed by a fixed-timeslice partition scheduler (i.e., a scheduler that decides what to run next based on fixed time-period allocations). In effect, CPU time is efficiently spent on those threads that most deserve it, without a timeslice partition scheduler introducing any scheduling latencies.
When the system is not overloaded, for example, if one adaptive partition chooses to sleep, the scheduler gives the CPU time to other adaptive partitions—even if the other adaptive partitions are over budget: if one adaptive partition has a thread with the highest priority, the scheduler hands out the “free” time to that thread. If two or more adaptive partitions have threads with the same highest priority, the scheduler divides the free time in proportion to the other adaptive partitions' percentages. This is necessary to prevent long ready-queue delay times in the case where two adaptive partitions have the same priority.
For example, suppose there are three adaptive partitions with 70%, 20% and 10% guarantees, ready threads at the same priority, and the system is overloaded. When the 70% adaptive partition goes to sleep, the scheduler hands CPU time out to the 20% and 10% adaptive partitions in a 2:1 ratio. If the sleeping adaptive partition sleeps for a short time (i.e. less than windowsize−percentage*windowsize milliseconds within one averaging window), then the scheduler may make sure that the sleeping partition will later get CPU time up to its guaranteed limit. That means that the 20% and 10% adaptive partitions in the example must pay back the time they utilized. If the sleeping adaptive partition sleeps for a long time, then some or all of the time given to other adaptive partitions may become free.
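The 2:1 division in this example amounts to sharing free time in proportion to the ready partitions' guaranteed percentages. Below is a minimal sketch, assuming hypothetical function and parameter names, that reproduces the example's numbers.

```c
#include <stdint.h>
#include <stdio.h>

/* Divide free CPU time among the ready partitions in proportion to
   their guaranteed percentages. */
static void share_free_time(uint64_t free_us,
                            const uint32_t pct[], const int ready[],
                            uint64_t out_us[], int n)
{
    uint32_t total = 0;
    for (int i = 0; i < n; i++)
        if (ready[i])
            total += pct[i];
    for (int i = 0; i < n; i++)
        out_us[i] = ready[i] ? free_us * pct[i] / total : 0;
}

int main(void)
{
    uint32_t pct[]   = { 70, 20, 10 };
    int      ready[] = { 0, 1, 1 };   /* the 70% partition is asleep */
    uint64_t share[3];

    share_free_time(30000, pct, ready, share, 3); /* 30 ms of free time */
    printf("%llu %llu %llu\n",                    /* prints 0 20000 10000: a 2:1 split */
           (unsigned long long)share[0],
           (unsigned long long)share[1],
           (unsigned long long)share[2]);
    return 0;
}
```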
High-priority threads running under FIFO scheduling that run for a long time will be preempted when their adaptive partition runs out of budget. If a thread exhausts its budget in the middle of its timeslice, the scheduler may let the thread run to the end of its timeslice, causing the adaptive partition to briefly run over budget.
The CPU budget consumed by each adaptive partition may be calculated by the scheduler over the sliding time window (block 140). If the threads in an adaptive partition have exhausted the assigned CPU budget and the threads are designated as critical, then the scheduler may allow the critical threads to use additional CPU budget (block 145). Under heavy load, if an adaptive partition exceeds its CPU budget, its highest-priority thread does not run until the partition once again has time available in its budget; this safeguard governs how the system divides insufficient CPU time among the partitions. In this state, the processor runs the highest-priority thread in an adaptive partition that has CPU time remaining in its budget. When the system is overloaded, the scheduler balances adaptive partitions to within 1% of their guaranteed CPU percentages, or +/− timeslice/windowsize, whichever is greater.
If all adaptive partitions are at their CPU limit, then the adaptive partition algorithm may specify that the highest-priority thread must be run. If two adaptive partitions have the same highest priority, then the adaptive partition that has used the smaller fraction of its budget may be run; this is needed to prevent long ready-queue delays that would otherwise occur. For example, if the window size is 100 ms, adaptive partition 1 is allotted 80% and has used 40 ms, adaptive partition 2 is allotted 20% and has used 5 ms, and both partitions are at priority 10, then adaptive partition 2 is run, because its relative fraction used is 5 ms/20 ms, or 0.25, while adaptive partition 1's relative fraction used is 40 ms/80 ms, or 0.50.
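This comparison can be made without division by cross-multiplying the used times and budgets; the sketch below, with assumed names, reproduces the example's outcome.

```c
#include <stdbool.h>
#include <stdint.h>

/* Equal-priority tie-break: run the partition that has used the smaller
   fraction of its budget. Cross-multiplying avoids division:
   used_a/budget_a < used_b/budget_b  <=>  used_a*budget_b < used_b*budget_a. */
static bool a_runs_before_b(uint64_t used_a, uint64_t budget_a,
                            uint64_t used_b, uint64_t budget_b)
{
    return used_a * budget_b < used_b * budget_a;
}

/* With the example above: a = partition 2 (5 ms of 20 ms), b = partition 1
   (40 ms of 80 ms). 5*80 = 400 < 40*20 = 800, so partition 2 runs first. */
```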
Overload or underload is a property of the whole system, not of a single adaptive partition. Adaptive partitions may legitimately go over budget when some other adaptive partition is sleeping. This is not by itself considered to be a system overload, and therefore does not trigger the overload-notification API.
The scheduler accounting may fully accumulate all of the CPU time spent executing by a thread. This may include, but is not limited to, time spent in interrupt-handling threads, kernel execution time, and partial timeslice execution. This is known as microbilling. Time spent by the idle thread is never billed, while time spent spin-locked on one processor waiting to enter the kernel may be charged to the thread that is trying to enter the kernel. After the CPU time has been calculated for the threads in an adaptive partition, the scheduler microbills the amount used against the adaptive partition's assigned CPU budget (block 150).
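A minimal sketch of this accounting step, with hypothetical structure and function names:

```c
#include <stdint.h>

/* Microbilling: every interval a thread executes -- including kernel time
   and partial timeslices -- is accumulated against its partition.
   Idle-thread time is never billed. */
typedef struct {
    uint64_t used_us;  /* CPU time billed to this partition in the window */
} ap_bill_t;

static void microbill(ap_bill_t *ap, uint64_t start_us, uint64_t stop_us,
                      int is_idle_thread)
{
    if (!is_idle_thread)
        ap->used_us += stop_us - start_us;
}
```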
Critical Threads
Under maximum system load, certain adaptive partitions may use up their entire budgets. As described above, designating a thread as critical gives it the ability to run immediately even when that happens, putting the adaptive partition into short-term debt. The critical time budget assigned at boot time bounds this debt: it is the amount of time all critical threads may use, above the partition's normal budget, during an averaging window, and a critical thread will run even when its partition is out of ordinary budget so long as critical budget remains.
The system may automatically mark interrupt threads that are initiated by an I/O interrupt as critical. The designer also may specify a set of additional OS wakeup events, for example, timers, which may mark their associated handler threads as critical. An API also may allow designers to mark selected threads as critical.
A critical thread remains critical until it enters a blocking state, that is, until it leaves the running or ready state, typically because it is waiting for a message, an interrupt notification, or the like. The criticality of a thread, and the billing to its adaptive partition's critical time budget, may be inherited along with the adaptive partition during operations that trigger priority inheritance.
The short-term debt is bounded by the critical budget specified on the partition. Over time, the partition may repay any short-term debt. A critical thread that exceeds the partition's critical budget (i.e. causes the partition to become bankrupt) is considered to be an application error, and the designer may specify the system's response. The choices for response are: 1) force the system to reboot; 2) notify an internal or external system watchdog; or 3) terminate or notify other designated processes.
The system may add an entry to its log if an adaptive partition exhausts its critical budget. In the case where the adaptive budgets are changed (through a mode change or through an API call to modify CPU percentages), the scheduler may never immediately declare any adaptive partition to be bankrupt (being over critical-budget).
Priority Inheritance and Server Threads
To avoid busy-server priority inversion, for example when a client messages a resource manager whose server threads 622 and 623 are all busy with other clients (block 525), the receiving server thread may be assigned the priority of the sending thread and joined to the sending thread's adaptive partition, so that its execution time is billed to the partition of the client it is serving.
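A sketch of this hand-off, assuming hypothetical structures and a hypothetical kernel hook; the join-and-bill semantics follow the text.

```c
/* Hypothetical per-thread bookkeeping. */
typedef struct {
    int partition_id;
    int priority;
} thread_info_t;

/* On receipt of a client message, the server thread adopts the client's
   priority and partition, so that subsequent execution time is billed
   to the client's partition rather than the server's. */
static void on_message_received(thread_info_t *server,
                                const thread_info_t *client)
{
    server->priority     = client->priority;     /* inherit priority */
    server->partition_id = client->partition_id; /* join client's partition;
                                                    billing follows the switch */
}
```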
Scheduling Algorithms
In adaptive partition scheduling, it is not enough to simply pick the highest-priority thread that has budget. Quite often, all adaptive partitions will exhaust their percentage budgets at the same time, and then the partition with the highest-priority ready thread may be run. Also, when adaptive partitions' top threads have equal highest priorities, it may be desirable to split their time in the ratio of their guaranteed CPU percentages. Finally, critical threads may be run even if their adaptive partition is out of budget.
A function f(ap) is constructed that orders the triplet (x, y, z), where x = is_critical(ap) OR has_budget(ap); y = has_budget(ap); and z = 1 − relative_fraction_used(ap). The function f(ap) is evaluated for all ap, and the ap with the highest value of f(ap) is chosen, where the ordering of the triplets (x, y, z) is defined numerically, with x more significant than y or z and y more significant than z (block 818). The operating system then determines the highest-priority thread ready for execution by the adaptive partition by computing the function f(ap) as above for the ordered triplet (w, y, z), where w = has_critical_budget(ap); y = has_budget(ap); and z = 1 − relative_fraction_used(ap) (block 819).
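The lexicographic comparison of these triplets might be sketched as follows; the structure fields are assumptions, and relative_fraction_used appears as a float only for readability (the next section describes how the scheduler avoids run-time division).

```c
#include <stdbool.h>

/* Hypothetical per-partition state used by the ordering function. */
typedef struct {
    bool  is_critical;            /* partition holds a ready critical thread */
    bool  has_budget;             /* ordinary budget remains in the window */
    float relative_fraction_used; /* run / (windowsize * percentage) */
} ap_state_t;

/* Compare f(a) with f(b): positive if a orders higher. The triplet
   (x, y, z) is compared lexicographically, x most significant. */
static int compare_f(const ap_state_t *a, const ap_state_t *b)
{
    int xa = a->is_critical || a->has_budget;
    int xb = b->is_critical || b->has_budget;
    if (xa != xb)
        return xa - xb;
    if (a->has_budget != b->has_budget)
        return a->has_budget - b->has_budget;
    float za = 1.0f - a->relative_fraction_used;
    float zb = 1.0f - b->relative_fraction_used;
    return (za > zb) - (za < zb);
}

/* The scheduler evaluates compare_f pairwise across partitions and runs
   the highest-priority ready thread from the partition that wins. */
```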
Computing the Relative Fraction Used
The relative fraction used of an adaptive partition is the number of microseconds it ran during the last averaging window divided by its share of the averaging window in microseconds, or run/(windowsize × percentage). To avoid doing floating-point division at run time, the scheduler may instead compute a different number that has the same ordering properties as run/(windowsize × percentage) for each adaptive partition. Thus, a constant c(ap) may be precomputed so that the adaptive partition with the highest run × c(ap) is also the adaptive partition with the highest run/(windowsize × percentage).
This c(ap) may be precomputed, once per startup, in the process manager. At block 910, the CPU budget percentage for each adaptive partition may be determined at start-up. The operating system may compute, for each adaptive partition, a factor f(ap) equal to the product of the percentage CPU budgets of all the other adaptive partitions (block 915). At block 920, if the maximum averaging error is max_error (e.g., 0.005 for half a percent), then k = min(f(ap) over all partitions) × max_error may be computed. Next, a constant scaling factor c(ap) is calculated as c(ap) = f(ap)/k (block 925). The value run × c(ap) then has the same ordering properties as run/(windowsize × percentage) within an error tolerance of max_error.
To compare different adaptive partitions' relative fractions used in practice, the scheduler multiplies each adaptive partition's run time by c(ap). However, the microbilled times may be large numbers. To ensure only single-multiply instructions are used, the microbilled times may first be scaled by keeping a number of the most significant bits of the CPU budget time (block 930). The degree of scaling is set by max_error, and any reasonable choice for max_error (e.g., ¼ to ½%) can be satisfied by keeping only the most significant 16 bits of the microbilled run time. So, in practice, the system may be calculating (run >> 32) × c(ap). At block 935, the relative budget ratio is calculated as c(ap) × (adaptive partition execution time).
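A minimal sketch of this precomputation and comparison, assuming a small partition count and hypothetical names (NPART, c_factor, and the two functions); floating point appears only in the once-per-startup step.

```c
#include <stdint.h>

#define NPART 3  /* small partition count for this sketch */

static uint64_t c_factor[NPART];

/* Precompute c(ap) once per startup: f(ap) is the product of all the
   *other* partitions' percentages, k = min(f) * max_error, and
   c(ap) = f(ap)/k. The run-time comparison below then needs only an
   integer shift and multiply. */
static void precompute_c(const uint32_t pct[NPART], double max_error)
{
    double f[NPART], fmin = 1e300;
    for (int i = 0; i < NPART; i++) {
        f[i] = 1.0;
        for (int j = 0; j < NPART; j++)
            if (j != i)
                f[i] *= pct[j];  /* product of the other percentages */
        if (f[i] < fmin)
            fmin = f[i];
    }
    double k = fmin * max_error;
    for (int i = 0; i < NPART; i++)
        c_factor[i] = (uint64_t)(f[i] / k);
}

/* Run-time ordering key: (run >> 32) * c(ap) has the same ordering as
   run/(windowsize * percentage) within max_error. The shift follows the
   text above and assumes the billed time is kept at a resolution where
   its most significant bits survive the shift. */
static uint64_t relative_usage_key(uint64_t run, int i)
{
    return (run >> 32) * c_factor[i];
}
```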
An error tolerance of 0.25% to 0.5% is considered sufficient for an implementation. More generally, for any specified error tolerance, a minimal number of bits may be chosen to represent both c(ap) and the scaled value of the CPU time executed by adaptive partition ap during the last averaging window, and to carry out their product, so that all representation and arithmetic errors are less than or equal to the chosen error tolerance.
Ordering Access to Mutexes
Mutexes may be used to prevent data inconsistencies due to race conditions. A race condition often occurs when two or more threads must perform operations on the same memory area, but the results of the computations depend on the order in which those operations are performed. Mutexes are used for serializing access to shared resources: any time a global resource is accessed by more than one thread, the resource may have a mutex associated with it, and the operating system may apply a mutex to protect a segment of memory (a “critical region”) from other threads. Ordinarily, a mutex is given to threads in the order in which they request it. The present approach, however, addresses the problem of a low-priority thread that holds a mutex unreasonably delaying higher-priority threads waiting for the same mutex.
The thread “most likely to run next” may be computed by applying a “compare two threads” algorithm pairwise, repeatedly, over the list of waiting threads. The “compare two threads” algorithm works as follows, where A and B are the two threads to be compared. A function f(ap) is constructed over the ordered triplet (x, y, z). The first value, x = highest_priority_thread(ap).is_critical OR has_budget(ap), indicates whether the highest-priority thread in the adaptive partition is critical or the adaptive partition has CPU budget. The second value, y = highest_priority_thread(ap).prio, is the priority of the highest-priority thread in the adaptive partition. The third value, z = 1 − relative_fraction_used(ap), reflects the relative fraction of CPU budget remaining for the adaptive partition. This is the same ordering function f(ap) constructed for use in method 800 above. Letting partition_of(X) denote the partition containing thread X, if f(partition_of(A)) > f(partition_of(B)), then thread A is more likely to run than thread B. The ordering of the triplets (x, y, z) is defined numerically, with x more significant than y or z, and y more significant than z, creating an order of significance for the first, second, and third values. The comparison is applied across the waiting threads until the thread with the highest f(partition_of(X)) is found; that thread is the “thread most likely to run next” and may be charged accordingly for waiting for the mutex.
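The pairwise reduction over the waiter list might be sketched as follows; compare_threads stands in for the f-based “compare two threads” algorithm, and all names here are hypothetical.

```c
#include <stddef.h>

typedef struct thread thread_t;  /* opaque thread handle (hypothetical) */

/* Returns > 0 if a is more likely to run next than b, per the ordering
   function f described above (e.g., built on compare_f). */
extern int compare_threads(const thread_t *a, const thread_t *b);

/* Reduce the list of mutex waiters pairwise to find the thread most
   likely to run next; that thread is then billed for the waiting time. */
static const thread_t *most_likely_next(const thread_t *waiters[], size_t n)
{
    const thread_t *best = waiters[0];
    for (size_t i = 1; i < n; i++)
        if (compare_threads(waiters[i], best) > 0)
            best = waiters[i];
    return best;
}
```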
Designing a System with Adaptive Partition Scheduling
The adaptive partition scheduler is typically used to solve two problems: 1) engineering a system to work properly in overload; and 2) preventing low-importance or untrusted applications from monopolizing the system. In either case, the parameters for the adaptive partition scheduler may need to be configured with the whole system in mind. To engineer a system for adaptive partition scheduling, there are four basic decisions: which software to place in which adaptive partition; what guaranteed CPU percentage to give each adaptive partition; which threads to designate as critical and what critical time budgets to assign; and what averaging window size to use.
It may be desirable to put functionally-related software into the same adaptive partition, and frequently that is the right choice. However, adaptive partition scheduling is a structured way of deciding when not to run software. Therefore, the actual criterion for separating software into different adaptive partitions is: separate software into different adaptive partitions if it should be starved of CPU time under different circumstances. For example, suppose the system is a packet router with the applications of: routing packets; collecting and logging statistics for packet routing; running route-topology protocols with peer routers; and collecting and logging route-topology metrics. It may seem reasonable to have two adaptive partitions: one for routing and one for topology. Certainly, logging routing metrics is functionally related to packet routing. However, when the system is overloaded, i.e., there is more outstanding work than the machine can possibly accomplish, the system must decide what work to run more slowly. In this example, when the router is overloaded with incoming packets, it is still important to route packets; it may reasonably be decided that if there are not resources for both, it is preferable to route packets than to collect routing metrics. It may also be reasonable to conclude that the route-topology protocols should still run, using much less of the machine than routing itself, but running quickly when they need to. Such an analysis would lead to three adaptive partitions: one for routing packets; one for the route-topology protocols; and one for logging both routing metrics and topology metrics.
In this case, the functionally-related components of routing and routing-metric logging are separated, because it is preferable to starve just one of them if something must be starved. Similarly, two functionally unrelated components, routing-metric logging and topology-metric logging, are grouped together because they should starve under the same circumstances. A hypothetical configuration for these three partitions is sketched below.
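In the sketch that follows, the percentages are invented for illustration (they must sum to 100%, per the rule above) and would be derived from measurement in a real design.

```c
/* Hypothetical partition layout for the router example. */
typedef struct {
    const char *name;
    unsigned    pct;  /* guaranteed CPU percentage */
} part_cfg_t;

static const part_cfg_t router_partitions[] = {
    { "packet-routing",     80 },  /* must keep running under overload    */
    { "topology-protocols", 15 },  /* small share, but must run promptly  */
    { "metric-logging",      5 },  /* routing + topology metrics; first   */
                                   /* to starve when the system overloads */
};
```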
How to Choose CPU Percentages for Each Adaptive Partition
In choosing percentages for each adaptive partition, the first step is to measure the CPU time each adaptive partition would like to use under normal loads. If the application is a transaction processor, it may be useful to measure CPU consumption under a few different loads and construct a graph of offered load versus CPU consumed. To properly configure the adaptive partition scheduler, application system designers may also need to account for how shared servers, such as the resource managers discussed below, are billed.
Ideally, resource managers, such as file systems, run with a budget of zero. This way they would always be billing time to their clients. However, sometimes device drivers find out too late which client a particular thread has been working for. In addition, some device drivers may have background threads (e.g. for audits or maintenance) that require budget that cannot be attributed to a particular client. In those cases, the system designer may measure the resource manager's background and unattributable loads and add them to the resource manager's adaptive partition's budget.
Choosing the Window Size
Designers who change the timeslice of the system may do so before defining the adaptive partition scheduler's window size. The time-averaging window size may be set from 8 ms to 255 ms. This is the time over which the scheduler tries to balance adaptive partitions to their guaranteed CPU limits. Choosing this window size has additional effects; for example, a long window can delay a low-budget partition, as illustrated below.
In an extreme case, a long window size may cause some adaptive partitions to experience runtime delays, though the delays are never longer than the window size. For example, suppose two adaptive partitions each have one thread, both at priority 10; both threads are always ready to run; the partitions have guaranteed CPU percentages of 90% and 10%; and the window size is 100 ms. Then the scheduler will schedule the 10% adaptive partition roughly once every 9 timeslices. However, if the 90% adaptive partition sleeps for 10 ms, the 10% adaptive partition will spend its entire budget during those 10 ms. Subsequently, the 10% partition will run only at intervals of 90 ms. This pattern occurs only if the 10% partition never suspends (which is exceedingly unlikely) and if there are no threads at other priorities (also exceedingly unlikely).
The methods described above may be configured to run in a transaction processing system where, in the event of an overload in processing capacity, it is more important to continue to process some fraction of the offered load than to fail completely. Examples of such applications include Internet routers and telephone switches. The methods may also be configured to run in other real-time operating system environments, such as automotive and aerospace environments, where critical processes may be designated that need to be executed at critical events. An example is an automotive environment, where an airbag deployment event is a low-probability event but must be allocated processor budget should the event be initiated. As such, the methods may be configured to operate within an automobile control operating system.
The methods may also be configured to operate in an environment where untrusted applications are in use. In such situations, applications such as Java applets may be downloaded to execute in the operating system, but the nature of such an application may allow it to take over the system by creating an infinite loop. The operating system designer will want to prevent such a situation by creating appropriate adaptive partitions, so that the untrusted application runs in isolation while its access to CPU time needed by other processes is limited.
While various embodiments of the invention have been described, it will be apparent to those of ordinary skill in the art that many more embodiments and implementations are possible within the scope of the invention. Accordingly, the invention is not to be restricted except in light of the attached claims and their equivalents.
This application is a continuation of U.S. patent application Ser. No. 11/216,795, filed Aug. 31, 2005, and issued as U.S. Pat. No. 8,387,052, which claims the benefit of priority from U.S. Provisional Application No. 60/662,070, filed Mar. 14, 2005, each of which is incorporated herein by reference.