1. Field of the Invention
The present invention relates to computer systems and methods in which data resources are shared among concurrent data consumers while preserving data integrity and consistency relative to each consumer. More particularly, the invention concerns an implementation of a mutual exclusion mechanism known as “read-copy update” in a uniprocessor computing environment.
2. Description of the Prior Art
By way of background, read-copy update is a mutual exclusion technique that permits shared data to be accessed for reading without the use of locks, writes to shared memory, memory barriers, atomic instructions, or other computationally expensive synchronization mechanisms, while still permitting the data to be updated (modify, delete, insert, etc.) concurrently. The technique is well suited to multiprocessor computing environments in which the number of read operations (readers) accessing a shared data set is large in comparison to the number of update operations (updaters), and wherein the overhead cost of employing other mutual exclusion techniques (such as locks) for each read operation would be high. By way of example, a network routing table that is updated at most once every few minutes but searched many thousands of times per second is a case where read-side lock acquisition would be quite burdensome.
The read-copy update technique implements data updates in two phases. In the first (initial update) phase, the actual data update is carried out in a manner that temporarily preserves two views of the data being updated. One view is the old (pre-update) data state that is maintained for the benefit of operations that may be currently referencing the data. The other view is the new (post-update) data state that is available for the benefit of operations that access the data following the update. In the second (deferred update) phase, the old data state is removed following a “grace period” that is long enough to ensure that all executing operations will no longer maintain references to the pre-update data. The second-phase update operation typically comprises freeing a stale data element. In certain RCU implementations, the second-phase update operation may comprise something else, such as changing an operational state according to the first-phase update.
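For illustration, the two phases can be sketched using the Linux® kernel's RCU list primitives (the struct foo element type and the update_foo( ) function are assumptions for the example, not part of the original text):

```c
#include <linux/rculist.h>
#include <linux/slab.h>

struct foo {
	struct list_head list;
	int data;
};

/* First (initial update) phase: link an updated copy of the element
 * into the list; concurrent readers continue to see the old version. */
void update_foo(struct foo *old, int new_data)
{
	struct foo *new = kmalloc(sizeof(*new), GFP_KERNEL);

	if (!new)
		return;
	new->data = new_data;
	list_replace_rcu(&old->list, &new->list);

	/* Second (deferred update) phase: wait out a grace period so that
	 * no reader can still hold a reference to the old element, then
	 * free the stale element. */
	synchronize_rcu();
	kfree(old);
}
```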
It is assumed that the data element list is traversed (without locking) by multiple readers and occasionally updated by updaters that delete, insert or modify data elements in the list. Consider an example in which a reader r1 holds a reference to a data element B at the same time that an updater u1 performs a first-phase update on B. Rather than modifying B in place (which could leave r1 viewing inconsistent data), u1 generates an updated version of B, links it into the list, and leaves the original B intact for the benefit of r1. At some subsequent time following the update, r1 will have continued its traversal of the linked list and moved its reference off of B. In addition, there will be a time at which no other reader process is entitled to access B. It is at this point, representing expiration of the grace period referred to above, that u1 can free B.
In the context of the read-copy update mechanism, a grace period represents the point at which all running processes having access to a data element guarded by read-copy update have passed through a “quiescent state” in which they can no longer maintain references to the data element, assert locks thereon, or make any assumptions about data element state. By convention, for operating system kernel code paths, a context (process) switch, an idle loop, and user mode execution all represent quiescent states for any given CPU (as can other operations that will not be listed here).
There are various methods that may be used to implement a deferred data update following a grace period. One technique is to accumulate deferred update requests as callbacks (e.g., on callback lists), then perform batch callback processing at the end of the grace period. This represents asynchronous grace period processing. Updaters can perform first phase updates, issue callback requests, then resume operations with the knowledge that their callbacks will eventually be processed at the end of a grace period. Another commonly used technique is to have updaters perform first phase updates, block (wait) until a grace period has completed, and then resume to perform the deferred updates. This represents synchronous grace period processing.
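To make the two approaches concrete, the following sketch contrasts the Linux® kernel's call_rcu( ) and synchronize_rcu( ) primitives (struct foo and the retire functions are assumptions for the example):

```c
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	int data;
	struct rcu_head rcu;
};

/* Asynchronous: register a callback and return immediately; the
 * callback runs after a grace period has elapsed. */
static void foo_reclaim(struct rcu_head *head)
{
	kfree(container_of(head, struct foo, rcu));
}

void retire_foo_async(struct foo *old)
{
	call_rcu(&old->rcu, foo_reclaim);	/* does not block */
}

/* Synchronous: block until a grace period has elapsed, then perform
 * the deferred update directly. */
void retire_foo_sync(struct foo *old)
{
	synchronize_rcu();			/* blocks */
	kfree(old);
}
```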
Read-copy update has been used in production for many years in various operating system kernel environments, including the Linux® kernel. In non-preemptible kernels, grace period detection processing can be performed by observing natural quiescent states (e.g., context switch, user mode or idle loop) or by inducing such states (e.g., by forcing a context switch). Although non-preemptible RCU is commonly used in multiprocessor environments, it may also be used in uniprocessor environments. For example, many small embedded real-time systems are still uniprocessor systems. Such systems can often benefit from RCU (e.g., when critical section code can be executed from both process and interrupt context), and thus may utilize non-preemptible RCU as an alternative to other mutual exclusion mechanisms. However, applicants have determined that the existing grace period detection methods used by some implementations of non-preemptible RCU may not be optimal for uniprocessor environments.
A technique is provided for optimizing grace period detection in a uniprocessor environment. An update operation is performed on a data element that is shared with non-preemptible readers of the data element. A call is issued to a synchronous grace period detection method. The synchronous grace period detection method performs synchronous grace period detection and returns from the call if the data processing system implements a multi-processor environment at the time of the call. The synchronous grace period detection determines the end of a grace period in which the readers have passed through a quiescent state and cannot be maintaining references to the pre-update view of the shared data. The synchronous grace period detection method returns from the call without performing grace period detection if the data processing system implements a uniprocessor environment at the time of the call.
The foregoing and other features and advantages of the invention will be apparent from the following more particular description of example embodiments, as illustrated in the accompanying Drawings.
Turning now to the figures, wherein like reference numerals represent like elements in all of the several views, an example computing environment is shown in which the grace period detection technique described herein may be implemented. The environment comprises a uniprocessor system 2 having a single processor 4 and a memory 8.
Update operations executed within kernel-level or user-level processes, threads or other execution contexts periodically perform updates on a set of shared data 16 stored in the memory 8. Reference numerals 18₁, 18₂ . . . 18ₙ illustrate individual update operations (updaters) that may periodically execute on the processor 4. As described by way of background above, the updates performed by the data updaters 18₁, 18₂ . . . 18ₙ can include modifying elements of a linked list, inserting new elements into the list, deleting elements from the list, and many other types of operations (involving lists or other data structures). To facilitate such updates, the processor 4 is programmed to implement a read-copy update (RCU) subsystem 20 as part of its kernel-level or user-level functions. The processor 4 also periodically executes kernel-level or user-level read operations (readers) 21₁, 21₂ . . . 21ₙ that access the shared data 16 for reading. Such read operations will typically be performed far more often than updates, insofar as this is one of the premises underlying the use of read-copy update. For purposes of the present disclosure, the readers 21₁, 21₂ . . . 21ₙ are assumed to be non-preemptible. This may be because reader preemption is not supported at all, or because preemption has been temporarily disabled during RCU read-side critical section processing.
The RCU subsystem 20 is implemented in the environment of the system 2.
The RCU subsystem 20 includes a grace period detection component 26 that allows the updaters 18₁, 18₂ . . . 18ₙ to request asynchronous grace period detection or synchronous grace period detection following an update to a shared data element. As described in the "Background" section above, asynchronous grace period detection accumulates deferred update requests as callbacks (e.g., on callback lists), then performs batch callback processing at the end of the grace period. The updaters 18₁, 18₂ . . . 18ₙ can perform first phase updates, issue callback requests, then resume operations with the knowledge that their callbacks will eventually be processed at the end of a grace period. The RCU primitive "call_rcu( )" in current versions of the Linux® kernel is one example of an asynchronous grace period detection method that may be implemented by the grace period detection component 26. The call_rcu( ) primitive will wait for the readers 21₁, 21₂ . . . 21ₙ to leave their RCU-protected critical sections (as demarcated by rcu_read_unlock( )), then process callbacks that are ripe for processing.
Synchronous grace period detection differs from asynchronous grace period detection in that updaters 18₁, 18₂ . . . 18ₙ perform first phase updates, then block (wait) until a grace period has completed, and thereafter resume to perform the deferred updates themselves. The RCU primitive "synchronize_rcu( )" in current versions of the Linux® kernel is one example of a synchronous grace period detection method that may be implemented by the grace period detection component 26. The synchronize_rcu( ) primitive will wait for the readers 21₁, 21₂ . . . 21ₙ to leave their RCU-protected critical sections (as demarcated by rcu_read_unlock( )), then return to the updater that invoked the primitive.
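For reference, a sketch of the read-side critical section that both forms of grace period detection wait on (the global pointer gp and struct foo are assumptions for the example):

```c
#include <linux/rcupdate.h>

struct foo {
	int data;
};

struct foo *gp;		/* RCU-protected global pointer */

int read_foo(void)
{
	struct foo *p;
	int val;

	rcu_read_lock();		/* enter critical section (non-preemptible) */
	p = rcu_dereference(gp);	/* safely fetch the protected pointer */
	val = p ? p->data : -1;
	rcu_read_unlock();		/* leave critical section */
	return val;
}
```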
By convention, synchronous grace period detection may not be called from non-preemptible code. For example, non-preemptible RCU readers may not perform update operations that rely on synchronous grace period detection inside of RCU-protected critical sections. Using synchronous grace period detection in this case would result in deadlock because the update code invoked by the reader would wait for the reader to complete, which it never will. Nor can such synchronous grace period detection be used for update operations performed within hardware interrupt handlers or software interrupt handlers. Such handlers are not allowed to block.
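The deadlock can be seen in the following deliberately incorrect sketch:

```c
static void broken_reader(void)
{
	rcu_read_lock();	/* non-preemptible read-side critical section */
	/* ... read-side processing that attempts an update ... */
	synchronize_rcu();	/* BUG: waits for all readers, including this
				 * one, to leave their critical sections; this
				 * reader cannot leave until the call returns,
				 * so the call never returns (deadlock) */
	rcu_read_unlock();	/* never reached */
}
```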
The foregoing means that when synchronous grace period detection begins, it may be assumed that no non-preemptible RCU-protected critical section will be capable of running on the same processor. In a uniprocessor environment, synchronous grace period detection need not wait at all for non-preemptible readers to complete RCU-protected critical sections. There will be no such readers. Applicants have thus determined that grace period detection may be conditioned on whether a call for synchronous grace period detection has been made in a uniprocessor environment running non-preemptible RCU readers. If this condition prevails, grace period detection may be bypassed. Grace period detection need only be performed to protect non-preemptible readers if asynchronous grace period detection is requested or if synchronous grace period detection is requested in a multi-processor environment.
The system 2 described above is such a uniprocessor environment running non-preemptible readers, and may therefore be programmed to bypass grace period detection in the manner described below.
Grace period detection processing for uniprocessor environments running non-preemptible readers may be conditioned in a variety of ways. One technique would be to set the synchronous grace period detection condition statically at compile time by conditionally compiling the RCU subsystem based on a multiprocessor vs. uniprocessor preprocessor directive. In Linux®, the condition could be based on having the CONFIG_SMP kernel configuration (Kconfig) option enabled and the corresponding C preprocessor symbol defined. A Linux® kernel built for uniprocessor operation only will have CONFIG_SMP disabled and the corresponding C preprocessor symbol undefined. Thus, a non-preemptible RCU implementation in the Linux® kernel can decide at compile time how to define the synchronous grace period detection primitive based on whether the target configuration of the kernel supports multiprocessor systems. Example preprocessor directive pseudo code based on the synchronize_rcu( ) primitive could be implemented as follows:
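The following is a minimal reconstruction of such pseudo code (the SMP branch stands in for whatever conventional synchronous grace period detection the kernel provides; barrier( ) denotes a compiler barrier):

```c
#ifdef CONFIG_SMP
/* Multiprocessor build: perform conventional synchronous grace
 * period detection, waiting for all readers to leave their
 * RCU-protected critical sections. */
void synchronize_rcu(void)
{
	/* ... conventional grace period detection ... */
}
#else
/* Uniprocessor build: no non-preemptible reader can be in a
 * critical section when this is called, so there is nothing to
 * wait for.  A compiler barrier may optionally be retained to
 * prevent reordering of the call with the updater's first-phase
 * update (see the discussion below). */
void synchronize_rcu(void)
{
	barrier();
}
#endif
```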
In this case, the synchronize_rcu( ) primitive performs conventional synchronous grace period detection if the CONFIG_SMP option is enabled. Otherwise, synchronize_rcu( ) returns without performing grace period detection. In some cases, synchronize_rcu( ) could return without performing any processing at all. In other cases, it may be desirable to perform minimal housekeeping processing, such as executing one or more instructions to prevent reordering of the call to synchronize_rcu( ) with another instruction (e.g., an instruction pertaining to a data element update performed by the updater 18₁, 18₂ . . . 18ₙ that called for synchronous grace period detection).
Another technique for conditioning grace period detection processing would be to set the grace period condition dynamically at run time. This could be done by consulting system configuration information provided at compile time, at boot time or at run time. For example, a configuration variable 30 (named config_smp in the sketch below) may be maintained in the memory 8 and consulted by the RCU subsystem 20 when grace period detection is requested.
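A minimal sketch of how the configuration variable 30 might be defined and tested, again assuming the CONFIG_SMP convention from above:

```c
/* Configuration variable 30: records at compile time whether the
 * kernel was built for multiprocessor operation. */
#ifdef CONFIG_SMP
int config_smp = 1;
#else
int config_smp = 0;
#endif

void synchronize_rcu(void)
{
	if (config_smp) {
		/* ... conventional grace period detection ... */
	} else {
		barrier();	/* uniprocessor: nothing to wait for */
	}
}
```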
As can be seen, the config_smp configuration variable 30 will be defined as 1 for multiprocessor code compilations and 0 for uniprocessor compilations. At run time, the config_smp variable can be tested to condition grace period detection according to the appropriate system configuration: if config_smp is 1, conventional synchronous grace period detection is performed as in the compile-time pseudocode above; if config_smp is 0, synchronize_rcu( ) returns without performing grace period detection.
Still another technique for dynamically conditioning grace period detection processing would be to select the grace period detection according to a kernel boot parameter passed at boot time. The kernel initialization code could set the configuration variable 30 (such as config_smp above) according to a boot parameter that indicates whether uniprocessor or multiprocessor grace period detection processing is to be used by the RCU subsystem 20. As in the example given in the preceding paragraph, the RCU subsystem 20 would then be programmed to inspect the configuration variable 30 in order to dynamically determine whether uniprocessor-mode or multiprocessor-mode grace period detection is to be used.
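By way of illustration only, a hypothetical Linux® boot parameter handler might look like the following (the parameter name and handler are assumptions, not part of the original text):

```c
#include <linux/init.h>

extern int config_smp;	/* configuration variable 30 from the sketch above */

/* Hypothetical boot parameter: "config_smp=0" selects uniprocessor-mode
 * grace period detection; "config_smp=1" selects multiprocessor mode. */
static int __init config_smp_setup(char *str)
{
	config_smp = (str[0] != '0');
	return 1;
}
__setup("config_smp=", config_smp_setup);
```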
Still another technique for dynamically conditioning grace period detection processing would be to set the configuration variable 30 according to the current number of active processors. In a hotpluggable environment, such as a system 2A in which processors may be brought online and taken offline at run time, the configuration variable 30 could be updated as part of hotplug processing so that it reflects whether more than one processor is active at the time synchronous grace period detection is called.
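A sketch of the hotplug case, assuming the Linux® kernel's num_online_cpus( ) interface; checking the processor count at the time of the call, rather than caching it in the configuration variable 30, is one possible design:

```c
#include <linux/cpumask.h>

void synchronize_rcu(void)
{
	if (num_online_cpus() == 1) {
		barrier();	/* uniprocessor at the time of the call */
		return;
	}
	/* ... conventional grace period detection ... */
}
```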
Turning now to example update-side processing, during run time an updater 18₁, 18₂ . . . 18ₙ implements block 44 by performing a first-phase update to the shared data 16. The updater then calls the synchronous grace period detection method which, as described above, either performs synchronous grace period detection (in a multiprocessor environment) or returns without performing grace period detection (in a uniprocessor environment), after which the updater may carry out its deferred second-phase update.
Accordingly, a grace period detection optimization technique for uniprocessor systems running non-preemptible readers has been disclosed. It will be appreciated that the foregoing concepts may be variously embodied in any of a data processing system, a machine implemented method, and a computer program product in which programming means are provided by one or more machine-readable media for use in controlling a data processing system to perform the required functions. The system and method may be implemented using a software-programmed machine, a firmware-programmed machine, hardware circuit logic, or any combination of the foregoing. For the computer program product, example machine-readable media for providing programming means are depicted by reference numeral 100 in the accompanying drawing figures.
While various embodiments of the invention have been described, it should be apparent that many variations and alternative embodiments could be implemented in accordance with the invention. It is understood, therefore, that the invention is not to be in any way limited except in accordance with the spirit of the appended claims and their equivalents.