1. Field of the Invention
The present invention relates to computer systems and methods in which data resources are shared among concurrent data consumers while preserving data integrity and consistency relative to each consumer. More particularly, the invention concerns an implementation of a mutual exclusion mechanism known as “read-copy update” in a preemptive real-time computing environment with processors capable of assuming low-power states.
2. Description of the Prior Art
By way of background, read-copy update is a mutual exclusion technique that permits shared data to be accessed for reading without the use of locks, writes to shared memory, memory barriers, atomic instructions, or other computationally expensive synchronization mechanisms, while still permitting the data to be updated (modify, delete, insert, etc.) concurrently. The technique is well suited to multiprocessor computing environments in which the number of read operations (readers) accessing a shared data set is large in comparison to the number of update operations (updaters), and wherein the overhead cost of employing other mutual exclusion techniques (such as locks) for each read operation would be high. By way of example, a network routing table that is updated at most once every few minutes but searched many thousands of times per second is a case where read-side lock acquisition would be quite burdensome.
The read-copy update technique implements data updates in two phases. In the first (initial update) phase, the actual data update is carried out in a manner that temporarily preserves two views of the data being updated. One view is the old (pre-update) data state that is maintained for the benefit of read operations that may have been referencing the data concurrently with the update. The other view is the new (post-update) data state that is seen by read operations that access the data following the update. These other read operations will never see the stale data and so the updater does not need to be concerned with them. However, the updater does need to avoid prematurely removing the stale data being referenced by the first group of read operations. Thus, in the second (deferred update) phase, the old data state is only removed following a “grace period” that is long enough to ensure that the first group of read operations will no longer maintain references to the pre-update data.
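To make the two phases concrete, the following Linux-kernel-style sketch shows an updater that replaces an element reachable through an RCU-protected pointer. The identifiers (struct foo, global_foo, foo_update, foo_lock) are illustrative only and do not appear in the disclosure; the RCU primitives themselves (rcu_assign_pointer( ), synchronize_rcu( )) are standard Linux kernel interfaces.

```c
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct foo {
	int data;
};

static struct foo *global_foo;		/* read by readers under RCU protection */
static DEFINE_SPINLOCK(foo_lock);	/* serializes updaters */

void foo_update(int new_data)
{
	struct foo *new_fp, *old_fp;

	new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
	if (!new_fp)
		return;
	new_fp->data = new_data;

	/* First (initial update) phase: publish the new version.  Readers
	 * that already hold a reference continue to see the old version. */
	spin_lock(&foo_lock);
	old_fp = global_foo;
	rcu_assign_pointer(global_foo, new_fp);
	spin_unlock(&foo_lock);

	/* Second (deferred update) phase: wait for a grace period so that
	 * all pre-existing readers have dropped their references, then free. */
	synchronize_rcu();
	kfree(old_fp);
}
```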
It is assumed that the data element list is traversed (without locking) by multiple readers and occasionally updated by updaters that delete, insert or modify data elements in the list.
At some subsequent time following the update, r1 will have continued its traversal of the linked list and moved its reference off of B. In addition, there will be a time at which no other reader process is entitled to access B. It is at this point, representing expiration of the grace period referred to above, that u1 can free B.
In the context of the read-copy update mechanism, a grace period represents the point at which all running processes (or threads within a process) having access to a data element guarded by read-copy update have passed through a “quiescent state” in which they can no longer maintain references to the data element, assert locks thereon, or make any assumptions about data element state. By convention, for operating system kernel code paths, a context (process) switch, an idle loop, and user mode execution all represent quiescent states for any given CPU running non-preemptible code (as can other operations that will not be listed here). In some read-copy update implementations adapted for preemptible readers, all read operations that are outside of an RCU read-side critical section are quiescent states.
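Continuing the sketch above, a reader brackets its accesses with rcu_read_lock( ) and rcu_read_unlock( ); in preemptible RCU implementations, it is this bracketing (rather than a context switch or the idle loop) that delimits the read-side critical section that grace period detection must respect. The function name foo_read is illustrative.

```c
#include <linux/rcupdate.h>

int foo_read(void)
{
	struct foo *fp;
	int val;

	rcu_read_lock();		/* enter read-side critical section */
	fp = rcu_dereference(global_foo);
	val = fp ? fp->data : -1;
	rcu_read_unlock();		/* now quiescent with respect to this data */
	return val;
}
```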
There are various methods that may be used to implement a deferred data update following a grace period, including but not limited to the use of callback processing as described in commonly assigned U.S. Pat. No. 5,442,758, entitled “System And Method For Achieving Reduced Overhead Mutual-Exclusion And Maintaining Coherency In A Multiprocessor System Utilizing Execution History And Thread Monitoring.” Another commonly used technique is to have updaters block (wait) until a grace period has completed.
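Both forms of deferred update are available in the Linux kernel RCU API. The sketch below (again using the illustrative struct foo, here assumed to embed a struct rcu_head) contrasts the asynchronous, callback-based form with the blocking form.

```c
#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	int data;
	struct rcu_head rcu;		/* callback descriptor for call_rcu() */
};

/* Invoked by the RCU subsystem once a grace period has elapsed. */
static void foo_reclaim(struct rcu_head *head)
{
	kfree(container_of(head, struct foo, rcu));
}

/* Asynchronous deferred update: queue a callback and return immediately. */
static void foo_retire_async(struct foo *old_fp)
{
	call_rcu(&old_fp->rcu, foo_reclaim);
}

/* Blocking deferred update: wait out the grace period, then free directly. */
static void foo_retire_sync(struct foo *old_fp)
{
	synchronize_rcu();
	kfree(old_fp);
}
```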
It will be appreciated from the foregoing discussion that the fundamental operation of the read-copy update (RCU) synchronization technique entails waiting for all readers associated with a particular grace period to complete. Multiprocessor implementations of RCU must therefore observe or influence the actions performed by other processors. Preemptible variants of RCU, including several real-time RCU implementations, “Sleepable” RCU (SRCU), and “Quick” SRCU (QRCU), rely on explicit actions taken by processors executing readers when they enter and leave read-side critical sections. The RCU implementation may need to coordinate with these processors to ensure orderly grace period processing. Moreover, the RCU implementation may choose to force the other processors to perform the necessary actions as soon as possible rather than waiting. This may occur if the RCU implementation decides that it has waited too long or that it has too many waiters.
RCU implementations used for preemptible readers do not currently account for processor power states. Modern processors benefit greatly from low-power states (such as, on Intel® processors, the C1E halt state, or the C2 or deeper halt states). These low-power states have higher wakeup latency, so processors and operating systems avoid entering them if they will be forced to wake up frequently. Operating systems with mechanisms such as the dynamic tick framework (also called “dyntick” or “nohz”) in current versions of the Linux® kernel can avoid the need for regular timer interrupts (a frequent cause of unnecessary wakeups) and instead wake up processors only when they need to perform work, allowing for better utilization of low-power states. Thus, RCU implementations that force other processors to wake up and perform work can lead to higher power usage on processors with low-power, higher-latency states. This may result in decreased battery life on battery-powered systems (such as laptops and embedded systems), higher power usage (particularly problematic for large data centers), increased heat output, and greater difficulty achieving compliance with various standards for environmentally friendly or “green” systems. It would thus be desirable to avoid unnecessary wakeups during RCU grace period processing.
It is to addressing the foregoing that the present invention is directed. In particular, what is required is a read-copy update technique that may be safely used in a preemptible reader computing environment while supporting the use of low-power processor states. These requirements will preferably be met in a manner that avoids excessive complexity of the grace period detection mechanism itself.
The foregoing goals are met and an advance in the art is obtained by a method, system and computer program product for low-power detection of a grace period for deferring the destruction of a shared data element until pre-existing references to the data element have been removed. A grace period processing action is implemented that requires a response from a processor that may be running a preemptible reader of said shared data element before further grace period processing can proceed. A power and reader status of the processor is also determined. Grace period processing may proceed despite the absence of a response from the processor if the power and reader status indicates that an actual response from the processor is unnecessary.
According to exemplary disclosed embodiments, the processor may be designated as having provided the response upon the power and reader status indicating that 1) the processor has remained in a low-power state without reader execution since the grace period processing action, or 2) the processor has passed through or entered a low-power state without reader execution since the grace period processing action, or 3) the processor has entered an execution state that is tantamount to the response being provided. The power and reader status history may be determined by recording an initial power and reader status when the grace period processing action is taken and comparing the initial power and reader status to a current power and reader status. The initial power and reader status and the current power and reader status may each indicate that the processor is in either of 1) a non-low-power state or a low-power state with concurrent reader processing, or 2) a low-power state without concurrent reader processing. The initial power and reader status and the current power and reader status may be determined from a free-running counter maintained by the processor that is incremented when the processor enters or leaves a low-power state, or enters or leaves a reader processing state while in a low-power state. The low-power state may correspond to the processor being in a dynamic tick timer mode and the low-power reader processing state may correspond to the processor handling an interrupt that performs reader processing while in the dynamic tick timer mode.
The foregoing and other features and advantages of the invention will be apparent from the following more particular description of exemplary embodiments of the invention, as illustrated in the accompanying Drawings, in which:
Turning now to the figures, wherein like reference numerals represent like elements in all of the several views, an exemplary multiprocessor computing system 2 is assumed in which several processors 41, 42 . . . 4n share access to a common memory 8.
It is further assumed that update operations executed within kernel or user mode processes, threads, or other execution contexts will periodically perform updates on a set of shared data 16 stored in the shared memory 8. Reference numerals 181, 182 . . . 18n illustrate individual data update operations (updaters) that may periodically execute on the several processors 41, 42 . . . 4n. As described by way of background above, the updates performed by the data updaters 181, 182 . . . 18n can include modifying elements of a linked list, inserting new elements into the list, deleting elements from the list, and many other types of operations. To facilitate such updates, the several processors 41, 42 . . . 4n are programmed to implement a read-copy update (RCU) subsystem 20, as by periodically executing respective RCU instances 201, 202 . . . 20n as part of their operating system or user application functions. Each of the processors 41, 42 . . . 4n also periodically executes read operations (readers) 211, 212 . . . 21n on the shared data 16. Such read operations will typically be performed far more often than updates, this being one of the premises underlying the use of read-copy update.
One of the functions performed by the RCU subsystem 20 is grace period processing for deferring the destruction of a shared data element until pre-existing references to the data element are removed. This processing entails starting new grace periods and detecting the end of old grace periods so that the RCU subsystem 20 knows when it is safe to free stale data elements. In order to support low-power state processor operation, the RCU subsystem 20 is adapted to avoid having to perform unnecessary wakeups of processors that are in low-power states. To that end, the RCU subsystem 20 may periodically arrange for a power status notification from the processors 41, 42 . . . 4n as these processors enter or leave a low-power state. By way of example only, each processor 41, 42 . . . 4n may store per-processor power status information whenever the processor enters a low-power state, such as a dynamic tick state. The power status notification allows the RCU subsystem 20 to keep track of which of the processors 41, 42 . . . 4n are and are not in a low-power state, and avoid waking those processors if possible. Unfortunately, in some RCU implementations, a processor may have associated readers even though it is in a low-power state. One such example is found in current real-time patches of the Linux® kernel, wherein a processor in dynamic tick mode may periodically run an interrupt handler that performs RCU read operations. For convenience, such interrupts will be referred to hereinafter as “RCU interrupts.” RCU interrupt handling could include, but is not necessarily limited to, servicing NMIs (Non-Maskable Interrupts) and other hardware or software events (such as SMIs (System Management Interrupts) in the event that the SMI handler implements RCU read operations).
The RCU subsystem 20 thus may not assume that a low-power state processor implies an absence of readers. To handle this case, when a processor 41, 42 . . . 4n enters a low-power state, it may also provide notification to the RCU subsystem 20 if and when there is an RCU reader 211, 212 . . . 21n (such as an RCU interrupt handler) associated with the processor. Thus, the processors 41, 42 . . . 4n may provide both a power and reader status notification to the RCU subsystem 20. Like the power status notification, the reader status notification could be provided by each processor 41, 42 . . . 4n storing per-processor reader status information whenever the processor has an associated reader that must be accounted for by the RCU subsystem 20, such as when the processor enters or leaves an RCU interrupt handling state.
One way that a processor could store power status and reader status information would be to manipulate a per-processor power and reader status indicator, as by incrementing a free-running progress counter to indicate both power status and reader status. A processor 41, 42 . . . 4n could increment its progress counter whenever it enters or exits a low-power state, and also when an RCU interrupt handler is commenced and terminated (or when any other RCU reader that must be accounted for by the RCU subsystem 20 becomes associated with the processor).
By way of example, if the progress counter has an initial value of 1 corresponding to a full power state, it may be incremented to 2 when a low-power state is entered, and thereafter incremented to 3 when the full power state resumes (e.g., when a task is scheduled or an interrupt is taken). The next low-power state may increment the progress counter to 4. If the processor then takes an RCU interrupt or is otherwise assigned to handle RCU read processing, the progress counter may be incremented to 5. When the RCU interrupt handling or other RCU reader activity completes, the progress counter may be incremented to 6. A return to the full power state may then result in the progress counter incrementing to 7. It will be appreciated from the foregoing discussion that odd progress counter values represent a notification that a processor 41, 42 . . . 4n is either at full power or is responsible for an RCU reader 211, 212 . . . 21n. Progress counter values that are even signify that a processor 41, 42 . . . 4n is in a low-power state and does not have an associated RCU reader 211, 212 . . . 21n. Note that the foregoing assumes that a processor 41, 42 . . . 4n will not be placed into a low-power state while an RCU reader 211, 212 . . . 21n is in progress.
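A minimal sketch of the progress counter manipulation described above appears below. It is loosely modeled on the dynamic tick handling in real-time Linux® kernels of the relevant era; the function and variable names are illustrative, and the placement of the memory barriers (which order the counter updates with respect to any reader activity) is an assumption rather than a requirement stated above.

```c
#include <linux/percpu.h>
#include <linux/smp.h>

/*
 * Free-running per-processor progress counter: even values mean the
 * processor is in a low-power state with no RCU readers; odd values mean
 * full power, or a low-power state with an RCU reader in progress.
 */
DEFINE_PER_CPU(long, rcu_progress_counter) = 1;	/* start at full power */

void rcu_power_enter_low(void)			/* e.g., entering dynamic tick mode */
{
	smp_mb();	/* prior reader activity ordered before the transition */
	__get_cpu_var(rcu_progress_counter)++;	/* odd -> even */
}

void rcu_power_exit_low(void)			/* e.g., leaving dynamic tick mode */
{
	__get_cpu_var(rcu_progress_counter)++;	/* even -> odd */
	smp_mb();	/* transition ordered before subsequent reader activity */
}

void rcu_low_power_reader_begin(void)		/* e.g., RCU interrupt entry */
{
	__get_cpu_var(rcu_progress_counter)++;	/* even -> odd */
	smp_mb();
}

void rcu_low_power_reader_end(void)		/* e.g., RCU interrupt exit */
{
	smp_mb();
	__get_cpu_var(rcu_progress_counter)++;	/* odd -> even */
}
```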
When the RCU subsystem 20 executes on a particular processor 41, 42 . . . 4n, it may periodically implement grace period processing actions that require response from the other processors in order for grace period processing to proceed. As used herein, the term “response” refers to any processing that is required to be performed on a processor 41, 42 . . . 4n in order to support an RCU implementation. One grace period processing action that may require response from other processors is the commencement of a new grace period. In RCU implementations designed to handle preemptible readers (hereinafter referred to as “preemptible RCU”), it is essential that processors running such readers be aware when new grace periods start so that the readers are assigned to the correct grace period, and are thereby tracked correctly by the RCU implementation. Requiring that processors acknowledge the start of new grace periods (e.g., by clearing a flag) is one way to ensure this result. Another grace period processing action that may require responsive action by processors is the ending of an old grace period. Insofar as this triggers callback processing, it is important to ensure that all RCU readers associated with the ending grace period have completed. In some cases, processor design and/or compiler optimizations could result in a reader advising the RCU implementation that it has completed read-side critical section processing when in fact it has not. One way to avoid this result is to have the processor that runs the reader execute a memory barrier instruction after being advised of the end of a grace period.
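The responses themselves are simple per-processor actions. The following sketch (the flag names are illustrative and do not appear in the disclosure) shows how an awake processor might service the two exemplary requests, acknowledging a new grace period and executing a memory barrier at the end of an old one.

```c
#include <linux/percpu.h>
#include <linux/smp.h>

/* Per-processor request flags set by the grace period machinery. */
static DEFINE_PER_CPU(int, rcu_ack_requested);	/* acknowledge new grace period */
static DEFINE_PER_CPU(int, rcu_mb_requested);	/* execute a memory barrier */

/* Called periodically on each awake processor, e.g., from the timer tick. */
void rcu_service_grace_period_requests(void)
{
	if (__get_cpu_var(rcu_ack_requested)) {
		/* Ensure subsequent readers are assigned to the new grace
		 * period before the acknowledgement becomes visible. */
		smp_mb();
		__get_cpu_var(rcu_ack_requested) = 0;
	}
	if (__get_cpu_var(rcu_mb_requested)) {
		/* Make this processor's completed read-side critical
		 * sections globally visible before callbacks are invoked. */
		smp_mb();
		__get_cpu_var(rcu_mb_requested) = 0;
	}
}
```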
The foregoing processing may be implemented by logic within the RCU subsystem 20.
If desired, special handling may be implemented in connection with blocks 34 and 36 due to the possibility of nested RCU read-side operations. For example, during RCU interrupt handling, it may be possible that one or more additional RCU interrupts (e.g., NMIs or SMIs) could occur. To handle this case, a second per-processor counter 39 may be maintained that tracks the RCU interrupt nesting level, so that the progress counter is manipulated only upon entry to the outermost RCU interrupt and upon exit therefrom.
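Continuing the counter sketch above, the nesting case can be handled with a second per-processor counter so that the progress counter changes only at the outermost RCU interrupt boundary. This is a sketch of the arrangement just described; the names are illustrative.

```c
#include <linux/percpu.h>
#include <linux/smp.h>

/* Second per-processor counter: tracks the RCU interrupt nesting level. */
static DEFINE_PER_CPU(long, rcu_irq_nesting);

void rcu_low_power_irq_enter(void)
{
	if (__get_cpu_var(rcu_irq_nesting)++ == 0) {
		/* Outermost RCU interrupt: record the reader state change. */
		__get_cpu_var(rcu_progress_counter)++;	/* even -> odd */
		smp_mb();
	}
}

void rcu_low_power_irq_exit(void)
{
	if (--__get_cpu_var(rcu_irq_nesting) == 0) {
		/* Leaving the outermost RCU interrupt. */
		smp_mb();
		__get_cpu_var(rcu_progress_counter)++;	/* odd -> even */
	}
}
```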
In some cases, the processor 41, 42 . . . 4n may be designated as having provided the grace period processing response upon a comparison of current and initial progress counter snapshots indicating that the processor has remained in a low-power state without reader execution since the grace period processing action was taken. Logically, this power and status history may be represented by equation (1) as follows:
(curr==snap) AND (least significant bit of curr==0), (1)
where “curr” is the current progress counter snapshot and “snap” is the initial progress counter snapshot. This power and status history may be used for both of the exemplary real-time Linux® kernel grace period processing actions described above, namely, when requesting acknowledgement of the start of a new grace period and when requesting a memory barrier following the end of a grace period. In the first instance, if a processor has been in a low-power state with no RCU readers since the start of a grace period, it cannot possibly be in a state that would require acknowledgement of the new grace period. In the second instance, if a processor has been in a low-power state with no RCU readers since the end of a grace period, it cannot possibly be in a state that would require the implementation of a memory barrier.
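Expressed in code against the per-processor progress counter sketched earlier, the snapshot and the equation (1) test might look as follows (the names are illustrative).

```c
#include <linux/percpu.h>
#include <linux/smp.h>

DEFINE_PER_CPU(long, rcu_progress_snapshot);

/* Record the initial power and reader status for a processor when a
 * grace period processing action is taken. */
void rcu_take_progress_snapshot(int cpu)
{
	per_cpu(rcu_progress_snapshot, cpu) = per_cpu(rcu_progress_counter, cpu);
}

/* Equation (1): the processor remained in a low-power state with no RCU
 * readers for the entire interval since the snapshot was taken. */
int rcu_remained_low_power_no_readers(int cpu)
{
	long curr = per_cpu(rcu_progress_counter, cpu);
	long snap = per_cpu(rcu_progress_snapshot, cpu);

	smp_mb();	/* order the reads with subsequent grace period processing */
	return (curr == snap) && ((curr & 0x1) == 0);
}
```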
In other cases, the processor 41, 42 . . . 4n may be designated as having provided the grace period processing response upon a comparison of current and initial progress counter snapshots indicating that the processor has passed through or entered a low-power state without reader execution since the grace period processing action. Logically, this power and status history may be represented by equation (2) as follows:
((curr−snap)>2) OR (least significant bit of snap==0), (2)
where “curr” is the current progress counter snapshot and “snap” is the initial progress counter snapshot. This power and status history may be used for the first exemplary real-time Linux® kernel grace period processing action described above, namely, when requesting acknowledgement of the start of a new grace period. If the processor has passed through or entered a low-power state with no RCU readers since the beginning of the grace period, it cannot possibly be in a state that would require acknowledgement of the new grace period.
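Combining equations (1) and (2), the check that decides whether a processor must still explicitly acknowledge the start of a new grace period might be written as follows (a sketch using the counters from the earlier examples).

```c
/* Return nonzero if an explicit acknowledgement of the new grace period is
 * still required from this processor; zero if it may be treated as given. */
int rcu_ack_still_needed(int cpu)
{
	long curr = per_cpu(rcu_progress_counter, cpu);
	long snap = per_cpu(rcu_progress_snapshot, cpu);

	smp_mb();

	/* Equation (1): remained in a low-power, reader-free state. */
	if ((curr == snap) && ((curr & 0x1) == 0))
		return 0;

	/* Equation (2): passed through or entered such a state since the
	 * snapshot, so the acknowledgement may be treated as implicit. */
	if (((curr - snap) > 2) || ((snap & 0x1) == 0))
		return 0;

	return 1;	/* the processor must be made to respond (or be waited on) */
}
```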
In still other cases, the processor 41, 42 . . . 4n may be designated as having provided the grace period processing response upon a comparison of current and initial power and reader snapshots indicating that the processor has entered or left a low-power RCU reader execution state. Logically, this power and status history may be represented by equation (3) as follows:
curr!=snap, (3)
where “curr” is the current progress counter snapshot and “snap” is the initial progress counter snapshot. This power and status history may be used for the second exemplary real-time Linux® kernel grace period processing action described above, namely, when requesting that a memory barrier be implemented at the end of a grace period. However, in order for equation (3) to be effective, the low-power RCU reader execution state must be tantamount to the requested grace period processing response being provided. In the case of a memory barrier request, this means that the low-power RCU reader must inherently implement a memory barrier. This condition will be satisfied if memory barrier instructions are executed whenever the progress counter is incremented on entry to and exit from RCU interrupt handling.
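Similarly, equations (1) and (3) together yield the check for the memory barrier request (again a sketch using the counters from the earlier examples; it relies on the condition just stated, namely that every progress counter increment is accompanied by a memory barrier).

```c
/* Return nonzero if this processor must still execute a memory barrier at
 * the end of the grace period; zero if the barrier may be presumed done. */
int rcu_mb_still_needed(int cpu)
{
	long curr = per_cpu(rcu_progress_counter, cpu);
	long snap = per_cpu(rcu_progress_snapshot, cpu);

	smp_mb();

	/* Equation (1): no RCU reader can have run here since the snapshot. */
	if ((curr == snap) && ((curr & 0x1) == 0))
		return 0;

	/* Equation (3): the counter changed, and each change is accompanied
	 * by a memory barrier, so the required barrier has already executed. */
	if (curr != snap)
		return 0;

	return 1;
}
```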
One final aspect of the disclosed technique that is worthy of mention is that it supports the grace period processing component 26 when the latter is implemented as a state machine called by a clock timer interrupt, as in the current real-time version of the Linux® kernel. In such an implementation of the grace period processing, the grace period processing component 26 itself is responsible both for issuing grace period processing requests to other processors and for servicing those requests. In particular, the grace period processing component 26 would issue the grace period processing request on one processor, terminate at the end of the current interrupt, and then be re-invoked on the next interrupt on a different processor. The grace period processing component 26 will encounter the grace period processing request that it previously issued and take whatever responsive action is required. As indicated above, this may include a grace period coordination action such as acknowledging a new grace period or implementing a memory barrier at the end of an old grace period. If all the processors 41, 42 . . . 4n are awake, the grace period processing component 26 will make the rounds on all processors, the coordination process will complete, and grace period detection will advance to the next state. One consequence of using a low-power dynamic tick mode is that the grace period processing component 26, being dependent on a clock timer interrupt, may not advance through its various states due to the absence of clock timer ticks. An advantage of the low-power technique disclosed herein is that the grace period processing component 26 need not run on processors 41, 42 . . . 4n that are in a low-power dynamic tick state with no RCU readers 211, 212 . . . 21n. Thus, the fact that its clock timer interrupt does not execute during dynamic tick mode is irrelevant.
Accordingly, a technique for optimizing preemptible read-copy update for low-power usage has been disclosed that avoids unnecessary wakeups and facilitates grace period processing state advancement. It will be appreciated that the foregoing concepts may be variously embodied in any of a data processing system, a machine implemented method, and a computer program product in which programming logic is provided by one or more machine-useable media for use in controlling a data processing system to perform the required functions. Exemplary machine-useable media for providing such programming logic are shown by reference numeral 100 in the drawings.
While various embodiments of the invention have been described, it should be apparent that many variations and alternative embodiments could be implemented in accordance with the invention. It is understood, therefore, that the invention is not to be in any way limited except in accordance with the spirit of the appended claims and their equivalents.
U.S. Patent Documents

Number | Name | Date | Kind
---|---|---|---
5442758 | Slingwine et al. | Aug 1995 | A
5608893 | Slingwine et al. | Mar 1997 | A
5727209 | Slingwine et al. | Mar 1998 | A
6219690 | Slingwine et al. | Apr 2001 | B1
6886162 | McKenney | Apr 2005 | B1
6996812 | McKenney | Feb 2006 | B2
20050149634 | McKenney | Jul 2005 | A1
20050198030 | McKenney | Sep 2005 | A1
20060090104 | McKenney et al. | Apr 2006 | A1
20060100996 | McKenney | May 2006 | A1
20060112121 | McKenney et al. | May 2006 | A1
20060117072 | McKenney | Jun 2006 | A1
20060123100 | McKenney | Jun 2006 | A1
20060130061 | McKenney | Jun 2006 | A1
20060265373 | McKenney et al. | Nov 2006 | A1
20070083565 | McKenney | Apr 2007 | A1
20070101071 | McKenney | May 2007 | A1
20070226431 | McKenney et al. | Sep 2007 | A1
20070226440 | McKenney et al. | Sep 2007 | A1
20070266209 | McKenney et al. | Nov 2007 | A1
20080033952 | McKenney et al. | Feb 2008 | A1
20080040720 | McKenney et al. | Feb 2008 | A1