FIGS. 1a to 1c show flow diagrams of methods according to embodiments of the present invention;
FIGS. 2a to 2d schematically show the splitting of a computer program into separately executable sections according to an embodiment of the present invention;
FIGS. 3a to 3b schematically show a method of splitting and then merging sections of a computer program;
FIG. 4a shows a simple computer program annotated according to an embodiment of the present invention;
FIG. 4b shows the maximal set of threads for the program of FIG. 4a.
FIG. 1a shows a flow diagram illustrating a method according to an embodiment of the present invention. In a first step, a portion of a computer program is analysed, the portion comprising a list of sequential instructions and a program separation indicator indicating a point at which the sequential instructions may be divided to form separate sections that are capable of being separately executed. The analysis determines whether the sequential instructions can be split at the point indicated by the separation indicator into separate sections that can be processed on different execution mechanisms. If it determines that they can be, the sequential instructions are divided into the separate sections at the point indicated by the program separation indicator. If it determines that they cannot be separated at this point, a warning is output to the programmer to indicate an error in the program.
FIG. 1b illustrates an alternative embodiment in which, rather than outputting a warning if the program cannot be decoupled and separated at the indicated point, the program is amended by inserting data communication instructions into the list of sequential instructions; these data communication instructions enable the different sections to be decoupled and thus separated. The separation can then be performed.
FIG. 1c provides an alternative embodiment in which, in response to determining that the program cannot be separated at the indicated point, the two sections are merged together and the program separation indicator is removed.
As can be seen, the three embodiments provide different responses to an analysis that determines that it is not possible to separate a program into sections at the indicated point. Different ones of these embodiments may be used within the same analysis of a program for different separation indicators, depending on circumstances. Thus, the preferred course of action may be to amend the program to make it separable at the indicated point; if this cannot be done, the two portions may be merged; and if merging would result in an unacceptably large portion, a warning may be output.
FIG. 2a shows a portion of a computer program comprising a loop in which data items are processed: function f operates on the data items, function g operates on the data items output by function f, and function h then operates on the items output by function g. These functions are performed n times in a row, for values of i from 1 to n.
Thus, the control flow can be seen as following the solid arrows, while data flow follows the dotted arrows. In order to try to parallelise this portion of the computer program, it is analysed, either automatically or by a programmer, and "decouple" or program separation indications are inserted into the data flow where it is seen as being desirable to split the portion into sections that are decoupled from each other and can thus be executed on separate execution mechanisms. In this case, a decouple indication is provided between the data processing operations f and g. This can be seen as being equivalent to inserting a buffer into the data flow, as the two sections are decoupled by providing a data store between them so that function f can produce its results, which can then be accessed at a different time by function g.
The program is then analysed to see if it can indeed be decoupled at the point indicated by the separation indicators. If it can, the method proceeds as illustrated in FIG. 2c.
FIG. 2c shows how the separate sections of the program are decoupled by the insertion of "put" and "get" instructions into the data stream. These result in the data generated by the f function being put into a data store, from which it is retrieved by the get instruction to be processed by function g. This enables the program to be split into two sections, as is shown in FIG. 2d.
Thus, if the program is being processed by a data processing apparatus having a number of different processors, the two sections can be processed in parallel, which can improve the performance of the apparatus. Alternatively, one of the functions may be suitable for processing by an accelerator, in which case it can be directed to an accelerator while the other portion is processed by, say, the CPU of the apparatus.
As can be seen from
It should be noted that the put and get operations used in the examples above may each be split into a 'begin' and an 'end' operation, which avoids an extra memory copy:
void* put_begin(channel *ch);
void put_end(channel *ch, void* buf);
void* get_begin(channel *ch);
void get_end(channel *ch, void* buf);
Using these operations, sequences of code such as:
int x[100];
generate(x);
put(ch,x);
can be rewritten to this more efficient sequence:
int* px = put_begin(ch);
generate(px);
put_end(ch,px);
And similarly, for get:
int x[100];
get(ch,x);
consume(x);
to this more efficient sequence:
int* px = get_begin(ch);
consume(px);
get_end(ch,px);
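By way of illustration only, a minimal implementation of such a channel is sketched below. It assumes a single producer and a single consumer, a fixed number of fixed-size slots and POSIX semaphores; these are assumptions made for the sketch, not a description of the actual library implementation.

#include <semaphore.h>

#define SLOTS     4
#define SLOT_SIZE 4096

typedef struct channel {
    sem_t    free_slots;            /* counts empty slots                        */
    sem_t    full_slots;            /* counts filled slots                       */
    unsigned put_idx, get_idx;      /* next slot to write / read                 */
    char     data[SLOTS][SLOT_SIZE];
} channel;

void channel_init(channel *ch) {
    sem_init(&ch->free_slots, 0, SLOTS);
    sem_init(&ch->full_slots, 0, 0);
    ch->put_idx = ch->get_idx = 0;
}

void* put_begin(channel *ch) {
    sem_wait(&ch->free_slots);      /* block while the channel is full           */
    return ch->data[ch->put_idx];   /* producer generates directly into this slot */
}

void put_end(channel *ch, void* buf) {
    (void)buf;
    ch->put_idx = (ch->put_idx + 1) % SLOTS;
    sem_post(&ch->full_slots);      /* make the slot visible to the reader       */
}

void* get_begin(channel *ch) {
    sem_wait(&ch->full_slots);      /* block while the channel is empty          */
    return ch->data[ch->get_idx];   /* consumer reads directly from this slot    */
}

void get_end(channel *ch, void* buf) {
    (void)buf;
    ch->get_idx = (ch->get_idx + 1) % SLOTS;
    sem_post(&ch->free_slots);      /* hand the slot back to the writer          */
}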
The use of puts and gets to decouple threads can be further extended to cases where the communication between threads is cyclic. Cyclic thread dependencies can lead to deadlock, that is, two threads may be unable to run in parallel because of the data dependencies between them; thus, in prior art devices, decoupling is generally limited to acyclic thread dependencies.
Under conventional decoupling schemes, puts are always inserted after assignment to any data boundary variable. This would require both a put outside the loop and a put at the end of the loop:
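By way of a hypothetical illustration (the variables x and y, the functions f and g and the channels chx and chy are assumptions, not the original example), consider a loop with a cyclic dependence between two sections:

int x = 0;
for (int i = 0; i < n; i++) {
    int y = f(x);           /* uses the x produced in the previous iteration */
    x = g(y);               /* produces the x used in the next iteration     */
}

Decoupling this at x and y gives cyclic communication between the two threads. With the conventional placement, the thread containing g would be:

x = 0;
put(chx, x);                /* put after the assignment outside the loop       */
for (int i = 0; i < n; i++) {
    y = get(chy);
    x = g(y);
    put(chx, x);            /* put after the assignment at the end of the loop */
}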
Conventional decoupling schemes only generate matched pairs of puts and gets (i.e., there is only one put on each channel and only one get on each channel) so they cannot generate such code.
Embodiments of the present invention use an alternative way of decoupling this code and generate:
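Continuing the hypothetical example, the thread containing g becomes:

x = 0;
for (int i = 0; i < n; i++) {
    put(chx, x);            /* the put is issued at the start of the iteration, */
    y = get(chy);           /* not immediately after the assignment to x below  */
    x = g(y);
}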
This does have matched pairs of puts and gets but breaks the rule of always performing a put after any assignment to a variable.
FIGS. 3a and 3b schematically illustrate the program code shown in
A further example of a code fragment that can be split by an embodiment of the present invention is provided below. Since communication lies at the boundaries between threads, the compiler's job is to "color in" the code that lies between the boundaries. This is done through a dependency analysis to decide the set of operations that are on the "producer" side of a channel and the set of operations on the "consumer" side of a channel. The compiler then partitions the operations according to that analysis and generates a separate thread for each equivalence class.
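A hypothetical code fragment of the kind described, consistent with the line numbers referred to below (the exact annotation syntax, the loop bound N, the if-condition and the variable names are assumptions), is:

PIPELINE {                        /* line 1: region to be split into threads      */
    for (i = 0; i < N; i++) {     /* line 2: loop                                 */
        x = f();                  /* line 3: producer computation                 */
        if (i % 2 == 0) {         /* line 4: condition (assumed not to involve x) */
            FIFO(2, x);           /* line 5: channel annotation with two entries  */
            g(x);                 /* consumer computation                         */
        }
    }
}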
The PIPELINE annotation on line 1 identifies the region of code to be split into threads. The FIFO annotation on line 5 identifies that the communication between threads is to be performed between f and g. The compiler performs a data and control flow analysis to determine that the call to g has a data dependency on the FIFO operation and also has control dependencies on the if statement (line 4) and the for loop (line 2). This results in the following thread:
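A sketch of that thread, based on the hypothetical fragment above (the channel name ch is an assumption), is:

/* consumer thread */
for (i = 0; i < N; i++) {         /* duplicated control: line 2                    */
    if (i % 2 == 0) {             /* duplicated control: line 4                    */
        x = fifo_get(ch);         /* inserted for the dependency on the FIFO       */
        g(x);
    }
}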
Note that the dependency on the FIFO operation caused the compiler to insert a fifo_get operation.
The data and control flow analysis also determines that the FIFO operation (line 5) has a data dependency on the call to f (line 3) and also has control dependencies on the if statement (line 4) and the for loop (line 2). This results in the following thread:
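A sketch of that thread, again based on the hypothetical fragment above, is:

/* producer thread */
for (i = 0; i < N; i++) {         /* duplicated control: line 2                    */
    x = f();                      /* line 3                                        */
    if (i % 2 == 0) {             /* duplicated control: line 4                    */
        fifo_put(ch, x);          /* replaces the FIFO operation on line 5         */
    }
}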
Note that because the data dependency is from the FIFO operation, the compiler inserted a fifo_put operation.
Because decoupling respects data and control dependencies and FIFO channels preserve the order of their elements, the resulting threads are equivalent to the unannotated code. (Obviously, this property is not preserved when timed channels are used.)
Decoupling must make two essential decisions: “What variables and operations to replicate?” and “What operations to place in the same thread?”.
The task of decoupling is to split the region of code into as many threads as possible, without introducing timing-dependent behaviour, using channels to communicate between threads. As the example above shows, the generated threads do not strictly partition the statements in the original code: some variables and operations (principally those used for control) are duplicated between threads. The choice of what to duplicate is an essential part of the transformation: if too much code or data is duplicated, the transformed program can run slower and use much more memory than the original program. While these decisions could be controlled using annotations on every variable and statement, some simple default rules can be used that give the programmer control without requiring excessive annotation. By default, the variables to be duplicated (i.e., privatized) are determined by the location of their declaration (variables declared inside the PIPELINE annotation may be duplicated) and their size (scalar variables may be duplicated). Operations other than function calls may be duplicated unless they have side-effects or modify a non-duplicable variable.
The other essential decision that the transformation must make is “What operations must be in the same thread as each other?”. To avoid introducing timing-dependent behaviour, we apply the following two rules:
Our implementation of decoupling finds the unique, maximal solution in three stages: dependency analysis, merging, and thread production.
The dependency analysis stage forms a large number of “pre-threads” by computing a backward data and control slice (see Mark Weiser, “Program slicing”, in ICSE '81: Proc. of International Conference on Software Engineering, 439-449, 1981) from each unduplicated operation ignoring data dependencies on FIFO operations but including all duplicable and unduplicable operations in the slice. That is, we repeatedly apply rules (1) and (2) to form prethreads. In our running example, there are three prethreads: one each for f( ), FIFO(2,x), and g(x).
For example, the prethread for f( ) is:
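Based on the hypothetical fragment above, this prethread contains only the loop control and the call itself:

for (i = 0; i < N; i++) {
    x = f();
}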
the prethread for FIFO(2,x) is:
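(Based on the same hypothetical fragment; the slice adds the if-statement as a control dependency and the call to f() as a data dependency through x.)

for (i = 0; i < N; i++) {
    x = f();
    if (i % 2 == 0) {
        FIFO(2, x);
    }
}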
and the prethread for g(x) is:
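(Based on the same hypothetical fragment; the data dependency through the FIFO operation is ignored, so neither f() nor the FIFO operation appears, and x will later be supplied by an inserted fifo_get.)

for (i = 0; i < N; i++) {
    if (i % 2 == 0) {
        g(x);
    }
}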
Note that the prethread for g(x) omits the FIFO operation (line 5): the dependency rules ensure that FIFO operations are placed in the producer thread but not the consumer thread.
The merging stage combines “prethreads” by merging threads that contain the same non-duplicable operation or variable. For example, the prethread for f( ) is merged with the prethread for FIFO(2,x) because they both contain the operation f( ), resulting in the prethread:
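(Based on the same hypothetical fragment.)

for (i = 0; i < N; i++) {
    x = f();
    if (i % 2 == 0) {
        FIFO(2, x);
    }
}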
This is identical to the prethread for FIFO(2,x) because that prethread already contained the f( ) operation.
The thread production stage converts prethreads to threads by inserting channel declarations and initialization, privatizing duplicable variables, replacing FIFO operations with fifo_put operations and inserting a fifo_get operation in every thread that contains an operation dependent on a FIFO operation. If multiple threads contain operations dependent on the same FIFO operation, a separate channel has to be introduced for each fifo_get operation introduced and the FIFO operation is replaced with a fifo_put operation on each channel.
When adding FIFO annotations, it is easy for the programmer to overlook dependencies between operations. For example, if the if-statement were changed to x>0, then the control dependency of g(x) on the if-statement and the data dependency of the condition x>0 would require the prethread for x=f( ) to be merged with the prethread for g(x), and no pipeline parallelism would be introduced. Since the use of FIFO suggests that the programmer expected parallelism, the compiler reports a warning that both sides of the channel are in the same thread. In this case, the problem can be fixed by moving the FIFO operation to before the if-statement or by arranging to pass the if-condition through a channel by changing the if-statement to if (FIFO(2,x>0)) { . . . }.
Decoupling can be used with all channel types except non-blocking FIFOs.
When optimizing inter-thread communication and introducing pipeline parallelism, our compiler needs to perform dataflow analysis on both scalar and non-scalar variables. When a function is written in C, it is often possible to perform this analysis automatically using, for example, the techniques of Tu et al. (P. Tu and D. Padua, “Automatic Array Privatization”, in Proc. Workshop on Compiler Optimizations for Scalable Parallel Systems Languages (LNCS 768), Springer Verlag, 500-521, 1993) to compute DEF, USE and KILL sets. In typical SoC architectures, this analysis is not possible because the PE is programmed in assembly language or is a fixed-function accelerator, so the programmer must annotate the function declarations for RPC functions with whether an argument is an “in”, “out” or “inout” argument. Similarly, side-effects on global variables or on device state are annotated by recording dependencies on those variables/devices.
Parallelizing at a coarse granularity allows the duplication of more control code between threads which reduces and simplifies inter-thread communication allowing the generation of distributed schedules. That is, we can distribute the control code across multiple processors both by putting each control thread on a different processor and by putting different parts of a single control thread onto different processors.
The transfer of data may be done by writing the data to a particular buffer such as a FIFO. Alternatively, it may simply be done by providing the other section of the program with information as to where the data has been stored.
The way of transferring the data depends on the system the program is executing on. In particular, if the architecture does not have shared memory, it is necessary to insert DMA copies from a buffer in one memory to a buffer in a different memory. This can lead to a lot of changes in the code: declaring both buffers, performing the copy, etc. In embodiments of the invention an analysis is performed to determine which buffers need to be replicated in multiple memory regions and to determine exactly which form of copy should be used. DMA copies are also inserted automatically subject to some heuristics when the benefit from having the programmer make the decision themselves is too small.
One of the advantages of inserting channel indicators into a program explicitly is that it is possible to use channels which do not provide first-in-first-out semantics. This section describes some alternative implementations which can be used.
Atomic channels provide atomic access to an element: an atomic_get operation acquires a copy of the element and makes the element unavailable to other threads (i.e., it “locks” the variable) and an atomic_put operation makes the variable available for use by other threads (i.e., it “unlocks” the variable). We note that atomic channels are equivalent to a fifo channel of maximum length 1.
Nonblocking put and get channels are a variant on fifo channels where the nbpfifo_put operation returns an error code if the channel is full instead of blocking as fifo channels do. These channels are for use in interrupt handlers since it is possible to block a thread but not an interrupt. We also provide channels that provide a non-blocking nbgfifo_get operation.
Timed channels provide time-indexed access to data. When data is inserted into or removed from the channel, a timestamp is specified:
void ts_put(tschan_t c, int timestamp, void* v);
void ts_get(tschan_t c, int timestamp, void* v);
The ts_get operation returns the entry with the closest timestamp to that specified. All ts_put operations must use strictly increasing times and all ts_get operations must use strictly increasing times. This restriction allows entries to be discarded when they can no longer be accessed.
Timed channels allow for more parallelism between threads since, after the first ts_put is performed, ts_get operations never block because there is always an entry with a closest timestamp. The cost of this increased performance is less precise synchronization between threads than with FIFO channels: applications that use timed channels are unlikely to give deterministic results.
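A hypothetical usage sketch follows (the channel variable, the estimate type and the helper functions are assumptions; only ts_put and ts_get are taken from the declarations above):

typedef struct { float gain; int delay; } estimate_t;   /* illustrative type       */
extern tschan_t estimates;                              /* assumed timed channel   */
extern estimate_t compute_estimate(int t);
extern void combine(int t, const estimate_t *e);

/* Producer: publish a new estimate with a strictly increasing timestamp. */
void producer(void) {
    for (int t = 0; t < 1000; t++) {
        estimate_t e = compute_estimate(t);
        ts_put(estimates, t, &e);
    }
}

/* Consumer: fetch the estimate whose timestamp is closest to the time the data
 * arrived; after the first ts_put this never blocks.                            */
void consumer(void) {
    for (int t = 5; t < 1000; t += 10) {
        estimate_t e;
        ts_get(estimates, t, &e);
        combine(t, &e);
    }
}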
Timed channels are useful for handling time-sensitive information where it is important to use current data. For example, mobile telephones implementing the "3rd generation" W-CDMA protocol use rake receivers to increase the bandwidth of links subject to multipath fading (i.e., where the signal contains delayed echoes of the transmitted signal, typically due to reflections off buildings). Rake receivers estimate the strength and delay of these reflections and use these estimates to combine delayed copies of the received signal to maximize the amount of received signal energy. In such applications, the best estimate to use is the one closest in time to when the data arrived, which may be different from the next estimate generated.
Timed channels are an example of a channel type which makes sense in some domains or applications but not in others. Rather than fix the set of channel types in the language, our compiler allows new channel types to be added using annotations to identify channel types and put and get operations. The only properties on which SoC-C relies are that operations are atomic, directional and obey a copy semantics. That is, put operations atomically copy data into a channel and get operations atomically copy data out of a channel.
Most alternative channel types of interest have the effect of making the application non-deterministic: the behaviour of the application depends on the precise order in which parallel operations complete. For this reason, we require the programmer to explicitly decide which type of queue to use.
FIG. 4a shows a simple computer program annotated according to an embodiment of the present invention. An analysis of this program is performed initially, and parts of the program are identified, in this embodiment by programmer annotation, although they could be identified by some other analysis including static analysis, profile-driven feedback, etc. The parts identified are as follows:
What can be regarded as the “decoupling scope”. This is a contiguous sequence of code that we wish to split into multiple threads.
The “replicatable objects”: that is variables and operations which it is acceptable to replicate. A simple rule of thumb is that scalar variables (i.e., not arrays) which are not used outside the scope, scalar operations which only depend on and only modify replicatable variables, and control flow operations should be replicated but more sophisticated policies are possible.
Ordering dependencies between different operations: if two function calls both modify a non-replicated variable, the order of those two function calls is preserved in the decoupled code. (Extensions to the basic algorithm allow this requirement to be relaxed in various ways.)
The “data boundaries” between threads: that is, the non-replicatable variables which will become FIFO channels. (The “copies” data annotation described above determines the number of entries in the FIFO.)
This degree of annotation is fine for examples but would be excessive in practice so most real embodiments would rely on tools to add the annotations automatically based on heuristics and/or analyses.
At a high level, the algorithm splits the operations in the scope into a number of threads whose execution will produce the same result as the original program under any scheduling policy that respects the FIFO access ordering of the channels used to communicate between threads.
The particular decoupling algorithm used generates a maximal set of threads such that the following properties hold:
FIG. 4b shows the maximal set of threads for the program of FIG. 4a.
Each replicatable variable must be initialized at the start of each thread with the value of the original variable before entering the scope and one of the copies of each replicatable variable should be copied back into the master copy on leaving the scope. (Executing all these protothreads is highly unlikely to give the same answer as the original program, because it lacks the necessary synchronization between threads. This is fixed by the next steps.)
Another way is to pick an operation, identify all the operations which must be in the same thread as that operation by repeatedly adding operations which would be merged (in step 2 above). Then pick the next operation not yet assigned to a thread and add all operations which must be in the same thread as that operation. Repeat until there are no more non-replicatable operations. It should be noted that this is just one possible way of tackling this problem: basically, we are forming equivalence classes based on a partial order and there are many other known ways to do this.
The above method splits a program into a number of sections which can be executed in parallel. There are many possible mechanisms that can be used to accomplish this task.
Execution mechanism.
Program separation indicators can take a number of forms. They may simply be some sort of indicator, such as a split marker indicating that the program is to be divided at that point, or they may in fact be data transfer functions which themselves provide data communication between the two sections. Thus, they may be fifo instructions indicating that data should be sent via a FIFO to transfer the data between the two sections, as was discussed earlier, or they may be put and get instructions. It may be that several variables need to be transferred between the two sections and that there are not sufficient data transfer instructions within the program. In such a case, further data transfer instructions for the variables which have not been addressed can be inserted into the program; data communication between the two sections is then provided, and the sections can be separated for execution on separate execution mechanisms.
The following describes language extensions/annotations, compilation tools, analysis tools, debug/profiling tools, runtime libraries and visualization tools to help programmers program complex multiprocessor systems. It is primarily aimed at programming complex SoCs which contain heterogeneous parallelism (CPUs, DEs, DSPs, programmable accelerators, fixed-function accelerators and DMA engines) and irregular memory hierarchies.
The compilation tools can take a program that is either sequential or contains few threads and map it onto the available hardware, introducing parallelism in the process.
When the program is executed, we can exploit the fact that we know mappings between the user's program and what is executing to efficiently present a debug and profile experience close to what the programmer expects while still giving the benefit of using the parallel hardware. We can also exploit the high level view of the overall system to test the system more thoroughly, or to abstract away details that do not matter for some views of the system.
This provides a way of providing a full view for SoC programming.
The task of programming a SoC is to map different parts of an application onto different parts of the hardware. In particular, blocks of code must be mapped onto processors, data engines, accelerators, etc. and data must be mapped onto various memories. In a heterogeneous system, we may need to write several versions of each kernel (each optimized for a different processor) and some blocks of code may be implemented by a fixed-function accelerator with the same semantics as the code. The mapping process is both tedious and error-prone because the mappings must be consistent with each other and with the capabilities of the hardware. We reduce these problems using program analysis which:
The number of legal mappings is usually large but once the programmer has made a few choices, the number of legal options usually drops significantly so it is feasible to ask the programmer to make a few key choices and then have the tool fill in the less obvious choices automatically.
Often the code needs minor changes to allow some mappings. In particular, if the architecture does not have shared memory, it is necessary to insert DMA copies from a buffer in one memory to a buffer in a different memory. This leads to a lot of changes in the code: declaring both buffers, performing the copy, etc. Our compiler performs an analysis to determine which buffers need to be replicated in multiple memory regions and to determine exactly which form of copy should be used. It also inserts DMA copies automatically, subject to some heuristics, when the benefit from having the programmer make the decision themselves is too small.
Systems with multiple local memories often have tight memory requirements which are exacerbated by allocating a copy of a buffer in multiple memories. Our compiler uses lifetime analysis and heuristics to reduce the memory requirement by overlapping buffers in a single memory when they are never simultaneously live.
Programmable accelerators may have limited program memory so it is desirable to upload new code while old code is running. For correctness, we must guarantee that the new code is uploaded (and I-caches made consistent) before we start running it.
Our compiler uses program analysis to check this and/or to schedule uploading of code at appropriate places.
For applications with highly variable load, it is desirable to have multiple mappings of an application and to switch dynamically between different mappings.
Some features of our approach are:
To describe this idea further, we need some syntax for annotations. Here we provide one embodiment of annotations which provide the semantics we want.
In this document, all annotations take the form:
Or, when there is just one tag and it is obvious,
The primary annotations are on data and on code. If a tag is repeated, it indicates alternative mappings.
The tags associated with data include:
For example, the annotation:
The tags associated with code include:
For example, the annotation:
Specifies that processor P1 is to execute fft followed by P1. The semantics is similar to that of a synchronous remote procedure call: when control reaches this code, free variables are marshalled and sent to processor P1, processor P1 starts executing the code and the program continues when the code finishes executing.
It is not always desirable to have synchronous RPC behaviour. It is possible to implement asynchronous RPCs using this primitive either by executing mapped code in a separate thread or by splitting each call into two parts: one which signals the start and one which signals completion.
The tags associated with functions are:
For example, the annotation:
Specifies that this function runs on a PL081 accelerator (a PrimeCell DMA engine) and can be used whenever a call to "copy" is mapped to a PL081 accelerator.
There are a variety of languages for describing hardware architectures including the SPIRIT language and ARM SoCDesigner's internal language. While the languages differ in syntax, they share the property that we can extract information such as the following:
Thus, from the architecture we can extract address maps, which can be used to fill in fine details of the mapping process, and we can detect problems, such as connectivity, sequential consistency and incoherence issues, that can affect the correctness of a mapping.
Based on rules detected in an architectural description and/or rules from other sources, we can analyse both sequential and parallel programs to detect errors in the mapping. Some examples:
Thus, we can check the mapping of a software system against the hardware system it is to run on based on a specification of the architecture or additional information obtained in different ways.
Having detected errors in a system mapping, there are a variety of responses. An error such as mapping a piece of code to a fixed-function accelerator that does not support that function should probably just be reported as an error that the programmer must fix. Errors such as omitting synchronization can sometimes be fixed by automatically inserting synchronization. Errors such as requiring more variables to a memory bank than will fit can be solved, to some extent, using overlay techniques. Errors such as mapping an overly large variable to a memory can be resolved using software managed paging though this may need hardware support or require that the kernel be compiled with software paging turned on (note: software paging is fairly unusual so we have to implement it before we can turn it on!). Errors such as omitting memory barriers, cache flush/invalidate operations or DMA transfers can always be fixed automatically though it can require heuristics to insert them efficiently and, in some cases, it is more appropriate to request that the programmer fix the problem themselves.
Given a program that has been mapped to the hardware, the precise way that the code is compiled depends on details of the hardware architecture. In particular, it depends on whether two communicating processors have a coherent and sequentially consistent view of a memory through which they are passing data.
Our compiler uses information about the SoC architecture, extracted from the architecture description, to determine how to implement the communication requirements specified within the program. This enables it to generate the glue code necessary for communication to occur efficiently and correctly. This can include generation of memory barriers, cache maintenance operations, DMA transfers and synchronisation on different processing elements.
This automation reduces programming complexity, increases reliability and flexibility, and provides a useful mechanism for extended debugging options.
Other manual and automatic factors may be used to influence the communication mechanism decisions. Errors and warnings within communication mappings can be found using information derived from the architecture description.
Some features of our approach are:
We take the concept of Remote Procedure Calls (RPCs), which are familiar on fully programmable processors communicating over a network, and adapt and develop it for application in the context of a SoC: processors communicating over a bus with fixed-function accelerators, programmable accelerators and data engines.
Expressing execution of code on other processing elements or invocation of accelerators as RPCs gives a function based model for programmers, separating the function from the execution mechanism. This enables greater flexibility and scope for automation and optimisation.
An RPC abstraction can be expressed as functions mapped to particular execution mechanisms:
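By way of illustration only (the function names and execution mechanism names are assumptions; the "@" notation follows the "{foo( ); bar( )@P0;}@P1" form used later in this description):

copy(dst, src, len) @ DMA0;              /* synchronous RPC to a DMA engine          */
fft(buf, 1024) @ DE1;                    /* synchronous RPC to a data engine         */
{ demodulate(buf); decode(buf); } @ P1;  /* block of code executed on processor P1   */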
This provides a simple mechanism to express invocation of functions, and the associated resourcing, communication and synchronisation requirements.
Code can be translated to target the selected processing elements, providing the associated synchronisation and communication. For example, this could include checking the resource is free, configuring it, starting it and copying the results on completion. The compiler can select appropriate glue mechanisms based on the source and target of the function call. For example, an accelerator is likely to be invoked primarily by glue on a processor using a mechanism specific to the accelerator.
The glue code may be generated automatically based on a high level description of the accelerator or the programmer may write one or more pieces of glue by hand.
The choice of processor on which the operation runs can be determined statically or can be determined dynamically. For example, if there are two identical DMA engines, one might indicate that the operation can be mapped onto either engine depending on which is available first.
The compiler optimisations based on the desired RPC interface can range from a dynamically linked interface to inter-procedural specialisation of the particular RPC interface.
RPC calls may be synchronous or asynchronous. Asynchronous calls naturally introduce parallelism, while synchronous calls are useful as a simpler function call model, and may be used in conjunction with fork-join parallelism. In fact, parallelism is not necessary for efficiency; a synchronous call alone can get the majority of the gain when targeting accelerators. Manually and automatically selecting between asynchronous and synchronous options can benefit debugging, tracing and optimisation.
RPC calls may be re-entrant or non-reentrant, and these decisions can be made implicitly, explicitly or through program analysis to provide benefit such as optimisation where appropriate.
This mechanism enables a particular function to have a number of different execution targets within a program, but each of those targets can be associated back to the original function; debugging and trace can exploit this information. This enables a user to set a breakpoint on a particular function, and the debug and trace mechanisms can be arranged such that it can be caught wherever it executes, or on a restricted subset (e.g. a particular processing element).
The details of the RPC interface implementation can be abstracted away in some debugging views.
Some features of our approach are:
Increasingly, applications are being built using libraries which define datatypes and a set of operations on those types. The datatypes are often bulk datastructures such as arrays of data, multimedia data, signal processing data, network packets, etc. and the operations may be executed with some degree of parallelism on a coprocessor, DSP processor, accelerator, etc. It is therefore possible to view programs as a series of often quite coarse-grained operations applied to quite large data structures instead of the conventional view of a program as a sequence of ‘scalar’ operations (like ‘32 bit add’) applied to ‘scalar’ values like 32-bit integers or the small sets of values found in SIMD within a register (SWAR) processing such as that found in NEON. It is also advantageous to do so because this coarse-grained view can be a good match for accelerators found in modern SoCs.
We observe that with some non-trivial adaptation and some additional observations, optimization techniques known to work on fine-grained operations and data can be adapted to operate on coarse-grained operations and data.
Our compiler understands the semantics associated with the data structures and their use within the system, and can manipulate them and the program to perform transformations and optimisations to enable and optimise execution of the program.
Most optimizing compilers perform a dataflow analysis prior to optimization. For example, section 10.5 of Aho, Sethi and Ullman's 'Compilers: Principles, Techniques and Tools', published by Addison Wesley, 1986, ISBN 0-201-10194-7, describes dataflow analysis. The dataflow analysis is restricted to scalar values: those that fit in a single CPU register. Two parts of a dataflow analysis are: identifying the effect of each individual instruction on the values it reads and writes, and propagating those effects through the control flow of the program.
In order to use dataflow analysis techniques with coarse-grained dataflow, we modify the first part so that instead of identifying the effect of a single instruction on a single element, we identify the effect of a coarse-grained operation (e.g., a function call or coprocessor invocation) on an entire data structure in terms of whether the operation is a ‘use’, a ‘def’ or a ‘kill’ of the value in a data structure. Care must be taken if an operation modifies only half of an array since the operation does not completely kill the value of the array.
For operations implemented in hardware or in software, this might be generated automatically from a precise description of the operation (including the implementation of the operation) or it might be generated from an approximate description of the main effects of the operation or it might be provided as a direct annotation.
In particular, for software, these coarse-grained operations often consist of a simple combination of nested loops and we can analyze the code to show that the operation writes to an entire array and therefore ‘kills’ the old value in the array. In scalar analysis, this is trivial since any write necessarily kills the entire old value.
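A hypothetical illustration of such coarse-grained effect summaries (the function names and array sizes are assumptions):

int coeffs[64], in[1024], out[1024];

/* USE: in, coeffs   DEF and KILL: out   (every element of out is written)        */
void fir(int out[1024], const int in[1024], const int coeffs[64]);

/* USE and DEF: acc  (read-modify-write), USE: in; this is not a KILL of acc      */
void accumulate(int acc[1024], const int in[1024]);

/* DEF: out  but NOT a KILL: only the upper half of out is overwritten            */
void fill_upper_half(int out[1024]);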
The following sections identify some of the uses of coarse-grained dataflow analysis.
Especially when writing parallel programs or when using I/O devices and when dealing with complex memory hierarchies, it is necessary to allocate multiple identically sized buffers and copy between the different buffers (or use memory remapping hardware to achieve the effect of a copy). We propose that in many cases these multiple buffers can be viewed as alternative versions of a single, logical variable. It is possible to detect this situation in a program with multiple buffers, or the programmer can identify the situation. One way the programmer can identify the situation is to declare a single variable and then use annotations to specify that the variable lives in multiple places or the programmer could declare multiple variables and use annotations to specify that they are the same logical variable.
However the different buffers are identified as being one logical variable, the advantages that can be obtained include:
We note that compilers do something similar for scalar variables: the value of a scalar variable ‘x’ might sometimes live on the stack, sometimes in register 3, sometimes in register 6, etc. and the compiler keeps track of which copies currently contain the live value.
By performing a liveness analysis of the data structures, the compiler can provide improved memory allocation through memory reuse because it can identify opportunities to place two different variables in the same memory location. Indeed, one can use many algorithms normally used for register allocation (where the registers contain scalar values) to perform allocation of data structures. One modification required is that one must handle the varying size of buffers whereas, typically, all scalar registers are the same size.
One thing that can increase memory use is having many variables simultaneously live. It has been known for a long time that you can reduce the number of scalar registers required by a piece of code by reordering the scalar operations so that fewer variables are simultaneously live.
Using a coarse-grained dataflow analysis, one can identify the lifetime of each coarse-grained data structure and then reorder code to reduce the number of simultaneously live variables. One can even choose to recalculate the value of some data structure because it is cheaper to recalculate it than to remember its value.
When parallelising programs, one can also deliberately choose to restrain the degree of parallelism to reduce the number of simultaneously live values. Various ways to restrain the parallelism exist: forcing two operations into the same thread, using mutexes/semaphores to block one thread if another is using a lot of resource, tweaking priorities or other scheduler parameters.
If a processor/accelerator has a limited amount of available memory, performing a context switch on that processor can be challenging. Context switching memory-allocated variables used by that processor solves the problem.
Compiler books list many other standard transformations that can be performed to scalar code. Some of the mapping and optimisation techniques that can be applied at the coarse-grain we discuss include value splitting, spilling, coalescing, dead variable removal, recomputation, loop hoisting and CSE.
Data structures will be passed as arguments, possibly as part of an ABI. Optimisations such as specialisation and not conforming to the ABI when it is not exposed can be applied.
In some cases, one would want to view a complex datastructure at multiple granularities. For example, given a buffer of complex values, one might wish to reason about dataflow affecting all real values in the buffer, dataflow affecting all imaginary values or dataflow involving the whole buffer. (More complex examples exist)
When debugging, it is possible for the data structure to live in a number of different places throughout the program. We can provide a single debug view of all copies, and watch a value wherever it is throughout the lifetime of a program, optionally omitting certain accesses such as DMAs.
The same is possible for tracing data structures within the system.
Using this coarse-grained view, one can achieve zero copy optimization of a sequence of code like this:
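A hypothetical fragment of the kind meant here (the buffer and function names are assumptions):

#include <string.h>

extern void generate(int *dst);       /* assumed to write every element of dst   */
extern void consume(const int *src);  /* assumed to read every element of src    */

int a[1024], b[1024];

void before(void) {
    generate(a);                      /* DEF and KILL of a                       */
    memcpy(b, a, sizeof a);           /* USE of a, DEF and KILL of b             */
    consume(b);                       /* USE of b                                */
}

Because the analysis shows that generate() defines the whole of a and the copy kills the whole of b, the two buffers can be treated as one logical variable and the copy removed, with generate() writing directly into the consumer's buffer:

void after(void) {
    generate(b);
    consume(b);
}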
Most of this section is about coarse-grained data structures, but some benefits from identifying coarse-grained operations come when we are generating trace. Instead of tracing every scalar operation that is used inside a coarse-grained operation, we can instead just trace the start and stop of the operation. This can also be used for cross-triggering the start/stop of recording other information through trace.
Likewise, instead of tracing the input to/output from the whole sequence of scalar operations, we can trace just the values at the start/end of the operation.
If we rely on programmer assertions, documentation, etc. in performing our dataflow analysis, it is possible that an error in the assertions will lead to an error in the analysis or transformations performed. To guard against these, we can often use hardware or software check mechanisms. For example, if we believe that a variable should be read but not written by a given function, then we can perform a compile-time analysis to verify it ahead of time, or we can program an MMU or MPU to watch for writes to that range of addresses, or we can insert instrumentation to check for such errors. We can also perform a 'lint' check which looks for things which may be wrong even if it cannot prove that they are wrong. Indeed, one kind of warning is that the program is too complex for automatic analysis to prove that it is correct.
Some of the features of our approach are:
Given a program that uses some accelerators, it is possible to make it run faster by executing different parts in parallel with one another. Many methods for parallelizing programs exist, but many of them require homogeneous hardware to work and/or require very low-cost, low-latency communication mechanisms to obtain any benefit. Our compiler uses programmer annotations (many or all of which can be inserted automatically) to split the code that invokes the accelerators ('the control code') into a number of parallel "threads" which communicate infrequently. Parallelizing the control code is advantageous because it allows tasks on independent accelerators to run concurrently.
Our compiler supports a variety of code generation strategies which allow the parallelized control code to run on a control processor in a real time operating system, in interrupt handlers or in a polling loop (using ‘wait for event’ if available to reduce power) and it also supports distributed scheduling where some control code runs on one or more control processors, some control code runs on programmable accelerators, some simple parts of the code are implemented using conventional task-chaining hardware mechanisms. It is also possible to design special ‘scheduler devices’ which could execute some parts of the control code. The advantage of not running all the control code on the control processor is that it can greatly decrease the load on the control processor.
Other parallelising methods may be used in conjunction with the other aspects of this compiler.
Some of the features of our approach are:
Some of the recent known discussions on decoupled software pipelining are:
The basic decoupling algorithm splits a block of code into a number of threads that pass data between each other via FIFO channels. The algorithm requires us to identify (by programmer annotation or by some other analysis including static analysis, profile driven feedback, etc.) the following parts of the program:
(Identifying replicatable objects and data boundaries are two of the features of our decoupling algorithm.)
If we use annotations on the program to identify these program parts, a simple program might look like this:
This degree of annotation is fine for examples but would be excessive in practice so most real embodiments would rely on tools to add the annotations automatically based on heuristics and/or analyses.
At a high level, the algorithm splits the operations in the scope into a number of threads whose execution will produce the same result as the original program under any scheduling policy that respects the FIFO access ordering of the channels used to communicate between threads.
The particular decoupling algorithm we use generates a maximal set of threads such that the following properties hold:
For the example program above, the maximal set of threads is:
One way to generate this set of threads is as follows:
Another way is to pick an operation and identify all the operations which must be in the same thread as that operation by repeatedly adding operations which would be merged (in step 2 above). Then pick the next operation not yet assigned to a thread and add all operations which must be in the same thread as that operation. Repeat until there are no more non-replicatable operations. (There are lots of other ways of tackling this problem: basically, we are forming equivalence classes based on a partial order and there are many known ways to do that.)
Note that when doing dataflow analysis on arrays, one must distinguish defs which are also kills (i.e., where the entire value of a variable is overwritten by an operation), and that requires a more advanced analysis than is normally used.
There are a number of extensions to this model
It is conventional to use dataflow analysis to determine the live ranges of a scalar variable and then replace the variable with multiple copies of the variable: one for each live range. We use the same analysis techniques to determine the live range of arrays and split their live ranges in the same way. This has the benefit of increasing the precision of later analyses which can enable more threads to be generated. On some compilers it also has the undesirable effect of increasing memory usage which can be mitigated by later merging these copies if they end up in the same thread and by being selective about splitting live ranges where the additional decoupling has little overall effect on performance.
The put and get operations used when decoupling can be used both for scalar and non-scalar values (i.e., both for individual values (scalars) and for arrays of values (non-scalars)), but they are inefficient for large values because they require a memory copy. Therefore, for coarse-grained decoupling, it is desirable to use an optimized mechanism to pass data between threads.
In operating systems, it is conventional to use “zero copy” interfaces for bulk data transfer: instead of generating data into one buffer and then copying the data to the final destination, we first determine the final destination and generate the data directly into the final destination. Applying this idea to the channel interface, we can replace the simple ‘put’ operation with two functions: put_begin obtains the address of the next free buffer in the channel and put_end makes this buffer available to readers of the channel:
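void* put_begin(channel *ch);
void put_end(channel *ch, void* buf);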
Similarly, the get operation is split into a get_begin and get_end pair:
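void* get_begin(channel *ch);
void get_end(channel *ch, void* buf);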
Using these operations, we can often rewrite sequences of code such as:
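int x[100];
generate(x);
put(ch,x);

into the more efficient sequence:

int* px = put_begin(ch);
generate(px);
put_end(ch,px);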
Note that doing zero copy correctly requires us to take lifetime of variables into account.
We can do that using queues with multiple readers, queues with intermediate r/w points, reference counts or by restricting the decoupling (all readers must be in same thread and . . . ) to make lifetime trivial to track. This can be done by generating custom queue structures to match the code or custom queues can be built out of a small set of primitives.
This section illustrates both how to get better results and also that we may not get exactly the same control structure but that they are very similar.
Prior art on decoupling restricts the use of decoupling to cases where the communication between the different threads is acyclic. There are two reasons why prior art has done this:
It is possible to modify the decoupling algorithm to allow the programmer to insert puts and gets (or put_begin/end, get_begin/end pairs) themselves. The modified decoupling algorithm treats the puts and gets in much the same way that the standard algorithm treats data boundaries. Specifically, it constructs the maximal set of threads such that:
For example, given this program:
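A hypothetical example (the channel ch, the loop bound n and the calls to f and g are assumptions):

for (int i = 0; i < n; i++) {
    put(ch, f(i));                  /* programmer-inserted put */
    x = get(ch);                    /* programmer-inserted get */
    g(x);
}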
The modified decoupling algorithm will produce:
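Continuing the hypothetical example, the two generated threads might look like:

/* thread 1 */
for (int i = 0; i < n; i++) {
    put(ch, f(i));
}

/* thread 2 */
for (int i = 0; i < n; i++) {
    x = get(ch);
    g(x);
}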
This extension of decoupling is useful for creating additional parallelism because it allows f and g to be called in parallel.
Writing code using explicit puts can also be performed as a preprocessing step. For example, we could transform:
To the following equivalent code:
Which, when decoupled gives very similar code to the above.
(There are numerous variations on this transformation including computing f(i+1) unconditionally, peeling the last iteration of the loop, etc.)
A First-In First-Out (FIFO) channel preserves the order of values that pass through it: the first value inserted is the first value extracted, the second value inserted is the second value extracted, etc. Other kinds of channel are possible including:
Using most of these alternative channels has an effect on program meaning, so we either have to perform an analysis before using a different kind of channel or the programmer can indicate that a different choice is appropriate/allowed.
In parallel programming, it is often necessary for one thread to have exclusive access to some resource while it is using that resource, to avoid a class of timing-dependent behaviour known as a "race condition" or just a "race". The regions of exclusive access are known as "critical sections" and are often clearly marked in a program. Exclusive access can be arranged in several ways. For example, one may 'acquire' (aka 'lock') a 'lock' (aka 'mutex') before starting to access the resource and 'release' (aka 'unlock') the lock after using the resource. Exclusive access may also be arranged by disabling pre-emption (such as interrupts) while in a critical section (i.e., a section in which exclusive access is required). In some circumstances, one might also use a 'lock free' mechanism where multiple users may use a resource but at some point during use (in particular, at the end), they will detect the conflict, clean up and retry.
Some examples of wanting exclusive access include having exclusive access to a hardware accelerator, exclusive access to a block of memory or exclusive access to an input/output device. Note that in these cases, it is usually not necessary to preserve the order of accesses to the resource.
The basic decoupling algorithm avoids introducing race conditions by preserving all ordering dependencies on statements that access non-replicated resources. Where locks have been inserted into the program, the basic decoupling algorithm is modified as follows:
Decoupling can be applied to any sequential section of a parallel program. If the section communicates with the rest of the parallel program, we must determine any ordering dependencies that apply to operations within the section (a safe default is that the order of such operations should be preserved). In other words, one of the nice properties of decoupling is that it interacts well with other forms of parallelization, including manual parallelization.
The decoupling algorithm generates sections of code that are suitable for execution on separate processors but can be executed on a variety of different execution engines by modifying the “back end” of the compiler. That is, by applying a further transformation to the code after decoupling to better match the hardware or the context we wish it to run in.
The most straightforward execution model is to execute each separate section in the decoupled program on a separate processor or, on a processor that supports multiple hardware contexts (i.e., threads), to execute each separate section on a separate thread.
Since most programs have at least one sequential section before the separate sections start (e.g., there may be a sequential section to allocate and initialize channels), execution will typically start on one processor which will later synchronize with the other processors/threads to start parallel sections on them.
In the context of an embedded system and, especially, a System on Chip (SoC), some of the data processing may be performed by separate processors such as general purpose processors, digital signal processors (DSPs), graphics processing units (GPUs), direct memory access (DMA) units, data engines, programmable accelerators or fixed-function accelerators. This data processing can be modelled as a synchronous remote procedure call. For example, a memory copy operation on a DMA engine can be modelled as a function call to perform a memory copy. When such an operation executes, the thread will typically:
This mode of execution can be especially effective because one 'control processor' can keep a number of accelerators busy, with the control processor possibly doing little more than deciding which accelerator to start next and on what data. This mode of execution can be usefully combined with all of the following forms of execution.
Instead of a multiprocessor or multithreaded processor, one can use a thread library, operating system (OS) or real time operating system (RTOS) running on one or more processors to execute the threads introduced by decoupling. This is especially effective when combined with the use of accelerators because running an RTOS does not provide parallelism and hence does not increase performance but using accelerators does provide parallelism and can therefore increase performance.
Instead of executing threads directly using a thread library, OS or RTOS, one can transform threads into an ‘event-based’ form which can execute more efficiently than threads. The methods can be briefly summarized as follows:
Together, these transformations can be viewed as a way of transforming a thread into a state machine with each context switch point representing a state and the code that continues execution from each context switch point viewed as a transition function to determine the next state. Execution of transformed threads can be viewed as having been transformed to an event-based model where all execution occurs in response to external events such as responses from input/output devices or from accelerators. It is not necessary to transform all threads: event-based execution can coexist with threaded execution.
Transforming threads as described above to allow event-based execution is a good match for applications that use accelerators that signal task completion via interrupts. On receiving an interrupt signalling task completion the following steps occur:
Transforming threads as described above is also a good match for polling-based execution where the control processor tests for completion of tasks on a set of accelerators by reading a status register associated with each accelerator. This is essentially the same as interrupt-driven execution except that the state of the accelerators is updated by polling and the polling loop executes until all threads complete execution.
Distributed scheduling can be done in various ways. Some part of a program may be simple enough that it can be implemented using a simple state machine which schedules one invocation of an accelerator after completion of another accelerator. Or, a control processor can hand over execution of a section within a thread to another processor. In both cases, this can be viewed as a RPC like mechanism (“{foo( ); bar( )@P0;}@P1”). In the first case, one way to implement it is to first transform the thread to event-based form and then opportunistically spot that a sequence of system states can be mapped onto a simple state machine and/or you may perform transformations to make it map better.
Two aspects are described in this section: 1) the use of a priority mechanism and 2) the use of a non-work-conserving scheduler in the context of decoupling.
If a system has to meet a set of deadlines and the threads within the system share resources such as processors, it is common to use a priority mechanism to select which thread to run next. These priorities might be static or they may depend on dynamic properties such as the time until the next deadline or how full/empty input and output queues are.
In a multiprocessor system, using a priority mechanism can be problematic because, at the instant that one task completes, the set of tasks available to run next may be too small to make a meaningful choice; better schedules result if one waits a small period of time before making a choice. Schedulers that deliberately wait in this way are known as non-work-conserving schedulers.
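A minimal sketch of such a non-work-conserving choice, assuming an earliest-deadline dynamic priority and a hypothetical sleep_us() helper:

#include <stddef.h>
#include <stdint.h>

struct task { uint64_t deadline_us; int ready; };

extern void sleep_us(uint64_t us);

/* Wait a short grace period so more candidates can become ready, then pick
 * the ready task with the earliest deadline (a dynamic priority). */
struct task *pick_next(struct task *tasks, size_t n, uint64_t grace_us)
{
    sleep_us(grace_us);          /* non-work-conserving: deliberately idle briefly */

    struct task *best = NULL;
    for (size_t i = 0; i < n; i++) {
        if (!tasks[i].ready)
            continue;
        if (best == NULL || tasks[i].deadline_us < best->deadline_us)
            best = &tasks[i];
    }
    return best;                 /* may be NULL if nothing became ready */
}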
A long-standing problem of parallelizing compilers is that it is hard to relate the view of execution seen by debug mechanisms to the view of execution the programmer expects from the original sequential program. Our tools can take an execution trace obtained from running a program on parallel hardware and reorder it to obtain a sequential trace that matches the original program. This is especially applicable to, though not limited to, coarse-grained parallelization methods such as ours.
To achieve complete reconstruction, it helps if the parallelizing compiler inserts hints in the code that make it easier to match up corresponding parts of the program. In the absence of explicit hints, it may be possible to obtain full reconstruction using debug information to match parts of the program.
When there are no explicit hints or debug information, partial reconstruction can be achieved by using points in the program that synchronize with each other to guide the matching process. The resulting trace will not be sequential but will be easier to understand. A useful application is to make it simpler to understand a trace of a program written using an event-based programming style (e.g., a GUI, interrupt handlers, device drivers, etc.).
Partial reconstruction could also be used to simplify parallel programs running on systems that use release consistency. Such programs must use explicit memory barriers at all synchronization points so it will be possible to simplify traces to reduce the degree of parallelism the programmer must consider.
One simple case of this is reconstructing a ‘message passing’ view of bus traffic.
HP has been looking at using trace to enable performance debugging of distributed protocols. Their focus is on data mining and performance, not on reconstructing a sequential trace.
http://portal.acm.org/citation.cfm?id=945445.945454&dl=portal&dl=ACM&type=series&idx=945445&part=Proceedings&WantType=Proceedings&title=ACM%20Symposium%20on%20Operating%20Systems%20Principles&CFID=11111111&CFTOKEN=2222222
Suppose we can identify sections of the system execution, and we have a trace that lets us identify when each section was running, together with either a trace of the memory accesses each section performed or, from knowing properties of some of the sections, knowledge of what memory accesses they would perform without needing a trace. The sections we can identify might be:
We can summarize the memory accesses of each section in terms of the input data and the output data (what addresses were accessed and, perhaps, what values were read or written).
Given a sequence of traces of sections, we can construct a dynamic dataflow graph where each section is a node in a directed graph and there is an edge from a node M to a node N if the section corresponding to M writes to an address x and the section corresponding to N reads from address x and, in the original trace, no write to x happens between M's write to x and N's read from x.
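A sketch of constructing such a graph from a trace of section accesses; the trace record format, the toy address mapping and the fixed-size tables are illustrative assumptions:

#define MAX_SECTIONS 64
#define MAX_ADDRS    4096

struct access { int section; int is_write; unsigned addr; };

/* edge[m][n] != 0 means the section of node M wrote data later read by N */
static int edge[MAX_SECTIONS][MAX_SECTIONS];
static int last_writer[MAX_ADDRS];          /* -1 = no write to this address yet */

void build_dataflow(const struct access *trace, int len)
{
    for (int a = 0; a < MAX_ADDRS; a++)
        last_writer[a] = -1;

    for (int i = 0; i < len; i++) {
        unsigned a = trace[i].addr % MAX_ADDRS;   /* toy address map */
        if (trace[i].is_write) {
            last_writer[a] = trace[i].section;    /* M writes to x */
        } else {
            int w = last_writer[a];               /* last write to x before this read */
            if (w >= 0 && w != trace[i].section)
                edge[w][trace[i].section] = 1;    /* add edge M -> N */
        }
    }
}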
This directed dataflow graph shows how different sections communicate with each other and can be used for a variety of purposes:
Many other uses exist.
The first part describes what is needed in the general case of a program that has been parallelized, where one would like to serialize a trace from a run of the parallel program based on some understanding of what transformations were done during parallelization (i.e., knowing how different parts of the parallel program relate to the original program). The second part describes how to do this specifically when the parallelization process included decoupling. The sketch describes the simplest case in which it can work, but it is possible to relax these restrictions significantly.
Here is a brief description of what is required to do trace reconstruction for decoupled programs. That is, to be able to take a trace from the decoupled program and reorder it to obtain a legal trace of the original program.
Most relevant should be conditions 1-9, which say what we need from trace. Where the conditions do not hold, there need to be mechanisms to achieve the same effect or a way of relaxing the goals so that they can still be met. For example, if we can only trace activity on the bus and two kernels running on the same DE communicate by one leaving the result in DE-local memory and the other using it from there, then we either add hardware to observe accesses to local memories, or we tweak the schedule to add a spurious DMA copy out of the local memory so that it appears on the bus, or we accept that this kind of activity will not be visible in the reconstructed trace.
Conditions 10 onwards relate mainly to what decoupling aims to achieve. Some earlier conditions are also relevant, such as conditions 5 and 6, because in practice it is useful to be able to relax them slightly. For example, condition (5) says that kernels have exclusive access to buffers, but it is clearly acceptable to have multiple readers of the same buffer. It would also be acceptable (in most real programs) for two kernels to (atomically) invoke ‘malloc’ and ‘free’ in the middle of the kernels, even though the particular heap areas returned will depend on the precise interleaving of those calls, and it may even be acceptable for debugging printfs from each kernel to be ordered by that interleaving.
Initial assumptions (to be relaxed later):
Consequences of (1)-(2): We can derive which kernel instance is running on any processor at any time.
Consequences of (1)-(4): We can identify each transaction with a kernel instance and we can see all transactions a kernel performs.
Consequences of (1)-(6): Given a trace consisting of the interleaved transactions of a set of kernel instances, we can reorder the transactions such that all transactions of a kernel are contiguous and the resulting trace satisfies all read after write data dependencies. That is, we can construct a sequentially consistent view of the transactions as though kernels executed atomically and sequentially.
Note that there may be many legal traces. For example, if A writes (only) to address 0 and then 1, and B writes (only) to address 2 and then 3, then the trace ‘0,2,1,3’ could be reordered to ‘0,1,2,3’ or to ‘2,3,0,1’.
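A sketch of one such reordering, assuming the read-after-write edges between kernel instances are already known (for example from the dataflow construction above); emit_kernel() stands for appending that kernel's transactions, kept contiguous, to the new trace:

#define MAX_KERNELS 64

extern int  dep[MAX_KERNELS][MAX_KERNELS];  /* dep[m][n]: n reads data written by m */
extern void emit_kernel(int k);             /* append kernel k's transactions, contiguously */

/* Emit kernels in a topological order of the dependency graph; any such order
 * is a legal sequentially consistent trace (there may be several, as in the
 * example above).  Assumes the graph is acyclic, as it is when kernels can be
 * viewed as executing atomically. */
void reorder_trace(int nkernels)
{
    int indegree[MAX_KERNELS] = {0};
    int done[MAX_KERNELS] = {0};
    int emitted = 0;

    for (int m = 0; m < nkernels; m++)
        for (int n = 0; n < nkernels; n++)
            if (dep[m][n])
                indegree[n]++;

    while (emitted < nkernels) {
        int progress = 0;
        for (int k = 0; k < nkernels; k++) {
            if (done[k] || indegree[k] != 0)
                continue;
            emit_kernel(k);                 /* all producers of k already emitted */
            done[k] = 1;
            emitted++;
            progress = 1;
            for (int n = 0; n < nkernels; n++)
                if (dep[k][n])
                    indegree[n]--;
        }
        if (!progress)
            break;                          /* defensive: a cycle would indicate a bad trace */
    }
}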
Consequences of (7)-(9): Given a trace of the state transitions and synchronizations, we can reorder them into any of the set of legal transitions those state machines could have made, where a transition is legal if it respects synchronization dependencies.
Consequences of (1)-(9): Given a trace of all kernel transactions and all state transitions and synchronizations, we can reorder them into any legal trace which respects the same synchronization dependencies and data dependencies.
The challenge of trace reconstruction is to show that, if you decouple a program, then the following holds. (Actually, this is what you want to show for almost any way you may parallelize a program.)
(We further assume that we know how to do this reordering and how to identify equivalent points.)
Consequences of (1)-(10): We can reorder any trace to match a sequential version of the same program.
To show that decoupling gives us property (10) (i.e., that any trace of the decoupled program can be reordered to give a trace of the original program and to show how to do that reordering), we need to establish a relationship between the parallel state machine and the master state machine (i.e., the original program). This relationship is an “embedding” (i.e., a mapping between states in the parallel and the master machines such that the transitions map to each other in the obvious way). It is probably easiest to prove this by considering what happens when we decouple a single state machine (i.e., a program) into two parallel state machines.
When we decouple, we take a connected set of states in the original and create a new state machine containing copies of those states but:
From this, it follows that the parallel machine can execute the same sequence as the original machine. To show that it must execute an equivalent sequence (i.e., that we can always reorder the trace), we need the following properties of decoupling:
Extensions of decoupling allow the programmer to indicate that two operations can be executed in either order even though there is a data dependency between them (e.g., both increment a variable atomically). This mostly requires relaxing the definition of what trace reconstruction is meant to do. One major requirement is that the choice of order does not have any knock-on effects on control flow.
Outline proof: Because they share the same control flow, the two threads perform opposing actions (i.e., a put/get pair) on channels in the same sequence as each other. A thread can only block on a get or a put if it has run ahead of the other thread. Therefore, when one thread is blocked, the other is always runnable.
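For concreteness, here is a sketch (using POSIX threads) of the kind of bounded channel assumed by the outline proof; chan_init() must be called before use and the capacity is an arbitrary choice:

#include <pthread.h>

#define CHAN_CAP 16

struct channel {
    int buf[CHAN_CAP];
    int head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t  not_empty, not_full;
};

void chan_init(struct channel *c)
{
    c->head = c->tail = c->count = 0;
    pthread_mutex_init(&c->lock, NULL);
    pthread_cond_init(&c->not_empty, NULL);
    pthread_cond_init(&c->not_full, NULL);
}

void chan_put(struct channel *c, int v)       /* blocks only if the producer has run ahead */
{
    pthread_mutex_lock(&c->lock);
    while (c->count == CHAN_CAP)
        pthread_cond_wait(&c->not_full, &c->lock);
    c->buf[c->tail] = v;
    c->tail = (c->tail + 1) % CHAN_CAP;
    c->count++;
    pthread_cond_signal(&c->not_empty);
    pthread_mutex_unlock(&c->lock);
}

int chan_get(struct channel *c)               /* blocks only if the consumer has run ahead */
{
    pthread_mutex_lock(&c->lock);
    while (c->count == 0)
        pthread_cond_wait(&c->not_empty, &c->lock);
    int v = c->buf[c->head];
    c->head = (c->head + 1) % CHAN_CAP;
    c->count--;
    pthread_cond_signal(&c->not_full);
    pthread_mutex_unlock(&c->lock);
    return v;
}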
Extensions of decoupling allow for the following:
A sufficient (and almost necessary) condition is that a put and a get on the same channel must not be inside corresponding critical sections (in different threads):
which means that the original code cannot have looked like this:
That is, extreme care must be taken if DECOUPLE occurs inside a critical section, especially when DECOUPLE annotations are inserted automatically.
A useful and safe special case is that all initialization code does N puts, a loop then contains only put/get pairs, and finalization code then does at most N gets. It should be possible to prove that this special case is safe.
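Assuming the chan_put/chan_get sketch above and a channel whose capacity is at least N+1, this safe special case might look like the following before decoupling (produce() and consume() are placeholder names, not from the original text):

#define N 4

extern struct channel ch;            /* assumed initialised with chan_init(&ch) */
extern int  produce(int i);
extern void consume(int v);

void pipeline(int iterations)        /* iterations assumed to be >= N */
{
    /* initialization: exactly N puts prime the channel */
    for (int i = 0; i < N; i++)
        chan_put(&ch, produce(i));

    /* steady-state loop: each iteration is a matched put/get pair */
    for (int i = N; i < iterations; i++) {
        chan_put(&ch, produce(i));
        consume(chan_get(&ch));
    }

    /* finalization: at most N gets drain what remains */
    for (int i = 0; i < N; i++)
        consume(chan_get(&ch));
}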
It might also be possible to prove the following for programs containing arbitrary puts and gets: if the original single-threaded program does not deadlock (i.e., never does a get on an empty channel or a put on a full channel), then neither will the decoupled program.
A long-standing problem of parallelizing compilers is that it is virtually impossible to provide the programmer with a start-stop debugger that lets them debug in terms of their sequential program even though it runs in parallel. In particular, we would like to be able to run the program quickly (on the parallel hardware) for a few minutes and then switch to a sequential view when we want to debug.
It is not necessary (and would be hard) to switch seamlessly from running parallel code to running sequential code, but it is feasible to change the scheduling rules to force the program to run only one task at a time. With compiler help, it is possible to execute in almost the sequence that the original program would have executed. With less compiler help, or where the original program was parallel, it is possible to present a simpler schedule than that of the original program. This method can be applied to interrupt-driven programs too.
This same method of tweaking the scheduler while leaving the application unchanged can be used to test programs more thoroughly. Some useful examples:
John Regehr did some work on avoiding interrupt overload by delaying and combining interrupts.
http://portal.acm.org/citation.cfm?id=945445.945454&dl=portal&dl=ACM&type=series&idx=945445&part=Proceedings&WantType=Proceedings&title=ACM%20Symposium%20on%20Operating%20Systems%20Principles&CFID=11111111&CFTOKEN=2222222
but that work is really about modifying the (hardware) scheduling of interrupts to have more desirable properties for building real-time systems, whereas we are more interested in:
Errors in concurrent systems often stem from timing-dependent behaviour. It is hard to find and to reproduce errors because they depend on two independently executing sections of code executing at the same time (on a single-processor system, this means that one section is pre-empted and the other section runs). The problematic sections are often not identified in the code.
Concurrent systems often have a lot of flexibility about when a particular piece of code should run: a task may have a deadline, or it may require that it receive 2 seconds of CPU in every 10-second interval, but tasks rarely require that they receive a particular pattern of scheduling.
The idea is to use the flexibility that the system provides to explore different sequences from those that a traditional scheduler would provide. In particular, we can use the same scheduler but modify task properties (such as deadlines or priorities) so that the system should still satisfy real time requirements or, more flexibly, use a different scheduler which uses a different schedule.
Most schedulers in common use are ‘work conserving schedulers’: if the resources needed to run a task are available and the task is due to execute, the task is started. In contrast, a non-work-conserving scheduler might choose to leave a resource idle for a short time even though it could be used. Non-work-conserving schedulers are normally used to improve efficiency where there is a possibility that a better choice of task will become available if the scheduler delays for a short time.
A non-work-conserving scheduler is useful for testing concurrent systems because it provides more flexibility over the precise timing of different tasks than a work-conserving scheduler does. In particular, we can exploit this flexibility in the following way:
In all these cases, the modification of the schedule should normally be done within the constraints of the real-time requirements of the tasks. For example, when a task becomes runnable, one might establish how much ‘slack’ there is in the schedule and then choose to delay the task by at most that amount. In particular, when exploring different phases, if the second event does not happen within that period of slack, then the first event must be delivered to the system, and we can hope to explore that phase the next time the event triggers.
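A sketch of the slack computation described above, with hypothetical now_us() and enqueue_at() helpers and a worst-case execution time field standing in for whatever admission data the real scheduler holds:

#include <stdint.h>
#include <stdlib.h>

struct rt_task {
    uint64_t deadline_us;   /* absolute deadline */
    uint64_t wcet_us;       /* worst-case execution time estimate */
};

extern uint64_t now_us(void);
extern void enqueue_at(struct rt_task *t, uint64_t start_time_us);

void on_task_runnable(struct rt_task *t)
{
    uint64_t now = now_us();
    uint64_t latest_start = t->deadline_us - t->wcet_us;        /* must start by here */
    uint64_t slack = latest_start > now ? latest_start - now : 0;

    /* Delay the task by a random amount within its slack so that different
     * runs explore different phases while still meeting the deadline. */
    uint64_t delay = slack ? (uint64_t)rand() % slack : 0;
    enqueue_at(t, now + delay);
}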
It is often useful to monitor which different schedules have been explored either to report to the programmer exactly what tests have been performed and which ones found problems or to drive a feedback loop where a test harness keeps testing different schedules until sufficient test coverage has been achieved.
When a sequential program is parallelized, it is often the case that one of the possible schedules that the scheduler might choose causes the program to execute in exactly the same order that the original program would have executed. (Where this is not true, such as with a non-preemptive scheduler, it is sometimes possible to insert pre-emption points into the code to make it true.)
If the scheduler is able to determine what is currently executing and what would have run next in the original program, the scheduler can choose to execute the thread that would run that piece of code. (Again, it may be necessary to insert instrumentation into the code to help the scheduler figure out the status of each thread so that it can execute them in the correct order.)
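A sketch of such a scheduler, assuming inserted instrumentation records, for each thread, the index that its next section had in the original sequential program (all names here are illustrative):

#include <stdbool.h>

#define NTHREADS 8

extern int  next_section[NTHREADS];  /* filled in by inserted instrumentation */
extern bool finished[NTHREADS];
extern void run_thread_until_next_section(int tid);

void sequential_replay(int total_sections)
{
    for (int want = 0; want < total_sections; want++) {
        /* Release only the thread whose pending section is the one the
         * original sequential program would have executed next. */
        for (int t = 0; t < NTHREADS; t++) {
            if (!finished[t] && next_section[t] == want) {
                run_thread_until_next_section(t);
                break;
            }
        }
    }
}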
Reordering the execution in this way can reduce the amount of trace reordering required, which might reduce the size of the window required, simplify the task of separating out the parallel streams of memory accesses, or even eliminate the need to reorder the trace at all.
Working with the whole program at once and following compilation through many different levels of abstraction allows us to exploit information from one level of compilation in a higher or lower level. Some examples:
Although illustrative embodiments of the invention have been described in detail herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments, and that various changes and modifications can be effected therein by one skilled in the art without departing from the scope and spirit of the invention as defined by the appended claims.
This application claims the benefit of U.S. Provisional Application No. 60/853,756, filed Oct. 24, 2006, the entire content of which is hereby incorporated by reference in this application.
Number | Date | Country
---|---|---
60853756 | Oct 2006 | US