1. Field of the Invention
This invention relates to a device comprising a communications stack, the stack including a scheduler. The device performs real-time DSP or communications activities.
2. Description of the Prior Art
Modern communications systems are increasingly complex, and this fact is threatening the ability of companies to bring such products to market at all. The pressure has been felt particularly by the manufacturers of user equipment terminals (colloquially, ‘UEs’) in the wireless telecommunications space. These OEMs now find that they must integrate multiple, packet-based standards (coming, in all likelihood, from a number of independent development houses) together on an underlying hardware platform, within an ever-shortening time-to-market window, without violating a relatively constrained resource profile (memory, cycles, power etc.). We refer to this unenviable predicament as the ‘multimode problem’.
The traditional stack development approach has sometimes been referred to as ‘silo based’, because of its extreme vertical integration between software and hardware, and the general lack of any ‘horizontal’ integration with other stacks.
This silo approach breaks down dramatically when confronted with the multimode problem, for a number of reasons, amongst which are the following:
The present invention is an element in a larger solution to the above problems, called the Communications Virtual Machine (“CVM™”) from Radioscape Limited of London, United Kingdom. Reference may be made to PCT/GB01/00273 and to PCT/GB01/00278.
The present invention, in a first aspect, is a device comprising a communications stack split into:
The likelihood of engine request state transitions describes the likely sequence of engines which the executives will impose; it may be represented as a table or matrix (generated during simulation) for each of several different executives. At the start of a time slice, the scheduler can in effect look forward in time to discern a number of possible schedules (i.e. sequences of future engines), assess the merits of each possible schedule using pre-defined parameters and weightings (e.g. memory and power utilisation), and then apply the schedule which is most appropriate given those parameters. The process repeats at the start of the next time slice. The scheduler therefore operates as a predictive scheduler.
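By way of illustration only, the per-timeslice loop just described might be sketched in C as follows. Everything here (the candidate structure, the utilisation figures, the 0.6/0.4 weightings) is invented for the sketch and is not the claimed scheduler itself.

```c
/* Illustrative sketch of the per-timeslice predictive loop: a few
 * candidate forward schedules are scored against weighted parameters
 * (here memory and power utilisation) and the best is dispatched.
 * All names and numbers are invented for illustration. */
#include <stdio.h>

#define N_CANDIDATES 3
#define SEQ_LEN      4

typedef struct {
    int    engines[SEQ_LEN];   /* candidate sequence of future engines   */
    double mem_util;           /* predicted peak memory utilisation 0..1 */
    double pwr_util;           /* predicted power utilisation 0..1       */
} Candidate;

static double score(const Candidate *c)
{
    const double w_mem = 0.6, w_pwr = 0.4;   /* design-time weightings */
    return w_mem * (1.0 - c->mem_util) + w_pwr * (1.0 - c->pwr_util);
}

int main(void)
{
    Candidate cand[N_CANDIDATES] = {
        { {1, 2, 3, 4}, 0.70, 0.50 },
        { {1, 3, 2, 4}, 0.45, 0.65 },
        { {2, 1, 3, 4}, 0.55, 0.55 },
    };
    int best = 0;
    for (int i = 1; i < N_CANDIDATES; i++)   /* assess each possible schedule */
        if (score(&cand[i]) > score(&cand[best]))
            best = i;
    printf("dispatching candidate %d (score %.3f)\n", best, score(&cand[best]));
    /* ...submit cand[best] to the RTOS, then repeat at the next time slice */
    return 0;
}
```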
The present invention is particularly effective in addressing the “multi-mode problem”: dynamically balancing the requirements of multiple communications stacks operating concurrently.
The scheduler may be a service of a virtual machine layer separating the engines from the executives: in an implementation, this is the CVM, which will be described later. A key feature of the CVM is that executives cannot invoke engines directly but only through the scheduler.
The scheduler may use engine resource utilisation profiles; these may cover both cycles and memory. The scheduler may decide which engine execution tasks are to be submitted to the underlying RTOS for execution, using how many RTOS threads and at what priority, at each logical timestep.
In an implementation, the scheduler operates a runtime scheduling policy comprising a heuristic forward scenario generator that takes a set of submitted immediate engine requests and generates an incomplete set of possible future scenarios, based upon the state transition information. The scheduler may operate a runtime scheduling policy comprising a set of planning metrics that can be used to evaluate each of the possible future scenarios, weighing up the relative importance of one or more of the following factors: (a) memory utilisation, (b) timeslice utilisation, (c) proximity to deadline, (d) power utilisation, and generating a single scalar score.
The planning metrics may reflect choices made at design time to weight the factors differently, for example, whether the device responds early or late to resource shortages.
The scheduler may operate a dispatcher that takes the highest scoring such scenario and schedules all forward non-contingent threads onto the underlying RTOS.
The scheduler may also be able to degrade system performance gracefully, rather than invoking a catastrophic failure, by failing some requests in a systematic manner.
The present invention will be described with reference to the accompanying Figures, in which:
The present invention will be described with reference to an implementation from Radioscape Ltd of London, United Kingdom: the CVM (Communications Virtual Machine).
1. Overview of Predictive Scheduling
We believe that the use of predictive scheduling policies, coupled to the CVM runtime and design and simulation tools, provides a valid solution to the multimode problem (i.e. where we have a number of independent executives, which must be scheduled over a single physical thread), while not sacrificing overall system efficiency.
Under the CVM, a communications stack is split up into engines (high resource transforms, which are either implemented in custom hardware or in DSP assembly code), and executives (the rest of the software, written in a hardware-neutral language such as C). Engines must utilise a standard argument-passing format, conform in behaviour to a published model, and provide a resource utilisation profile of themselves (for memory, cycles etc.). All executives, at runtime, must request engine execution exclusively through a shared CVM service, the scheduler; they may not invoke engines directly. Only the CVM scheduler may decide which of the requested tasks to forward to the underlying RTOS for execution, on how many RTOS threads, with what relative priority. Engines have run-to-completion semantics.
An approach that we believe provides a solution to the multimode problem, and which addresses the shortcomings just discussed, is termed predictive scheduling. Under this paradigm, engine request transition likelihood tables, constructed during simulation runs, are used, together with the called engines' resource utilisation profiles, to allow the scheduling policy at runtime to ‘look forward’ in time and dynamically balance the requirements of multiple concurrent stacks.
The technique may also be referred to as ‘stochastic’ because some of the engine request state transitions are probabilistic and may therefore be expressed as random variables. Additionally, the engine resource profiles themselves may be expressed stochastically, where for example the number of cycles required by a task is not simply a deterministic function of the dimensions of its inputs (consider, e.g., a turbo coder that will take more cycles to process a more corrupted input vector).
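A minimal sketch of such a probabilistic transition, drawn as a random variable from a row of a transition matrix, is given below; the three-engine matrix and its probabilities are invented for illustration.

```c
/* Sketch: engine request transitions as a random variable. Each row of
 * the (invented) matrix gives P(next engine | current engine); sampling
 * a row yields the stochastic next request. */
#include <stdio.h>
#include <stdlib.h>

#define N_ENGINES 3   /* e.g. 0 = FFT, 1 = Viterbi, 2 = turbo (illustrative) */

static const double trans[N_ENGINES][N_ENGINES] = {
    { 0.0, 1.0, 0.0 },    /* deterministic: engine 0 is always followed by 1 */
    { 0.0, 0.2, 0.8 },    /* probabilistic branch in the executive           */
    { 0.5, 0.0, 0.5 },
};

static int sample_next(int current)
{
    double u = (double)rand() / RAND_MAX, acc = 0.0;
    for (int next = 0; next < N_ENGINES; next++) {
        acc += trans[current][next];
        if (u <= acc)
            return next;
    }
    return N_ENGINES - 1;   /* guard against floating-point rounding */
}

int main(void)
{
    int e = 0;
    for (int step = 0; step < 10; step++) {   /* one stochastic forward walk */
        printf("engine %d -> ", e);
        e = sample_next(e);
    }
    printf("%d\n", e);
    return 0;
}
```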
1.1 Claimed Benefits of Predictive Scheduling and CVM
Our contention is: “that predictive scheduling under CVM should successfully generate valid serialised schedules for a significant class of ‘multimode problem’ scenarios where silo based approaches fail, and furthermore, that it should beat ‘simple RTOS’ scheduling approaches for such problems too.”
We additionally assert:
2. Overview of the CVM
The CVM is a combination of run-time middleware and design-time tools that together help users implement a development paradigm for complex communication stacks.
The underlying conceptual model for CVM is as follows. We assume that a communication stack (particularly at layer 1) may be decomposed into:
Unfortunately, most system designs have tended to centre around a ‘silo’ paradigm, according to which assumptions about HRF implementation, resource usage, call format and behaviour have been allowed to ‘leak out’ into the rest of the design. This has led to a number of quite unpleasant design practices taking root, all under the banner of efficiency. For example, knowing how long various HRFs will take to execute (in terms of cycles), and how much scratch memory each will require, it often becomes possible for the system designer to write a static schedule for scratch, allowing, for example, a common buffer to be used by multiple routines that do not overlap in time, thereby avoiding potentially expensive and non-deterministic calls to malloc( ) and free( ). However, such a design also tends to be highly fragile; should any of the HRFs be re-implemented (causing a modification in their resource profiles and/or timings), or should the underlying hardware change, or (worst of all!) should the stack be compelled to share those underlying resources (including memory) with another stack altogether (the multimode problem), then it is a virtual certainty that a ground-up redesign will be called for. Silo development is the embedded systems equivalent of spaghetti programming (where the hardwiring is across the dimension of resource allocation, rather than specifically program flow), and with the advent of complex, packet-based multimode problems, it has reached the end of its useful life.
2.1 CVM Makes HRFs Into Engines
The first step away from silo development that CVM takes is in the area of HRFs (high-resource functions). In a typical wireless communications stack, nearly 90% of the overall system resources are consumed in such functions. However, in systems developed without CVM, HRFs (such as an FFT, for example), tend to be quite variable across different implementations. This is illustrated in
The drawbacks here are:
CVM engines are HRFs with certain aspects standardized. This is illustrated in
In comparison with the HRF case just considered, the CVM engine has the following attributes:
2.2 CVM Schedules Engine Execution Via the Scheduler
Of course, having these nicely standardised HRFs in the form of engines is only part of the solution. We have now isolated most of our system's expensive processing inside commoditized components (engines) with known behaviour, standard APIs and profiled resource usage.
Yet all this would be for naught, from a resource scheduling point of view, if we allowed engines to be called directly by the high level code. This is because direct calls would, more or less, determine the underlying execution sequence and also the threading model. The latter point is critical for an efficient implementation. Even worse, under our CVM model of an engine, the caller would be responsible for setting up the appropriate memory (of both the scratch and persistent varieties) for the underlying engine, thereby quickly landing us back with explicit resource scheduling.
The CVM therefore takes the approach that engines must be called only via a middleware service: the scheduler. The scheduler effectively exists as a single instance across all executive processes and logical threads, and decides, utilising a plug-in scheduling policy, which of these requests are to be submitted for execution to the underlying RTOS, using how many RTOS threads, at what priority, at each logical timestep. This is shown conceptually in
2.3 CVM Tools and Design Flow
The overall design flow for the CVM is shown in
In an extreme bottom-up flow, DSP engineers would then use the engine development kit (EDK), integrated with the appropriate DSP development tool (e.g., Visual DSP++) to construct optimised engines for all of the required HRFs in the system. These would be conformance tested against the gold standards and then performance profiled using the EDK.
For an extreme top-down flow, the same DSP engineers would simply publish their expected ‘forward declared’ performance profiles for the necessary engines, but would not actually write them. Reality is likely to lie somewhere between these two extremes, with the majority of needed engines either already existing in engine form or simply requiring to be ‘wrapped’ and profiled, and with a few engines that do not yet exist (or have not yet been optimised) being forward declared.
Next, the designer would use the system designer to choose and deploy the appropriate number of instances of engine implementations against each required HRF from the executive. Then, a scheduling policy would be chosen using the system designer, and a traffic simulation executed. The results of this simulation would be checked against design constraints—and any mismatches would require either recoding of the ‘bottleneck’ engines, redesign with lowered functionality, or a shift in hardware platform or scheduler (and possibly a number of these).
Once a satisfactory result has been obtained (and multiple concurrent executives may be simulated in this manner), the executive developers can start to flesh out in more detail all of the necessary code inside the stack. As the executive is refined, traffic simulations should be continued to ensure that no surprising behaviour has been introduced (particularly where ‘forward declared’ engines have been used).
Finally, once all necessary engine implementations have been provided and the executive fully elaborated, an end deployment may be generated through the CVM system builder, which generates the appropriate runtime and also generates the makefiles to build the final system images.
3. The Multimode Problem
In the multimode problem case, we have a number of independent executives, which must be scheduled over a single physical thread. We have to assume that while engine resource profiles and engine call sequence transition probability maps may be available or in any case may be derived for these executives, no explicit deadline information is available (since we will probably be working from executives ‘imported’ into the CVM system initially, rather than code written explicitly for it; furthermore, the ‘event driven’ nature of processing means that it is very difficult in principle for executives to know how much absolute time remains to perform a process at any given point).
We assume that each executive is provided with a set of stimulus information for traffic-level simulation. Then the problem becomes deriving a valid serialised schedule for such a system at a specified loading, expressed in terms of a set of system parameters, such as the number of active channels, maximum throughput bitrate, etc. The ‘optimality’ of any such schedule will be constrained on the upper boundary by 100% limits on each of the resources (e.g., any schedule that uses 120% of the available memory at some point is invalid, or at least, requires further work to clarify its starvation behaviour), but below this point some weighting will determine the ‘goodness of fit’. For example, we may regard a serialised schedule that keeps memory allocation below 50% at all times as desirable, and so weight our overall metric appropriately (we shall have more to say about metrics shortly, and in particular, the difference between planning metrics and analysis metrics).
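The validity/goodness split just described might be sketched as follows; the resource figures, the 50% memory preference and the weighting are illustrative assumptions only.

```c
/* Sketch of the validity/goodness split: any timeslice over 100% of a
 * resource invalidates the schedule; below that, a weighting (here
 * favouring memory below 50%) grades the fit. Values illustrative. */
#include <stdio.h>

#define SLICES 4

typedef struct { double mem, cycles; } SliceLoad;   /* fractions of capacity */

static int is_valid(const SliceLoad *s, int n)
{
    for (int i = 0; i < n; i++)
        if (s[i].mem > 1.0 || s[i].cycles > 1.0)
            return 0;            /* e.g. 120% memory: invalid schedule */
    return 1;
}

static double goodness(const SliceLoad *s, int n)
{
    double g = 0.0;
    for (int i = 0; i < n; i++)                /* reward headroom; extra  */
        g += (s[i].mem < 0.5 ? 1.0 : 0.5)      /* credit for memory < 50% */
             * (1.0 - s[i].cycles);
    return g / n;
}

int main(void)
{
    SliceLoad sched[SLICES] = { {0.4, 0.7}, {0.45, 0.8}, {0.6, 0.6}, {0.3, 0.9} };
    if (is_valid(sched, SLICES))
        printf("goodness of fit: %.3f\n", goodness(sched, SLICES));
    return 0;
}
```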
3.1 Key Assumptions of the Multimode Problem
We make a number of assumptions for the stipulation of the design problem, as follows:
3.2 Steps in the Production of a Predictive Scheduling Policy
We now consider the various steps that will be followed in the production of a predictive scheduling policy. In overview, these are as follows (more detail is provided in the following text):
We shall now consider each of the above steps in a little more detail.
3.3 Generation of Initial ‘Framework’ Executives
We can begin thinking about the derivation of a successful predictive scheduling policy, once we have an understanding of the core algorithmic datapaths in our multimode system. This will have been derived from a prior analysis using a bit-true numerical simulator (such as RadioLab or SPW). It is assumed, in other words, that at the beginning of the analysis the system designer understands the primary HRFs that have to be ‘strung together’ in order to fulfil the requirements of the stack, and furthermore knows the bit widths at which each HRF must operate in order to satisfy the core engineering quality targets of the multimode system.
With this knowledge, it is assumed that the system engineer can put together a basic ‘framework’ executive, which will represent calls to all the major engine types required in an appropriate order, within the data, control and tracking planes of the modem.
These ‘proto-executives’ will probably not contain much in the way of detailed processing or inter-plane messaging at this stage, but are simply intended to represent the majority of the engine calls (and hence, by extension, resource loading) that will be imposed by the running system. It is assumed that the executives are written in a manner that renders them suitable for traffic simulation (in which engines called are not actually executed, in order to save time). Therefore, any data-dependent branches in the executive code will have to be written with polymorphs to be invoked during simulation runs. With this, and assuming that the system engineer is able to construct (or capture from a live system) a realistic stimulus set for each of the executives (for example, E1 and E2), the first phase of simulation proper may begin.
3.4 Derive State Transition Probability Matrix
At this point, we are not interested in (and nor do we necessarily have access to) the engine profiles for the underlying implementations. None of this really matters here—what we are after is an analysis of the algorithmic flow. Our goal is to build up an engine request probability matrix based upon the calls that are made, as is illustrated conceptually in
As may be appreciated, the derived matrix is sparse, with many ‘0’ transitions, and a number of ‘1’ transitions. However, in a typical stack with branching there will be some probabilities between 0 and 1, which is the first introduction of stochastic behaviour into the system.
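A small sketch of the derivation follows: transitions observed in a simulated call trace are tallied and each row normalised into probabilities. The trace and engine identifiers are invented for illustration.

```c
/* Sketch: deriving the engine request probability matrix from a
 * simulated call trace, by counting observed transitions and
 * normalising each row. Trace and engine IDs are illustrative. */
#include <stdio.h>

#define N_ENGINES 3

int main(void)
{
    int trace[] = { 0, 1, 2, 0, 1, 1, 2, 0, 1, 2, 2, 0 };   /* engine IDs */
    int n = sizeof trace / sizeof trace[0];
    int count[N_ENGINES][N_ENGINES] = { {0} };
    int row_total[N_ENGINES] = { 0 };

    for (int i = 0; i + 1 < n; i++) {             /* tally observed hops */
        count[trace[i]][trace[i + 1]]++;
        row_total[trace[i]]++;
    }
    for (int from = 0; from < N_ENGINES; from++) {    /* normalise rows */
        for (int to = 0; to < N_ENGINES; to++)
            printf("%5.2f ", row_total[from]
                   ? (double)count[from][to] / row_total[from] : 0.0);
        printf("\n");       /* sparse: many 0s, some 1s, some in between */
    }
    return 0;
}
```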
Note that we must be careful in the way that we specify engine transitions, to determine their context: e.g. a complex 32-bit vector multiplier might be used in two quite different locations within a stack. Furthermore, with the assumptions of run-to-completion semantics that are now possible for imperative code in CVM at the plane level, state transitions (which are flattened) are not always the most informative representation: we may prefer to work with a hierarchical transition system with planar transitions at the highest level, component transitions below this and finally looking at engine transitions only within a fully resolved (and leaf level) plane/component ‘address’.
One subtle point: the modelling of the state transitions should include modelling of stimuli that are periodically emitted by sources; otherwise we will be missing a significant amount of detail from our forward world view, as incoming events (and their consequences) would take the scheduler ‘by surprise’ every time.
3.5 Generate Required Engine Resource Profiles
With the state transition probability matrices derived, the design may proceed to the next phase. For this, we will need to have real engine resource profiles for each of the types cited by the executives derived above (which, in turn, will require a view about the target hardware substrate for the engines; for simplicity, we'll assume that there is only a single processor of known type at the beginning of the project, since otherwise this would represent a significant dimension of the analysis).
There are, in effect, two ways to derive the resource profiles, and it is likely in any real project that some combination of the two will be employed. The first method involves actually having DSP engineers develop the optimised runtime code using the system development environment in conjunction with the CVM EDK (engine development kit), proving that this conforms to the required behaviour by comparing it with the same behavioural models used during the numerical simulations, and then profiling the performance of the engines (at least in terms of memory and cycles) against varying dimensions of input vector.
The second method involves DSP engineering staff (or the system engineer) making an ‘educated guess’ about the likely resource profile, and then simply forward declaring it; the idea being to determine (at an approximate level) whether the overall system makes sense, before committing to any significant engine development workload proper.
In either the ‘top down’ or ‘bottom up’ case, the resources required by an engine may be deterministic or stochastic (thereby introducing a second level of randomness into the overall scheduling mix). A turbo decoder is an example of a stochastic resource engine, whose cycle-loading is not expressible as a deterministic function of its input vector dimensions only (since the number of times it loops will depend upon the corruption of the data contents themselves).
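The distinction might be captured in a profile structure along the following lines; the cost formulae and the 95th-percentile margin are invented numbers, standing in for measured profiles.

```c
/* Sketch of deterministic vs. stochastic engine resource profiles.
 * The FFT's cycle cost is a pure function of input length; the turbo
 * decoder's is modelled with a spread (mean plus a 95th-percentile
 * margin), since its looping depends on data corruption. Numbers
 * are invented for illustration. */
#include <stdio.h>
#include <math.h>

typedef struct {
    double (*cycles_mean)(int dim);   /* expected cycles for input size dim */
    double (*cycles_p95)(int dim);    /* 95th-percentile cycles             */
} ResourceProfile;

static double fft_mean(int n)   { return 5.0 * n * log2((double)n); }
static double fft_p95(int n)    { return fft_mean(n); }     /* deterministic */

static double turbo_mean(int n) { return 40.0 * n; }        /* avg iterations */
static double turbo_p95(int n)  { return 40.0 * n * 2.5; }  /* heavy corruption */

int main(void)
{
    ResourceProfile fft   = { fft_mean,   fft_p95 };
    ResourceProfile turbo = { turbo_mean, turbo_p95 };
    int dim = 1024;
    printf("fft   %d: mean %.0f p95 %.0f\n", dim, fft.cycles_mean(dim),   fft.cycles_p95(dim));
    printf("turbo %d: mean %.0f p95 %.0f\n", dim, turbo.cycles_mean(dim), turbo.cycles_p95(dim));
    return 0;
}
```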
3.6 Provide Core Components of Runtime Scheduling Policy
With this developed, the key components of the runtime scheduling policy must next be put in place. The three main parts are as follows:
3.6.1 Heuristic Forward Scenario Generator
At any given time, the runtime scheduler will only have presented to it, by the various logical threads in the controlling executives, the very next deterministic engine request to be considered for execution. Happily, through the use of the transition matrices discussed above, coupled with the costs of engine execution available from the engine resource profiles, it becomes possible for the scheduler to derive a number of possible forward scenarios for evaluation.
However, even were it possible, we do not want to look ‘infinitely’ into the future, because this would cause a combinatorial explosion in the considered state space. Nor do we even want to look a uniform ‘fixed’ number of hops ahead, since some schedules may be more promising than others. The problem here is cognate to that faced by chess-playing software, which must consider the possible future consequences of various moves: not all possible outcomes will be considered (even within the constraints of, e.g., a 2-move ‘lookahead’), but rather a set of heuristics will be utilised to determine which scenarios should be expanded further. Our stochastic scheduling policy takes the same approach.
Indeed, the heuristics that are used for scenario generation may themselves be subject to optimisation as part of the overall development of the stochastic scheduling policy (since the purpose is to optimise performance of the final serialised schedule according to the analysis metric).
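A minimal sketch of such heuristic expansion is given below: branches are expanded from the transition matrix, but any branch whose cumulative probability falls beneath a threshold is pruned rather than explored to a uniform depth. The matrix, depth limit and threshold are all illustrative assumptions.

```c
/* Sketch of a heuristic forward scenario generator: forward engine
 * sequences are expanded from the transition matrix, pruning branches
 * whose cumulative probability drops below a threshold instead of
 * expanding uniformly to a fixed depth. Values illustrative. */
#include <stdio.h>

#define N_ENGINES  3
#define MAX_DEPTH  4
#define MIN_PROB   0.10     /* pruning heuristic: drop unlikely branches */

static const double trans[N_ENGINES][N_ENGINES] = {
    { 0.0, 1.0, 0.0 },
    { 0.0, 0.2, 0.8 },
    { 0.5, 0.0, 0.5 },
};

static void expand(int path[], int depth, double prob)
{
    int expanded = 0;
    if (depth < MAX_DEPTH) {
        for (int next = 0; next < N_ENGINES; next++) {
            double p = trans[path[depth - 1]][next] * prob;
            if (p < MIN_PROB)
                continue;                      /* heuristic pruning */
            path[depth] = next;
            expand(path, depth + 1, p);
            expanded = 1;
        }
    }
    if (!expanded) {            /* leaf: emit this candidate scenario */
        printf("scenario p=%.2f:", prob);
        for (int i = 0; i < depth; i++) printf(" %d", path[i]);
        printf("\n");
    }
}

int main(void)
{
    int path[MAX_DEPTH] = { 1 };   /* next deterministic request: engine 1 */
    expand(path, 1, 1.0);
    return 0;
}
```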
3.6.2 Develop Planning Metrics
With the scenario generation heuristics in place, the next required step is to provide a set of planning metrics. These are used to analyse the merits of each of the candidate scenarios produced by the generation heuristics, and ultimately to allow each to be represented by a single scalar ‘goodness’ value.
The overall domain for these planning metrics will probably span some or most of the following ‘objective’ measures, evaluated on a per-timeslice and per-timeslice-group basis:
A number of more heuristic metrics may also be employed. Referring back to our ‘chess software’ analogy, the objective metrics would be cognate to valuing outcome positions based on piece values, and the heuristics cognate to rules such as ‘bishops placed on open diagonals are worth more than ones that command fewer free squares’.
For each of the metrics, the system designer is able to set the transfer function curvature, determining, in effect, whether the system responds early or late to resource shortages; in addition, the system designer is able to determine the relative weights assigned to each of the planning metrics, which together add to give the final single scalar value. The overall situation is shown in
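A sketch of this shaping and weighting step follows; the power-law transfer function (with its curvature exponent) and the particular weights are assumptions chosen for illustration, not prescribed forms.

```c
/* Sketch of planning-metric shaping: each raw utilisation is passed
 * through a transfer function whose curvature sets whether the policy
 * 'panics early' (gamma < 1) or 'panics late' (gamma > 1), then the
 * shaped penalties are weighted into one scalar. Values illustrative. */
#include <stdio.h>
#include <math.h>

typedef struct { double weight, gamma; } Metric;

static double shaped_penalty(Metric m, double utilisation)   /* 0..1 in */
{
    return m.weight * pow(utilisation, m.gamma);
}

int main(void)
{
    Metric mem      = { 0.40, 0.5 };   /* panic early on memory   */
    Metric slice    = { 0.25, 2.0 };   /* panic late on timeslice */
    Metric deadline = { 0.25, 1.0 };
    Metric power    = { 0.10, 3.0 };

    double penalty = shaped_penalty(mem, 0.70)
                   + shaped_penalty(slice, 0.60)
                   + shaped_penalty(deadline, 0.30)
                   + shaped_penalty(power, 0.50);
    printf("scenario score: %.3f\n", 1.0 - penalty);   /* higher is better */
    return 0;
}
```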
3.6.3 Provide a ‘Lazy’ Recalculation Dispatcher
Having generated the scalar planning metrics for each of the candidate scenarios at a given timestep, the scheduling policy must select the optimal candidate under that metric, and then commit a number of engine requests to the underlying RTOS for execution. Note that at this point there may be multiple underlying RTOS threads assigned and multiple ‘parallel’ RTOS tasks scheduled. The stochastic policy is required to set the overall RTOS priority for these submitted tasks.
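A hedged sketch of the dispatch step is given below; rtos_submit() is an invented stand-in for the real RTOS call, and committing only the probability-1.0 prefix of the winning scenario is one plausible reading of ‘non-contingent’.

```c
/* Sketch of the dispatch step: commit only those forward tasks of the
 * winning scenario that are not contingent on a probabilistic branch
 * (probability 1.0 along the path), at a policy-chosen RTOS priority.
 * rtos_submit() is an invented stand-in for the real RTOS call. */
#include <stdio.h>

typedef struct { int engine; double path_prob; } PlannedTask;

static void rtos_submit(int engine, int priority)       /* stub */
{
    printf("submit engine %d at priority %d\n", engine, priority);
}

static void dispatch(const PlannedTask *plan, int n, int priority)
{
    for (int i = 0; i < n; i++) {
        if (plan[i].path_prob < 1.0)
            break;       /* contingent on a branch: defer to a later slice */
        rtos_submit(plan[i].engine, priority);
    }
}

int main(void)
{
    PlannedTask best[] = { {4, 1.0}, {7, 1.0}, {2, 0.8}, {5, 0.4} };
    dispatch(best, 4, 3);   /* engines 4 and 7 committed; 2 and 5 deferred */
    return 0;
}
```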
Having submitted the schedule, the dispatcher component has completed its job and the overall scheduler policy will return to the quiescent state. To keep the overheads of calculation as low as possible, it is assumed that:
3.7 Run Traffic Simulations and Apply the Analysis Metric
With the candidate stochastic scheduling policy in place, the next step is to run a set of traffic simulations against the (e.g.) E1 and E2 executives, and then to consider the final serialised schedules produced using an overall analysis metric. The serialised schedule represents a timeslice-by-timeslice record of which tasks were actually scheduled for processing. Note that it is assumed that E1 and E2 will be fed data from source drivers, which will simulate any appropriate relative frame time slippage and/or jitter over a large number of frames.
The analysis metric is the final arbiter of the ‘goodness’ of the scheduling fit, and should not be confused with the planning metrics, which are run-time heuristics applied with limited forward knowledge. The goal of the planning metrics is to optimise the overall analysis metric outcome for the concomitant schedule. Returning to our chess software analogy, the analysis metric would equate to the ratio of games won, drawn and lost; the planning metrics (such as ‘aim for positions that put your bishops on open diagonals, where possible’) to the heuristics that experience has shown tend to optimise the probability of achieving a win (or at least a draw). It is only with exhaustive lookahead that planning metrics and analysis metrics can be converged in form, so in general we aim only to converge them in effect.
The actual analysis metric used in practice will depend upon the system designer. One might simply regard any schedule that gives a fit as being good enough. A more sophisticated analysis, though, might use scripting to vary (e.g.) the number of channels and/or the bandwidth of the channels deployed, and then measure the schedule by the point at which the number of failed schedules (situations where denial of service occurs) exceeds a given maximum tolerance threshold. For example, we might stipulate that no more than 1 frame in 1000 of E1, or 1 frame in 100 of E2, be dropped, and then (assuming for simplicity that E2 is a fixed bandwidth service) increase the data rate through the E1 modem until this threshold is exceeded. The last ‘successful’ bandwidth could then be regarded as the output of the analysis metric, and used to compare two candidate scheduling policies.
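The bandwidth-ramping analysis just described might look as follows; simulate_drop_rate() is a toy stand-in for a full traffic simulation, and the capacity figures are invented.

```c
/* Sketch of the analysis-metric sweep: ramp the E1 data rate until the
 * simulated E1 frame-drop rate exceeds 1 in 1000 (E2 held at fixed
 * bandwidth); the last passing rate is the metric. simulate_drop_rate()
 * is a toy stand-in for a full traffic simulation. */
#include <stdio.h>

static double simulate_drop_rate(double e1_kbps)   /* toy load model */
{
    double load = e1_kbps / 2048.0;                /* invented capacity */
    return load < 0.8 ? 0.0 : (load - 0.8) * 0.01;
}

int main(void)
{
    const double max_drop = 1.0 / 1000.0;          /* 1 frame in 1000 */
    double last_ok = 0.0;
    for (double kbps = 64.0; kbps <= 4096.0; kbps += 64.0) {
        if (simulate_drop_rate(kbps) > max_drop)
            break;
        last_ok = kbps;                            /* still a valid schedule */
    }
    printf("analysis metric: %.0f kbps sustained\n", last_ok);
    return 0;
}
```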
3.8 Detect and Correct any Resource Conflicts
Starvation occurs when the executive's requests for engine processing cannot be met within the necessary overall deadlines (which are usually set implicitly by frame arrival rates into the modem, if not explicitly by ‘worst time to reply’ constraints within the standard itself).
Note that where multiple standards exist, they will ‘beat’ against one another unless their timings are locked (which will be fairly rare). Furthermore, this ‘phase offset’ will not necessarily precess regularly, as independent stochastic effects in routing, engine execution or both may occur within any of the compound executives. The system designer will need to use the stimulus scripts to get a good coverage of this underlying potential phase space (which should be plotted as an analysis metric surface). Assuming that this space is continuous, then a ‘coarse grid’ analysis may be performed first, with a more ‘zoomed in’ approach being taken where starvation effects occur. The space in general will be multidimensional; for example, a number of different considered deployments may represent another potential axis of exploration, as shown in
If, in this example, 0 were to represent the least acceptable overall analysis metric value, then we can see that for certain values of E1-E2 phase all deployments after number 4 have an unacceptable region of behaviour. The system designer would therefore wish to concentrate primarily on the acceptable deployments (for example, using more memory efficient engines, were that to be the bottleneck).
The CVM system designer tool will be used to explore the deployment state space. This process may itself be automated in a subsequent version of CVM.
When the simulation demonstrates an unacceptable level of an analysis metric being generated, the system designer has four main possible avenues of attack open:
3.9 Optimise the Planning Metrics
Once a relatively stable deployment has been attained, the designer can turn to the question of optimising the stochastic planning metrics. Both the transfer functions (curvature—do we ‘panic early’ or ‘panic late’ on a given resource) and the overall weights (used to combine together the various metric outputs into a single scalar) may be modified.
Again, we must remember that the overall purpose of our enquiry is to come up with a set of planning metrics that has the highest possible (and sufficiently high in an absolute sense) expected analysis metric outcome for its serialised schedules, without any ‘unacceptable’ cases as we range through the remaining free variables in the system (which, having fixed on a deployment in the previous step, will primarily refer to the relative phase of the multiple stacks as they ‘beat’ against one another). Going back to our chess program analogy, we are trying in this step to decide questions such as “what relative weight should we give to the ‘bishop on open diagonal’ rule (planning metric) if we want to optimise the system's probability of winning (analysis metric) against a player of a certain known skill, given 2 levels of lookahead?”
A number of different optimisation techniques may be used to climb the overall n-dimensional ‘hill’ (assuming that the results show it to be a continuous membrane!). Techniques such as simulated annealing and genetic algorithm selection are generally regarded as having good performance characteristics in this domain.
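A compact sketch of annealing over the metric weights is given below; evaluate() is a toy stand-in for a full traffic simulation returning the analysis metric for a weight vector, and its peak is placed arbitrarily.

```c
/* Sketch of tuning the planning-metric weights by simulated annealing.
 * evaluate() stands in for a full traffic simulation that returns the
 * analysis metric for a weight vector; the toy objective simply peaks
 * at an arbitrary point. All values illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define N_WEIGHTS 4

static double evaluate(const double w[N_WEIGHTS])   /* toy analysis metric */
{
    static const double peak[N_WEIGHTS] = { 0.4, 0.25, 0.25, 0.1 };
    double d = 0.0;
    for (int i = 0; i < N_WEIGHTS; i++)
        d += (w[i] - peak[i]) * (w[i] - peak[i]);
    return 25.0 - 100.0 * d;    /* e.g. sustained channels, net of costs */
}

int main(void)
{
    double w[N_WEIGHTS] = { 0.25, 0.25, 0.25, 0.25 };
    double cur = evaluate(w);
    for (double temp = 1.0; temp > 0.001; temp *= 0.995) {
        double trial[N_WEIGHTS];
        for (int i = 0; i < N_WEIGHTS; i++)        /* perturb the weights */
            trial[i] = w[i] + temp * 0.1 * ((double)rand() / RAND_MAX - 0.5);
        double score = evaluate(trial);
        /* accept improvements always; accept regressions with a
         * temperature-dependent probability (the annealing step) */
        if (score > cur ||
            exp((score - cur) / temp) > (double)rand() / RAND_MAX) {
            for (int i = 0; i < N_WEIGHTS; i++) w[i] = trial[i];
            cur = score;
        }
    }
    printf("tuned analysis metric: %.2f\n", cur);
    return 0;
}
```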
In all analyses of system performance, the resource requirements of the runtime scheduler itself must be taken into consideration, and that leads us to consideration of the final stage in the development of a stochastic policy.
3.10 Verify Performance Against Simple Scheduler
The analysis of the relatively complex runtime system must be considered against what would be achieved through the use of a more straightforward RTOS scheduler directly. The latter would not have the advantage of information about the resource requirements of engines prior to executing them, and nor would it have access to any ‘lookahead’ capability based upon the transition matrices; however, neither would it have the scenario generation and metric evaluation costs of the stochastic policy to contend with.
We have established in our discussion above the necessary tools to be able to answer the question of relative performance; we simply have to feed the same sample stimulus set into a model that uses the candidate predictive scheduling policy, and then repeat this test using a ‘direct mapped’ RTOS, perhaps with a policy such as first-come-first-served, or earliest-deadline-first. In this implementation, the CVM simply passes inbound engine requests directly to the RTOS scheduler (and would use a single thread priority as a first pass), rather than passing them through the stochastic machinery of scenario generation, planning metric analysis, and optimal scenario selection prior to any actual RTOS scheduling requests being issued.
In this analysis, we must be careful to analyse and factor in the overhead due to the scheduler itself. The use of run-to-completion semantics within multi-engine objects, taken together with the ‘lazy’ evaluation model discussed earlier, can help to lower this overhead significantly, by reducing the number of times that the expensive scenario generation is run.
In most cases, such an analysis will demonstrate significant benefit flowing from the use of the stochastic simulations, and this benefit will be quantified through the use of a common net-of-costs analysis metric.
Clearly, such a metric may also represent a very useful way for an organisation to express and prove the behaviour of its technology to customers, because it directly links to revenue: for example, if the analysis metric were to be ‘number of concurrent AMR voice channels sustained with <0.01 frame drop probability’, then we could (e.g.) state that our design obtained an analysis metric of 25 (channels), compared to (e.g.) a naïve design capable of only 10 channels on the same hardware. This would provide a direct value statement for the CVM runtime—it is nominally worth 15 channels (at some $/channel) in our example.
With the predictive policy built, optimised and validated, it can be shipped as part of a final system. It is not currently thought likely that any significant runtime ‘learning’ capability (i.e., in-situ updates to the transfer functions and weights of the planning metrics, or to the scenario generation logic) will take place in the initial release, but this may be appropriate for later versions of the CVM software.
4. Other Issues
Finally, there are a number of additional issues that are worth mentioning briefly.
4.1 Starvation Handling
Starvation occurs when necessary system processing does not occur in a timely fashion, because insufficient resources were available to schedule it. For a number of cases, a ‘smarter’ scheduling policy can produce significantly better performance, but ultimately, as loadings increase, there comes a point where even the most sophisticated policies cannot cope, and at this point the system has to be able to fail some of the requests in a systematic manner. Such failure might actually be part of the envisioned and accepted behaviour of the system—a necessary cost of existing in a bursty environment. The important thing is that the scheduler takes action and degrades the system performance gracefully, rather than invoking a catastrophic failure.
Doing this requires that the scheduler be able to propagate error ‘exceptions’ back to the requesting plane, which can then invoke the necessary handlers, ideally integrated with the methods which handle normal channel defaults.
4.2 Scheduling Modes
It is likely that, under analysis, we will find that a system (such as a basestation) may profitably be configured in a number of different distinct ‘modes’. For example, dealing with 1) a large number of fairly similar voice subscribers, 2) mixed traffic, and 3) a relatively small number of quite high volume data subscribers, might represent three modes for a basestation; a similar (traffic-graduated) analysis may prove applicable to handsets as well.
For this reason, we would like to be able to have executives communicate mode information to the underlying scheduler, which would keep ready a set of different transfer functions and weights to be swapped in for each specific mode.
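Such mode-keyed parameter swapping might be sketched as follows; the mode names and weight values are invented for illustration.

```c
/* Sketch of mode-dependent scheduling parameters: the executive signals
 * a traffic mode and the scheduler swaps in a pre-prepared weight set
 * (and, in the full scheme, transfer functions). Names illustrative. */
#include <stdio.h>

typedef enum { MODE_VOICE_HEAVY, MODE_MIXED, MODE_DATA_HEAVY } Mode;

typedef struct { double mem, slice, deadline, power; } Weights;

static const Weights mode_weights[] = {
    [MODE_VOICE_HEAVY] = { 0.30, 0.20, 0.40, 0.10 },  /* many small deadlines */
    [MODE_MIXED]       = { 0.35, 0.25, 0.25, 0.15 },
    [MODE_DATA_HEAVY]  = { 0.50, 0.30, 0.10, 0.10 },  /* big buffers dominate */
};

static const Weights *active = &mode_weights[MODE_MIXED];

static void set_mode(Mode m)        /* called on an executive's mode signal */
{
    active = &mode_weights[m];
}

int main(void)
{
    set_mode(MODE_DATA_HEAVY);
    printf("memory weight now %.2f\n", active->mem);
    return 0;
}
```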
4.3 Scheduling Hints
Similarly, we may want ‘intelligent’ executives to be able to pass scheduling ‘hints’, containing (for example) information about likely forthcoming engine requests, to enable more accurate decisions to be made by the CVM. In this sense, any data passed about proximity to deadlines from the executive to the scheduler constitutes a hint.
Appendix 1: CVM Definitions
The following table lists and describes some of the terms commonly referred to in this Detailed Description section. The definitions cover the specific implementation described and hence should not be construed as limiting more expansive definitions given elsewhere in this specification.
Foreign application priority data: GB 0212476.2, filed May 2002 (national).
PCT filing: PCT/GB03/02275, filed 5/27/2003 (WO); 371(c) date 5/9/2005.