1. Field of the Invention
This invention relates to a method of testing components designed to perform real-time, high resource functions.
2. Description of the Prior Art
Modern communications system developers are encountering a number of challenges:
The present invention is an element in a larger solution to the above problems, called the Communications Virtual Machine (“CVM™”) from Radioscape Limited of London, United Kingdom. Reference may be made to PCT/GB01/00273 and to PCT/GB01/00278.
The CVM addresses the following problems associated with the current design methodologies.
The present invention, in a first aspect, is a method of testing an engine designed to perform a real-time DSP or communications high resource function on an embedded target platform, the method comprising the steps of:
By allowing engines to be conformance tested against an accurate behavioural description of the high resource function (i.e. the reference engine), it is possible to validate and guarantee the behavioural equivalence of that engine to any other engine that passes the same conformance test. This gives the system designer the ability to choose from one of several different target platform/engine pairs when actually deploying a system: whichever platform he chooses, he will have an engine that runs on that platform and performs the high resource function in a way that is behaviourally the same as any other engine (specific to a different platform) he might have chosen. An engine should be expansively construed to cover any kind of component that implements a high resource function.
In an implementation, a desktop computer allows the conformance tests to be executed in the development environment of the target platform. A desktop computer should be expansively construed as any computer suitable for testing a component. The desktop computer may issue a conformance certificate that is securely associated with the engine if conformance tests for all vectors are passed. It is envisaged that this approach will allow a market for standardised, commoditised engines from numerous third party suppliers to arise (e.g. versions of the same engine, ported to run on different target platforms): the system designer can then select the most appropriate one, safe in the knowledge that it is behaviourally equivalent to a known reference.
The reference engine may be polymorphic in that it may handle a general data type covering a range of bit-widths and/or overflow behaviours: any engine whose specific data type is a sub-set of the general data type can therefore be conformance tested against that reference engine.
The engine may also be performance profiled by measuring its execution in order to build up a set of points on a multi-dimensional surface that can later be interpolated to make useful estimates of engine execution time and resource usage. The profile indicates how well the engine should perform in terms of processor cycles and memory loading given a range of input values, such as a set of input data lengths.
A reference engine is hence essentially an engine, usually based on polymorphic data type definitions, that runs on a PC and provides the standard behaviour against which the system designer can compare candidate engines. A comparison between a reference engine and a candidate engine (an engine to be evaluated) is possible because a reference engine provides two ways of assessing engine efficacy. These are:
With conformance testing, the system designer runs a series of tests that judge the correctness of the functionality of the candidate engine. Each test either passes or fails, and for an engine to gain conformance it must pass all the tests.
Performance profiling is a less stringent method of evaluation, in which the system designer builds up a profile of the performance that he thinks is acceptable in an engine. The profile that is created for a reference engine indicates how well the engine should perform in terms of processor cycles and memory loading given a range of input values, such as a set of input data lengths.
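As a sketch of how such a profile might later be interpolated, consider a one-dimensional profile of (input length, cycle count) points. The helper name and profile shape are illustrative assumptions; a real CVM profile may span several dimensions, not just input length.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Linear interpolation over profiled (input_length, cycle_count) points to
// estimate execution cost at an unmeasured input length.
double estimate_cycles(const std::vector<std::pair<double, double>>& profile,
                       double input_length) {
    for (std::size_t i = 1; i < profile.size(); ++i) {
        if (input_length <= profile[i].first) {
            double x0 = profile[i - 1].first, y0 = profile[i - 1].second;
            double x1 = profile[i].first,     y1 = profile[i].second;
            double t = (input_length - x0) / (x1 - x0);
            return y0 + t * (y1 - y0); // interpolate between bracketing points
        }
    }
    return profile.back().second; // clamp beyond the last measured point
}
```

Given measured points at lengths 64 and 128, a query at length 96 simply falls halfway between the two recorded cycle counts.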
Both the performance profile and the conformance tests are contained within scripts that the system designer generates when he creates a reference engine. As the reference engine created can be polymorphic, it is possible to run the tests on any candidate engine whose types fall under the wider category defined by the polymorphic types of the reference engine. So, there is no need for the system designer to write a different set of scripts for each platform he is writing a high resource function for.
Either the reference engine or the engine may be used to create a functional block, and a modelling environment is used to model the performance of the functional block.
In another aspect, there is a device capable of performing a real-time DSP or communications function, the device comprising an engine that has been conformance tested using the method defined above.
In a final aspect, there is a computer program for use with a development environment for a target platform, the computer program enabling the method defined above to be performed on a desktop computer.
The present invention will be described with reference to the accompanying Figures, in which:
1. Overview of the Communication Virtual Machine (CVM)
The CVM is a combination of run-time middleware and design-time tools that together help users implement a development paradigm for complex communication stacks.
The underlying conceptual model for CVM is as follows. We assume that a communication stack (particularly at layer 1) may be decomposed into:
Unfortunately, most system designs have tended to centre around a ‘silo’ paradigm, according to which assumptions about HRF implementation, resource usage, call format and behaviour have been allowed to ‘leak out’ into the rest of the design. This has led to a number of quite unpleasant design practices taking root, all under the banner of efficiency. For example, knowing how long various HRFs will take to execute (in terms of cycles), and how much scratch memory each will require, it often becomes possible for the system designer to write a static schedule for scratch, allowing a common buffer, for example, to be used by multiple routines that do not overlap in time, thereby avoiding potentially expensive and non-deterministic calls to malloc( ) and free( ). However, such a design also tends to be highly fragile; should any of the HRFs be re-implemented (causing a modification in their resource profiles and/or timings), or if the underlying hardware should change, or (worst of all!) if the stack should be compelled to share those underlying resources (including memory) with another stack altogether (the multimode problem), then it is a virtual certainty that a ground-up redesign will be called for. Silo development is the embedded systems equivalent of spaghetti programming (where the hardwiring is across the dimension of resource allocation, rather than specifically program flow), and with the advent of complex, packet based multimode problems, it has reached the end of its useful life.
1.1 CVM Makes HRFs into Engines
The first step away from silo development that CVM takes is in the area of HRFs (high-resource functions). In a typical wireless communications stack, nearly 90% of the overall system resources are consumed in such functions. However, in systems developed without CVM, HRFs (such as an FFT, for example) tend to be quite variable across different implementations. This is illustrated in
The drawbacks here are:
CVM engines are HRFs with certain aspects standardized. This is illustrated in
In comparison with the HRF case just considered, the CVM engine has the following attributes:
Of course, having these nicely standardised HRFs in the form of engines is only part of the solution. We have now isolated most of our system's expensive processing inside commoditized components (engines) with known behaviour, standard APIs and profiled resource usage.
Yet all this would be for naught, from a resource scheduling point of view, if we allowed engines to be called directly by the high level code. This is because direct calls would, more or less, determine the underlying execution sequence and also the threading model. The latter point is critical for an efficient implementation. Even worse, on our CVM model of an engine, the caller would be responsible for setting up the appropriate memory (of both the scratch and persistent varieties) for the underlying engine, thereby quickly landing us back with explicit resource scheduling.
The CVM therefore takes the approach that engines must be called only via a middleware service—the scheduler. The scheduler effectively exists as a single instance across all executive processes and logical threads, and decides, utilising a plug-in scheduling policy, which of these are to be submitted for execution to the underlying RTOS, using how many RTOS threads, at what priority, at each logical timestep. This is shown conceptually in
1.3 CVM Tools and Design Flow
The overall design flow for the CVM is shown in
In an extreme bottom-up flow, DSP engineers would then use the engine development kit (EDK), integrated with the appropriate DSP development tool (e.g., Visual DSP++) to construct optimised engines for all of the required HRFs in the system. These would be conformance tested against the gold standards and then performance profiled using the EDK.
For an extreme top-down flow, the same DSP engineers would simply publish their expected ‘forward declared’ performance profiles for the necessary engines, but would not actually write them. Reality is likely to lie somewhere between these two extremes, with the majority of needed engines either existing in engine form or requiring simply to be ‘wrapped’ and profiled, and with a few engines that do not yet exist (or have not yet been optimised) being forward declared.
Next, the designer would use the System Designer to choose and deploy the appropriate number of instances of engine implementations against each required HRF from the executive. Then, a scheduling policy would be chosen using the System Designer, and a traffic simulation executed. The results of this simulation would be checked against design constraints—and any mismatches would require either recoding of the ‘bottleneck’ engines, redesign with lowered functionality, or a shift in hardware platform or scheduler (and possibly a number of these).
Once a satisfactory result has been obtained (and multiple concurrent executives may be simulated in this manner), the executive developers can start to flesh out in more detail all of the necessary code inside the stack. As the executive is refined, traffic simulations should be continued to ensure that no surprising behaviour has been introduced (particularly where ‘forward declared’ engines have been used).
Finally, once all necessary engine implementations have been provided and the executive fully elaborated, an end deployment may be generated through the CVM system builder, which generates the appropriate runtime and also generates the makefiles to build the final system images.
2. Developing A Reference Engine
2.1 Bit True Engines
Essentially a reference engine, also called a Gold Standard Engine (GSE), is a type-dependent engine where the type T defines the numeric type to be used by an engine for its inputs, internal arithmetic, accumulator and outputs. For example, single precision floating point arithmetic may be used for the inputs to an engine, for the internal arithmetic and accumulator, and for the outputs from the engine. Alternatively, the type T used by an engine may, for example, be specified as fixed-point, with defined bit-widths, such as 16-bits for the engine inputs, 32-bits for internal arithmetic with a 40-bit accumulator, and 16-bits for the output. A type-dependent engine using type T may not mix arithmetic types, i.e. if an engine performs single precision floating point arithmetic, it may not use any other arithmetic type, such as double precision floating point or fixed point. Specifically, type T defines that the arithmetic used by an engine is of a particular type, such as single or double precision floating point, or fixed point with a defined rounding and overflow mode, and precise input, internal, accumulator and output bit-widths. The GSE will be written in C but, crucially, implemented in terms of fixed point Digital Signal Processor (DSP) recipes.
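As a concrete illustration of such a fixed-point type T, the following sketch implements a dot product with 16-bit inputs, 32-bit internal products, a 40-bit saturating accumulator and a 16-bit saturated output. The helper name is hypothetical and the saturation mode is an assumed stand-in for whichever overflow mode a real type T would specify.

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical sketch of a fixed-point "type T" dot product:
// 16-bit inputs, 32-bit products, 40-bit saturating accumulator,
// 16-bit saturated output (illustrative, not the CVM API).
int16_t dot_product_T(const int16_t* x, const int16_t* y, int n) {
    const int64_t ACC_MAX = (INT64_C(1) << 39) - 1; // 40-bit signed max
    const int64_t ACC_MIN = -(INT64_C(1) << 39);    // 40-bit signed min
    int64_t acc = 0;
    for (int i = 0; i < n; ++i) {
        int32_t prod = int32_t(x[i]) * int32_t(y[i]); // 32-bit internal product
        acc += prod;
        acc = std::min(ACC_MAX, std::max(ACC_MIN, acc)); // saturate to 40 bits
    }
    // Saturate the accumulator down to the 16-bit output width.
    int64_t out = std::min<int64_t>(INT16_MAX, std::max<int64_t>(INT16_MIN, acc));
    return int16_t(out);
}
```

Changing any of these widths (for example widening the accumulator to 48 bits) changes the bit-exact results, which is precisely why such a change forces re-testing for compliance.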
A candidate 3rd party engine will be declared to be Gold Standard Compliant if it is functionally (rather than analytically) equivalent to the GSE in terms of the “bit trueness” of the engine using type T arithmetic. Hence, if the engine's arithmetic should change, such as the accumulator's bit-width increasing from 40 bits to 48 bits, then the engine must be re-tested for Gold Standard Compliance. Gold Standard Compliance will be determined after a set of input vectors returns a set of output vectors which fall within a given target range. If the input vectors to the GSE and a 3rd party engine produce “equivalent” output, we say that the 3rd party engine is compliant in a bit true way with the Gold Standard.
2.2 Engines
There are three classes of engines: those that are behaviourally defined, those that are numerically defined, and those that process stochastic data, i.e. data subjected to statistical noise, where there is a generally unobtainable ideal result equal to the input sequence before the application of noise/distortion processes.
For a numerical engine, the optimal or most correct result will be obtained using infinite precision arithmetic (i.e. arithmetic that can simultaneously handle very large and very small numbers with no loss of precision), and can be expressed as (x Ξ_ideal y), where x and y are the input operands, and Ξ is the engine under consideration. Note that the result may be a scalar or a vector, and that the number of input operands may be more than the two used in the notation here.
Since an implementation of infinite precision arithmetic is realistically infeasible, using double precision floating point arithmetic will approach the result that may be obtained using infinite precision arithmetic, but with an inherent error. The result of executing an engine, Ξ, using double precision floating point arithmetic may be written (x Ξ y).
Since DSPs use a given numeric precision to perform arithmetic, some vendors define the result of an engine by using double precision floating point arithmetic internally, but converting the resultant double precision floating point engine output to the same numeric type, T, as used by a DSP (applying the relevant overflow and rounding modes). This result may be termed T(x Ξ y).
Our implementation of a GSE, however, will use arithmetic associated with type T throughout, and therefore the result will deviate from that obtained using either of the above two arithmetics. This form of an engine may be given by (T(x) Ξ_T T(y)) and will define the minimal acceptable performance of an engine using arithmetic of type T.
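The difference between T(x Ξ y) (double precision internally, result converted to T) and (T(x) Ξ_T T(y)) (type T throughout) can be sketched with a simple mean calculation. The helper names are hypothetical, and the rounding and overflow modes are simplified stand-ins for a real type T.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Quantise a double to a 16-bit fixed-point value with saturation,
// standing in for "convert to type T with overflow/rounding applied".
int16_t to_T(double v) {
    double r = v < 0 ? v - 0.5 : v + 0.5; // round half away from zero
    return int16_t(std::min(32767.0, std::max(-32768.0, r)));
}

// T(x Xi y): run in double precision, convert only the final result to T.
int16_t mean_quantised_result(const std::vector<int16_t>& x) {
    double sum = 0.0;
    for (int16_t v : x) sum += v;
    return to_T(sum / double(x.size()));
}

// (T(x) Xi_T T(y)): type-T arithmetic throughout (integer sum and divide).
int16_t mean_type_T(const std::vector<int16_t>& x) {
    int32_t sum = 0;                         // 32-bit internal arithmetic
    for (int16_t v : x) sum += v;
    return int16_t(sum / int32_t(x.size())); // truncating integer divide
}
```

For the input {1, 2} the two routes already disagree: double precision gives 1.5, which rounds to 2, while the truncating integer divide gives 1.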
However, unlike numerical engines, stochastic engines, such as a Viterbi decoder, have an ideal answer, (x Ξ_ideal y), which for a given implementation may not be obtainable, regardless of the arithmetic type used (including infinite precision). Nevertheless, it may be possible for an alternative implementation to approach this ideal answer more closely.
2.3 Behaviourally Defined
Behaviourally defined engines are those whose behaviour is deterministic, and that perform no mathematical operations that are dependent upon the precision used, i.e. their output is unaffected by the underlying precision and type.
For example, the interleaving process effectively transposes the contents of specified array elements with other specified array elements. Therefore, no type-dependent arithmetic is performed (neglecting the array index variables), and the result is either correct or incorrect. Using the previously defined notation, the expressions (x Ξ_ideal y), (x Ξ y), T(x Ξ y), and (T(x) Ξ_T T(y)) will all be exactly the same, provided that the type T has sufficient precision and dynamic range to completely represent the test vectors, and we neglect the engine's internal loop and array element indices.
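The precision-independence of such an engine can be illustrated with a minimal row/column block interleaver. This is an illustrative sketch, not the actual CVM interleaver: the behaviour depends only on the index permutation, never on the numeric precision of the samples.

```cpp
#include <cstdint>
#include <vector>

// Row/column block interleaver: values are written in row order and
// read out in column order, so only array indices are computed.
std::vector<int16_t> interleave(const std::vector<int16_t>& in,
                                int rows, int cols) {
    std::vector<int16_t> out(in.size());
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c)
            out[c * rows + r] = in[r * cols + c]; // pure element transposition
    return out;
}
```

Because the engine only moves elements, a candidate implementation either reproduces the permutation exactly or it does not; there is no tolerance band to consider.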
Thus, given a set of N test vectors, the Gold Standard Tester will be able to determine precisely for which test vectors the engine under test returns the correct result, and for which it returns an incorrect result.
2.4 Numerically Defined
Numerically defined engines are those whose behaviour is deterministic and whose functionality can be defined in terms of mathematical functions, and therefore the result is dependent upon the numeric type and precision used within the engine. For example, if we consider the calculation of the mean of a vector, deviation from the answer obtained using infinite precision arithmetic can arise from:
For this class of engine, the expressions (x Ξ_ideal y) and (x Ξ y) will not be the same, with (x Ξ_ideal y) representing the mathematically perfect result, and (x Ξ y) representing the realistically achievable result obtained executing the GSE using double precision floating point arithmetic.
There will be cases when the error, ε_d, between the two results will be zero, and there will be situations when this error has a positive value. ε_d is defined as the squared absolute error between the resultant vector obtained using infinite precision arithmetic, and that obtained executing the GSE using double precision floating point arithmetic:
ε_d = |(x Ξ y) − (x Ξ_ideal y)|².
However, executing the GSE in double precision floating point arithmetic, and converting the result to type T (applying the relevant overflow and rounding modes), may lead to a different result (dependent upon the input test vectors), T(x Ξ y), where the error term, ε_T, may be non-zero, and may be determined thus:
ε_T = |T(x Ξ y) − (x Ξ_ideal y)|².
Executing the engine using purely arithmetic of type T, i.e. (T(x) Ξ_T T(y)), will generally produce a different answer to both (x Ξ y) and T(x Ξ y), and the error term ε_TT will also, therefore, generally be non-zero, where ε_TT is defined as:
ε_TT = |(T(x) Ξ_T T(y)) − (x Ξ_ideal y)|².
These error terms may be expressed graphically as shown in
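All three error terms share the same form: a squared magnitude of the difference between a candidate result vector and a reference result vector. A minimal sketch, with a hypothetical helper name:

```cpp
#include <cstddef>
#include <vector>

// Squared L2 error between a candidate output vector and a reference
// output vector, as used for eps_d, eps_T and eps_TT above.
double squared_error(const std::vector<double>& candidate,
                     const std::vector<double>& reference) {
    double err = 0.0;
    for (std::size_t i = 0; i < candidate.size(); ++i) {
        double diff = candidate[i] - reference[i];
        err += diff * diff; // |.|^2 accumulates per-element squared differences
    }
    return err;
}
```

Each ε term is then this function applied with a different candidate (double precision, quantised result, or type-T-throughout) against the same reference.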
In practice, however, the result of a numerically defined engine is unknown for infinite precision arithmetic, and we must replace (x Ξ_ideal y) by (x Ξ y), where the engine Ξ is executed using double precision floating point arithmetic. Hence, we must consider ε_d to be zero (this is not strictly true, but since we cannot evaluate (x Ξ_ideal y), we must do so for numerically defined engines).
Therefore, ε_T becomes:
ε_T = |T(x Ξ y) − (x Ξ y)|²,
and ε_TT becomes:
ε_TT = |(T(x) Ξ_T T(y)) − (x Ξ y)|²,
and the
The error associated with a third party's implementation, ε_3rd party, must fall between 0 and ε_TT, which is the error associated with the minimal acceptable performance when using type T. In other words, since our implementation of the engine using type T defines the minimal acceptable performance of an engine using arithmetic of type T, the performance of a 3rd party's implementation of a given engine using arithmetic of type T must lie between that obtained using double precision floating point arithmetic and that obtained using our Gold Standard implementation, in order for the engine to be termed “Gold Standard Compliant”.
Hence, a 3rd party engine may be declared to be “Gold Standard Compliant” for type T if 0 ≤ ε_3rd party ≤ ε_TT for all test vectors, where ε_3rd party is defined, for numerical engines, as:
ε_3rd party = |(T(x) Ξ_T,3rd party T(y)) − (x Ξ y)|².
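The per-vector compliance decision is then a simple bounds check. This is an illustrative helper, not the Gold Standard Tester API:

```cpp
// Compliance check for a single test vector: the 3rd party's error must be
// non-negative and must not exceed the Gold Standard's own type-T error.
bool gold_standard_compliant(double eps_3rd_party, double eps_TT) {
    return eps_3rd_party >= 0.0 && eps_3rd_party <= eps_TT;
}
```

An engine is declared compliant only if this check passes for every test vector in the suite.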
2.5 Stochastic Engines
Stochastic engines are those whose behaviour may be defined, but is data dependent at run-time, and the performance of the engine on a given set of data is internal arithmetic type dependent. An example of such an engine is a Viterbi decoder, where the behaviour of the engine is defined, but the operation of the engine is data dependent, and the performance (in terms of resultant Bit Error Ratio) is dependent upon the internal arithmetic used (the number of bits used for soft decoding).
The common performance metric of a Viterbi decoder engine is Bit Error Ratio (BER), which is similar in principle to measuring the error between the output sequence and the desired output sequence (the error free sequence originally fed into the channel encoder). Hence, we can use the same error metrics as for behaviourally and numerically defined engines.
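A minimal BER computation over hard-decision bits might look as follows. This is an illustrative sketch with a hypothetical helper name:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Bit Error Ratio: fraction of decoded bits that differ from the
// original source bits fed into the channel encoder.
double bit_error_ratio(const std::vector<uint8_t>& decoded,
                       const std::vector<uint8_t>& source) {
    std::size_t errors = 0;
    for (std::size_t i = 0; i < decoded.size(); ++i)
        if (decoded[i] != source[i]) ++errors;
    return double(errors) / double(decoded.size());
}
```

Because this is again a distance between a candidate output and a reference sequence, the same squared-error machinery defined for the other engine classes applies directly.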
The ideal result of a stochastic engine is a known set of values, such as the input vector to a channel encoder (called the source vector). In order to evaluate the performance of the associated channel decoder, the output from the channel encoder must be corrupted, and fed into the channel decoder, with the aim of obtaining the original source vector at the output of the decoder. Hence, the ideal result is the original source vector. Another example is the known multipath channel that is estimated using channel estimator engines, where again the ideal result is an input vector to the channel modelling process.
This ideal result, defined as (x Ξ_ideal y), is generally unobtainable, though it may be approached to a given point. Since this ideal result is generally unobtainable, the “best” result that may be obtained using a given engine is achieved with double precision floating point arithmetic, and is termed (x Ξ y). The result obtained using type T throughout the engine, Ξ, is (T(x) Ξ_T T(y)).
For behaviourally and numerically defined engines it is necessary to use the engine result obtained using double precision floating point arithmetic as the baseline performance. However, for stochastic engines it is conceivable that a derivative of a given engine is developed which exceeds the performance of our engine using double precision floating point arithmetic, and approaches the ideal result more closely, and therefore the baseline must be the original source vector, i.e. (x Ξ_ideal y). Therefore, for stochastic engines the error calculations are given as:
ε_d,stochastic = |(x Ξ y) − (x Ξ_ideal y)|²,
ε_T,stochastic = |T(x Ξ y) − (x Ξ_ideal y)|²,
ε_TT,stochastic = |(T(x) Ξ_T T(y)) − (x Ξ_ideal y)|²,
ε_3rd party,stochastic = |(T(x) Ξ_T,3rd party T(y)) − (x Ξ_ideal y)|²,
and for a 3rd party's engine to be declared as “Gold Standard Compliant” the following condition must be satisfied, for all test vectors:
0 ≤ ε_3rd party,stochastic ≤ ε_TT,stochastic.
2.6 Gold Standard Compliance Test Vectors
There will be two different types of test vectors used to determine whether an engine is Gold Standard Compliant. The first set of test vectors will be individually designed to test specific situations that the engine may encounter. These situations will include:
1. Input boundary cases; the input vector(s) will be defined to contain boundary case values at the limits of the representable range of the engine's input type.
2. Internal boundary cases; the input vector(s) will be defined such that they will cause boundary case values to be encountered during the engine's internal processing.
3. Output boundary cases; the input vector(s) will be defined such that they will cause boundary case values to be encountered in the result of the engine's execution.
4. Internal underflow and overflow; the input vector(s) will be defined such that they cause underflow and overflow conditions to be encountered during the internal processing.
5. Stuck-at faults; the input vector(s) will be defined such that every bit used to represent a value will be tested for stuck-at-0 and stuck-at-1. This testing strategy is more commonly employed in digital circuit design, such as FPGAs, but may prove useful to include.
6. Special cases are engine dependent; the input vector(s) will be defined to test particular engines, to determine their capability to handle special inputs that can lead to incorrect answers. For example, there are certain sequences for which a fixed-point FFT may fail, but a floating point FFT may function correctly.
The second set of test vectors will be generated on-the-fly by the Gold Standard Engine Tester software, using a seeded pseudo-random sequence generator, and may consist of thousands of test vectors, each of which will be passed through the Gold Standard Engine and the 3rd party engine, in order to determine whether Gold Standard Compliance has been achieved. The use of a seeded pseudo-random number generator allows the same test vectors to be repeatedly generated without having to store the test vectors themselves; only the seed need be stored. The seed will have a default value, but may be changed by the user to vary the set of test vectors used.
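The reproducibility property can be sketched with a seeded generator. Here std::mt19937 is an illustrative stand-in for whatever pseudo-random sequence generator the Tester actually uses, and the helper name is hypothetical.

```cpp
#include <cstdint>
#include <random>
#include <vector>

// Reproducible test-vector generation: the same seed always regenerates the
// same vectors, so only the seed needs to be stored alongside the results.
std::vector<int16_t> make_test_vector(uint32_t seed, std::size_t length) {
    std::mt19937 gen(seed); // seeded pseudo-random sequence generator
    std::uniform_int_distribution<int32_t> dist(INT16_MIN, INT16_MAX);
    std::vector<int16_t> v(length);
    for (auto& s : v) s = int16_t(dist(gen));
    return v;
}
```

Re-running a failed compliance test therefore only requires the recorded seed, not an archive of the vectors themselves.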
For both sets of test vectors, a compliance report will be generated after the testing has been completed. This report will include the number of test vectors for which compliance was achieved, and for which it was not, and will list those test vectors on which it failed.
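The pass/fail bookkeeping behind such a report might be sketched as follows. The structure and names are hypothetical, not the actual report format:

```cpp
#include <cstddef>
#include <vector>

// Summarise a compliance run: count passes and collect the indices of the
// failing test vectors so the report can list them.
struct ComplianceReport {
    std::size_t passed = 0;
    std::vector<std::size_t> failed_vectors;
};

ComplianceReport summarise(const std::vector<bool>& results) {
    ComplianceReport r;
    for (std::size_t i = 0; i < results.size(); ++i) {
        if (results[i]) ++r.passed;
        else r.failed_vectors.push_back(i); // record which vectors failed
    }
    return r;
}
```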
The first set of test vectors will be primarily dependent upon the type T used for the engine's arithmetic, and as such the boundary values may be generated from the knowledge of the input, internal, accumulator and output bit-widths when using fixed point arithmetic, for example.
The second set of test vectors is not strictly dependent on the type T, but the values of the test vectors may need to be scaled to enable the full dynamic range of the types to be exploited.
As shown in
The test vectors for stochastic engines will also be generated as double precision floating point values, due to the distortion processes involved in the test vector generation. For example, the input signal to a Viterbi decoder may be a binary signal, from a channel encoder, corrupted by thermal noise, and therefore must be represented using double precision floating point values. However, when evaluating the GSEs and 3rd party engines using arithmetic of type T, the test vectors will be converted to type T. The ideal result, (x Ξ_ideal y), from a GSE such as a Viterbi decoder will be the source vector that was encoded and subsequently decoded by the Viterbi decoder, and therefore will be available for comparison purposes. Note that in
The Gold Standard Tester application will write to a log file the list of the test vectors for which the 3rd party's engine achieved Gold Standard Conformance, and the list of test vectors for which Gold Standard Conformance was not achieved.
Appendix 1
User Guide for CVM Engine Developers v 2.0
The aim of this Appendix 1 is to describe how you go about developing a CVM engine, so that it can become a fully functioning component of a CVM system.
The content is aimed at DSP programmers developing HRFs (High Resource Functions), such as FFTs, Viterbi encoders/decoders and FIR Filters.
So, what is a CVM engine? In simple terms it is a standardised set of files, including an executable, that performs an HRF. Put like this, there doesn't seem to be anything particularly remarkable about an engine. The power of an engine, though, comes from the fact that the infrastructure code that handles memory management and communication with the client code that needs to make use of the HRF is generated automatically—you can leave this peripheral functionality to the engine while you concentrate on writing the code that performs the HRF.
The role of the engine does not end with providing the infrastructure for your HRF development. Once you've written your code, you'll want to test it to make sure it does what you want it to do. Normally this is a rather expensive process, but with CVM you can test your engine for conformance to a standard to ensure that the results are correct before you go any further.
The standard against which you test an engine for conformance is known as a reference engine. A reference engine is a version of the same functionality you would expect to find in an engine, but instead of running on your DSP it runs on a Windows PC. You can write a reference engine to be polymorphic in that any given type can be quite general (such as a general integer type instead of, say, a 16-bit signed integer array or an 8-bit unsigned scalar). This polymorphism of the data types enables you to create many engines from a single reference engine—as long as each explicit data type in the engine fits into the polymorphic definition of its associated data type in the reference engine, the engine is capable of reproducing the functionality of the reference engine.
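In C++ terms, this polymorphism can be sketched as a routine written once against a general element type, with each concrete engine fixing the type explicitly. The names are illustrative, not the CVM API:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch of reference-engine polymorphism: the reference implementation
// is written against a general element type T.
template <typename T>
std::vector<T> scale(const std::vector<T>& input, T gain) {
    std::vector<T> out(input.size());
    for (std::size_t i = 0; i < input.size(); ++i)
        out[i] = input[i] * gain; // behaviour defined once, for any T
    return out;
}

// A concrete "engine" is just an instantiation with an explicit data type.
std::vector<int16_t> scale_int16(const std::vector<int16_t>& in, int16_t g) {
    return scale<int16_t>(in, g);
}
```

Each such instantiation can then be conformance tested against the single polymorphic reference, provided its explicit type fits within the general one.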
We have, though, missed something in going straight from reference engines to engines. In between the two, forming an interface between reference engine and engine, is an engine interface. There are two basic steps in going from a reference engine to engine that illustrate the part that an engine interface plays in CVM:
You can think of an engine interface as a template for engines. From an engine interface you can create an engine for a particular platform, taking from the engine interface its explicit data type definitions.
The first step of the workflow diagram
The case we're presenting in this diagram is the perhaps ideal one where you start off with a reference engine, and then create your engine interface and engine from that reference engine. This workflow might not meet your requirements, especially if you already have engine-based functionality that you are satisfied with and want to enclose this functionality in a CVM engine. In such a scenario you are free to create your engine interfaces and engines with no association to a reference engine at all. This means that you will not be able to test your engines for conformance and performance, but this won't be a problem for you if you are already comfortable with the capabilities of the algorithms that you intend to wrap up as engines.
In this guide we largely follow steps in the diagram
Let's look at each of the steps above in turn. Here we'll assume that you already have a reference engine and accompanying scripts for the HRF you want to model.
Step 2 is the creation of an XML file containing details of the engine interface for the HRF. You accomplish this step, and the two steps that follow it, on RadioScape's System Designer IDE (Integrated Development Environment), an example of which is shown at
The central window here is an editor for XML files. In this example we see an engine interface XML file (ComplexInt16FIRFilter) in the background and an engine XML file (Win2KComplexInt16FIRFilter) in the foreground. Creating your own XML files in this window is straightforward because the System Designer generates a skeleton XML file for whichever CVM element (such as an engine or engine interface) you want to produce. You simply fill in the fields where necessary.
For an engine interface the details in most of the fields will be taken from the reference engine the interface is based on. All you really need to do is to replace polymorphic types for the two main methods, Configure and Process, with true types. For instance, if there is just one polymorphic type, T1, which represents integer arrays, you might want to replace this with type Int16Array, which represents 16-bit integer arrays.
Perhaps a better way of looking at the way types operate in CVM is that they are really dependent on the hardware and it is the details of these types that you specify in the engine interface XML. If you then have a reference engine containing polymorphic representations of these types, you can then run the reference engine and select the engine on which to base the run-time behaviour, so that the reference engine inherits the characteristics, especially the typing, from the engine interface and is consequently able to simulate the engine's behaviour.
Step 3, to create an XML file containing details of the engine, is also done on the System Designer. Since a new engine must be based on an engine interface, most of the fields for an engine XML file are initially taken from the engine interface XML. You simply specify the platform you want to target the engine for.
Step 4 is the building of a C++ project from the engine XML. From your point of view this is a trivial task, involving the selection of a single menu option for the engine, but for the System Designer it involves generating a set of stub files that form the basis for the C++ project in which you are to write the processing of the HRF you are modelling.
Step 5 is the most labour-intensive part of the process. This is where you write the code that actually performs the HRF on the target platform. You do this coding in your regular DSP coding IDE; to facilitate it, CVM comes with an extensive set of API functions.
Step 6 is the testing of your engine to assess whether or not it conforms to the reference engine representation of the HRF functionality. You do this testing by invoking the EDK (Engine Development Kit) utility that plugs into your regular coding IDE. When you select the conformance option on EDK for a particular engine, EDK runs the conformance script for the reference engine that the candidate engine is based on, which means that all the tests in that script are applied to the candidate engine. If the engine fails any of the tests, it is deemed not to conform. If it passes all the tests, the candidate engine becomes a conformed engine, and is issued with a conformance certificate stating this.
Step 7, the profiling for performance, is very similar to testing for conformance, at least in the way you carry out the profiling. From EDK on your regular coding IDE you can choose a performance option for an engine that you select, which causes EDK to run the performance script for the reference engine that the conformed engine is based on. By running this script EDK records various performance indicators for the engine, in terms of processor efficiency and memory loading. These details are recorded on a performance certificate, which gives you a profile of the performance characteristics of the engine.
Step 8, the final step, is the publishing of the engine. This involves making that engine widely available so that it can be plugged into any CVM system.
Conformance Testing Your Engine
Although you can create an engine independently of a reference engine, if you want to be able to run tests that judge the correctness of the functionality of an engine, you must base the engine interface for your engine on a reference engine.
So, what exactly is a reference engine? In basic terms it is a polymorphic fixed-point bit-true behavioural description of an engine that runs on a PC. It provides the definitive statement of functionality for the type of engine you have been developing. Effectively, then, it is a behavioural version of your engine.
Part of the process of developing a reference engine is to create a conformance test script to go with it. This script, written in the interpreted language Python, should contain a set of tests designed to establish whether or not an engine conforms to the standard behaviour of the reference engine. These tests involve identical Configure and Process call sequences on the same data. A candidate engine, that is, an engine that you want to conform, must pass all the tests to become a conformed engine.
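The essence of such a script, identical Configure and Process call sequences applied to both engines with a bit-exact comparison of the outputs, can be sketched as below. The `ReferenceFIR` and `CandidateFIR` classes and the `conformance_test` harness are hypothetical stand-ins; a real conformance script would drive the engines through the EDK rather than plain Python objects.

```python
# Minimal sketch of a conformance test in the spirit of a CVM
# conformance script. The engine classes and the test harness are
# hypothetical; a real script would invoke the reference and candidate
# engines through the EDK.

class ReferenceFIR:
    """Bit-true behavioural model of a (trivial) FIR engine."""
    def configure(self, coeffs):
        self.coeffs = list(coeffs)

    def process(self, samples):
        out = []
        for i in range(len(samples)):
            acc = 0
            for j, c in enumerate(self.coeffs):
                if i - j >= 0:
                    acc += c * samples[i - j]
            out.append(acc)
        return out

class CandidateFIR(ReferenceFIR):
    """Stand-in for a target-platform engine under test; here it simply
    reuses the reference implementation, so the test passes."""
    pass

def conformance_test(reference, candidate, coeffs, vectors):
    """Apply identical Configure and Process call sequences on the same
    data to both engines; conform only if every output is identical."""
    reference.configure(coeffs)
    candidate.configure(coeffs)
    return all(reference.process(v) == candidate.process(v) for v in vectors)

vectors = [[1, 0, 0, 0], [1, 2, 3, 4]]
passed = conformance_test(ReferenceFIR(), CandidateFIR(), [1, 1], vectors)
print("conformed" if passed else "failed")  # prints "conformed"
```

In a real script the comparison must be bit-exact, since the reference engine is a fixed-point bit-true behavioural description; any deviation on any vector means the candidate does not conform.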
You run the conformance tests from your own DSP coding IDE (Integrated Development Environment) through a plug-in utility that we provide, the Engine Development Kit (EDK). If an engine you choose with EDK passes all the tests, CVM issues the engine with a file that shows the tests that have been passed. This file is known as a conformance certificate, since it certifies that your engine now conforms to a polymorphic standard.
Attributes for Conformance Certificate XML
The tags and attributes that are used in the conformance certificate XML file are explained in this section.
ConformanceCertificate
This tag is a container tag, which frames the conformance certificate description. The tag has a number of attributes, which are listed below. Additionally it will contain Signature and MetricTable tags.
Name
The name of the conformance certificate. This is identical to the name of the engine being tested for conformance.
Description
Describes what the conformance certificate is for.
Date
Gives the date when the conformance certificate was generated.
Script
Gives the name and location of the script that was used for conformance testing. This script will have been supplied to you by the reference engine developer.
Result
Briefly states the result of the conformance test—whether the engine passed or failed the test. If an engine failed the test, details of exactly which part of the test failed will generally be listed in the log file, described below.
Logfile
Gives the name and location of the log file generated when the conformance test was run. The log file will contain conformance test details; exactly what is listed in the log file will depend on the conformance test script supplied to you by the reference engine developer.
Signature
This frames the PGP (Pretty Good Privacy) signature used to ensure that the conformance test is valid.
PGPSignature
The PGP (Pretty Good Privacy) encrypted signature is used to ensure that the conformance test is valid. This is an important safeguard, enabling engine packages to be sold to third parties.
MetricTable
This is a container tag. It frames the metrics for one particular conformance test. The tag has a number of attributes, which are listed below. Additionally it will contain Entry tags.
The metric table is the place where the reference engine developer will place details about their engine's algorithmic performance, in addition to the simple pass/fail recorded in the Result attribute of the ConformanceCertificate tag above.
ParameterName
The name of the parameter that the reference engine developer has chosen to vary in order to examine the result metric.
ResultName
The name of the result metric.
Entry
This tag must be contained within a metric table tag. It contains the results for a single entry in the metric table. This tag has a number of attributes. It contains a parameter value and the value of the result metric at that parameter value.
ParameterValue
This is a value of the parameter specified in ParameterName.
ResultValue
This is the value of the result metric at the given parameter value.
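Putting the tags and attributes described above together, a conformance certificate might look like the fragment below. The tag names follow the descriptions in this section, but all attribute values are invented for illustration, and the signature content is elided.

```xml
<!-- Illustrative conformance certificate; tag names follow the
     descriptions above, but all attribute values are invented. -->
<ConformanceCertificate
    Name="Win2KComplexInt16FIRFilter"
    Description="Conformance of the Win2K FIR engine to its reference"
    Date="2003-05-27"
    Script="scripts/fir_conformance.py"
    Result="Passed"
    Logfile="logs/fir_conformance.log">
  <Signature>
    <PGPSignature><!-- PGP signature data elided --></PGPSignature>
  </Signature>
  <MetricTable ParameterName="NumberOfTaps" ResultName="MaxAbsError">
    <Entry ParameterValue="8"  ResultValue="0"/>
    <Entry ParameterValue="64" ResultValue="0"/>
  </MetricTable>
</ConformanceCertificate>
```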
Appendix 2: CVM Definitions
The following table lists and describes some of the terms commonly referred to in this Detailed Description section. The definitions cover the specific implementation described and hence should not be construed as limiting more expansive definitions given elsewhere in this specification.
| Number | Date | Country | Kind |
| --- | --- | --- | --- |
| 0212176.2 | May 2002 | GB | national |
| 0212524.3 | May 2002 | GB | national |
| Filing Document | Filing Date | Country | Kind | 371c Date |
| --- | --- | --- | --- | --- |
| PCT/GB03/02292 | 5/27/2003 | WO | | 5/6/2005 |