The present disclosure is concerned with model-based testing of software.
When developing software, testing needs to be performed during the development stage to determine whether the software actually performs the desired functions. The most basic form of testing is trial and error: a program is created based on a set of high-level requirements, tested in the environment for which it is designed, and then re-designed as required based on the test results, in an iterative manner. This is, of course, time consuming, especially where the real-life environment for the software is not easily accessible for testing purposes, such as in the field of software for aerospace applications. Automated testing procedures have been developed to increase the efficiency of software design. Typically, in many fields, model-based testing is used.
Model-based software development and verification involves, first, specifying high-level requirements, i.e. defining in simple terms what the software is intended to do or achieve. These high-level requirements can be defined in the form of models, which can be tested and verified on the basis of a created control model. The verified models are then converted into source code, and the models are used to generate a test suite using a formal reasoning engine.
In simple terms, in model-based software development, the designer may create models using a standard modelling language that expresses information in a structure defined by a consistent set of rules. Representing the requirements and specifications of the software as models in this way enables automated analysis, source code generation and verification.
In developing software for the aerospace industry, for example, the model-based development standard DO-331 defines how models can be used to represent high-level requirements (specification models) and low-level requirements (design models). Such model-based software development is now common in the aerospace field and many others.
Once the high-level software requirements have been captured in specification models, they can be used to generate test cases.
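By way of purely illustrative example, the sketch below shows how a single high-level requirement might be captured as a small executable specification model and paired with a test case at the requirement boundary. The requirement, the signal names and the threshold are invented for illustration and do not come from the present disclosure.

    # Illustrative sketch only: a hypothetical specification model for one
    # high-level requirement, expressed as an executable Python class, and one
    # test case (an input/expected-output sequence) at the requirement boundary.

    class OverheatWarningModel:
        """Hypothetical requirement: raise the warning output when the
        temperature input exceeds 100 degrees for two consecutive samples."""

        def __init__(self):
            self.over_count = 0  # consecutive over-threshold samples seen so far

        def step(self, temperature: float) -> bool:
            self.over_count = self.over_count + 1 if temperature > 100.0 else 0
            return self.over_count >= 2  # warning output

    # A test case is a sequence of (requirement input, expected requirement output).
    test_case = [
        (95.0, False),   # below threshold: no warning
        (105.0, False),  # first over-threshold sample: not yet two in a row
        (110.0, True),   # second consecutive over-threshold sample: warning
    ]

    model = OverheatWarningModel()
    for temperature, expected in test_case:
        assert model.step(temperature) == expected

A test case of this kind is defined entirely at the boundary of the individual requirement; the difficulty addressed below is reproducing it once that requirement is embedded in a larger integrated model.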
Individual programs are often intended for use in combination with other programs and hardware, and testing individual programs will not necessarily indicate how those programs will function in combination with the other programs and hardware in the environment for which they are intended.
Conventionally, individual specification models (i.e. individual programs for individual requirements) are generated and tested as individual blocks, and these blocks are then combined into a line replaceable unit (LRU) model representing the overall system. A dedicated LRU test rig is then used to verify the correct implementation of the various models in combination.
The conventional method of testing assumes the LRU is brought to a target initial state by means of an initialisation sequence, i.e. a sequence of input assignments (a test vector sequence) applied at the boundary of the LRU.
However, the controllable inputs and observable outputs of the various components of the LRU do not necessarily correspond to the boundaries of the individual specification models as they were originally tested, since those models were tested in isolation from the other parts of the system with which they are integrated in the LRU. It is therefore necessary to find a way to manipulate the primary inputs to the LRU such that, starting from an initial state, the sequence of input assignments corresponding to a requirement test case is received at the boundary of that requirement and the actual requirement output can be verified to be equal to the expected test case output. The way in which the requirements interact within the LRU determines which input/output sequences can be observed at a requirement boundary (i.e. are realisable) and which cannot.
There is therefore a need for an improved manner of reliably testing individual requirements in the context of an LRU, by deriving test vectors at the LRU boundary that, when simulated on the LRU, reproduce the inputs and outputs of given test cases at the boundaries of the individual requirements, and by identifying test cases that cannot be realised, i.e. for which no such test vectors can be derived at the LRU boundary.
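As a purely illustrative sketch of this realisability question, the following fragment checks by bounded brute-force search whether a single requirement-level test step can be reproduced at the LRU boundary. The helper lru_step (a function taking an LRU state and a primary-input vector and returning the next state together with the signals visible at the requirement boundary), the candidate_inputs set, and the simplification that the requirement input and output are checked only at the final step of the searched sequence are all assumptions introduced here, not taken from the disclosure; a practical implementation would analyse the model with a formal reasoning engine rather than by enumeration.

    # Illustrative sketch only (not the claimed method): brute-force check of
    # whether one requirement-level test step is realisable at the LRU boundary.
    from itertools import product

    def realise_step(lru_step, state, candidate_inputs, req_input,
                     expected_output, max_depth=3):
        """Search for a short sequence of LRU primary-input vectors that, from
        `state`, delivers `req_input` at the requirement boundary and yields
        `expected_output`. Returns (vectors, new_state) or None if not found."""
        for depth in range(1, max_depth + 1):
            for vectors in product(candidate_inputs, repeat=depth):
                s = state
                for v in vectors[:-1]:          # drive the LRU towards the test step
                    s, _ = lru_step(s, v)
                s, signals = lru_step(s, vectors[-1])
                if (signals["req_input"] == req_input
                        and signals["req_output"] == expected_output):
                    return list(vectors), s
        return None  # not realisable within the search bound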
According to one aspect, there is provided a method of generating a test vector sequence for an integrated model including individual requirement software models and hardware models, the method comprising:
According to a second aspect, there is provided a system for generating a test vector sequence for an integrated model including individual requirement software models and hardware models, the system comprising:
In an embodiment, it is also determined whether or not the test case sequence is realisable and an indication thereof may be provided.
According to the method of the present disclosure, executable specification models of high level system requirements are combined with models of hardware behaviour into a functional integrated LRU model and a sequence of test vectors is generated for the LRU model.
Having an integrated LRU model, i.e. a model that integrates the specification models and the hardware models, enables an algorithm to be created that analyses the overall LRU for realisability and, where a test case is realisable, produces a sequence of primary inputs to the LRU model that realises it. This is based on the premise that, for any test case, a preceding test case may have been executed whose execution affects the starting state for that test case, leading to a test case sequence.
For the integrated model, the present disclosure provides a method that applies an initialisation sequence to bring the integrated LRU model to a known initial state and then generates a test vector sequence that produces the test case sequence if the sequence of test cases is realisable.
Preferably, the method also detects and reports if the test case sequence is not realisable.
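The sketch below illustrates, again purely by way of example and under the same assumptions as the previous fragment (including the hypothetical realise_step helper), how such a method might be organised: an initialisation sequence is applied to bring the model to a known initial state, each test case is then realised in turn, with the end state of one test case forming the starting state of the next, and any test case that cannot be realised is reported early.

    # Illustrative sketch only of the overall flow, assuming the hypothetical
    # realise_step helper above and an LRU model exposed as
    # lru_step(state, vector) -> (state, signals). The initialisation sequence
    # and test case structure are assumptions, not the disclosed method.

    def generate_lru_test_vectors(lru_step, init_state, init_sequence,
                                  candidate_inputs, test_case_sequence):
        """Return (lru_vectors, report): the LRU-boundary test vector sequence
        and a per-test-case realisability report."""
        # 1. Apply the initialisation sequence to reach a known initial state.
        state = init_state
        lru_vectors = list(init_sequence)
        for v in init_sequence:
            state, _ = lru_step(state, v)

        # 2. Realise each test case in order; the end state of one test case is
        #    the starting state of the next, giving a test case sequence.
        report = []
        for case_id, steps in test_case_sequence:
            for req_input, expected_output in steps:
                result = realise_step(lru_step, state, candidate_inputs,
                                      req_input, expected_output)
                if result is None:
                    report.append((case_id, "not realisable"))
                    break  # report early rather than waste further effort
                vectors, state = result
                lru_vectors.extend(vectors)
            else:
                report.append((case_id, "realised"))
        return lru_vectors, report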
In its preferred form, as shown in
This automatic generation of test vectors results in considerable time and cost savings and reduces errors compared to manual determination of test vectors. Also, since the algorithm makes an early determination as to whether a test case is realisable or not, time and resources are not wasted. The method allows automatic generation and verification of test vectors in a virtual environment before they are executed in a real-life implementation on actual physical test equipment.