The illustrative embodiment of the present invention relates generally to model analysis and more particularly to the use of data obtained during model analysis to subsequently generate directed test vectors to identify errors in a deployed system.
The overall life cycle of engineered systems typically starts with requirements capture for a system model being designed and continues through the maintenance of the deployed real-life system based on the model. Frequently, the maintenance of the deployed system costs as much as its initial design, with the result that effective maintenance technologies have taken on increased importance.
Much of the present-day repair aspect of maintenance schemes relies on the use of measurement trees. Measurement trees are hierarchical, tree-like arrangements of acceptable operating parameters for the different components in a system. Starting from a root diagnosis as to which component may be faulty, a selection of measurements can be made on the system that is behaving erratically. Each measurement rules out a number of branches of the tree, which relate to different sub-components of the suspected faulty component. As a path along the tree is traversed by making additional measurements, a detected failure can ultimately be pinpointed to a replaceable component.
Conventionally, problems with deployed systems based on a computer-designed model have been diagnosed using two types of mechanisms: heuristics, and the previous experience of the user tasked with identifying the cause of the system fault. Some analysis is performed through a combination of the two mechanisms. Unfortunately, these approaches fail to take advantage of the information gathered during the design phase of the system, when the system model undergoes testing to make sure it satisfies the set of requirements for the system design.
The illustrative embodiment of the present invention exploits the data gathered about a system model during the system design phase to aid the identification of errors subsequently detected in a deployed system based on the system model. The present invention utilizes the coverage analysis from the design phase that is originally created to determine whether the system model as designed meets the specified system requirements. Included in the coverage analysis report is the analysis of which sets of test vectors utilized in simulating the system model excited individual components and sections of the system model. The present invention uses the information associated with the test vectors to select appropriate test vectors to use to perform directed testing of the deployed system so as to confirm a suspected fault.
In one embodiment, a method of identifying errors in a deployed system in a technical computing environment is performed and includes the step of testing a system model being designed with multiple sets of test vectors. The testing identifies a degree of model capture associated with each set of the multiple sets of test vectors. The multiple sets of test vectors are saved following testing. The method further includes the step of identifying an error in the functioning of a deployed system based on the system model using the test vectors from the previous analysis.
In another embodiment in an electronic device with a technical computing environment, a system for detecting errors in deployed systems based on system models includes a system model for a real system being designed in the technical computing environment. The system also includes a test system for testing the system model. The test system simulates the execution of the system model with multiple sets of test vectors. The multiple sets of test vectors and their associated data are saved following testing. The system additionally includes a coverage analysis report of the results of a simulation of the system model with the test system. The coverage analysis report is subsequently used to select at least one of the multiple sets of test vectors to perform directed testing to identify an error reported in a deployed system based on the system model.
In one embodiment, a method of identifying errors in a deployed system in a technical computing environment is performed and includes the step of providing a model coverage analysis report for a model identifying a degree of model capture associated with each of multiple sets of vectors. The method additionally includes the step of receiving a notification of an error in the functioning of a deployed system based on the system model. The method further includes the step of identifying an error in the functioning of a deployed system based on the system model using the model coverage analysis report and at least one of the multiple sets of vectors.
The present invention takes advantage of the increased data gathered about systems during their design in technical computing environments in order to perform subsequent fault identification and to validate repairs to the system. The increasing use of computer modeling in the different aspects of the design process has led to the growing availability of high-fidelity models that capture different characteristics of the system being designed. Additionally, the use of automatic code generation provides a very close link between the model and its implementation. The use of the data gathered from testing the system model thus enables subsequent operations performed on deployed systems to be conducted in a more rapid and efficient manner than was possible with the conventional methods used to diagnose faults.
Block diagrams are used to model real-world systems. Historically, engineers and scientists have utilized time-based block diagram models in numerous scientific areas to study, design, debug, and refine dynamic systems. Dynamic systems, which are characterized by the fact that their behaviors change over time, are representative of many real-world systems. A dynamic system (either natural or man-made) is a system whose response at any given time is a function of its input stimuli, its current state, and the current time. A block diagram model of a dynamic system is represented schematically as a collection of blocks interconnected by lines that represent signals. A signal represents the input and output of a dynamic system. Each block represents an elemental dynamic system. A line emanating at one block and terminating at another signifies that the output of the first block is an input to the second block. Each distinct input or output on a block is referred to as a port. Signals correspond to the time-varying quantities represented by each line connection and are assumed to have values at each time instant at which the connecting blocks are enabled. The source block of a signal writes to the signal at a given time instant when its system equations are solved. The destination blocks of this signal read from the signal when their system equations are being solved. The user 24 who is accessing the electronic device 2 may view the results of a simulation of the system model in the block diagram view 22 generated on the display 20. Upon successful completion of the design of the system model, the system model frequently will be the basis of a real-life deployed system 30.
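As a minimal sketch of the structure described above (not taken from the patent; all class and variable names are hypothetical), a block diagram model can be viewed as blocks with named ports, connected by signals that each join one source port to one destination port:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Block:
    """An elemental dynamic system with named input and output ports."""
    name: str
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)

@dataclass
class Signal:
    """A line from one block's output port to another block's input port."""
    source: Tuple[str, str]       # (block name, output port)
    destination: Tuple[str, str]  # (block name, input port)

# A gain block feeding an integrator: two blocks joined by one signal.
gain = Block("Gain", inputs=["u"], outputs=["y"])
integrator = Block("Integrator", inputs=["u"], outputs=["y"])
wire = Signal(source=("Gain", "y"), destination=("Integrator", "u"))
```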
The electronic device 2 may be any of a number of different types of computing devices equipped with a processor and able to execute computer instructions. For example, the electronic device 2 may be a server, client, workstation, laptop, mainframe, PDA, or other type of device equipped with a processor. The display 20, rather than displaying a block diagram view 22, may display some other type of user interface showing the results of the modeling of the system that is being performed to determine the degree to which the designed system satisfies the set of requirements for the model 10. The plurality of sets of test vectors 12 may correspond to the various requirements for the model 10 and are used as inputs to the designed system during the test of the system in order to determine how much of the model design is excited by the use of the different test vectors.
The coverage analysis performed during system design is performed to check whether the functionality embodied by the current state of the designed model satisfies the requirements set forth for the system. To perform the coverage analysis, a number of test vectors that may or may not correspond to requirement scenarios are used as input to the designed system and an analysis shows whether and how much of the design is excited/activated by the use of those particular test vectors. This information can be exploited to do directed testing when a fault has occurred as the functionality that is suspected to be affected by the fault can be optimally tested.
In addition to modeling the actual system, engineers have the ability to model the test system used to test the actual part. By using this model of the test system, users can develop test vectors to exercise parts of the model. The present invention thus may include two types of models: one model for the system under test, called the device under test (DUT), and one model for the test system itself. The user can then exercise the DUT by producing different test vectors. Each test vector will test different requirements and scenarios of the DUT.
The technical computing environment 4, 42, or 82 generates a model coverage analysis report for each of these sets of test vectors created in the combined model (the test system and the DUT) based on the recorded detail. Sample model coverage analysis reports are discussed below. These coverage analysis reports describe what parts of the DUT were tested by each test vector and the testing results. The results in the coverage analysis report may be returned as a percentage of time active. The results inform the user what parts of the model are being tested with each set of test vectors. Following the receipt of an error report from a deployed system based on the system model (an indication that the system is not working as expected and/or as designed), the illustrative embodiment of the present invention uses the coverage analysis report of the DUT to determine which sets of test vectors are necessary to perform a directed test for the suspected failed model component. Errors may be faults of different types, such as abrupt and incipient faults.
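As a rough illustration of the kind of information such a report carries (the layout and names below are hypothetical, not the patent's own format), the results can be viewed as a mapping from each DUT component to its percentage of active time under each set of test vectors:

```python
# Hypothetical coverage data: percentage of simulation time each DUT
# component was active for each set of test vectors. The BlockC row uses
# the figures discussed in the example below; the other rows are invented.
coverage_report = {
    "BlockA": {"test_vector_1": 80, "test_vector_2": 10, "test_vector_3": 40},
    "BlockB": {"test_vector_1": 60, "test_vector_2": 90, "test_vector_3": 20},
    "BlockC": {"test_vector_1": 25, "test_vector_2": 50, "test_vector_3": 75},
}
```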
The following table is an example of the coverage analysis report generated for each set of test vectors. The left column is a list of components found in the DUT and each column after that is the coverage number for a given test vector input to the test system.
Using these numbers, the user can now determine which test to run to find the problem in a more rapid manner. If the device comes in for maintenance and the suspected problem is BlockC, then the user will know to run test vector 3 because the coverage for BlockC is highest for this test vector (75%). In comparison, for the above test vectors, running test vector 1 results in BlockC being activated only 25% of the time. Similarly, running test vector 2 results in BlockC being activated only 50% of the time. If the execution of test vector 3 fails to diagnose the problem, the user may run test vector 2 and then test vector 1 until the problem is found. The coverage analysis report thus allows the test programs to direct the test sequence based on which test vectors are more likely to exercise the area of concern.
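A minimal sketch of that selection logic (a hypothetical helper, not an interface defined by the patent) simply orders the available test vector sets by their coverage of the suspected faulty component:

```python
def directed_test_order(coverage, suspect_component):
    """Return test vector set names ordered by descending coverage of the
    suspected faulty component (hypothetical helper)."""
    per_vector = coverage[suspect_component]
    return sorted(per_vector, key=per_vector.get, reverse=True)

# BlockC coverages from the example above: 25%, 50%, and 75%.
coverage = {"BlockC": {"test_vector_1": 25, "test_vector_2": 50, "test_vector_3": 75}}
print(directed_test_order(coverage, "BlockC"))
# -> ['test_vector_3', 'test_vector_2', 'test_vector_1']
```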
The coverage analysis reports may include coverage for many different types of areas, including lookup tables, whether particular states were entered in a STATEFLOW diagram, the ranges of signals, and the types of transitions for a switch. For example, if signal range analysis is selected as a coverage selection, then the coverage analysis report will include the maximum and minimum signal values at each block in the model measured during simulation.
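A small sketch of the idea behind signal range analysis (an assumed implementation, not the patent's): record the minimum and maximum value observed at each block during a simulated run.

```python
def signal_ranges(samples):
    """samples: iterable of (block_name, value) pairs recorded during
    simulation. Returns {block_name: (min, max)} (hypothetical sketch)."""
    ranges = {}
    for block, value in samples:
        lo, hi = ranges.get(block, (value, value))
        ranges[block] = (min(lo, value), max(hi, value))
    return ranges

trace = [("Gain", 0.0), ("Gain", 3.2), ("Gain", -1.5), ("Saturation", 1.0)]
print(signal_ranges(trace))  # {'Gain': (-1.5, 3.2), 'Saturation': (1.0, 1.0)}
```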
The coverage analysis reports may also include lookup table coverage. Lookup table coverage analysis examines blocks that output the result of looking up one or more inputs in a table of inputs and outputs, interpolating between or extrapolating from table entries as necessary. Lookup table coverage records the frequency with which table lookups use each interpolation interval. A test case achieves full coverage if it executes each interpolation and extrapolation interval at least once. For each lookup table block in the model, the coverage report may display a colored map of the lookup table indicating where each interpolation was performed.
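A rough sketch of how such interval usage might be tallied (an assumption for illustration, not the patent's mechanism): count how many lookups fall into each interpolation interval, plus the two extrapolation intervals beyond the ends of the table.

```python
import bisect

def interval_hits(breakpoints, lookup_inputs):
    """Count lookups per interval: index 0 is extrapolation below the table,
    the last index is extrapolation above it, and the indices in between are
    the interpolation intervals (hypothetical sketch)."""
    counts = [0] * (len(breakpoints) + 1)
    for u in lookup_inputs:
        counts[bisect.bisect_left(breakpoints, u)] += 1
    return counts

hits = interval_hits([0.0, 1.0, 2.0], [0.5, 1.5, 1.7, 3.0, -1.0])
full_coverage = all(c > 0 for c in hits)  # every interval used at least once
print(hits, full_coverage)  # [1, 1, 2, 1] True
```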
The coverage analysis report may also analyze cyclomatic complexity, decision coverage, condition coverage, and modified condition/decision coverage (MC/DC).
Cyclomatic complexity analysis measures the structural complexity of a model. It approximates the McCabe complexity measure for code generated from the model. In general, the McCabe complexity measure is slightly higher because of error checks that the model coverage analysis does not consider. Model coverage uses the following formula to compute the cyclomatic complexity of an object (block, chart, state, etc.):

$$c = \sum_{n=1}^{N} (o_n - 1)$$

where $N$ is the number of decision points that the object represents and $o_n$ is the number of outcomes for the nth decision point. The tool adds 1 to the complexity number computed by this formula for atomic subsystems and Stateflow charts.
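A minimal sketch of that computation (a hypothetical function, consistent with the formula above):

```python
def cyclomatic_complexity(outcomes_per_decision, atomic_or_chart=False):
    """c = sum over the N decision points of (o_n - 1), with 1 added for
    atomic subsystems and Stateflow charts (hypothetical sketch)."""
    c = sum(o - 1 for o in outcomes_per_decision)
    return c + 1 if atomic_or_chart else c

# A 2-outcome Switch block and a 3-outcome decision inside a chart.
print(cyclomatic_complexity([2, 3], atomic_or_chart=True))  # (1 + 2) + 1 = 4
```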
Decision coverage analysis examines items that represent decision points in a model, such as a Switch block or Stateflow states. For each item, decision coverage determines the percentage of the total number of simulation paths through the item that the simulation actually traversed. A screenshot 145 of a decision analysis report is depicted in
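As a small, assumed illustration of that percentage (not the patent's implementation):

```python
def decision_coverage(paths_taken, total_paths):
    """Percentage of the simulation paths through a decision point that the
    simulation actually traversed (hypothetical sketch)."""
    return 100.0 * len(set(paths_taken)) / total_paths

# A Switch block has two paths; only the path passing the first input ran.
print(decision_coverage(["first_input"], total_paths=2))  # 50.0
```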
Condition coverage analysis examines blocks that output the logical combination of their inputs (for example, the Logic block), and Stateflow transitions. A test case achieves full coverage if it causes each input to each instance of a logic block in the model and each condition on a transition to be true at least once during the simulation and false at least once during the simulation. Condition coverage analysis reports for each block in the model whether the test case fully covered the block. A screenshot 147 of a condition coverage analysis report is depicted in
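A compact sketch of the full-coverage criterion (hypothetical, for illustration): every input condition must have been observed both true and false during the simulation.

```python
def condition_coverage_met(observed):
    """observed: {condition name: set of boolean values seen during the
    simulation}. Full condition coverage requires each condition to be
    true at least once and false at least once (hypothetical sketch)."""
    return all(values == {True, False} for values in observed.values())

print(condition_coverage_met({"in1": {True, False}, "in2": {True}}))  # False
```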
Modified condition/decision coverage (MC/DC) analysis examines blocks that output the logical combination of their inputs (for example, the Logic block), and Stateflow transitions to determine the extent to which the test case tests the independence of logical block inputs and transition conditions. A test case achieves full coverage for a block if, for every input, there is a pair of simulation times when changing that input alone causes a change in the block's output. A test case achieves full coverage for a transition if, for each condition on the transition, there is at least one time when a change in the condition triggers the transition. A screenshot 149 of a modified condition/decision analysis report is depicted in
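A rough sketch of the independence check (an assumed illustration, not the patent's algorithm): for every input there must exist a pair of observations that differ only in that input and also differ in the output.

```python
from itertools import combinations

def mcdc_covered(observations):
    """observations: list of (input tuple, output bool) recorded at different
    simulation times. Returns True if, for each input position, some pair of
    observations differs only in that input and also in the output
    (hypothetical sketch of MC/DC)."""
    n = len(observations[0][0])
    independent = set()
    for (a_in, a_out), (b_in, b_out) in combinations(observations, 2):
        diffs = [i for i in range(n) if a_in[i] != b_in[i]]
        if len(diffs) == 1 and a_out != b_out:
            independent.add(diffs[0])
    return independent == set(range(n))

# A Logic block computing in1 AND in2, exercised with three input patterns.
observed = [((True, True), True), ((False, True), False), ((True, False), False)]
print(mcdc_covered(observed))  # True: each input independently flips the output
```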
It will be appreciated that multiple types of reports may be combined in order to display data to a user. For example, a model coverage report for a SIMULINK model may include signal range analysis, Lookup table analysis, decision analysis, condition analysis, MC/DC analysis and state-transition analysis (for an embedded STATEFLOW block). The data may be cross-linked so that the user may easily navigate from one set of data to the next.
In addition to providing information about the DUT, the coverage analysis report also highlights the areas of the test system connected to the DUT that are used. The coverage analysis report further indicates the resources in the test system that are not used in the testing of the DUT.
The information in the coverage analysis report from the design phase may be retrieved in a number of different ways. In one implementation, the results contained in the coverage analysis report are presented to a user who makes a manual selection of which set of test vectors to use to verify a suspected fault in a deployed system. In another implementation, the process is automated with the set of test vectors with the greatest degree of model capture for the suspected fault being automatically selected and provided to the test system. In a different implementation, the set of test vectors with the greatest likelihood of exciting the suspected failed component is automatically selected and presented to a user for manual confirmation. Those skilled in the art will recognize that other implementations combining different automatic and manual selections are also possible within the scope of the present invention.
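One of those variants, automatic selection followed by manual confirmation, might look roughly like the following (hypothetical function and callback, not defined by the patent):

```python
def select_test_vectors(coverage, suspect_component, confirm=None):
    """Pick the test vector set with the greatest coverage of the suspected
    failed component; if a confirm callback is supplied, the user can accept
    or reject the automatic choice (hypothetical sketch)."""
    per_vector = coverage[suspect_component]
    best = max(per_vector, key=per_vector.get)
    if confirm is not None and not confirm(best):
        return None  # user rejected the automatically selected set
    return best

coverage = {"BlockC": {"test_vector_1": 25, "test_vector_2": 50, "test_vector_3": 75}}
print(select_test_vectors(coverage, "BlockC", confirm=lambda name: True))
# -> 'test_vector_3'
```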
In one aspect of the illustrative embodiment, the coverage analysis report is linked to the block diagram view being shown to the user. A selection of a component in the coverage analysis report causes the associated component in the block diagram view to become highlighted or otherwise visibly identifiable so as to provide a visual cue to the user regarding the particular portion of the model in question. The block diagram view may be used to show the execution path of the model during a simulation performed using a selected set of test vectors. Similarly, the present invention may be configured so that a selection in the block diagram view results in a highlighting/visual identification of the information regarding the component in the coverage analysis report.
The illustrative embodiment of the present invention may also be used to analyze a computer designed model of a deployed system following the initial design phase. Although the examples contained herein have described the use of a model coverage analysis report that is generated during the design phase to determine compliance with a set of requirements, it will be appreciated that the model coverage analysis report may be generated following the initial design phase and then serve as the basis for fault identification in the deployed system using the mechanisms described above. As long as the model coverage analysis report is generated before the deployed system generates a fault, it provides a baseline which may be used for future fault analysis.
Although the descriptions contained herein have made reference to a block diagram view being used to present the coverage analysis and other information to the user, it will be appreciated by those skilled in the art that other types of user interfaces may be utilized without departing from the scope of the present invention.
The present invention may be provided as one or more computer-readable programs embodied on or in one or more mediums. The medium may be a floppy disk, a hard disk, a compact disc, a digital versatile disc, a flash memory card, a PROM, a RAM, a ROM, or a magnetic tape. In general, the computer-readable programs may be implemented in any programming language. Some examples of languages that can be used include C, C++, C#, and JAVA. The software programs may be stored on or in one or more mediums as object code.
Since certain changes may be made without departing from the scope of the present invention, it is intended that all matter contained in the above description or shown in the accompanying drawings be interpreted as illustrative and not in a literal sense. Practitioners of the art will realize that the sequence of steps and architectures depicted in the figures may be altered without departing from the scope of the present invention and that the illustrations contained herein are singular examples of a multitude of possible depictions of the present invention.
This application is a continuation of U.S. patent application Ser. No. 11/173,977, now U.S. Pat. No. 7,970,594, which was filed on Jun. 30, 2005, by Thomas Gaudette for a SYSTEM AND METHOD FOR USING MODEL ANALYSIS TO GENERATE DIRECTED TEST VECTORS.