This application relates to a verification and debugging technique allowing forward and reverse time traversal of variable values for signal-processing-type programming and model-based verification.
Several categories of software require very careful verification. In some cases the verification is mandated by government agencies, with strict requirements and procedures. This is typical of, but not limited to, aircraft flight code and critical medical equipment code. For instance, the Federal Aviation Administration requires that software used in aeronautical applications be tested in accordance with the DO 178B standard.
Software that is critical to flight operations (Class A) is tested with extreme care. Usually this software belongs to a special category of coding, which can be generally described as signal processing. In other words, the software receives input signals from a variety of real-world sources, such as a pilot adjusting a control yoke, rudder pedals, and/or throttle controls, an altimeter, attitude indicator, airspeed indicator, magnetic compass, or other flight instrumentation, analog-to-digital (A/D) converters, etc., and continuously processes such signals at a steady rate, in fixed frames, or whenever necessary. The software produces output signals in the form of commands or warnings, via digital-to-analog (D/A) converters.
The user, alone or possibly jointly with the manufacturer of the actual hardware that runs the software (often referred to as a black box), prepares a set of clearly defined and understandable specifications and requirements, which define the processes inside the black box. In the verification process, a set of test cases and test procedures is independently prepared in order to verify that the black box performs properly. That is, the black box does everything it is required to do, and only what it is required to do.
In the past, a typical approach to verification has been to list the requirements down to an "atomic" level, the smallest possible testable unit, such as an AND or OR logic gate; to write test cases that apply combinations of Boolean values to the inputs of the element under test; and to state what results should be expected. Such test cases are translated into test procedures. The test procedures are then run to collect actual results, and the actual results are compared with the expected results. A common methodology for such procedures is to force values into the code, referred to as stabbing, and to extract output values from some intermediate place in the code process.
Regardless of the methodology or completeness of testing, the method discussed above only proves that the black box has the correct collection of elements, not that the elements are correctly connected. In other words, the verification process discussed above does not prove that the black box works properly end-to-end. It, therefore, does not prove that the black box, as a whole, will behave as intended.
Another shortcoming of this verification testing approach occurs everywhere that timing is involved, such as the verification that a timer expires as required. A simple verification at the desired expiration time does not preclude the black box from generating unexpected values at any or all of the intermediate, unverified times. Other issues of partial verification occur in the verification of certain components or groups of components. These partial verification problems, which can be potentially very dangerous, can include: overlapping of data structures stored in memory; memory that is allocated but can no longer be accessed, or "memory leaks"; incorrect connections between components, known as incorrect "stitching"; and incorrect order of execution, such as loop-backs. None of these types of faults, and potentially others, would be detected.
Currently, verification has started to move from test cases that test component-by-component, or function-by-function, to black box level verification. This level includes verification by test procedures that enter "real world" inputs into the black box, which then generates real world outputs. The inputs are generated for each "run" of the program, or for each "frame," to exactly mimic the real application. In other words, the inputs are presented to the black box during the verification process exactly as they are presented in reality. Expected outputs are then produced for each run or frame and compared to the collected actual outputs. This "continuous" method of verification is far superior to any other method, provided that the test procedures are able to exercise and verify each component and requirement of the black box.
However, the generation of such test procedures at the black box level is difficult for several reasons. Mainly, the variables used for inputs to a component, or block, may not be the same as the black box level real world inputs, and the outputs from the component, or block, may not be real world outputs from the black box. Therefore, the inputs to the element under test must be “demoted” to real world inputs, and the outputs from the element under test must be “promoted” to real world outputs. In other words, the verification process requires the test engineer to generate one test procedure for each targeted component, diagram, or block of components. The test procedure must be of a duration that addresses all timing issues, achieves the desired inputs using a combination of real world input values, and also generates real world outputs that are representative, in a detectable, unique way, of the targeted component outputs. The test procedure is complete when all the aspects of the targeted component are tested under the specified guidelines, such as modified condition/decision coverage, exhaustive, etc.
In the above environment, one way to generate independent expected outputs is to provide and run an alternative code in the black box. This is often referred to as a "model," but it could be just an alternative instantiation of the code, such as code independently produced by another team. Of course, it is desirable that the expected results are produced using a different compiler, a different hardware platform, and a different operating system than the black box.
Usually, one of the first problems encountered is reconciling differences between the expected results and the actual results. In a complex system, with thousands of components, this is a difficult task. A currently available method of performing reconciliation uses the debugging features of two compilers in an integrated design environment (IDE), for example, compiling the model code side-by-side with the black box code. This method allows only "forward" debugging, meaning that both compilers must be stopped at the same point in the respective codes so that the user can look at the values held by variables and data structures. The user then moves to the next break point and again looks at the values held by variables and data structures, and so on. The reconciliation process is difficult and may take a very long time. Reconciliation is particularly long and difficult if the model is written using visual techniques, such as Simulink®, and/or if the input data to the black box cannot be provided faster than real time (it is normally provided much slower than real time). (Simulink® is a registered trademark of MathWorks, Inc., of 3 Apple Hill Drive, Natick, Mass. 01760.) The user may wait 10 minutes between break points, the point at which he may be able to investigate mismatching values, just to learn that the process must be started anew.
In one embodiment, a layer of code is provided that "wraps around" the model code and, at each execution of the code (at each frame of signal processing), saves the names of the variables explicitly (i.e., in ASCII or any readable format, with the name as it appears in the specifications, and not as a memory address) and also saves all the values assumed by each variable at each frame evaluation. Optionally, other information can be saved, such as the type of the variable, the time at which it was saved, etc., depending on the project. The variables for which values are saved can be all the variables in the system, a subset of them, or even a superset (including special variables desirable for "setting the stage" of the test).
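For illustration only, a minimal sketch in C of such a wrapping layer follows; the names TrackedVar, record_frame, and value_at, and the fixed-size storage, are hypothetical assumptions rather than a definitive implementation:

    #define MAX_FRAMES 10000  /* e.g., 100 seconds at 100 frames per second */

    typedef struct {
        const char *name;               /* explicit, readable name as in the specifications */
        double      values[MAX_FRAMES]; /* one saved value per computational frame */
    } TrackedVar;

    /* Called once at the end of every frame: copy each tracked variable's
       current value, reached through its stored pointer, into its timeline. */
    void record_frame(TrackedVar *vars, double **live, int nvars, int frame)
    {
        for (int i = 0; i < nvars; i++)
            vars[i].values[frame] = *live[i];
    }

    /* Forward and reverse time traversal then reduces to an indexed read: */
    double value_at(const TrackedVar *v, int frame)
    {
        return v->values[frame];
    }

With the values captured this way in a single complete run, moving backward in time is no more expensive than moving forward.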
This innovative technique can be applied to any programming related to signal processing, i.e., to any program that receives a set of inputs and produces outputs at intervals, such as model-based verification as in DO 178B. Moreover, similar techniques could be extended to any Integrated Development Environment (IDE).
While this technique is generally applicable to any computing environment, it is commercially enabled by the current availability of inexpensive memory for personal computers. As an example, consider a relatively complex signal processing system with 5,000 variables. Consider also that a Test Procedure may have a duration of 100 seconds and that the signals are re-evaluated 100 times per second (10 millisecond frames). If each saved "value" uses N bytes, the required memory is 5,000×100×100×N = 50×N million bytes. Assuming that N = 16 bytes (a reasonable assumption for saving one data point), this is approximately 800 megabytes of storage. Current offerings of inexpensive personal computers commonly provide three to four times this amount, or more. The operating system (Windows XP or later) also offers nearly unlimited virtual memory on disk and is capable of addressing this amount of memory without problems.
The innovative technique described herein allows the user to select a point in time (for example, using a slider or moving the mouse on a timeline) and inspect the values assumed by one or more selected variables (also selected with mouse actions). Note that the user can go forward and reverse in the time selection on any value, and repeat at will, with immediate visualization of the results. The time required for this technology to display any value at any time is minuscule in comparison to alternative techniques. Of course, this technique requires the program (the Test Procedure or the "model") to run entirely through once (without break points or debug settings) to completion, so that all the values can be saved.
In one possible implementation, the variables are shown in a convenient order (possibly in alphabetical order), and a small window allows a user to type any portion of a variable name the user intends to select. Matching variable names are then automatically highlighted and selected. Values for the selected variables can be automatically shown at the time selected by the mouse sliding back and forth on the canvas. If only one variable, or a few variables, are selected, there is enough room on the screen to show the values in a sliding window that shows, in the center, the value at the corresponding selected time, and also shows values for the same variable some frames before and some frames after the selected time.
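As a sketch of the name-matching selection, reusing the hypothetical TrackedVar type from the earlier sketch (the function name select_matching is likewise an assumption):

    #include <string.h>

    /* Select every variable whose explicit name contains the typed fragment. */
    int select_matching(const TrackedVar *vars, int nvars,
                        const char *fragment, int *selected)
    {
        int count = 0;
        for (int i = 0; i < nvars; i++)
            if (strstr(vars[i].name, fragment) != NULL)
                selected[count++] = i;  /* index of a variable to highlight */
        return count;
    }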
In another implementation, the variables are shown in a window with a “scroll box”, and the values are shown for the desired time (frame) selection for the variables visible in the scroll box.
In yet another implementation, the selected (or all) variable(s) may be shown in graphical format. The arrangement of the values (color coded for positive/negative, Analog/Boolean, device related, etc.) is particularly important in model based verification, because "flag shaped" boxes can be displayed attached to points of interest on a diagram showing the desired functionality of the code. Since the flag-shaped boxes show the values of the signal at the spot indicated on the diagram, at the time selected by the user moving the mouse on the canvas timeline, the debugging of a model-based design becomes fast and easy.
A pseudo-code description of one embodiment of the process follows (it should be understood that the coding technique can vary widely).
The design engineer, test programmer or other user should perform the following initial steps: associate each input signal with an explicit input variable name; provide, for each input variable, a timeline holding all the desired input signal values; and associate each tracked variable name with a pointer to the variable's memory location.
An automated program process is then run, for example, code which is "wrapped around" the Model Code. The process is run in such a way that, for each SimVar at each DataSim input time, all outputs can be stored. The pseudocode for an example program process can be as follows:
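(No single coding is implied; the following is a minimal C-style sketch consistent with the description below, in which input_ptr, input_timeline, tracked_ptr, sim_run, and evaluate_one_frame are assumed, illustrative names.)

    void run_sim(int n_frames, int n_inputs, int n_tracked)
    {
        for (int frame = 0; frame < n_frames; frame++) {
            /* 1. Transfer this frame's input values into the program variables
                  through pointers associated with the explicit variable names. */
            for (int i = 0; i < n_inputs; i++)
                *input_ptr[i] = input_timeline[i].values[frame];

            /* 2. Evaluate one computational frame of the Model (or Black Box). */
            evaluate_one_frame();

            /* 3. Transfer all tracked output values into the SimRun timelines. */
            for (int j = 0; j < n_tracked; j++)
                sim_run[j].values[frame] = *tracked_ptr[j];
        }
    }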
Conceptually, therefore, input signals are associated with input variable names, and then an input variable holds in a timeline all the desired input signal values for that variable.
At each computational frame of the Black Box, the collection of input values is transferred to memory using the pointers associated with the explicit variable names. Then, one computational event (frame) is evaluated and the Real World outputs are stored.
At the end, the values assumed by all the internal or Real World outputs are transferred to the appropriate variables' timelines using similar pointer associations between the list of tracked variables and memory locations.
As one example of the above, assume that a timeline is made of 100 seconds at 100 samples per second. The timeline for each variable thus contains 10,000 values. The above pseudo-code would then be repeated 10,000 times to create a "SimRun," but the execution time is not much more than executing the Black Box program itself, given that what is added is just moving data from one location in memory to another, for the tracked variables. The above process is executed ONCE, and then all the data are ready to be displayed for any selected time (frame).
The showing portion of the program or “user interface” can then produce a “canvas” where a user can manipulate an input device such as a mouse or a touchscreen to select a time or times and variables to be viewed (perhaps adding a cross-hair), and data are then visualized (in any form) from the SimRun Timeline.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
The foregoing will be apparent from the following more particular description of example embodiments, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the embodiments.
A description of example embodiments follows.
High Level Description of the Model-Based Verification Application
The Model-Based Verification Application 100 is made of several components.
Model Core: The Model Core 105 provides many of the features of the Model-Based Verification Application 100, such as generating input signal timelines using Intelligent Points, all graphical rendering and displaying of signals and corresponding signal processing components or blocks of connected components (also referred to hereafter as the “diagrams”), debugging capabilities, file name and file input/output (I/O) services.
UNITSCD: The UNITSCD (Specification Control Drawings) 110 is a code unit where all the information pertaining to the diagrams is stored. The information can include the devices, ASCII names of variables, the positions of flags, and constants on the diagram visual representations.
VARSANDBODIES: The VARSANDBODIES 115 enables the functionality of each diagram. It also enables the functionality of the complete Black Box under test by connecting together or “stitching” two or more of the diagrams that make up the Black Box. Typically, the VARSANDBODIES 115 is made up of calls to a library of components with standard behavior, or in some cases custom behavior. The library components can include, but are not limited to, Boolean logic gates, such as, OR gates, AND gates, and NOT gates, and other signal processing functions, such as delay timers, and various filters. The VARSANDBODIES 115 also provides all the variables of the Black Box in static memory.
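For illustration only, a hypothetical C fragment showing how a VARSANDBODIES 115 unit might stitch library components together, with all variables in static memory (the names and_gate, delay_on, DT, and the variables are assumptions, not any project's actual code):

    /* All Black Box variables are held in static memory. */
    static int   sw_a, sw_b, gate_out, warn_out;
    static float timer_state;

    /* One computational frame of one diagram: a Boolean gate feeding a delay timer. */
    void diagram_frame(void)
    {
        gate_out = and_gate(sw_a, sw_b);          /* library Boolean component */
        warn_out = delay_on(gate_out, 2.0f,       /* library delay timer, 2 s  */
                            &timer_state, DT);
    }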
This structure allows this same Model Core 105 to be re-used in different projects. New custom components can be added to the library of components as necessary. Using minimal additional resources, the custom components can be carried on as legacy elements in projects that do not use them.
The UNITSCD 110 and VARSANDBODIES 115 code units are typically new and unique for each project. Thus, they typically are developed for a specific project, and cannot be re-used in a different project. These code units can be considered to belong to a customer of the user. The VARSANDBODIES 115 needs to be compiled for performance, while the UNITSCD 110 is a database-like set of information.
Other (External) Model Components:
ALIAS Table: Alias Table 120 is a computer file, such as an Excel® file that contains information regarding names of variables used in the actual code, i.e., the code under test. (Excel is a registered trademark of Microsoft Corporation, One Microsoft Way, Redmond, Wash. 98052.) Such information can further include minimum and/or maximum limitations for values, granularity or decimation/digitalization properties, diagram stitching information, and other names in use. Information regarding other variable names in use is especially useful when there is redundancy, and/or inconsistency in the signal naming scheme among and between model code and actual code.
It should be understood by those of skill in the art that the mention of Excel® files throughout this document is due only to the convenience of using the Excel® format as a presentation layer, and not for any other features of Excel®. All specific file formats such as Excel, Visio, Simulink, Subversion, etc. referred to in this document can be in other formats, such as Comma Separated Values (.csv) or any other suitable form.
Diagram Bitmaps: Diagram Bitmaps 125 are a visual representation of the diagrams. The Diagram Bitmaps 125 can be a screen shot, or scan, of a diagram. While visual representations can be generated from the file that stores the diagram in native format, such as Visio®, Simulink®, or similar tool, it is preferred to use a bitmap file, because there are no possible errors in the interpretation of the native-mode files. In other words, what you see is what you get with bitmap files. (Visio is a registered trademark of Assistance Technique Et Etude De Materiels Electroniques Société Anonyme, France, Burospace Bâtiment 26 Route de Gizy; F-91570 Bievres, France; and Simulink is a registered trademark of MathWorks Inc., 3 Apple Hill Drive, Natick, Mass. 01760.) This preferred visual representation allows for an excellent independent interpretation and analysis of the diagram.
CMOD Files: CMOD Files 130 are computer files, such as Excel® files, that contain visual information, and environment or harness parameters about a specific test. The CMOD Files 130 contain only the “Intelligent Points” that allow the creation of complete test files in conjunction with the Model 101a/b.
Test Cases: The Test Cases 135 are a collection of computer files, such as Excel® files, which are related to a single diagram. In an example embodiment, the columns represent all the input variables and output variables of the related diagram, and each row represents the inputs, and expected outputs, for each and every processing step. In other words, the Test Case 135 files contain all of the I/O variables, input values, and expected output values for each step of time along a timeline.
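For illustration only, assuming hypothetical variable names and values, a fragment of such a file might look like:

    time, IN_SwitchA, IN_SwitchB, OUT_Warning (expected)
    0.00, FALSE,      FALSE,      FALSE
    0.01, TRUE,       FALSE,      FALSE
    0.02, TRUE,       TRUE,       TRUE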
Test Procedures: Test Procedures 140 are a collection of files, such as Excel® files, which have columns representing Real World input values and output values. The input values contain all the initial condition values for variables. That is, Test Procedure 140 files contain all the Real World values that set the stage for the test, including the values for all the variables that have non-nominal values—even if they do not change during the test. The Real World output variables represented are a set of manually selected variables pertinent to a specific test. Each row represents input values and expected output values, for every time step. Each Test Procedure 140 is a companion to a Test Case 135, in the sense that the Test Procedure 140 will faithfully generate the Test Case 135 data for the related diagram, when the Real World input values traverse the diagram. In other words, when the Real World input values are completely processed according to a Test Procedure 140, the Test Case 135 data will be generated for the associated diagram.
Hardware (Rig): The Hardware (or Rig) 145 is the actual physical equipment against which the Model 101 is verified. Test Procedures 140 are submitted to the Hardware 145 to produce Actual Results, which are then fed back to the Model 101 and/or stored in record-keeping or version control systems 155, such as Subversion® or ClearCase®. (Subversion is a registered trademark of CollabNet, Inc., 8000 Marina Blvd., Suite 600, Brisbane, Calif. 94005; and ClearCase is a registered trademark of Atria Software, Inc., 24 Prime Park Way, Natick, Mass. 01760.)
Intelligent Points Setup and Manual Review Process:
The purpose of the Model-Based Verification Application 100 is to ascertain that each block in the Black Box, such as one or more selected components or diagrams, produces the intended functionality. The verification process is a complex function that depends on device type, signal type, required verification procedures, and several other parameters. Verification must sometimes also comply with applicable standards 170, such as DO 178B as applied to flight code, and perhaps other custom requirements.
In a preferred example embodiment, each device can be verified by specifying a combination of Intelligent Points 175 (also referred to herein as "IntelliPoints"). An Intelligent Point is a command for a variable to assume a value, or a signal type, from one point in time until the point in time of the next Intelligent Point 175. In other words, an Intelligent Point 175 dictates the signal type for an input variable, and therefore input values, for a period of processing time along a timeline. The simplest form of input variable signal is a linear interpolation between the two points, but other signals can be created in accordance with the selected attributes of the Intelligent Point 175.
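For illustration only, a minimal C sketch of expanding consecutive Intelligent Points 175 into per-frame input values by linear interpolation (the IntelliPoint structure and the expand_points function are assumed, illustrative names):

    typedef struct {
        int    frame;  /* time of the point, expressed in frames   */
        double value;  /* value the variable assumes at that time  */
    } IntelliPoint;

    /* Fill a variable's timeline by linear interpolation between each pair of
       consecutive points; point frames are assumed strictly increasing. */
    void expand_points(const IntelliPoint *p, int npts, double *timeline)
    {
        for (int k = 0; k + 1 < npts; k++) {
            int    span = p[k + 1].frame - p[k].frame;
            double step = (p[k + 1].value - p[k].value) / span;
            for (int f = 0; f <= span; f++)
                timeline[p[k].frame + f] = p[k].value + step * f;
        }
    }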
While the number of Intelligent Points 175 in a particular Model 101 may vary depending on many considerations, normally a very small set of Intelligent Points 175 is sufficient for a particular verification. For example, a method for verification of a DelayOn device requires only four Intelligent Points 175 to generate the Boolean input values necessary to verify Persistency and Expiration tests. However, the Intelligent Points 175 must be carefully and appropriately set in time. For other variables, more points may be required to promote the output signal to a visible Real World signal, using a promotion process.
The selecting or setting of Intelligent Points 175 can be achieved via a trial-and-error manual process, aided by the Model's 101 recalculation of all signals when a point is moved. Useful in selecting Intelligent Points 175 is real-time recalculation, which allows the user to see graphically the implications for all signals when an Intelligent Point 175 is modified in time, value, or signal type, or when another Intelligent Point 175 is added.
The combination of Intelligent Points 175 and real-time recalculation with graphical-aided feedback enables the Model-Based Verification Application 100 to verify the Black Box components. Without this combination, the Black Box verification of a complex system would exceed reasonable human resources.
User Interface:
Virtual Input Variable
The stitched Model 101 consumes all the Real World input variables and produces all the Real World output variables. However, the list of all the input variables can be impractical to maintain in some projects; therefore the Model 101 resorts to Intelligent Points 175 for Virtual Real World input variables. Here, a Virtual Real World input variable is a variable that does not exist as a real world input variable, that may have a name similar to that of a Real World input variable, and that has the purpose of setting a number of Real World input variables to the desired value or values, according to rules.
For example, the setting of a variable carried by a triple-redundant ARINC bus typically requires setting the values of 12 input variables: 3 values for each of the following: Data, Activity, Parity, and Sign/Status Matrix (SSM). A Virtual Real World input variable would set all 12 variables so that the internal variable selected after the redundancy process assumes the same value as the Virtual Real World input variable.
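A sketch of such a fan-out follows; all names (arinc_data, arinc_activity, arinc_parity, arinc_ssm, compute_parity, SSM_NORMAL) are assumed for illustration:

    /* Setting one Virtual Real World input fans out to the 12 underlying
       variables of a triple-redundant ARINC signal:
       3 channels x (Data, Activity, Parity, SSM). */
    void set_virtual_input(double value)
    {
        for (int ch = 0; ch < 3; ch++) {
            arinc_data[ch]     = value;
            arinc_activity[ch] = 1;                      /* channel active     */
            arinc_parity[ch]   = compute_parity(value);  /* per-channel parity */
            arinc_ssm[ch]      = SSM_NORMAL;             /* Sign/Status Matrix */
        }
    }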
In another example, an input variable can assume a value in degrees inside the Black Box, or even inside a particular diagram, but have a real world input value expressed in radians, or as Sines and Cosines. In that case, the Virtual Real World input variable is set in degrees, and the Model 101 adjusts the input values to Sines and Cosines accordingly.
The Virtual Variable can be set to any value, but the Model 101 limits the Real World input values according to the minimum and maximum values specified in the Alias table 120.
Ranges for Variables
In the real world, in physical hardware, variable values are carried to the Black Box via communication channels or buses. Usually, a bus or channel has a width in bits.
For example, if a variable is generated by a 10-bit analog-to-digital converter (A/D), the bus may also have a 10-bit width. Assume, as an example, that a 10-bit A/D is connected to a sensor that measures Vertical Speed Accelerations (VSA) in gravitational force (Gs). Assume further that the aircraft in which the VSA sensor is installed is designed to operate between −3 and +7 Gs, and that the device will disintegrate if the acceleration exceeds the range between −6 and +14 Gs. The sensor is capable of providing data values between −20.48 and +20.44 Gs.
The assumed 10-bit A/D is able to accommodate 1024 discrete values. Assume further that one bit is used for the sign, so that −20.48 Gs corresponds to −512 and +20.44 Gs corresponds to +511. It is useful to introduce a scale to convert the raw input data signal to a more meaningful, floating point value. In this case, the scale would be 1/25. (Range of values divided by the number of discrete values for the range: (20.44 − (−20.48))/(1024 − 1) = 0.04 = 1/25.) This also implicitly dictates that the signal is received in minimum increments (granules, decimation, or digitization units) of 0.04 Gs.
In the above example, the Alias table 120 would carry the information about range and granularity for this variable. The Model 101 would use the Alias table 120 to create input values only in accordance with the range and granularity. Although the variable input can assume values from −20.48 to +20.44 Gs, in 0.04 increments, the values outside of the range between −6 and +14 Gs are meaningless, because, in the real world, the VSA sensor would be destroyed. It would be useful for a project to have a limiting layer of code that ensures that all input variable values are meaningful operational values.
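For illustration only, a C sketch of such a limiting layer for the VSA example, using the scale and ranges assumed above (the function name vsa_from_raw is hypothetical):

    /* Convert a raw signed 10-bit A/D count (-512..511) to Gs and clamp to
       the meaningful operational range of the example. */
    double vsa_from_raw(int raw_count)
    {
        double g = raw_count * 0.04;   /* scale 1/25: 0.04 Gs per count          */
        if (g < -6.0) g = -6.0;        /* below -6 Gs the sensor is destroyed    */
        if (g > 14.0) g = 14.0;        /* above +14 Gs likewise                  */
        return g;
    }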
Demo Project
In the following discussion of several drawings, it is assumed that the specifications are for a Demo project.
The model therefore enables the selective visualization of output variables, allowing a user to immediately determine whether a Real World output is a viable function, such as a linear transfer function, or the best possible approximation, of the selected diagram output.
An example embodiment of a disclosed method can ultimately result in a CMOD file 130.
Verification of a Diagram with Timer on Boolean Logic
In an example embodiment, the Test Cases generated using appropriately selected IntelliPoints verify all the relevant and necessary variables at all signal processing frames. The Test Procedures compare expected results, those produced by the Model, with actual results, those produced by the Black Box, for all the variables at each and every frame, leaving no uncertainties regarding random, unexpected variable values or wrong stitching between diagrams.
The IntelliPoints are similar to fractals in that a small set of IntelliPoints expands into lengthy and complex Test Case and Test Procedure files. This expansion process is relatively simple, and the Test Cases and Test Procedures are automatically generated by the Model. While the Test Case and Test Procedure files are lengthy and complex, the IntelliPoints themselves remain humanly understandable and settable, with the aid of the backward and forward timeline traversal debugging and visualization features of the Model.
An example embodiment of the method has proven successful in a project with 200 complex diagrams and 6,000 variables, while still providing reasonably real-time feedback when moving IntelliPoints. In that project, the 500 CMOD files, having an average of 20 IntelliPoints each, produced four gigabytes (4 GB) of Excel® Test Cases and Test Procedures in about 6 hours of batch processing on a medium-powered PC. In the complete verification process, the user produces only IntelliPoints that verify the desired specifications for all components. With this method, each Intelligent Point requires only a few minutes of user attention. This is particularly useful for saving time when requirements change; the user needs only to add or modify a few Intelligent Points, after which the system can re-compute all expected results for all diagrams.
The batch process that re-computes output variable values at all times is particularly useful because a change in requirements, or a change in a diagram, can significantly alter the expected results in ways that are very difficult to predict. The batch process may then produce a correct, though perhaps incomplete, verification. The verification of a component is complete only when the component is fully exercised according to specification. However, an incomplete verification is still useful if the expected results match the actual results. The completion of the verification may be achieved in steps, for example, by adding Intelligent Points to the existing Test Cases and Test Procedures.
The above described forward and reverse timeline traversal debugging technique is more than simply moving a mouse back and forth on a timeline. It is a complex balance of programming resource allocation. It provides the user with the perception that all the variables in all the diagrams are immediately recalculated for any change to the Intelligent Points and other parameters, and that all the values are immediately available at the time frame where the mouse is positioned on the timeline. This perception of real-time availability enables the interactive interface that allows the user to easily and quickly set IntelliPoints for verification. A Test Case is manually generated, and the generation process is aided by the real-time feedback of the expected results, immediately available for the user to explore for each device and each variable at any time interval along the timeline.
Consider the typical software code to be verified, such as a control law for avionics or medical applications: while appearing very complex, it is usually designed to run at a duty cycle of 40 percent or less on an inexpensive microchip, which performs digital signal processing in computational "frames" of a few milliseconds. Rarely are complex, non-linear mathematical algorithms needed, and if such complex algorithms are needed, they are highly optimized. Furthermore, rarely does a single test case require more than a 30 second timeline to prove that one aspect of the design is verified. However, many test cases may be required, normally a few per diagram. In other words, if the model of the program under test, produced by an independent interpretation of the requirements diagrams, is compiled on a relatively fast PC in "C" or Pascal, with virtually no memory constraints, a Test Case that would take 12 seconds of microchip CPU time (40% of 30 seconds) takes only a few seconds to run on the PC. This is an acceptable real-time response for the test engineer who changes a meaningful parameter, but it is not acceptable for looking at the changing values of variables when a mouse is moved on the timeline. When the engineer explores the possibilities of changing signals, the response time of the program must be much quicker. The above described forward and reverse timeline traversal debugging technique stores all the values for a complete timeline run within a few seconds, but allows the user to select a time, for example by using mouse movement or arrow keys, and immediately, within milliseconds, see the values for all the variables.
The manual generation of Test Cases, aided through the use of timeline views of the signals, is not possible if the model program is written in an interpretative or semi-interpretative language. Experience with a large project showed that an environment such as PYTHON® has a greater than 10× response time, and SIMULINK® a greater than 100× response time, compared to a streamlined, compiled PASCAL rendition in which all variables are in static memory. (PYTHON is a registered trademark of the Python Software Foundation, P.O. Box 37, Wolfeboro Falls, N.H. 03896-0037.)
After establishing that the Model code runs in a few seconds or less, two other features are useful in providing the perception of immediacy in the display of data to the Test Case designer.
The first feature is the fact that the program tracks which diagram the user is viewing. The user can only look at one diagram at a time; thus, when the user changes a diagram tab, a two second time for recalculation is acceptable. When the program tracks the diagram under test and the information associated with it, such as the positions of the flags on the diagram, the Model Core only needs to lift the values of the variables the user can see from static memory, at the end of each computational frame. This can be done without disturbing the model code execution or adding cycles to it. The variables' values are quickly accessed and copied at each computational frame, and stored sequentially in memory, for immediate access via frame number. This allows the corresponding values to be immediately displayed when the engineer selects any time corresponding to a computational frame. While it is true that all the data could be saved at each frame, this would require many gigabytes of memory for a large project, and the program would be slow for a variety of other reasons, such as memory paging.
Therefore, all the data related to all the I/O variables that the user can actually "see" on the timeline, and only those I/O variables, appropriately organized in meaningful groups, are stored in a memory array for each frame of execution along the timeline. The I/O variables that the user can see include those with a flag over the variable name in the diagram bitmap and those plotted with appropriate bias and colors.
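For illustration only, a sketch of this per-frame snapshot of only the visible variables, reusing MAX_FRAMES from the earlier sketch (N_VISIBLE, visible_ptr, snapshot, and end_of_frame_hook are assumed names):

    /* Only the variables visible on the diagram being viewed are lifted from
       static memory at the end of each frame and stored sequentially, indexed
       by frame number, so any selected time can be displayed immediately. */
    static double snapshot[MAX_FRAMES][N_VISIBLE];

    void end_of_frame_hook(int frame)
    {
        for (int i = 0; i < N_VISIBLE; i++)
            snapshot[frame][i] = *visible_ptr[i];
    }

    /* Display at a mouse-selected frame t is then a direct read: snapshot[t][i]. */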
After the data has been stored, when the user moves the mouse on the timeline plot, the model core only displays the variables that the user can see on the diagram being viewed. Since this process typically takes a very short time for a reasonable diagram, for example, less than 100 milliseconds after the mouse settles, the user perceives that the program responds immediately.
Also useful is the fact that the user does not have to type the variable names; the values of all the variables that can be seen are immediately reported. The user needs only to select the time (i.e., processing frame) with a single mouse motion, for any frame in the timeline, moving the cross-hair on the canvas.
Sometimes diagrams pre-consume variables that are produced later in the computational chain. In other words, a variable assumes two different values during a single computational frame, such as when an output signal is fed back to some input. In an example embodiment, pre-consumption is solved by storing the values of the intermediate, non-Real World I/O variables at the time when the diagram under test is executed, in the order of the execution chain, and not at the end of the computational frame, as sketched below. This feature can also be bypassed, if the need arises.
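For illustration only, a sketch of capturing the intermediate values in execution-chain order rather than at the end of the frame (all names are assumptions):

    /* One computational frame of the stitched model.  The snapshot of the
       diagram under test is taken at its place in the execution chain, so a
       pre-consumed variable is recorded with the value actually consumed,
       before a later diagram overwrites it. */
    void frame(int t)
    {
        diagram_upstream(t);
        diagram_under_test(t);
        capture_intermediates(t);   /* record non-Real World I/O values here */
        diagram_downstream(t);      /* may overwrite the pre-consumed values */
    }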
The teachings of all patents, published applications and references cited herein are incorporated by reference in their entirety.
While this invention has been particularly shown and described with references to example embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.
This application claims the benefit of U.S. Provisional Application No. 61/416,152, filed on Nov. 22, 2010, U.S. Provisional Application No. 61/416,153, filed on Nov. 22, 2010, U.S. Provisional Application No. 61/458,406, filed on Nov. 23, 2010, U.S. Provisional Application No. 61/458,410, filed on Nov. 23, 2010 and U.S. Provisional Application No. 61/481,973, filed on May 3, 2011. The entire contents of the referenced applications are hereby incorporated by reference.