Integrated circuit design, such as processor design, is an extremely complex and lengthy process. The design process includes a range of tasks from high-level tasks, such as specifying the architecture, down to low-level tasks, such as determining the physical placement of transistors on a silicon substrate. Each stage of the design process also involves extensive testing and verification of the design through that stage. One typical stage of processor design is to program the desired processor architecture using a register transfer language (RTL). The desired architecture is represented by an RTL specification that describes the behavior of the processor in terms of step-wise register contents. The RTL specification models the function of the processor without describing the physical details. Thus, the processor architecture can be verified at a high level with reference to the RTL specification, independent of implementation details such as circuit design and transistor layout. The RTL specification also facilitates later hardware design of the processor.
The RTL specification is tested using test cases. The test cases comprise programs that define an initial state for the processor that is being simulated and the environment in which it operates. Such test cases are generated, by way of example, by a pseudo-random generator. During verification testing of a processor, literally millions of these test cases are run on the RTL specification. Execution of so many test cases enables verification of every component of the processor design in a variety of situations that may be encountered during processor operation.
Certain test cases are better than others at testing particular components or conditions. For example, when multiple test cases are run, there will be a subset of test cases that are best at testing a memory subsystem of the processor design. Given the sheer number of test cases that are typically run, however, it can be difficult to determine which test cases are best for testing which components or conditions. This is disadvantageous given that the design tester may wish to identify and apply only certain test cases in a given situation. For instance, in keeping with the previous example, if the memory subsystem has been modified during the design process, it may be desirable to identify and apply those test cases that are best suited to test the memory subsystem.
Due to the desirability of identifying test cases, mechanisms have been employed to identify the occurrence of given events in relation to particular test cases. Although such mechanisms can help quantify the number of events that are observed for any given test case, those mechanisms do not provide the design tester with an evaluation or measure of the test case's ability to test particular components or conditions.
In one embodiment, a system and a method for evaluating a test case pertain to assigning weights to at least one of system components and system events, processing the test case to determine the number of event occurrences observed when the test case was run, and computing an overall score for the test case relative to the number of occurrences and the assigned weights.
The disclosed systems and methods can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale.
Disclosed are systems and methods for evaluating the functional coverage of test cases. More particularly, disclosed are systems and methods for evaluating the functional coverage of test cases applied to an integrated circuit design for the purpose of identifying the test cases that are best suited to test particular circuit components or conditions that may arise in operation of the circuit. In the following, the underlying integrated circuit is described as being a computer processor. It is to be understood, however, that the systems and methods described herein apply equally to other types of integrated circuits, including application-specific integrated circuits (ASICs).
Referring to
The RTL specification 12 is operated relative to information specified by the test case 10. The test case 10 comprises a program to be executed on the processor architecture 14 in the RTL specification 12. The test case 10 is a memory image of one or more computer-executable instructions, along with an indication of the starting point, and may comprise other state specifiers such as initial register contents, external interrupt state, etc. Accordingly, the test case 10 defines an initial state for the processor that is being simulated and the environment in which it operates. The test case 10 may be provided for execution on the RTL specification 12 in any suitable manner, such as an input stream or an input file specified on a command line.
The RTL specification 12 may be implemented using any suitable tool for modeling the processor architecture 14, such as any register transfer language description of the architecture that may be interpreted or compiled to act as a simulation of the processor. The RTL specification 12 of an exemplary embodiment contains an application program interface (API) that enables external programs to access the state of various signals in the simulated processor, such as register contents, input/outputs (I/Os), etc. Thus, the output of the RTL specification 12 may be produced in any of a number of ways, such as an output stream, an output file, or as states that are probed by an external program through the API. The RTL specification 12 may simulate any desired level of architectural detail, such as a processor core, or a processor core and one or more output interfaces.
In the embodiment of
Notably, other embodiments of a processor architecture verification system may comprise a hybrid of the embodiments shown in
The FSB 24 is a broadcast bus in which bus traffic is visible to each agent connected to the FSB. Each component on the bus 24 monitors the traffic to determine whether the traffic is addressed to it. A given operation or “transaction” performed by Core 1 (26), such as a memory read operation, may comprise multiple phases. For example, consider an exemplary read operation performed by the Core 1 (26) using the FSB 24 to read data from the memory 32. Such a transaction may comprise an arbitration phase, a request A, a request B, a snoop phase, and a data phase. Each of these five phases is performed by transmitting or receiving a block of information over the FSB 24. The different phases are defined in the FSB output format and place the system into various states. For example, during the snoop phase, the transaction becomes globally visible so that the transaction is visible to each core 26, 28, and 30, thereby facilitating a shared memory architecture.
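The ordered phases of such a transaction can be sketched as a simple enumeration. This is purely illustrative; the names below paraphrase the phases described above and are not drawn from any actual bus specification:

```python
from enum import Enum, auto

class FsbPhase(Enum):
    """Phases of the exemplary FSB read transaction, in order of occurrence."""
    ARBITRATION = auto()
    REQUEST_A = auto()
    REQUEST_B = auto()
    SNOOP = auto()      # transaction becomes globally visible to all cores
    DATA = auto()
```

Iterating over `FsbPhase` yields the five phases in the order in which they occur on the bus.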
As noted above, certain test cases are better than others at testing particular processor components or conditions (i.e., events).
Below the components nodes A, B, and C are further nodes A1, A2, A3, B1, B2, and C1. Each of these nodes pertains to either an event or a sub-component that is associated with one of the components A, B, and C. For instance, node A1 may pertain to a first arithmetic action (e.g., multiplication of first and second operands), node A2 may pertain to a second arithmetic action (e.g., addition of first and second operands), node A3 may pertain to a third arithmetic action (e.g., subtraction of a second operand from a first operand), node B1 may pertain to a first queue in the memory subsystem B, node B2 may pertain to a second queue in the memory subsystem B, and node C1 may pertain to a full condition of the cache C. As is further illustrated in
In view of the above example, each leaf node, i.e., each end node from which no other nodes depend, pertains to a given event for which the design tester (i.e., user) may wish to collect information, whether that event is associated with a main component (e.g., A, B, or C) or a sub-component (e.g., B1 or B2). The event checker 16 and/or the event counters 20 (depending upon the particular system implementation) is/are configured to detect the occurrence of the various events for the purpose of enabling analysis of those events to provide the design tester with an idea of how well a given test case tests those particular events. Specifically, the event checker 16 and/or event counters 20 identify the number of occurrences of each tracked event, and a weight is applied to each event according to that event's importance relative to a particular system component or condition in which the design tester is interested. Accordingly, through such weighting, each test case can be evaluated to generate relative scores that measure the ability of the test case to test the given system component or condition. When such analysis is performed upon each test case of a group of test cases (e.g., each test case that has been run to date), an ordered list of test cases, from best to worst at testing the given component or condition, can be provided to the design tester.
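As an illustrative sketch, a coverage tree of this kind, with its leaf-node events, might be represented as nested mappings. The Python representation and helper function below are assumptions for illustration, not part of the disclosed system:

```python
# A hypothetical in-memory representation of the coverage tree described
# above: components A, B, and C, each with associated events or
# sub-components as children.  Leaf nodes (empty children) are the events
# whose occurrences are tracked.
coverage_tree = {
    "A": {"A1": {}, "A2": {}, "A3": {}},   # e.g., an arithmetic unit
    "B": {"B1": {}, "B2": {}},             # e.g., a memory subsystem
    "C": {"C1": {}},                       # e.g., a cache
}

def leaf_nodes(tree):
    """Return the names of the leaf (event) nodes, i.e., nodes with no children."""
    leaves = []
    for name, children in tree.items():
        if children:
            leaves.extend(leaf_nodes(children))
        else:
            leaves.append(name)
    return leaves
```

For the tree above, `leaf_nodes(coverage_tree)` yields the six events A1, A2, A3, B1, B2, and C1.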
Once each system component and functional coverage event of interest is identified, the mechanisms to detect and record occurrences of the various functional coverage events are provided within the verification system, as indicated in block 54. As mentioned in the foregoing, these mechanisms can include one or more of an event checker (e.g., checker 16,
Referring next to block 56, various test cases are run on the modeled architecture (e.g., processor design), and the functional coverage information that the verification system was configured to obtain is collected. By way of example, the functional coverage information can be stored in association with the various test cases in a test case database in which other test case results are stored.
At this point, some or all of the test cases that have been run can be evaluated by a test case evaluator program to determine which test case or cases is/are best for testing certain aspects of the system design, such as particular system components or conditions. To conduct this evaluation, the various test cases are analyzed and scored relative to their ability to test the component(s) or condition(s) of interest. This is accomplished by providing greater weight to collected information that pertains to the specific components and/or events in which the design tester is interested. Accordingly, with reference to block 58, the test case evaluator (e.g., in response to a selection made by the design tester) assigns weights to the components and/or functional coverage events so that the information associated with those components and/or events is allotted greater importance and, therefore, the test cases that have higher occurrences of the events associated with the components will receive higher scores.
Such weight can be assigned, for example, prior to conducting the test case evaluation. For instance, the design tester can be prompted to set those weights to suit his or her search for suitable test cases. Notably, weight can be individually assigned to the components as well as the events associated with those components. Therefore, in terms of the tree structure 42 of
With reference next to block 60, the test case evaluator processes the test cases. For instance, the evaluator processes all of the test cases contained in a test case database, or a subset of those test cases if the design tester has so chosen. In processing the test cases, the test case evaluator determines the number of occurrences of each event for which information was collected. Optionally, the test case evaluator limits the number of counted occurrences for certain events. In such a situation, in which occurrences of a given event beyond a given number (e.g., one) are not considered probative of the test case's value for testing a particular system component or condition, all occurrences beyond that number can be ignored. For instance, if the design tester is only interested in the first 10 occurrences of a given event, and 15 occurrences were observed for that event in a given test case, the number of occurrences counted for purposes of the evaluation is limited to 10. Such event count limits can, for example, be established by the design tester prior to running the test case evaluation.
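The capping described above amounts to clamping each raw occurrence count at its limit. A minimal sketch, with illustrative names:

```python
def capped_count(occurrences, limit=None):
    """Return the occurrence count, ignoring occurrences beyond the limit.

    A limit of None means the event is not capped.
    """
    if limit is None:
        return occurrences
    return min(occurrences, limit)
```

Here `capped_count(15, 10)` returns 10, matching the example in which only the first 10 of 15 observed occurrences are counted.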
In addition to limiting the event counts, the test case evaluator can further separately normalize the weights assigned to the components and events so as to render the results of the evaluation more suitable for comparison with each other. Such normalization comprises dividing each event's weight by the sum of all the applicable event weights. For example, in
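Such normalization, i.e., dividing each weight by the sum of all applicable weights, can be sketched in a few lines of Python (the function name is illustrative):

```python
def normalize(weights):
    """Normalize a mapping of weights so that the values sum to 1.

    Each weight is divided by the sum of all applicable weights.
    """
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}
```

For instance, events weighted 10 and 5 normalize to approximately 0.67 and 0.33, respectively.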
An example of the above-described process will now be described in view of the example tree structure 70 of
Assume that the design tester (i.e., user) considers overflow events to be more important than unsigned additions. In such a case, the design tester may assign a weight of 10 to leaf nodes A1 and B1, and a weight of 5 to leaf node A2. Assume further that the design tester considers the multiplier to be more complex (and therefore more important to test) than the adder. In such a case, the design tester may assign a weight of 10 to node B and a weight of 5 to node A. Therefore, the assigned weights are as follows: node A: 5; node B: 10; node A1: 10; node A2: 5; and node B1: 10.
Next, it is assumed that the design tester wishes to normalize those weights. Normalizing each component weight by the sum of the component weights (5+10=15), and each event weight by the sum of the weights of its sibling events, results in the following normalized weights: node A: 5/15, or approximately 0.33; node B: 10/15, or approximately 0.67; node A1: 10/15, or approximately 0.67; node A2: 5/15, or approximately 0.33; and node B1: 10/10, or 1.0.
In addition to normalizing the weights, assume that the design tester wishes to place limits on the number of event occurrences that will count in the test case evaluation. For example, assume that a limit of 1 is assigned to leaf node A1, a limit of 3 is assigned to leaf node B1, and a limit of 100 is assigned to leaf node A2.
If a given test case is observed to cause 2 overflow events on additions, 6 overflow events on multiplies, and 50 unsigned additions, the capped counts are 1 for node A1 (limited from 2), 3 for node B1 (limited from 6), and 50 for node A2, and the scores for each event, computed as the normalized event weight multiplied by the fraction of the event's limit that was reached, are as follows: node A1: (1/1)×0.67=0.67; node A2: (50/100)×0.33≈0.17; and node B1: (3/3)×1.0=1.0.
With those event scores, the component scores are calculated by multiplying each component's normalized weight by the sum of the scores of its events, as follows: component A: 0.33×(0.67+0.17)≈0.28; component B: 0.67×1.0=0.67.
Next, the overall score for the test case can be calculated as the sum of the two component scores, or 0.95.
In view of the above, the disclosed evaluation systems and methods provide an effective tool to aid design testers in selecting test cases to evaluate specific components of a design, or conditions that may arise during operation of the underlying architecture. In addition to identifying test cases that are effective in testing individual components, the evaluation systems and methods can be used to identify test cases that are best suited for testing multiple components. Such flexibility is possible through the weight assignment process. Furthermore, the evaluation systems and methods are easy to use, even for design testers who are not highly familiar with the underlying design, because relative scores are provided that enable simple identification of the most suitable test cases.
The processing device 92 can include a central processing unit (CPU) or an auxiliary processor among several processors associated with the computer system 90, or a semiconductor-based microprocessor (in the form of a microchip). The memory 94 includes any one or a combination of volatile memory elements (e.g., RAM) and nonvolatile memory elements (e.g., read only memory (ROM), hard disk, etc.).
The user interface device(s) 96 comprise the physical components with which a user interacts with the computer system 90, such as a keyboard and mouse. The one or more I/O devices 98 are adapted to facilitate communication with other devices. By way of example, the I/O devices 98 include one or more of a universal serial bus (USB), an IEEE 1394 (i.e., Firewire), or a small computer system interface (SCSI) connection component and/or network communication components such as a modem or a network card.
The memory 94 comprises various programs including an operating system 102 that controls the execution of other programs and provides scheduling, input-output control, file and data management, memory management, and communication control and related services. In addition to the operating system 102, the memory 94 comprises the RTL specification 12 identified in
Various programs (i.e., logic) have been described herein. Those programs can be stored on any computer-readable medium for use by or in connection with any computer-related system or method. In the context of this document, a computer-readable medium is an electronic, magnetic, optical, or other physical device or means that contains or stores a computer program for use by or in connection with a computer-related system or method. These programs can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.