The present invention relates generally to software testing, and more particularly, to a framework for facilitating the verification and validation of pieces of software.
The process of producing software is laborious, intellectually challenging, and error-prone. Like many engineered products, software undergoes testing to ensure that it performs or functions as designed by engineers and desired by customers. Whereas other engineered products are tested using various machinery and processes, software is tested by more software (“test software”) that must itself be written.
Test software is designed and written by a test team, as is common at many software organizations. The test team typically works side by side with a software development team. Laboring under many constraints, such as time and resources, the test team 102a typically produces monolithic test software 106a running on test infrastructure code 104a also developed by the test team 102a. The problem with monolithic test software 106a is its lack of reusability. For example, suppose a piece of monolithic test software is a function for creating files. Suppose further that this function creates all files with a particular name for a particular word processing application. Such a test software design is monolithic in that data, among other things, are closely coupled to the test software. In other words, the function for creating files cannot be used to create other files with different names for different applications.
Another problem with monolithic test software is that small changes made to the test software force a complete recompilation, which can be quite time consuming for software products that have many lines of code. Another problem is that monolithic test software 106a is not scalable because it is domain-specific and is not written to address testing problems that are general in nature. Monolithic test software 106a is also not as reliable as other pieces of software because it must be written anew for each function and cannot leverage existing test code, which may have a history of reliable performance.
The most pernicious problem of all arises in software organizations that have multiple test teams, such as test teams 102a-102c. Given various constraints, each test team creates monolithic test software 106a-106c independently of the other teams. Each test team 102a-102c also develops its own test infrastructure code 104a-104c so as to execute the monolithic test software 106a-106c and track test results. The test infrastructure code 104a-104c allows each test team to test specific requirements of a developed piece of software. With each test team 102a-102c developing its own test infrastructure code 104a-104c, duplication occurs. Yet despite this duplication, one test team cannot use another test team's test infrastructure code. Duplication also occurs in the creation of the monolithic test software 106a-106c in that common pieces of test software cannot be reused due to the monolithic design.
When an organization has only one software product, the inefficiency of monolithic test software created by one test team may not be problematic. But in any software organization that develops numerous software products that require testing by a multitude of test teams, having each team develop its own test infrastructure code and monolithic test software can cause the cost of software to rise. Without a better framework that facilitates reusability, scalability, and reliability, software may become too expensive for consumers to afford. Thus, there is a need for a system, method, and computer-readable medium for a better test framework while avoiding or reducing the foregoing and other problems associated with existing systems.
In accordance with this invention, a system, method, and computer-readable medium for testing software is provided. The system form of the invention includes a display, a user input facility, and a user interface presented on the display, as well as a software test framework, which comprises test items for representing test concepts that are disassociated from test context, test data, and test logic. The test context defines interrelated conditions in which a test item is to be executed. The test data defines the value of a test parameter. The test logic defines executable instructions that implement a test item.
In accordance with further aspects of this invention, a computer-readable medium form of the invention has one or more data structures stored thereon for use by a computing system to facilitate a software test framework. These data structures comprise a statement class for defining attributes and services connected with the treatment of a test item or a test scenario. These data structures also comprise a managed item class for defining attributes and services connected with a test item that is implemented with code that behaves and provides results defined by a predetermined architecture. The data structures further comprise an unmanaged item class for defining attributes and services connected with a test item that is implemented outside of the predetermined architecture.
In accordance with further aspects of this invention, a system form of the invention includes a display, a user input facility, and an application executed thereon for presenting a user interface on the display. The application comprises a first portion of the user interface for presenting a number of statements from which to build a test scenario. The number of statements includes a sequence statement for declaring executable instructions for causing test items to be executed in a particular order. The number of statements further includes a parallel statement for declaring executable instructions for causing test items to be executed in parallel.
In accordance with further aspects of this invention, a method form of the invention includes a computer-implemented method for testing software. The method comprises discovering published test items, each test item being disassociated from test context, test data, and test logic. The method further comprises creating test scenarios from combinations of test items. Each test scenario is organized as a tree structure with nodes that are linked together in a hierarchical fashion. The method also includes executing test scenarios using a software test framework to produce test results. The test results are analyzable to verify and validate a piece of software.
In accordance with further aspects of this invention, a computer-readable medium form of the invention includes a computer-readable medium having computer-executable instructions stored thereon that implement a method for testing software. The method comprises discovering published test items, each test item being disassociated from test context, test data, and test logic. The method further comprises creating test scenarios from combinations of test items. Each test scenario is organized as a tree structure with nodes that are linked together in a hierarchical fashion. The method also includes executing test scenarios using a software test framework to produce test results. The test results are analyzable to verify and validate a piece of software.
The foregoing aspects and many of the attendant advantages of this invention will become more readily appreciated as the same become better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein:
Various embodiments of the present invention provide a reusable software test framework, including abstract and concrete classes as well as a user interface, for assisting in creating test scenarios from test items. A test item 202 is a reusable test unit.
Test items, statements, and test scenarios 212a-212c can be aggregated in various combinations to form a test scenario 210.
A system 300 illustrates class diagrams in which each class is a generalized category that describes a group of more specific items, called objects.
A statement abstract class 302 defines virtual attributes and virtual services representing a declaration in the system 300 regarding the treatment of a test item or a test scenario comprising combinations of test items, statements, and test scenarios. The statement abstract class 302 counts among its members action statements that define how other statements should be treated (e.g., thread or parallel). The statement abstract class 302 also counts among its members control statements that apply some action to other statements, which can be statements, test items, or test scenarios. A managed item class 304 defines attributes and services connected with test items that are implemented with code that behaves and provides results defined by a predetermined architecture. One suitable predetermined architecture includes the .NET architecture of Microsoft Corporation. A particular test item that is an instance of the managed item class 304 implements a run method that receives a context data structure as a parameter and returns a Boolean result. An edge emanating from the managed item class 304 and terminating in an arrow-shaped figure at the statement class 302 indicates that there is an inheriting relationship between the managed item class 304 and the statement class 302.
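By way of example and not limitation, the relationship between the statement class 302 and the managed item class 304 may be sketched in Java as follows; the identifiers Statement, ManagedItem, and Context are assumed for illustration only and do not denote the invention's actual identifiers:

import java.util.Map;

/* Illustrative sketch only: the names Statement, ManagedItem, and Context
   are assumptions, not the invention's actual identifiers. */

/** Carries test parameter values and the interrelated conditions of a run. */
class Context {
    private final Map<String, Object> parameters;

    Context(Map<String, Object> parameters) {
        this.parameters = parameters;
    }

    Object get(String name) {
        return parameters.get(name);
    }
}

/** A declaration regarding the treatment of a test item or test scenario. */
abstract class Statement {
    /** Executes this statement; returns true on success, false on failure. */
    abstract boolean run(Context context);
}

/** A test item implemented inside the predetermined architecture. */
class ManagedItem extends Statement {
    @Override
    boolean run(Context context) {
        // Test logic would go here; parameter values are read from the context.
        return true;
    }
}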
An unmanaged item class 316 defines attributes and services connected with test items that are implemented outside of a predetermined architecture, such as the .NET architecture. The unmanaged item class 316 frees test developers to create test items in any suitable language, such as C, C++, Java, or scripting languages, among others. The unmanaged item class 316 allows the execution of legacy test code, whereas the managed item class 304 allows the execution of test code written in the previously discussed predetermined architecture. An edge emanating from the unmanaged item class 316 and terminating in an arrow-shaped figure at the statement class 302 indicates that there is an inheriting relationship between the unmanaged item class 316 and the statement class 302. A variation class 306 defines attributes and services representing a grouping of statements together in combination and collectively defining a test unit. An edge emanating from the variation class 306 and terminating in an arrow-shaped figure at the statement class 302 indicates an inheriting relationship in which the variation class 306 derives certain attributes and services from the statement class 302. The variation class 306 allows the execution and reporting of variations of a test scenario performed to test a piece of software.
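One illustrative, non-limiting way an unmanaged item might wrap legacy test code is to launch it as a separate process, as in the following sketch; the name UnmanagedItem and the process-based launch are assumptions for illustration, building on the Statement and Context sketch above:

/* Illustrative sketch only: UnmanagedItem and its process-based launch are
   assumptions about how legacy test code might be wrapped. */

/** A test item implemented outside the predetermined architecture. */
class UnmanagedItem extends Statement {
    private final String executablePath;

    UnmanagedItem(String executablePath) {
        this.executablePath = executablePath;
    }

    @Override
    boolean run(Context context) {
        try {
            // Launch the legacy test code as a separate process and treat
            // exit code 0 as pass and any other exit code as fail.
            Process process = new ProcessBuilder(executablePath).start();
            return process.waitFor() == 0;
        } catch (Exception e) {
            return false;
        }
    }
}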
A parallel class 308 defines attributes and services connected with a statement for causing two or more test items to be executed in parallel. An edge emanating from the parallel class 308 and terminating with an arrow-shaped figure at the statement class 302 indicates an inheriting relationship in which the parallel class 308 derives certain attributes and services from the statement class 302. A sequence class 317 defines attributes and services connected with a statement for causing two or more test items to be executed in sequence. An edge emanating from the sequence class 317 and terminating with an arrow-shaped figure at the statement class 302 indicates an inheriting relationship in which the sequence class 317 derives certain attributes and services from the statement class 302. A “for” class 310 defines attributes and services connected with a looping control statement that executes statements and test items a specified number of times. An edge emanating from the “for” class 310 and terminating with an arrow-shaped figure at the statement class 302 indicates an inheriting relationship in which the “for” class 310 derives certain attributes and services from the statement class 302. A thread class 312 defines attributes and services connected with defining an independent path of execution for test items in a test scenario or a number of test scenarios. An edge emanating from the thread class 312 and terminating with an arrow-shaped figure at the statement class 302 indicates an inheriting relationship in which the thread class 312 derives certain attributes and services from the statement class 302. A remote class 314 defines attributes and services connected with the declaration of executing test items or test scenarios on a remote machine by specifying a location and accessing information, such as user name, domain, and session. An edge emanating from the remote class 314 and terminating with an arrow-shaped figure at the statement class 302 indicates an inheriting relationship in which the remote class 314 derives certain attributes and services from the statement class 302.
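For concreteness, the sequence, parallel, and “for” classes can be understood as composite statements holding child statements. The following Java sketch is illustrative only; the names SequenceStatement, ParallelStatement, and ForStatement are assumptions, building on the Statement and Context sketch above:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

/** Runs child statements one after another, in a particular order. */
class SequenceStatement extends Statement {
    private final List<Statement> children;

    SequenceStatement(List<Statement> children) {
        this.children = children;
    }

    @Override
    boolean run(Context context) {
        for (Statement child : children) {
            if (!child.run(context)) {
                return false; // stop on the first failing child
            }
        }
        return true;
    }
}

/** Runs child statements concurrently, one thread per child. */
class ParallelStatement extends Statement {
    private final List<Statement> children;

    ParallelStatement(List<Statement> children) {
        this.children = children;
    }

    @Override
    boolean run(Context context) {
        List<Thread> threads = new ArrayList<>();
        AtomicBoolean allPassed = new AtomicBoolean(true);
        for (Statement child : children) {
            Thread thread = new Thread(() -> {
                if (!child.run(context)) {
                    allPassed.set(false);
                }
            });
            threads.add(thread);
            thread.start();
        }
        for (Thread thread : threads) {
            try {
                thread.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return allPassed.get();
    }
}

/** Executes a body statement a specified number of times. */
class ForStatement extends Statement {
    private final Statement body;
    private final int count;

    ForStatement(Statement body, int count) {
        this.body = body;
        this.count = count;
    }

    @Override
    boolean run(Context context) {
        for (int i = 0; i < count; i++) {
            if (!body.run(context)) {
                return false;
            }
        }
        return true;
    }
}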
An abstract generator class 328 defines attributes and services in connection with the generation of data for parameters used with various test items and test scenarios. The managed generator class 330 defines attributes and services connected with generating data for use by managed code written in a suitable predetermined architecture, such as the .NET architecture. An edge emanating from the managed generator class 330 and terminating with an arrow-shaped figure at the generator class 328 indicates an inheriting relationship in which the managed generator class 330 derives certain attributes and services from the generator class 328. An unmanaged generator class 332 defines attributes and services connected with the generation of data for use by code written outside of the predetermined architecture previously discussed so as to include legacy test code. An edge emanating from the unmanaged generator class 332 and terminating with an arrow-shaped figure at the generator class 328 indicates an inheriting relationship in which the unmanaged generator class 332 derives certain attributes and services from the generator class 328. An abstract validator class 322 defines attributes and services connected with the validation of the result of the execution of a test item. A managed validator class 324 defines attributes and services connected with validating managed code written in a predetermined architecture previously discussed. An edge emanating from the managed validator class 324 and terminating with an arrow-shaped figure at the validator class 322 indicates an inheriting relationship in which the managed validator class 324 derives certain attributes and services from the validator class 322. An unmanaged validator class 326 defines attributes and services connected with validating data for unmanaged code written outside of a predetermined architecture discussed previously. An edge emanating from the unmanaged validator class 326 and terminating with an arrow-shaped figure at the validator class 322 indicates an inheriting relationship in which the unmanaged validator class 326 derives certain attributes and services from the validator class 322. An abstract executor class 318 defines attributes and services connected with an execution engine that prescribes how test items and test scenarios will be executed. A code executor class 320 defines attributes and services connected with a particular execution engine, which inherits certain attributes and services from the abstract executor class 318 (visually illustrated by an edge emanating from the code executor class 320 and terminating with an arrow-shaped figure at the abstract executor class 318).
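The generator, validator, and executor class families may likewise be sketched as small abstract classes with concrete subclasses. Again, the Java identifiers below (Generator, Validator, Executor, CodeExecutor) are assumptions for illustration only:

/* Illustrative sketch only: Generator, Validator, Executor, and CodeExecutor
   are assumed identifiers for the class families described above. */

/** Generates a value for a named test parameter. */
abstract class Generator {
    abstract Object generate(String parameterName);
}

/** Validates the result produced by executing a test item. */
abstract class Validator {
    abstract boolean validate(Object result);
}

/** Prescribes how test items and test scenarios are executed. */
abstract class Executor {
    abstract boolean execute(Statement root, Context context);
}

/** A concrete execution engine that runs the statement tree directly. */
class CodeExecutor extends Executor {
    @Override
    boolean execute(Statement root, Context context) {
        return root.run(context);
    }
}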
A user interface 334 for a scenario test framework, consisting of implementations of the abstract and concrete classes discussed above, assists in building test scenarios.
A remote statement 336 declares executable instructions for causing test items or test scenarios to be executed on a remote computer by defining the location of the remote computer and access information, such as user name. A parallel statement 338 declares executable instructions for causing test items or test scenarios to be executed in parallel. A thread statement 340 declares executable instructions for causing an independent path of execution to occur for a particular test item or a group of test items under a test scenario. A for statement 342 declares executable instructions for implementing a loop in which a test item or a test scenario is executed a specified number of times. A user context statement 344 declares executable instructions that specify the level of user access in which to execute test items or test scenarios, such as an administrator or a guest user. A sequence statement 346 declares executable instructions for causing test items or test scenarios to be executed in a particular order. A leak detection statement 348 declares executable instructions for detecting whether a memory leak has occurred after the execution of a test item or a test scenario. A performance measurement statement 350 declares executable instructions for measuring computer performance, such as CPU usage, among other things, after execution of a test item or test scenario. A generator statement 352 declares executable instructions for generating data for a test item. A validator statement 354 declares executable instructions for validating data for a test item. A coverage statement 356 declares executable instructions for determining the code coverage of the execution of a particular test item or test scenario. A logging statement 358 declares executable instructions for logging test activities in connection with a test item or a test scenario. An error handling statement 360 declares executable instructions for reporting errors generated as a result of the execution of a test item or a test scenario. A deadlock detection statement 362 declares executable instructions for determining whether a deadlock has occurred, that is, a situation in which two or more programs are each waiting for a response from the other before continuing. A variation statement 364 declares executable instructions in order to group together a set of test items for execution.
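Many of these statements, such as the logging statement 358, the leak detection statement 348, and the performance measurement statement 350, can be viewed as wrappers that apply an action before and after a child statement runs. The following Java sketch is illustrative only; LoggingStatement is an assumed name, not the invention's identifier:

/* Illustrative sketch only: LoggingStatement is an assumed name showing how
   a wrapping statement, such as the logging statement 358, might apply an
   action around a child statement. */
class LoggingStatement extends Statement {
    private final String label;
    private final Statement child;

    LoggingStatement(String label, Statement child) {
        this.label = label;
        this.child = child;
    }

    @Override
    boolean run(Context context) {
        System.out.println("[" + label + "] starting");
        boolean passed = child.run(context);
        System.out.println("[" + label + "] " + (passed ? "passed" : "failed"));
        return passed;
    }
}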
A managed item (MI) statement 366 allows a test developer to insert a particular piece of test code written in a predetermined architecture, such as the .NET architecture. An unmanaged item (UMI) statement 368 allows a test developer to insert test code, which includes legacy test code or code written external to the predetermined architecture previously discussed. A section of the first portion 334a allows a test developer to execute a discovery query 370 to find test items so as to form a desired test scenario. A second portion 334b of the user interface 334 is a working area in which a test developer develops a test scenario in a suitable form. One suitable form includes a tree data structure containing one or more nodes that are linked together in a hierarchical fashion. For example, a sequence statement at line 372 defines a root node by the dragging and dropping of the sequence statement 346 from the first portion 334a. Line 374 defines another node of the tree structure formed by the dragging and dropping of the remote statement 336. Line 376 illustrates a create file test item created by dragging and dropping either the managed item statement 366 or the unmanaged item statement 368 onto the second portion 334b. When either the managed item statement 366 or the unmanaged item statement 368 is dropped onto the second portion 334b, a third portion 334c discloses suitable pieces of code, such as the create file function 378 stored at a location indicated by line 380, which is “C:/CF.DLL”. If the test item, such as the create file test item on line 376, requires parameters, the third portion 334c discloses line 382, where the test developer may specify a parameter on line 384, such as the name “FOO”.
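The scenario tree built above, with a sequence root, a remote statement, and a create file test item taking the parameter “FOO”, could be assembled programmatically along the following illustrative lines. The names RemoteStatement and ScenarioExample are assumptions, the remoting itself is elided, and the sketch reuses the Statement, Context, SequenceStatement, and UnmanagedItem sketches above:

import java.util.List;
import java.util.Map;

/** Assumed stub: would execute its child statement on a remote machine. */
class RemoteStatement extends Statement {
    private final String machine;
    private final String user;
    private final Statement child;

    RemoteStatement(String machine, String user, Statement child) {
        this.machine = machine;
        this.user = user;
        this.child = child;
    }

    @Override
    boolean run(Context context) {
        // A real implementation would dispatch to the named machine;
        // this sketch simply runs the child locally.
        return child.run(context);
    }
}

class ScenarioExample {
    public static void main(String[] args) {
        // Line 376: the create file test item, backed by C:/CF.DLL.
        Statement createFile = new UnmanagedItem("C:/CF.DLL");
        // Line 374: execute the item on a remote computer.
        Statement remote = new RemoteStatement("remote-host", "user", createFile);
        // Line 372: the sequence statement forming the root node.
        Statement scenario = new SequenceStatement(List.of(remote));
        // Line 384: the parameter "FOO" travels in the context.
        Context context = new Context(Map.of("Name", "FOO"));
        System.out.println("Scenario passed: " + scenario.run(context));
    }
}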
From terminal A (
From terminal A1 (
From terminal A2 (
From exit terminal B (
From terminal C2 (
From the exit terminal D, the method 400 continues to a set of method steps 406 defined between a continuation terminal (“terminal E”) and an exit terminal (“terminal F”). The set of method steps 406 defines steps where test scenarios are executed, and the result is captured for analysis. From terminal E (
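In one illustrative, non-limiting sketch, the overall flow of the method 400, discovering test items, composing a test scenario, and executing it to capture results for analysis, might be expressed as follows; all identifiers are assumed and reuse the sketches above:

import java.util.List;
import java.util.Map;

/* Illustrative sketch only: an assumed end-to-end flow of the method 400. */
class TestFrameworkFlow {
    public static void main(String[] args) {
        // Discover published test items (a single hard-coded item here).
        Statement item = new ManagedItem();
        List<Statement> items = List.of(item);
        // Create a test scenario from a combination of test items.
        Statement scenario = new SequenceStatement(items);
        // Execute the scenario through an execution engine and capture
        // the result so that it can be analyzed for verification.
        boolean passed = new CodeExecutor().execute(scenario, new Context(Map.of()));
        System.out.println("Test result: " + (passed ? "pass" : "fail"));
    }
}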
While the preferred embodiment of the invention has been illustrated and described, it will be appreciated that various changes can be made therein without departing from the spirit and scope of the invention.