1. Field of the Invention
The present invention relates generally to software application testing. In particular, the present invention involves testing software using a natural input focus sequence of the software.
2. Description of Related Art
Software testing is generally performed to detect and correct defects in software before placing the software in production or releasing the software for public use. Conventional testing includes scripting, generally written in a programming language such as Visual Basic, JavaScript, or Perl. Scripting allows a user to express a test as a sequence of programmed steps that controls the software under test. In particular, the programmed steps direct how the software is tested and what part of the software gets tested. The script attempts to force the software to perform a specific task, generally in a sequence not normal to the general operation of the software. The script attempts to test certain aspects of the software; however, scripting does not account for updates to the software occurring at runtime, and thus may not thoroughly verify the functionality of the software. Additionally, changes to the software may require corresponding updates to the scripts, which is inefficient.
Scripting may allow for checks to be embedded in the scripts to verify the correct or incorrect operation of the software. However, if a user has a plurality of scripts that exercise a particular subsection of the application, and the user wants to verify the software when a particular place in that subsection is accessed, the user will have to insert the check into the right place in many if not all of the scripts used. Also, checking can only be performed during the execution of the sequences provided by the user.
Another example of conventional software testing is based on a table driven technique, where a user specifies a sequence of steps in a tabular form. These tables typically specify an interaction point, e.g., a point in the software where data or a stimulus may be provided. The table can also provide the data or stimulus. Upon receiving an outcome, optional actions may be performed. Although the user is not expressing the test in a programming language, the test still represents a set of steps to be asserted on the software with the expectation that the software will follow a predetermined set of steps, similar to scripting.
Another example of a software testing method is model-based testing, where important functions of the software are modeled as a finite state machine and represented as a directed graph of edges and vertices, where the edges represent input actions and the vertices represent program states. Starting in one state and performing the action specified by an edge takes the model to the state at the other end of that edge.
A traversal of the directed graph model of the software represents an analogous sequence of steps in the actual software. A large number of tests covering many different paths in the software can be generated quickly by well-known and ad hoc graph traversal algorithms. Checking in model-based testing must be bound to the model states. These states are high-level abstractions of the actual application state, and the level of abstraction makes checking complicated and difficult. For this reason, model-based testing is primarily used to assure that the software does not terminate unexpectedly. Model-based testing is similar to scripting, table-driven testing, and keyword-based testing in that the test is an externally provided sequence of steps that is asserted on the software.
Conventional software testing also includes automatic test pattern generation (ATPG) where the software is abstracted to a set of Boolean equations or a Boolean logic diagram. By using a stuck-at fault model and automatic test pattern generation techniques developed for digital integrated circuits, a sequence of input stimuli and output responses is generated. ATPG is similar to model-based testing in that it uses a high-level model of the software as the basis for creating test sequences. It is also similar to the other previously mentioned testing techniques in that the test is an externally provided sequence of steps that is asserted on the software.
Any shortcoming mentioned above is not intended to be exhaustive, but rather is among many that tend to impair the effectiveness of previously known techniques for software testing; however, the shortcomings mentioned here are sufficient to demonstrate that the methodologies appearing in the art have not been satisfactory and that a significant need exists for the techniques described and claimed in this disclosure.
The present disclosure provides a method for system-level functional test and a verification platform that works at the user interface level. In one respect, a method for testing a software application is provided. The method may include monitoring the software application during natural execution to determine an active focus site of the software application. The method may generate a stimulus and provide the stimulus to the active focus site. The stimulus may be generated based on a current execution state of the application.
In some respects, the method may include steps for verifying the behavior of the software application before and after providing the stimulus. In particular, the method may first determine the expected response of the software application to the stimulus and may monitor the response of the application to the stimulus to see if it differs from the expected response.
An “active focus site” as described and used in this disclosure refers to an input site of the application to which an operating system will direct input from external sources including, for example, other software, a storage device, a human interaction site, the Internet, a keyboard, a mouse, or the like.
“Focus sites” as described and used in this disclosure are input points of the application.
“Provider” as described and used in this disclosure, refers to an object that generates a stimulus for use in interacting with an application under test (AUT).
“Bindings” as described and used in this disclosure, refer to a connection of a form or document to a behavior or a control to a provider and optionally, at least one rule.
A “template” as described and used in this disclosure, includes a set of configuration files containing a partial configuration intended as a starting point for a testing configuration process.
A “rule”, as described and used in this disclosure, includes the expected state of an application under test (AUT). This may include, for example, the state of the application before and/or after the stimulus is applied. The rule may also include the conditions under which that expectation is applicable, optional information to be remembered for future use by this rule or other testing elements, and the outcomes of matching or not matching the expected state of the AUT or the applicable conditions.
Other features and associated advantages will become apparent with reference to the following detailed description of specific embodiments in connection with the accompanying drawings.
The following drawings form part of the present specification and are included to further demonstrate certain aspects of the present invention. The figures are examples only. They do not limit the scope of the invention.
FIG. 7 shows a GUI for editing the settings for a provider, in accordance with embodiments of the present disclosure.
The disclosure and the various features and advantageous details are explained more fully with reference to the nonlimiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well known starting materials, processing techniques, components, and equipment are omitted so as not to unnecessarily obscure the invention in detail. It should be understood, however, that the detailed description and the specific examples, while indicating embodiments of the invention, are given by way of illustration only and not by way of limitation. Various substitutions, modifications, additions, and/or rearrangements within the spirit and/or scope of the underlying inventive concept will become apparent to those skilled in the art from this disclosure.
The present disclosure provides for a system-level functional test and verification platform that works at the user interface level. In particular, embodiments of the present disclosure provide automatic methods for observing an application under test (e.g., a software program) and dynamically responding to the application. The method allows for working with native window applications and browser-based applications that run under, for example, Microsoft Windows® operating systems and Microsoft Internet Explorer®. The software testing techniques can support users in adding customizable interaction and verification elements, configuration templates, and reports, and in redefining Pass or Fail criteria.
Referring to
In some embodiments, tester 104 may execute on a networked device, such as processor 106, and may constitute a terminal device running software from a remote server, wired or wirelessly. For example, tester 104 may be used to test AUT 102, which may be at a remote location accessible through a network link. Output, if necessary, may be achieved through one or more known techniques such as an output file, printer, facsimile, e-mail, web-posting, or the like. Storage may be achieved internally and/or externally and may include, for example, a hard drive, CD drive, DVD drive, tape drive, floppy drive, network drive, flash memory, or the like. Processor 106 may use any type of monitor or screen known in the art for displaying information, such as test configurations, verification reports, etc. In other embodiments, a traditional display may not be required, and processor 106 may operate through appropriate voice and/or key commands.
In one embodiment, AUT 102 may be stored in a read-only-memory (ROM). Alternatively, AUT 102 may be stored on the hard drive of processor 106, on a different removable type of memory, or in a random-access memory (RAM). AUT 102 may also be stored for example, on a computer file, a software package, a hard drive, a FLASH device, a floppy disk, a tape, a CD-ROM, a DVD, a hole-punched card, an instrument, an ASIC, firmware, a “plug-in” for other software, web-based applications, or any combination of the above.
In one embodiment, tester 104 may model the AUT as a set of interaction elements organized into groupings called forms and/or documents. These forms or documents generally correspond to a visual grouping of elements presented to the user, and as such, the terms form and document may be used interchangeably throughout the disclosure. The groupings also generally correspond to the collection of controls placed on a form or dialog by a developer in an application that runs under Microsoft Windows® operating systems, or to the collection of HTML elements placed in an HTML page or document. These collections of elements can be created statically as the program is created or dynamically as it executes.
As an AUT executes, the input focus shifts from element to element and document to document. Tester 104 may use objects, called observers, to look at the application under test and map the focus sites of the application into the document and/or element model. Focus sites, as noted above, are input points of the application. Being able to uniquely identify each document and element pair allows tester 104 to track the execution of the application. For example,
In some applications, including traditional HTML pages and native applications, the AUT controls and forms may be mapped directly to controls and forms in tester 104 by the observers, i.e., a one-to-one mapping. More complex application implementation techniques may dynamically create or reuse documents and elements, which may require a more complex mapping process. However, most applications provide some form of visual cues that can be used to help identify the document and element with focus. These applications generally reuse a floating text box to capture input for many different input sites. Since each site occurs at a different place on the screen, the position of the floating text box identifies its intended use. For example, many applications display information in tabular form in a table or grid. In many implementations, the table or grid is not directly interactive. Navigating to a particular item may be accomplished with the arrow keys or mouse, and editing the item occurs in a text box that is superimposed over the background table or grid. Visually, the user appears to be editing data directly in the grid or table. Rather than create a unique text box for each item in the table, the application can create one or just a few text boxes and reuse them by changing their position as needed. As such, the observer may need to differentiate each reuse of the text box so the tester treats editing each item uniquely. In one embodiment, the observer may determine the row and column location of the text box over the grid and may incorporate a combination of the column name and row number into the returned name, allowing the observer to map a reused text box to many unique identifiers. The reuse and superimposition of controls is a common technique and is used in many different applications, including browsers like Microsoft Internet Explorer.
Other situations can arise where the AUT contains a plurality of uniquely named elements but due to the nature of the application and the testing goals, a plurality of elements should be treated as the same element. In this case, the observer may map many different names to the same name. This situation occurs in automatically generated tables in HTML applications.
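By way of illustration, the following is a minimal C# sketch of the two remapping directions described above: expanding a reused text box into many unique names, and folding many generated names into one. The class and method names and the naming conventions are hypothetical; the disclosure does not prescribe a particular interface.

```csharp
// Hypothetical sketch of observer name remapping; the naming scheme
// (column name plus row number) follows the description above, but the
// exact format is an assumption.
public static class FocusSiteMapper
{
    // Expand: incorporate the column name and row number under a reused,
    // floating text box so each reuse maps to a unique identifier.
    public static string MapReusedTextBox(string baseName, string columnName, int rowNumber)
    {
        return baseName + "." + columnName + "[" + rowNumber + "]";
    }

    // Fold: map many uniquely named elements (e.g., cells of an
    // automatically generated HTML table) to the same name so the
    // tester treats them as one element.
    public static string FoldGeneratedName(string generatedName)
    {
        // Assumed convention: strip a trailing numeric suffix,
        // e.g., "rowCell17" -> "rowCell".
        return generatedName.TrimEnd("0123456789".ToCharArray());
    }
}
```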
In one embodiment, tester 104 may include a test main loop which includes an initial observation (step 200) of an application under test (AUT) during execution as shown in
In step 202, tester 104 may perform a behavior modification based on a behavior object. A behavior object maintains a history of the focus sites and makes decisions for altering the focus site based on the current focus site and the execution history of the AUT. This is useful to detect undesired loops or other conditions where the AUT is failing to progress as desired during testing. In one embodiment, tester 104 may know which user interface element is active in the AUT (the focus site) and can choose to proceed with an input, advance to the next focus site, jump to a different focus site, or make other behavioral choices. Based on the results of behavior modification, some or all of the subsequent steps can be abbreviated or skipped. The general purpose of behavior modification is to assert control over the natural input flow of the AUT when that flow becomes problematic for testing purposes.
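As a non-limiting illustration, a behavior object of the kind described in step 202 might be sketched in C# as follows. The Eval signature, the history representation, and the loop threshold are assumptions made for illustration only.

```csharp
using System.Collections.Generic;

// Minimal sketch of a behavior object that records the focus-site
// history and intervenes when the AUT fails to progress. All names
// and the threshold value are illustrative.
public class LoopBreakingBehavior
{
    private readonly List<string> history = new List<string>();
    private const int LoopThreshold = 3; // assumed repeat count before intervening

    // Returns a recommendation for the main loop: "Proceed" with regular
    // input, or "ShiftFocus" to assert control over the natural input flow.
    public string Eval(string focusSite)
    {
        history.Add(focusSite);

        // Count how many times in a row the same focus site has recurred.
        int repeats = 0;
        for (int i = history.Count - 1; i >= 0 && history[i] == focusSite; i--)
            repeats++;

        return repeats >= LoopThreshold ? "ShiftFocus" : "Proceed";
    }
}
```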
In step 204, a stimulus may be generated. Stimulus generation creates the stimulus that will be applied to the AUT at a later step. In one embodiment, the stimulus may be created by stimulus generation functions called providers. Provider, as described and used in this disclosure, refers to an object that generates a stimulus for use in interacting with the AUT. For example, the provider may emulate what a user may be providing via an input device, including, but not limited to, a keyboard, a mouse, a microphone, etc. The choice of provider may be determined by the association or binding of a provider to the active user interface element in the configuration file. Bindings, as described and used in this disclosure, refer to a connection of a form or document to a behavior, or of a control to a provider and, optionally, at least one rule. Examples of bindings include, without limitation, a file open command, a file save command, a print command, etc.
A user may configure these bindings before execution of the application begins. If an element is encountered during execution that is not present in the bindings, tester 104 may automatically add a binding entry for the new element and associate it with a default Provider based on the new element's name or type. In some embodiments, step 204 may be skipped if the behavior recommends something other than regular input to the AUT. Tester 104 may choose to skip the active focus site, advance to another focus point, or proceed with other behavioral choices.
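A minimal sketch of the binding lookup described above, including the automatic creation of a default binding for a newly encountered element, might look as follows in C#. The dictionary layout and the DefaultProviderFor helper are assumptions; they are not the disclosed file format or API.

```csharp
using System.Collections.Generic;

// Sketch of provider-binding lookup keyed by element name; layout assumed.
public class BindingTable
{
    private readonly Dictionary<string, string> providerByElement =
        new Dictionary<string, string>();

    public string LookupProvider(string elementName, string elementType)
    {
        string provider;
        if (!providerByElement.TryGetValue(elementName, out provider))
        {
            // Element not present in the bindings: automatically add a
            // binding entry associated with a default provider chosen
            // from the element's name or type.
            provider = DefaultProviderFor(elementName, elementType);
            providerByElement[elementName] = provider;
        }
        return provider;
    }

    private static string DefaultProviderFor(string name, string type)
    {
        // Hypothetical default choice by element type.
        return type == "TextBox" ? "Text" : "Default";
    }
}
```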
Once the stimulus is generated, a first verification stage (V1) may begin (step 206). In V1, tester 104 may evaluate rules (if any) associated with the focus site. A rule, as described and used in this disclosure, includes the expected state of the AUT before or after the stimulus is applied, the conditions under which that expectation is applicable, optional information to be remembered for future use by this rule or other testing elements, and the outcomes of matching or not matching the expected state of the AUT or the applicable conditions. In one embodiment, the rule may include a plurality of portions, as shown in
If the filter part of the rule indicated that the check part should be evaluated, then the check is evaluated in step 206 or 212 of
As noted above, the outcome of each rule may include, but is not limited to, Pass, Fail, Schedule, Immediate, or Ignore. In some embodiments, this step may be skipped if the behavior recommends something other than regular input to the AUT. If the rule is based only on the current state of the application, then the V1 evaluation may result in Pass or Fail. If the rule is based on how the AUT responds to a stimulus, then the V1 evaluation may issue a Schedule to cause a second verification stage (V2) evaluation of the rule to occur after the stimulus is applied. If the stimulus makes the rule not applicable, then the V1 evaluation results in an Ignore. A typical example of this situation is a rule for a button: if the button is not going to be activated by the stimulus, then the V1 evaluation will result in Ignore. Referring to
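The first verification stage might be sketched as follows in C#, using the outcomes named above. The IRule interface, its filter/check split, and the AppState placeholder are assumptions made for illustration.

```csharp
// Sketch of a V1 rule evaluation. The outcome names follow the text;
// the interface is hypothetical.
public enum Outcome { Pass, Fail, Schedule, Immediate, Ignore }

public class AppState { /* placeholder for a snapshot of the AUT state */ }

public interface IRule
{
    bool Filter(AppState state, string stimulus); // applicability conditions
    bool Check(AppState state);                   // does the expected state hold?
    bool DependsOnResponse { get; }               // needs a post-stimulus (V2) check?
}

public static class VerificationStage1
{
    public static Outcome Eval(IRule rule, AppState state, string stimulus)
    {
        // The stimulus makes the rule not applicable (e.g., a button
        // rule when the button will not be activated).
        if (!rule.Filter(state, stimulus))
            return Outcome.Ignore;

        // The rule depends on how the AUT responds to the stimulus, so
        // schedule a V2 evaluation after the stimulus is applied.
        if (rule.DependsOnResponse)
            return Outcome.Schedule;

        // The rule depends only on the current state of the application.
        return rule.Check(state) ? Outcome.Pass : Outcome.Fail;
    }
}
```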
A stimulus may be applied (step 208), and the actions of the AUT after the stimulus application may be observed by tester 104. In one embodiment, tester 104 may determine the active form or document and which user interface element on that form or document will receive input from a keyboard, mouse, or other external sources. This information is called the focus site and is the basis for actions by the other steps of the main loop. Users can create custom observers by deriving from the Observer base class.
In step 212, a second verification stage (V2) may be performed. Verification stages V1 and V2 can occur before and after the stimulus is applied, respectively. In particular, step 212 determines if the AUT responded as expected to the stimulus that was applied in step 208. The V2 evaluation can result in the outcomes Pass, Fail, or Ignore, but never Schedule. The Ignore outcome should be interpreted as meaning “not applicable.”
In step 214, a focus shifting process may be performed. In some embodiments, the focus site can be shifted for a variety of reasons, including, but not limited to, a random shift triggered by randomization, or selection of an element that does not participate in the main tab sequence, such as menus, toolbars, and graphical hotspots. These are referred to collectively as non-tab-sequence elements (NTSEs), which may have to be handled separately in the main loop because the normal way of advancing may not cause NTSEs to receive the focus.
In some embodiments, step 214 may not be required. The frequency of occurrence may be dependent on the randomization probability and the non-tab element probability values read from the configuration. If both probabilities are 0 then no shifting will occur. If both types of shifts are triggered, the NTSE shift takes precedence. If a shift occurs, the application under test may follow the steps shown in
In other embodiments, tester 104 may generate random focus shifts, which may emulate random user inputs from a keyboard, mouse, tab sequence, or the like. If a focus shift occurs, it is accompanied by a new observation before proceeding to Behavior Modification.
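The focus-shift decision of step 214 might be sketched as follows, assuming the two probabilities are read from the configuration as values between 0 and 1. The class and return values are hypothetical; the precedence of NTSE shifts over random shifts follows the text above.

```csharp
using System;

// Sketch of the step 214 shift decision; names and encoding assumed.
public class FocusShifter
{
    private readonly Random rng = new Random();
    private readonly double randomizationProbability; // random-shift probability
    private readonly double ntseProbability;          // non-tab element probability

    public FocusShifter(double randomizationProbability, double ntseProbability)
    {
        this.randomizationProbability = randomizationProbability;
        this.ntseProbability = ntseProbability;
    }

    // Returns "Ntse", "Random", or "None". If both probabilities are 0,
    // no shifting will ever occur.
    public string Decide()
    {
        bool ntseShift = rng.NextDouble() < ntseProbability;
        bool randomShift = rng.NextDouble() < randomizationProbability;

        if (ntseShift) return "Ntse";    // NTSE shift takes precedence
        if (randomShift) return "Random";
        return "None";
    }
}
```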
The method steps of
The behavior modification (step 202), stimulus generation (step 204), stimulus application (step 208), and verification (step 206 and/or step 212) may be executable code objects that are late bound in the execution process. The code objects for each are specified in testing configuration files that are read at startup and may be executed on a computer, such as processor 106 of
Creating a Testing Configuration
In one embodiment, the test configuration may provide a graphical user interface (GUI) similar to the GUI shown in
After the project is selected, a template for each selected project may be determined, as shown in
To configure a test for a particular project, a user may select an editor that may provide information about the bindings and provider groups among other information, as shown in
To create a provider binding with the GUI of
Referring to the GUI illustrating the “Bindings” tab of
The GUI illustrated in
The GUI of
To create a rule binding with a GUI, similar to the one shown in
The GUI of
The “Default Bindings” tab of the GUI shown in
The “Provider Groups” tab of the GUI shown in
The GUI of
The GUI of
Executing and Verifying the Test
Upon configuring the test parameters, an application may be tested. Referring to
It is noted that the verification steps may be used to confirm that the AUT meets a predetermined specification. In one embodiment, the verification steps may be evaluated continuously (comparing the actual behavior against the expected behavior) during the execution of the AUT. While the verification steps may not prove definitively that an AUT has met the predetermined specification, they may provide a probabilistic measure of confidence that it has.
In one embodiment, the verification steps may be performed independent of the testing of the application. In particular, a tester 104 may be provided that omits steps 202, 204, 208 and 214 but implements steps 206, 210, and 212 to track the execution of the AUT and compare the expected response to the actual response. Such a tester would not actively interact with the AUT but would passively observe and check the behavior of the AUT. Such a tester could be used to check the behavior of the AUT while the AUT is driven by other means such as users using the AUT in production use.
Description of the Implementation
The testing environment can be implemented using, for example, the Microsoft Visual Studio .Net 2003 development environment, the .Net Version 1.1 framework, and the C# and C++ languages. The testing environment can be designed to run under the Microsoft Windows XP operating system, provide GUIs, and test applications that run under Microsoft Windows XP and Microsoft Internet Explorer 6. One of ordinary skill in the art will recognize that other platforms and browsers may be used.
The Test Configuration Files
The test configuration files contain text, structured as XML, that tester 104 can use for initialization and testing of the AUT. There are three configuration files, referred to as the main configuration file, the bindings file, and the defaults file. The main configuration file specifies the other two configuration files, the applications that will be tested, data connections, various test run parameters, and the DLLs that will be used during testing for providers, rules, behaviors, and the like. An example main configuration file is shown in
In one embodiment, DLL file names are specified for observers, behaviors, rules, providers, and responders. There may be at least one DLL file for each and there may be multiple entries of each type. In one embodiment, referring to
1. If the AUT is tested or just launched (Test);
2. If rules are evaluated (Verify);
3. If screen pictures are taken at each step (Trace);
4. If the AUT is closed or left open at the end of testing (Close);
5. Which observer will be used with the AUT (Observer);
6. Which responder will be used with the AUT (Responder);
7. The command line arguments to set for the AUT when starting the AUT (CommandLine);
8. The number of test cycles to perform before terminating testing (RunLimit);
9. The delay in milliseconds between each test cycle (Delay);
10. The delay in milliseconds between starting the AUT and starting testing (InitialDelay); and
11. The relative probabilities for following the AUT tab sequence (TabSequenceActivity), random focus changes (Randomization), selecting menu items (MenuActivity), and selecting a graphical area (HotspotActivity), as illustrated in the configuration-reading sketch below.
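For illustration, the following C# sketch reads per-application settings using the parameter names listed above. The XML shape assumed here (an Application element carrying the parameters as attributes) is hypothetical; the disclosure states only that the configuration files are structured as XML.

```csharp
using System.Xml;

// Sketch of reading the per-application test parameters named above.
// The element/attribute layout is an assumption.
public static class MainConfigReader
{
    public static void Load(string path)
    {
        XmlDocument doc = new XmlDocument();
        doc.Load(path);

        foreach (XmlElement app in doc.SelectNodes("//Application"))
        {
            bool test   = bool.Parse(app.GetAttribute("Test"));    // test or just launch
            bool verify = bool.Parse(app.GetAttribute("Verify"));  // evaluate rules
            bool trace  = bool.Parse(app.GetAttribute("Trace"));   // screen pictures
            bool close  = bool.Parse(app.GetAttribute("Close"));   // close AUT at end
            string observer    = app.GetAttribute("Observer");
            string responder   = app.GetAttribute("Responder");
            string commandLine = app.GetAttribute("CommandLine");
            int runLimit     = int.Parse(app.GetAttribute("RunLimit"));     // test cycles
            int delay        = int.Parse(app.GetAttribute("Delay"));        // ms per cycle
            int initialDelay = int.Parse(app.GetAttribute("InitialDelay")); // ms at start
            // Relative activity probabilities:
            double tabSequence = double.Parse(app.GetAttribute("TabSequenceActivity"));
            double random      = double.Parse(app.GetAttribute("Randomization"));
            double menu        = double.Parse(app.GetAttribute("MenuActivity"));
            double hotspot     = double.Parse(app.GetAttribute("HotspotActivity"));
            // ... the values would then be handed to the test main loop.
        }
    }
}
```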
The main configuration file may contain one or more application specifications.
Next, a data source specification is specified, which defines a type, connection string, and selection statement for use in creating a data set for the providers and rules during testing.
In general, the bindings file contains the connections between AUT elements and testing specifications. The file may contain a plurality of group specifications. A group specification includes a plurality of stimulus provider specifications and may be an alternative or composition group. The result of evaluating an alternative group is the result of evaluating one member chosen at random, based on the relative probabilities specified for each member. The result of evaluating a composition group is the concatenation of the results of evaluating each member in the order the members appear in the group. The group named grpCelsius in
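The two group evaluation semantics might be sketched in C# as follows. The member representation and the weighted random selection are assumptions; only the alternative/composition behavior is taken from the text above.

```csharp
using System;
using System.Collections.Generic;
using System.Text;

// Sketch of alternative vs. composition group evaluation; layout assumed.
public class GroupMember
{
    public Func<string> Evaluate; // the member provider's evaluation
    public double Weight;         // relative probability (alternative groups)
}

public class ProviderGroup
{
    public bool IsAlternative; // otherwise a composition group
    public List<GroupMember> Members = new List<GroupMember>();
    private static readonly Random rng = new Random();

    public string Eval()
    {
        if (IsAlternative)
        {
            // Evaluate one member chosen at random, weighted by the
            // relative probability specified for each member.
            double total = 0;
            foreach (GroupMember m in Members) total += m.Weight;
            double pick = rng.NextDouble() * total;
            foreach (GroupMember m in Members)
            {
                pick -= m.Weight;
                if (pick <= 0) return m.Evaluate();
            }
            return Members[Members.Count - 1].Evaluate();
        }

        // Composition: concatenate the members' results in order.
        StringBuilder sb = new StringBuilder();
        foreach (GroupMember m in Members) sb.Append(m.Evaluate());
        return sb.ToString();
    }
}
```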
The bindings file may also contain zero or more form specifications. A form represents an object that may contain multiple interaction elements.
The controls in
Referring to
If no match is found, tester 104 may set the behavior or provider property in the binding to a fixed value “Default.” The Default behavior is designed to work with most forms and the Default provider echoes the attribute setting. If the user leaves the Default provider's attribute blank, the Default provider will effectively do nothing. Tester 104 may also compare both TypeName and Type in the default specifications so that a value for TypeName can appear more than once, but the combination of TypeName and Type must be unique. This permits specifying different defaults for the same name when used as different types of elements. For example, a DataGrid may appear as both type Form and type Control. This is useful because tester 104 may map a DataGrid as a form under some circumstances and as a control under others.
Initialization for Test Execution
Initialization begins by reading the configuration files (e.g.,
Referring to
Once the base class type is found, the base type may be used to select all objects in the assembly that are derived from the base class. An exception is made for type Group because group evaluations are handled differently than other providers. Each type object that meets the selection criteria is instantiated to make sure the object can be instantiated when needed during test execution. The type object may be added to the array of provider objects. The load method shown may create an array of provider instances instead of type objects if desired, and the choice is based on whether a single instance or multiple instances of a given provider type are needed in tester 104. The loading of the other late bound types is handled similarly.
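The loading step might be sketched with .Net reflection as follows; the ProviderLoader name and the decision to store type objects rather than instances are illustrative.

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Sketch of loading late-bound provider types from a DLL named in the
// configuration files. Names are illustrative.
public static class ProviderLoader
{
    public static List<Type> Load(string dllPath, Type providerBaseClass)
    {
        List<Type> providerTypes = new List<Type>();
        Assembly assembly = Assembly.LoadFrom(dllPath);

        foreach (Type type in assembly.GetTypes())
        {
            // Select only objects derived from the base class.
            if (!type.IsSubclassOf(providerBaseClass))
                continue;

            // Instantiate once now to make sure the object can be
            // instantiated when needed during test execution.
            Activator.CreateInstance(type);

            providerTypes.Add(type);
        }
        return providerTypes;
    }
}
```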
The Test Execution Cycle
Once initialization is complete, the applications are started and testing begins. The first step in a test cycle is to execute an observer and determine the interaction element in the AUT, and when necessary, perform a remapping to make the active focus site names and types more useful in testing.
The observer function is named Eval and takes as input a process object. The observer may wait for the process to finish any pending processing and then determine the threads in the process that contain an input queue. From those threads, the observer may find one that has an active focus site. The observer may determine if the focus site is an element within a browser window and, if so, retrieve the document object model (DOM) for the document associated with the browser window and set the name and type of the active and focus sites from elements within the DOM.
The observer may return, among other information, a thread information structure containing the active and focus site handles, the remapped names and types of the active and focus sites, the browser and DOM objects, the active element in the DOM, and a flag indicating the focus site is an element in a browser window. If the focus site is not in a browser window, then the DOM and browser related return values are not useful.
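On Windows, one way an observer can locate the active and focus windows for a GUI thread is the documented GetGUIThreadInfo API, sketched below. Error handling, the remapping step, and the browser/DOM path are omitted; this is an illustrative fragment, not the disclosed implementation.

```csharp
using System;
using System.Runtime.InteropServices;

// Sketch of focus-site discovery via the Win32 GetGUIThreadInfo API.
public static class FocusProbe
{
    [StructLayout(LayoutKind.Sequential)]
    private struct RECT { public int Left, Top, Right, Bottom; }

    [StructLayout(LayoutKind.Sequential)]
    private struct GUITHREADINFO
    {
        public int cbSize;
        public int flags;
        public IntPtr hwndActive;   // the active window
        public IntPtr hwndFocus;    // the window with keyboard focus
        public IntPtr hwndCapture;
        public IntPtr hwndMenuOwner;
        public IntPtr hwndMoveSize;
        public IntPtr hwndCaret;
        public RECT rcCaret;
    }

    [DllImport("user32.dll", SetLastError = true)]
    private static extern bool GetGUIThreadInfo(uint idThread, ref GUITHREADINFO info);

    // Returns the active-window and focus-site handles for the thread,
    // or IntPtr.Zero for both if the call fails.
    public static void GetFocusSite(uint threadId, out IntPtr active, out IntPtr focus)
    {
        GUITHREADINFO info = new GUITHREADINFO();
        info.cbSize = Marshal.SizeOf(typeof(GUITHREADINFO));
        if (GetGUIThreadInfo(threadId, ref info))
        {
            active = info.hwndActive;
            focus = info.hwndFocus;
        }
        else
        {
            active = IntPtr.Zero;
            focus = IntPtr.Zero;
        }
    }
}
```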
After the observer concludes, tester 104 determines if the focus site should shift as a random jump event or because, for example, the focus site has failed to advance from one element to another. If a focus shift occurs, the name and type of the active focus site are changed.
Next, tester 104 may execute verification stage 2 (step 212) and may execute any checks that were scheduled in the prior verification stage 1 step (step 206). Tester 104 may evaluate the behavior object associated with the active window. The behavior object can alter the focus site setting to achieve a more tester-useful value. The choice of altering the focus setting is very specific to the type of the active window. For example, the natural sequence of a file open dialog starts in the file name text box, advances to the filter selection combo box, and then to the open button. This may not be desirable for testing because testing may be more effective if the filter selection is left unchanged. So the behavior may, through intentional focus shifting, implement a virtual sequence that flows from the file name text box to the open button, to the file selection list, and then to the filter selection combo box. The behavior object may also be used to check for cycles in focus sequencing and shift focus to break a cycle.
After the behavior is evaluated, the provider for the focus site may be retrieved and evaluated.
The ControlLookup function retrieves the name and attributes for the provider bound to the active control. The names and types of the active focus site are passed in to the ControlLookup function, and the name and attribute of the bound provider are returned. Next, the returned provider name is used to retrieve an executable instance of the provider with the same name by calling the GetInstance function with the name of the provider. GetInstance returns an executable instance of the provider object with the corresponding name. Next, the Eval method object is retrieved from the provider instance using the GetMethod function. Arguments for the Eval method are copied into an object array named ProviderParams, and the Eval method is executed by calling Invoke and passing it the provider instance object and the parameters. The Eval method returns values in the parameters array, and the code fragment shows retrieving values from the parameter array. The response return value is text, or a command such as the .Net SendKeys.Send method accepts, to be sent to the AUT. The comment text, if any, is sent to the log. The purpose of the comment is to provide a way for the provider to give the user an explanation for how it chose to generate the response. The sequencename parameter is the returned name, if any, of a test sequence. If a sequence name is present, tester 104 will follow this sequence of steps like a traditional script or table based tester. The dbcommand parameter, if present, causes data source actions, such as advance to next row or reset row number, to occur after any checking. The delay in executing the command permits the checking to use the same row in a data table as the provider used. The lookup, instantiation, and execution techniques of FIG. 23 may be used for other late bound objects, including Rules, Observers, Behaviors, and Responders.
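The lookup-instantiate-invoke sequence just described might look like the following C# fragment. The parameter layout of Eval and the surrounding helper are assumptions based on the description above.

```csharp
using System.Reflection;

// Sketch of late-bound provider invocation; the parameter positions in
// ProviderParams are assumed for illustration.
public static class ProviderInvoker
{
    public static string EvalProvider(object providerInstance)
    {
        // Retrieve the Eval method object from the provider instance.
        MethodInfo eval = providerInstance.GetType().GetMethod("Eval");

        // Arguments are copied into an object array; Invoke writes any
        // by-reference results back into the same array.
        object[] providerParams = new object[4];
        eval.Invoke(providerInstance, providerParams);

        string response     = (string)providerParams[0]; // text/commands for the AUT
        string comment      = (string)providerParams[1]; // explanation, sent to the log
        string sequenceName = (string)providerParams[2]; // optional scripted sequence
        string dbCommand    = (string)providerParams[3]; // deferred data-source action

        // Only the response is returned here; the other values are used
        // elsewhere in the test cycle.
        return response;
    }
}
```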
After the provider formulates a stimulus for the AUT, if there are any rules bound to the focus site, the corresponding rule objects may be retrieved and the verification stage 1 method from the rule objects will be executed. The results of these executions are logged and if any result indicates a verification stage 2 evaluation should occur, the rule is added to the list of rules to be executed at the verification stage 2 step of the testing cycle.
In the final step of the testing cycle, tester 104 may execute a Responder to transmit the stimulus to the AUT. One Responder available to tester 104 may retrieve the stimulus string of text and commands, convert the string into an array of structures appropriate for the SendInput Windows API function, and then call SendInput to transmit the stimulus to the AUT.
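As a simpler alternative to building SendInput structures, a responder could use the .Net SendKeys facility, which accepts the same style of text-plus-command stimulus strings (for example, "72{TAB}"). This is an illustrative substitution, not the disclosed Responder.

```csharp
using System.Windows.Forms;

// Minimal responder sketch using SendKeys instead of the SendInput API.
public class SendKeysResponder
{
    // Transmits the stimulus to whichever window currently has the
    // input focus, which in the test cycle is the AUT's focus site.
    public void Transmit(string stimulus)
    {
        SendKeys.SendWait(stimulus); // blocks until the input is processed
    }
}
```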
The CheckEval method retrieves the saved control name and uses it to retrieve a window handle to the control. It then uses this handle to retrieve the value of the control. Next, the value is checked to determine whether it should be ignored because it does not represent a numeric value. Otherwise, the value is converted to a number and compared against the known physical minimum value.
The attribute string for the Number provider shown in
Other providers use the SequenceName parameter to return the name of a sequence. If a sequence name is returned, tester 104 will operate like a traditional script or table-based tester and follow the sequence. The DataCommand parameter is used by providers that use data sources which may be configured using the GUI shown in
All of the methods and systems disclosed and claimed can be made and executed without undue experimentation in light of the present disclosure. While the methods of this invention have been described in terms of embodiments, it will be apparent to those of skill in the art that variations may be applied to the methods and in the steps or in the sequence of steps of the method described herein without departing from the concept, spirit and scope of the invention. All such similar substitutes and modifications apparent to those skilled in the art are deemed to be within the spirit, scope, and concept of the disclosure as defined by the appended claims.