One goal of software application testing is to find defects. A defect causes an application to behave in an unexpected manner. The unexpected behavior may be due to errors in coding, the lack of an expected program requirement, an undocumented feature, or other anomalies. Most application testing is done to show that the application performs properly; however, an effective test shows the presence, not the absence, of defects. Application testing is typically performed by both the application software developers (DevOps) and an independent testing team of quality assurance engineers (QAEs). Despite considerable management, engineering, and monetary resources dedicated to testing applications, most applications today still ship with several defects per thousand lines of code.
The disclosure is better understood with reference to the following drawings. The elements of the drawings are not necessarily to scale relative to each other. Rather, emphasis has instead been placed upon clearly illustrating the claimed subject matter. Furthermore, like reference numerals designate corresponding similar parts throughout the several views.
As noted, defects are a common problem in application creation and other software development. Finding such defects is typically a manual process that takes considerable amounts of time and resources. For instance, quality assurance engineers (QAEs) and software developers (DevOps) not only have to spend their time using an application but also need to document how to repeat the defect and subjectively classify how severe the particular defect is with respect to other defects. In medium and large software applications, there may be a large accumulation of defects in the application backlog. Accordingly, when the software creation process uses continuous delivery or agile software development, management has to carefully assess and plan the distribution of available resources, including development hours. In addition, as a defect is a departure from an expectation, it is important to understand the expectations of the users of the application rather than those of the DevOps/QAEs themselves, who may have different priorities and beliefs about how severe a defect is with respect to the overall application. For instance, a DevOp may believe a particular defect is severe and want to prevent release of a new revision; however, analysis of the user flows may determine that the defect is rarely, if ever, encountered by the users of the application. Further, over time, with a user's ongoing application use and with various updates, the expected severity level may continually change.
Accordingly, this disclosure describes and teaches an improved system and method for assessment of defect severity. The method provides an automatic way to objectively classify the severity level of a defect using a combination of real-time and historical analytical information, including real-time customer usage. The described solution includes (1) recording a set of user interface steps taken to produce the defect, (2) automatically opening a defect report in a defect management system and attaching the recording to the defect, and (3) assessing the defect severity level using one or more analytic engine calls and usage information from hosted web-based or stand-alone analytic engine providers. The analytic calls and usage information include user flows and bounce rate. The bounce rate is the percentage of visits that are single-page visits.
More specifically, a tester provides a set of recorded steps to a defect assessment tool that takes those recorded steps and extracts a set of analytic calls from an analytic engine, such as Google Analytics or others, that monitors the recorded steps in user flows within a live environment. A customer's use of the recorded steps may be monitored and assessed dynamically over time using usage statistics from the analytic engine to create an objective-based severity level rating. The statistics from the analytic engine are used to create a Usage Factor for the recorded steps and a Bounce Rate Factor for users of the recorded steps. These two factors are representative of the recorded steps with respect to the overall application use and also with respect to the overall number of clicks and overall users. The Usage Factor and the Bounce Rate Factor can be weighted and combined to create an overall severity level that is compared to a threshold to determine various criticality ratings or actions to be taken. These factors may also be normalized as needed to account for various application usage models among different users.
Consequently, the defect assessment tool provides an objective method based on customer usage of the application. By monitoring how a customer is using the application, a defect may be deemed serious if the user uses the feature with the defect and then abandons its use (Bounce Rate), or it may be deemed non-serious if the particular feature with the defect is never used (Usage).
QAEs/DevOps 18 are able to communicate with AUT 12 via network 30, typically with a workstation 19. QAEs/DevOps 18 may also communicate their findings and results with a defect management system 26, such as "HP's Agile Manager"™. The defect management system 26 may be integrated with or separate from the defect recording assessment tool 20. During testing, the QAEs/DevOps 18 document their defect findings for each of the defects 13 by creating a recorded steps 16 document for defect 13 on the defect recording analysis (DRA) tool 20 or workstation 19. The DRA tool 20 then opens a new defect report 27 in the defect management system 26 and analyzes over time the severity level 46 or severity rating of the defect 13 using the analytic engine's 22 statistics 24.
In one example, a QAE/DevOp 18 encounters a problem while manually testing an application. The QAE/DevOp 18 then records the graphical user interface (GUI) or other user interface steps taken to produce the defect 13. For instance, one example set of steps might be "click button", "select box", "navigate down", etc. The recording system may be built into the DRA tool 20 or may be a separate utility tool such as "HP's TruClient"™ or "Selenium"™, as just a couple of examples. The DRA tool 20 opens a defect report 27 and attaches the recorded steps 16 for defect 13 in the defect management system 26. The DRA tool 20 extracts analytic calls generated by the recorded steps when the recorded flow of user interface steps is executed in a live environment. For example, with "Google Analytics"™ and a flow of recorded steps 16 such as "enter login page, enter home page, enter new user page, and press create new user button", the corresponding calls to "Google Analytics"™ are extracted and the relevant information is held in the eventLabel parameter:
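By way of a minimal, illustrative sketch, extracting the eventLabel values from such analytic calls might resemble the following Python example. The hit URLs, event categories, and actions shown are assumptions rather than actual calls generated by an AUT 12; only the Google Analytics eventLabel ("el") query parameter is relied on, as described above.

```python
# Minimal sketch: pull eventLabel ("el") values out of Google Analytics
# Measurement Protocol hits captured while a recorded flow is replayed.
# The captured hit URLs below are illustrative only.
from urllib.parse import urlparse, parse_qs

captured_hits = [
    "https://www.google-analytics.com/collect?v=1&t=event&ec=ui&ea=navigate&el=login%20page",
    "https://www.google-analytics.com/collect?v=1&t=event&ec=ui&ea=navigate&el=home%20page",
    "https://www.google-analytics.com/collect?v=1&t=event&ec=ui&ea=navigate&el=new%20user%20page",
    "https://www.google-analytics.com/collect?v=1&t=event&ec=ui&ea=click&el=create%20new%20user%20button",
]

def extract_event_labels(hits):
    """Return the eventLabel of each event hit, in the order the steps were replayed."""
    labels = []
    for hit in hits:
        params = parse_qs(urlparse(hit).query)
        if params.get("t") == ["event"] and "el" in params:
            labels.append(params["el"][0])
    return labels

print(extract_event_labels(captured_hits))
# ['login page', 'home page', 'new user page', 'create new user button']
```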
As the usage statistics 24 real-time data from the analytic engine 22 change over time, the severity level 46 classification may be dynamically re-evaluated. For instance, if usage for an eventLabel drops to a lower level within a period, that may indicate that the defect 13 is not being experienced by users. In that case, the DRA tool 20 might consider lowering the defect severity level 46 for the respective eventLabel. Another factor that may be used when classifying severity level 46 is the user bounce rate. As noted previously, the bounce rate is the percentage of visits that are single-page visits. That is, when users leave an AUT 12 during this flow of recorded steps, a defect 13 may be upgraded to critical, as users encountering the defect 13 quit using the particular recorded flow.
Let # of unique users of recorded steps=500;
Let # of unique users of AUT=1000;
Let usage of recorded steps=8000; and
Let usage of AUT=70000.
Then Usage Factor=average(500/1000, 8000/70000)=30.7%, a medium usage.
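Expressed as a minimal code sketch, the Usage Factor 42 calculation from this example may look as follows; the function name and formatting are illustrative assumptions only.

```python
# Minimal sketch of the Usage Factor: the average of the unique-user ratio
# and the overall-usage ratio for the recorded steps versus the whole AUT.
def usage_factor(unique_users_steps, unique_users_aut, usage_steps, usage_aut):
    return (unique_users_steps / unique_users_aut + usage_steps / usage_aut) / 2

print(f"{usage_factor(500, 1000, 8000, 70000):.1%}")  # 30.7%, a medium usage
```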
In block 192, the number of unique users 14 bounced for the recorded steps is determined, as is the number of unique users 14 for the recorded steps in block 194. The number of users 14 bounced for the AUT 12 is determined in block 196. In block 198, the Bounce Rate Factor 44 can be calculated from these three sub-factors along with the sub-factor determined in block 186 for the usage of the recorded steps. In one example, the Bounce Rate Factor 44 may be calculated as follows:
Let # of unique users bounced for recorded steps=500;
Let # of unique users for recorded steps=1000;
Let # of users bounced for recorded steps=6000; and
Let usage of recorded steps=8000.
Then Bounce Rate Factor=average(500/1000, 6000/8000)=62.5%, a high-rated defect.
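Similarly, a minimal sketch of the Bounce Rate Factor 44 calculation from this example is shown below; the function name is an illustrative assumption.

```python
# Minimal sketch of the Bounce Rate Factor: the average of the unique-user
# bounce ratio and the overall bounce ratio for the recorded steps.
def bounce_rate_factor(unique_bounced_steps, unique_users_steps,
                       bounced_steps, usage_steps):
    return (unique_bounced_steps / unique_users_steps
            + bounced_steps / usage_steps) / 2

print(f"{bounce_rate_factor(500, 1000, 6000, 8000):.1%}")  # 62.5%, a high-rated defect
```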
In block 199, the severity level 46 of the defect 13 can be calculated from the Usage Factor 42 and the Bounce Rate Factor 44. For instance, in one example, the Usage Factor 42 and Bounce Rate Factor 44 are averaged, such as:
Severity Level of Defect=average(Usage Factor, Bounce Rate Factor)
Example: using the two calculated examples for the Usage Factor 42 and the Bounce Rate Factor 44 above:
Then Severity Level of Defect=average(30.7%, 62.5%)=46.6%, a medium severity level.
In other examples, rather than averaging, a weighted sum of the two factors may be used, with the result compared to thresholds such as the following (an illustrative calculation is sketched after the list):
If result>=75% mark as critical;
If result>=50% mark as high;
If result>=25% mark as medium;
If result<25% mark as low.
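A minimal sketch of this weighted combination and threshold classification follows; the weight parameters are illustrative assumptions, and equal weights simply reproduce the averaged example above.

```python
# Minimal sketch: weighted combination of the two factors and comparison
# against the thresholds listed above. Weights are illustrative assumptions.
def severity_level(usage, bounce, w_usage=0.5, w_bounce=0.5):
    return w_usage * usage + w_bounce * bounce

def classify(level):
    if level >= 0.75:
        return "critical"
    if level >= 0.50:
        return "high"
    if level >= 0.25:
        return "medium"
    return "low"

level = severity_level(0.307, 0.625)        # equal weights reproduce the average
print(f"{level:.1%} -> {classify(level)}")  # 46.6% -> medium
```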
By having the recorded steps 16 available and extracting a set of analytic calls for the ongoing analytic engine 22 usage statistics 24, DevOps 18 can use the DRA tool 20 without having to request the services of the quality assurance teams. Further, the recorded steps 16 may be used as AUT 12 tests which are periodically executed to assess and determine when the defect 13 has been solved. If the defect 13 is indicated as solved, the DRA tool 20 may then automatically close the defect report 27 in the defect management system 26. The recorded steps 16 may also be used as regression tests for the AUT 12 in order to ensure the defect 13 does not reappear during various revisions, updates, and feature additions.
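As a minimal sketch of this periodic re-execution and automatic closing, consider the following; the DefectManager class and run_recorded_steps function are hypothetical stand-ins rather than an actual defect management system API.

```python
# Minimal sketch: periodically replay the recorded steps and close the defect
# report once the defect no longer reproduces. All names here are hypothetical.
class DefectManager:
    def close(self, report_id, reason):
        print(f"closing defect report {report_id}: {reason}")

def run_recorded_steps(recorded_steps):
    # Replays the recorded UI steps against the AUT and reports whether the
    # defect still reproduces; returns False here purely for illustration.
    return False

def reassess(report_id, recorded_steps, manager):
    if not run_recorded_steps(recorded_steps):
        manager.close(report_id, "recorded steps no longer reproduce the defect")

reassess(27, ["enter login page", "press create new user button"], DefectManager())
```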
The computer readable medium 200 includes a first module 204 with instructions to receive a set of recorded steps 16 for a defect 13 and open a report 27 for the defect 13 in defect management system 26 along with attaching the recorded steps 16 to the report 27. A second module 206 includes instructions to extract a set of analytic calls to an analytic engine 22 that are generated from the recorded steps 16 for the defect 13. The analytic engine 22 continually assesses a severity level 46 of the defect 13 based on customer usage statistics 24 accumulated in the analytic engine 22 for the AUT 12. The statistics 24 include data to allow for calculation of a Usage Factor 42 and a Bounce Rate Factor 44, and the severity level 46 of the defect 13 is based on a mixture of the Usage Factor 42 and the Bounce Rate Factor 44. The mixture may be a simple average of the two factors or a weighted average of the two factors.
Processor 102 is also communicatively coupled to local non-transitory computer readable memory (CRM) 314, such as cache and DRAM, which includes a set of instructions organized in modules for defect recording assessment program 320 that, when the instructions are read and executed by the processor, cause the processor to perform the functions of the respective modules. While a particular example module organization is shown for understanding, those of skill in the art will recognize that the software may be organized in any particular order or combination that implements the described functions and still meets the intended scope of the claims. The CRM 314 may include a storage area for holding programs and/or data and may also be implemented in various levels of hierarchy, such as various levels of cache, dynamic random access memory (DRAM), virtual memory, file systems of non-volatile memory, and physical semiconductor, nanotechnology materials, and magnetic/optical media or combinations thereof. In some examples, all the memory may be non-volatile memory, or partially non-volatile such as with battery backed up memory. The non-volatile memory may include magnetic, optical, flash, EEPROM, phase-change memory, resistive RAM memory, and/or combinations thereof, as just some examples.
A defect recording assessment software program 320 may include one or more of the following modules. A first module 322 contains instructions to receive recorded steps 16 for a defect 13. A second module 324 has instructions to open a defect report 27 on a defect management system 26 along with the recorded steps 16 for the defect 13. A third module 326 contains instructions to interact with an analytic engine 22 to extract analytic calls related to the recorded steps 16. A fourth module 328 has instructions to monitor customer usage based on the analytic engine 22 statistics 24 over time. A fifth module 330 includes instructions to create an ongoing severity level 46.
There are several benefits of the disclosed DRA tool 20. For instance, there is an automatic objective-based classification of defect severity, as well as ongoing reclassification over time as the application is used. This objective-based technique replaces the idiosyncratic nature of the typical QAE/DevOp's subjective classification of a defect's severity. Further, there is automatic opening and closing of defects using just the recorded steps and the defect severity level 46 assessment from the set of results 40. This feature reduces or eliminates the time that QAEs and DevOps often waste during ongoing testing in reproducing the relevant defect and the steps to replicate it. Thus, the DRA tool 20 allows QAEs and DevOps to perform higher-value work rather than continually retesting for defects, particularly without any actual knowledge of how the recorded steps for the defect are being used by customers. Accordingly, the severity level rating is tied more objectively to actual customer expectations than to the subjective judgment of QAEs/DevOps. Thus, the overall quality of the application under test will be perceived as better by users even if some defects remain unresolved, as those will be the least severe defects based on customer usage patterns.
While the claimed subject matter has been particularly shown and described with reference to the foregoing examples, those skilled in the art will understand that many variations may be made therein without departing from the intended scope of subject matter in the following claims. This description should be understood to include all novel and non-obvious combinations of elements described herein, and claims may be presented in this or a later application to any novel and non-obvious combination of these elements. The foregoing examples are illustrative, and no single feature or element is essential to all possible combinations that may be claimed in this or a later application. Where the claims recite "a" or "a first" element or the equivalent thereof, such claims should be understood to include incorporation of one or more such elements, neither requiring nor excluding two or more such elements.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2015/052285 | 9/25/2015 | WO | 00 |