1. Field of the Invention
The present invention relates to a fault management apparatus for managing faults in a software system, and a test management apparatus for managing tests performed for software system development and maintenance.
2. Description of the Background Art
Conventionally, faults, such as those called “defects” and “bugs”, often occur during or after development of a software system. Such faults include those correctable with relatively little effort, and those difficult to correct, for example, because their causes are unidentified. In addition, there are faults that greatly affect customers who use the software system, as well as faults that only affect them slightly. Conventionally, to determine the priority order for addressing such various faults, information (such as values) indicating the severity of faults is included beforehand in fault data (a collection of fault-related information, such as dates of fault occurrence and details of faults). For example, the fault data contains assessment values assigned for fault-by-fault assessment in three degrees (1. fatal, 2. considerable, and 3. slight), so that faults with assessment value “1” have high priorities, and faults with assessment value “3” have low priorities. Note that in the following description, priority assignment to fault data to clarify which fault is to be preferentially addressed is referred to as “prioritization”, and a process for this is referred to as a “prioritizing process”.
Also, there are various known software system development techniques, including “waterfall development”, “prototype development”, and “spiral development”. These various development techniques employ software system development phases, such as “requirement definition”, “design”, “programming”, and “testing”. Of these phases, the testing of a software system is typically performed based on test specifications. The test specifications indicate for each test case a test method and conditions for determining a passing status (success or failure).
In software system development, the aforementioned phases might be repeated. In such a case, test cases created at the beginning of development or as a result of any specification change or suchlike are repeatedly tested. In addition, if any fault occurs, test cases are created to perform a test for confirming whether the fault has been appropriately corrected (hereinafter referred to as a "correction confirmation test"), and such test cases are also repeatedly tested. For example, supposing a case where a system is upgraded from version 1 (Ver. 1) to version 2 (Ver. 2), correction confirmation tests, along with regression, scenario, and function tests based on the upgrade, have to be performed in relation to faults found in version 1 and faults having occurred during development of version 2 (see the accompanying drawings).
In the case of the above-described conventional configuration where the priority order for addressing faults is determined based on only the assessment values indicating the severity of faults in, for example, three grades, if both a frequently-occurring fault and a rarely-occurring fault have the same assessment value, they are not distinguished when determining the priority order for addressing faults. In addition, for example, concerning post-fault system reactivation, some customers require early recovery, yet some others do not. However, conventionally, the priority order for addressing faults cannot be determined considering such customer requirements. Accordingly, there is some demand to determine the priority order for addressing faults, considering various factors other than the severity of faults, for the purpose of software system development and maintenance. In addition, there is some demand for test case extraction to be performed considering various factors.
Therefore, an object of the present invention is to provide a system capable of determining the priority order for addressing various faults in a software system, considering various factors. Another object of the present invention is to provide a system capable of extracting suitable test cases to be currently tested from among prepared test cases, considering various factors.
To achieve the above objects, the present invention has the following features.
One aspect of the present invention is directed to a fault management apparatus for managing faults in software, including:
a fault data entry accepting portion for accepting entry of fault data which is information concerning the faults in software and includes indicator data for assessing a plurality of fault assessment items in a plurality of assessment grades;
a fault data holding portion for storing the fault data accepted by the fault data entry accepting portion; and
a fault data ranking portion for ranking the fault data stored in the fault data holding portion based on fault assessment values each being calculated for each fault data piece based on the indicator data.
According to this configuration, a plurality of (fault) assessment items are provided for fault data which is software fault-related information, so that each of the assessment items can be assessed in a plurality of grades. In addition, the fault management apparatus is provided with the fault data ranking portion for ranking the fault data, and the fault data ranking portion ranks the fault data based on the fault assessment values each being calculated for each fault data piece based on assessment values regarding the plurality of assessment items. Accordingly, the fault data can be ranked considering various factors. Thus, when addressing a plurality of faults, it is possible to derive an efficient priority order (for addressing faults).
In such an apparatus, preferably, the fault data entry accepting portion includes an indicator value entry accepting portion for accepting entry of an indicator value in one of four assessment grades for each of three assessment items as the plurality of fault assessment items, and the fault data ranking portion calculates the fault assessment value for each fault data piece based on the indicator value accepted by the indicator value entry accepting portion.
According to this configuration, four-grade assessment regarding three assessment items is performed per fault. Specifically, FMEA (failure mode and effect analysis) employing a four-point method is adopted for software fault assessment. Thus, it is possible to enter each individual fault data piece with relatively little effort, and it is also possible to effectively prevent any software fault-related trouble.
Preferably, such an apparatus further includes a customer profile data entry accepting portion for accepting entry of customer profile data which is information concerning customers for the software and includes requirement degree data indicative of degrees of requirement by each customer regarding the assessment items, and the fault data ranking portion calculates the fault assessment value based on the requirement degree data accepted by the customer profile data entry accepting portion.
According to this configuration, the fault assessment values for ranking the fault data are calculated based on degrees of requirement by each (software) customer regarding the plurality of assessment items. Thus, it is possible to rank the fault data considering the degrees of requirement by the customer regarding the fault.
In such an apparatus, preferably, the fault data ranking portion calculates for each fault data piece a customer-specific assessment value determined per customer, based on indicator values for the three assessment items and the requirement degree data for each customer, and also calculates the fault assessment value based on the customer-specific assessment value only for any customer associated with the fault data piece.
According to this configuration, the fault assessment value for a fault reflects the degrees of requirement (regarding the plurality of assessment items) for only the customers associated with the fault. Thus, it is possible to rank the fault data considering, for example, the customers provided with a function having the fault.
In such an apparatus, preferably, the customer profile data entry accepting portion includes a customer rank data entry accepting portion for accepting entry of customer rank data for classifying the customers for the software into a plurality of classes, and the fault data ranking portion calculates the fault assessment value based on the customer rank data accepted by the customer rank data entry accepting portion.
According to this configuration, the fault assessment values for ranking the fault data are calculated based on the customer rank data for classifying software customers into a plurality of classes. Thus, it is possible to rank the fault data considering, for example, the importance of customers to the user.
Another aspect of the present invention is directed to a test management apparatus for managing software tests, including:
a test case holding portion for storing a plurality of test cases to be tested repeatedly;
a fault assessment value acquiring portion for acquiring the fault assessment values each being calculated based on indicator data for fault data stored in the fault data holding portion of the fault management apparatus according to one aspect of the present invention, the fault data being associated with any of the test cases; and
a first test case extracting portion for extracting a test case to be currently tested from among the plurality of test cases stored in the test case holding portion, based on the fault assessment values acquired by the fault assessment value acquiring portion.
According to this configuration, a test case to be currently tested is extracted from among a plurality of test cases based on the fault assessment values each being calculated per fault based on assessment values regarding a plurality of assessment items for the fault. Thus, test cases can be extracted considering various factors related to the fault that is the base for the test cases.
Still another aspect of the present invention is directed to a computer-readable recording medium having recorded thereon a fault management program for causing a fault management apparatus for managing faults in software to perform:
a fault data entry accepting step for accepting entry of fault data which is information concerning the faults in software and includes indicator data for assessing a plurality of fault assessment items in a plurality of assessment grades;
a fault data storing step for storing the fault data accepted in the fault data entry accepting step to a predetermined fault data holding portion; and
a fault data ranking step for ranking the fault data stored in the fault data holding portion based on fault assessment values each being calculated for each fault data piece based on the indicator data.
Still another aspect of the present invention is directed to a computer-readable recording medium having recorded thereon a test management program for causing a test management apparatus for managing software tests to perform:
a fault assessment value acquiring step for acquiring fault assessment values each being calculated based on indicator data for assessing a plurality of fault assessment items in a plurality of assessment grades, the indicator data being included in fault data associated with any of a plurality of test cases to be tested repeatedly which are stored in a predetermined test case holding portion; and
a first test case extracting step for extracting a test case to be currently tested from among the plurality of test cases stored in the test case holding portion, based on the fault assessment values acquired in the fault assessment value acquiring step.
Still another aspect of the present invention is directed to a fault management method for managing faults in software, including:
a fault data entry accepting step for accepting entry of fault data which is information concerning the faults in software and includes indicator data for assessing a plurality of fault assessment items in a plurality of assessment grades;
a fault data storing step for storing the fault data accepted in the fault data entry accepting step to a predetermined fault data holding portion; and
a fault data ranking step for ranking the fault data stored in the fault data holding portion based on fault assessment values each being calculated for each fault data piece based on the indicator data.
Still another aspect of the present invention is directed to a test management method for managing software tests, including:
a fault assessment value acquiring step for acquiring fault assessment values each being calculated based on indicator data for assessing a plurality of fault assessment items in a plurality of assessment grades, the indicator data being included in fault data associated with any of a plurality of test cases to be tested repeatedly which are stored in a predetermined test case holding portion; and
a first test case extracting step for extracting a test case to be currently tested from among the plurality of test cases stored in the test case holding portion, based on the fault assessment values acquired in the fault assessment value acquiring step.
These and other objects, features, aspects and advantages of the present invention will become more apparent from the following detailed description of the present invention when taken in conjunction with the accompanying drawings.
Before describing an embodiment of the present invention, the basic concept of the present invention will be described. A reliability assessment method called "FMEA (failure mode and effect analysis)" is conventionally known for systematically analyzing potential failures and defects of various products in order to prevent them. FMEA defines three factors (indicators), "degree (severity)", "frequency (occurrence)", and "potential (detectability)", and failure modes are assessed in view of each factor. Here, the "degree (severity)" is an indicator of the magnitude of the effect of a failure. The "frequency (occurrence)" is an indicator of how frequently a failure occurs. The "potential (detectability)" is an indicator of the possibility of finding a failure in advance. In addition, failure modes are classified by forms of fault condition, including, for example, disconnection, short-circuit, damage, abrasion, and property degradation. FMEA employs a four-point method for performing assessment with four grades per factor and a ten-point method for performing assessment with ten grades per factor. In general, it is reported that the four-point method requires less assessment time than the ten-point method, and therefore allows failures to be addressed more rapidly. An analysis method by FMEA employing the four-point method will be outlined below.
In the case of the FMEA with the four-point method, the meaning of each assessment grade is defined for each factor, for example, as shown in the drawings. When the assessment values for the three factors "degree", "frequency", and "potential" are A, B, and C, respectively, an RI value is calculated by equation (1).

RI = ∛(A × B × C)    (1)

According to the RI values thus calculated, failure modes to be preferentially addressed are determined.
The embodiment described below adopts the concept of the above FMEA with the four-point method to manage software system faults. Concretely, three assessment items, "importance", "priority", and "probability", are provided as fault management indicators, and four-grade assessment is performed per assessment item. Here, the "importance" is an indicator of the magnitude of the effect of a fault. The "priority" is an indicator of how quickly recovery from the fault should be brought about. The "probability" is an indicator of how frequently the fault occurs. Fault-related information is stored as fault data, and faults to be corrected are prioritized based on RI values calculated from the fault data.
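By way of illustration only, the RI calculation of equations (1) and (2) can be sketched in Python as follows; the function name "ri_value" and the range check are assumptions for this sketch and are not part of the embodiment.

    # A minimal sketch of the RI calculation: the cube root of the product
    # of the three assessment values, each entered as a grade from 1 to 4.
    def ri_value(importance: int, priority: int, probability: int) -> float:
        for grade in (importance, priority, probability):
            if grade not in (1, 2, 3, 4):
                raise ValueError("each assessment item is graded 1 to 4")
        return (importance * priority * probability) ** (1.0 / 3.0)

    print(ri_value(2, 3, 2))  # ~2.289
    print(ri_value(4, 4, 4))  # ~4.0, the maximum RI value

Because the cube root maps the product of the three grades back onto the 1-to-4 scale, the RI value remains directly comparable with the individual assessment grades.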
Hereinafter, the embodiment of the present invention will be described with reference to the accompanying drawings.
<2.1 System Outline>
<2.2 Hardware Configuration>
The configuration of the personal computer 8 is approximately the same as that of the software development management apparatus (server) 7 shown in the drawings.
<2.3 Functional Configuration>
The following functions are achieved by programs being executed by the CPU 10 utilizing the memory 60. Specifically, the fault data entry accepting portion 210 is achieved by executing the fault data entry program 21. The fault data prioritizing portion 230 is achieved by executing the fault data prioritization program 23. The customer profile data entry accepting portion 240 is achieved by executing the customer profile data entry program 22. In addition, the fault table 31 constitutes the fault data holding portion 220. The customer profile table 32 constitutes the customer profile data holding portion 250.
The test management system 3 includes a test case entry accepting portion 310, a test case holding portion 320, and a test case extracting portion 330. The test case entry accepting portion 310 displays an operating screen for the operator to enter test cases, and accepts entries from the operator. The test case holding portion 320 holds the test cases entered by the operator. The test case extracting portion 330 extracts a test case to be tested in the current phase from among a plurality of test cases based on conditions set by the operator.
The test case entry accepting portion 310 is achieved by executing the test case entry program 24. The test case extracting portion 330 is achieved by executing the test case extraction program 25. In addition, the test case table 33 constitutes the test case holding portion 320.
The test case extracting portion 330 includes at least a parameter value entry accepting portion 332, a first test case extracting portion 334, and a second test case extracting portion 336, as shown in the drawings.
The requirement management system 4 includes a requirement management data holding portion 410. The requirement management data holding portion 410 holds requirement management data. Note that the requirement management data is data for managing specifications required for a software system (required specifications). The requirement management table 34 constitutes the requirement management data holding portion 410.
Note that the correspondence between the functions and the subsystems is not limited to the configuration shown in the drawings.
<2.4 Tables>
Next, the tables used in the software development management system will be described.
Note that in the present embodiment, the “importance”, “priority”, and “probability” fields in the fault table 31 constitute indicator data.
Note that in the present embodiment, the “importance”, “priority”, and “probability” fields in the customer profile table 32 constitute requirement degree data. In addition, the “customer rank” field in the customer profile table 32 constitutes customer rank data.
Note that as for the test result, “pass” means that the test resulted in “pass” (success); “fail” means that the test resulted in “fail” (failure); “deselected” means that no test was performed on the test case (i.e., the test case was not selected for testing in the test phase); “unexecuted” means that the test case is currently queued in the test phase, but has not yet been tested; “under test” means that the test case is currently being tested; and “untestable” means that no test can be performed because the program has not yet been created, for example.
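For illustration, these test-result states can be represented as an enumeration; the class and member names below are assumptions for this sketch.

    from enum import Enum

    # Illustrative encoding of the test-result states described above.
    class TestResult(Enum):
        PASS = "pass"              # the test succeeded
        FAIL = "fail"              # the test failed
        DESELECTED = "deselected"  # not selected for testing in the test phase
        UNEXECUTED = "unexecuted"  # queued in the test phase, not yet tested
        UNDER_TEST = "under test"  # currently being tested
        UNTESTABLE = "untestable"  # cannot be tested (e.g., program not created)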
Also, the "test result ID", "test result", "reporter", "report date", "environment", and "remarks" fields are repeated in the test case table 33 the same number of times as tests performed. Accordingly, the test case table 33 may be normalized. Specifically, the test case table 33 can be divided into two tables having record formats as shown in the drawings.
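As a sketch of one possible normalization (the record and field names are assumptions for illustration), the repeating test-result fields can be moved into a child table keyed by the test case number:

    from dataclasses import dataclass

    # Parent table: one record per test case, with the repeating fields removed.
    @dataclass
    class TestCaseRecord:
        test_case_number: str   # primary key of the parent table
        test_details: str       # non-repeating test case information

    # Child table: one record per executed test, linked to the parent table.
    @dataclass
    class TestResultRecord:
        test_result_id: str
        test_case_number: str   # foreign key referencing TestCaseRecord
        test_result: str        # "pass", "fail", "deselected", and so on
        reporter: str
        report_date: str
        environment: str
        remarks: str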
Note that in the present embodiment, the “function-specific importance” field in the requirement management table 34 constitutes required specification rank data.
Next, processes performed in the fault management system 2 will be described. The processes include a “fault data entry process” for data entry of information concerning an incurred fault, a “customer profile data entry process” for entering the aforementioned customer profile data, and a “fault data prioritizing process” for prioritizing fault data in accordance with the order of addressing faults. Note that the description will be given on the assumption that the operation by the operator for executing each process is performed with the personal computer 8. Accordingly, various dialogs or suchlike to be described later are displayed on the display portion of the personal computer 8.
<3.1 Fault Data Entry Process>
First, the fault data entry process will be described.
When the operator selects a menu or suchlike for entering fault data, the fault data entry accepting portion 210 displays a fault data entry dialog 500 as shown in the drawings.
The fault data entry dialog 500 includes: text boxes or suchlike for entering fault-related general information (e.g., a text box for entering the “fault number”); an importance list box 502, a priority list box 503, and a probability list box 504 for selecting an assessment grade for each fault assessment item; an RI value display area 505 for displaying an RI value calculated based on the assessment grades of the three assessment items; an “indicator expository” button 506 for displaying an expository screen for each assessment item (an indicator expository dialog 510 to be described later); a “set” button 508 for setting the contents of the entry; and a “cancel” button 509 for canceling the contents of the entry.
Here, when the operator presses (clicks on) the importance list box 502, the fault data entry accepting portion 210 displays four values that can be selected as importance assessment grades, as shown in the drawings.
When the operator presses the "indicator expository" button 506, the fault data entry accepting portion 210 displays an indicator expository dialog 510 as shown in the drawings.
When the operator presses the “set” button 508 in the fault data entry dialog 500, the fault data entry accepting portion 210 imports the contents of the entry by the operator, and adds a single record to the fault table 31 based on the contents of the entry.
Also, in the present embodiment, the fault data entry dialog 500 is provided with a "test case registration" button 501. The "test case registration" button 501 is provided for generating a test case based on fault data. When the operator presses the "test case registration" button 501, a test case registration dialog 520 as shown in the drawings is displayed.
<3.2 Customer Profile Data Entry Process>
Next, the customer profile data entry process will be described. When the operator selects a menu or suchlike for entering customer profile data, the customer profile data entry accepting portion 240 displays a customer profile data entry dialog 530 as shown in the drawings.
The customer profile data entry dialog 530 includes: a customer name entry text box 531 for entering the name of a customer; an importance list box 532 for selecting the value of importance; a priority list box 533 for selecting the value of priority; a probability list box 534 for selecting the value of probability; a customer rank list box 535 for selecting the rank of the customer; a “set” button 538 for setting the contents of entries; and a “cancel” button 539 for canceling the contents of entries. Note that the importance as used herein refers to a value indicating the level (assessment grade) required by the customer for the fault assessment item “importance”. The same principle applies to the priority and the probability. Also, in the present embodiment, the customer rank list box 535 constitutes a customer rank data entry accepting portion.
When the operator presses the “set” button 538 in the customer profile data entry dialog 530, the customer profile data entry accepting portion 240 imports the contents of entries by the operator, and adds a single record to the customer profile table 32 based on the contents of entries.
<3.3 Fault Data Prioritizing Process>
Next, the fault data prioritizing process will be described. In this process, fault data is prioritized in accordance with the order of addressing faults. The fault data prioritization is performed based on an RI value for each fault data item, and at this time, the intensity of requirement (requirement degree) by each customer with respect to each fault assessment item and the importance of the customer to the system user are taken into account. That is, the RI value is calculated not only based on the fault data but also in consideration of the contents of data stored in the customer profile table 32 and the requirement management table 34. Note that the RI value calculated for each customer in consideration of the contents of the customer profile table 32 is referred to as the “customer-specific profile RI value (customer-specific assessment value)”, whereas the RI value used for final prioritization of the fault data considering not only the contents of the customer profile table 32 but also the contents of the requirement management table 34 is referred to as the “total RI value (fault assessment value)”. In the present embodiment, the total RI value rises with the priority.
<3.3.1 Calculation of the RI Value>
In the present embodiment, the three "(broadly defined) RI values", i.e., the "(narrowly defined) RI value", the "customer-specific profile RI value", and the "total RI value", are calculated for each fault data item (i.e., for each record). The calculation method will be described below. Note that the following description will be given on the assumption that data as shown in the drawings are stored in the respective tables.
The RI value is the third root of the product of the fault data assessment items “importance”, “priority”, and “probability”. Specifically, when the importance, priority, and probability for fault data are A, B, and C, respectively, an RI value R1 is calculated by equation (2).
R1 = ∛(A × B × C)    (2)
For example, the RI value for the fault data with fault number "A001" in the drawings is calculated in this manner.
Note that after the operator selects values in all of the importance list box 502, the priority list box 503, and the probability list box 504 in the fault data entry dialog 500, a value is calculated in the manner described above, and stored as an RI value in the RI value field of the fault table 31.
The customer-specific profile RI value is the sum of a "value obtained through division of the importance for fault data by the square of the importance for a target customer in the customer profile data", a "value obtained through division of the priority for the fault data by the square of the priority for the target customer in the customer profile data", and a "value obtained through division of the probability for the fault data by the square of the probability for the target customer in the customer profile data". Specifically, if the importance, priority, and probability for the fault data are A, B, and C, respectively, and the importance, priority, and probability for the target customer in the customer profile data are D, E, and F, respectively, then a customer-specific profile RI value R2 is calculated by equation (3).

R2 = A/D² + B/E² + C/F²    (3)
For example, the customer-specific profile RI value for company A, associated with the fault data with fault number "A005" in the drawings, is calculated in this manner.
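An illustrative sketch of equation (3) follows; the argument layouts and the example values are assumptions for this sketch.

    # Sketch of the customer-specific profile RI value of equation (3).
    def customer_profile_ri(fault, profile):
        a, b, c = fault     # importance, priority, probability of the fault data
        d, e, f = profile   # grades required by the target customer for the same items
        return a / d**2 + b / e**2 + c / f**2

    # Hypothetical example: a fault graded (4, 4, 2) assessed for a customer
    # whose required grades are (2, 2, 1) yields 4/4 + 4/4 + 2/1 = 4.0.
    print(customer_profile_ri((4, 4, 2), (2, 2, 1)))  # 4.0

Dividing by the square of the customer's required grade weights the fault's assessment values more heavily for customers with low (i.e., strict) requirement grades.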
The total RI value is the sum of the "products of the customer-specific profile RI value and the customer rank" for the customers identified, based on the requirement management table 34, as being provided with the faulty function. Specifically, in the case where companies L, M, and N are the customers provided with the faulty function, when the customer-specific profile RI value and the customer rank are respectively L1 and L2 for company L; M1 and M2 for company M; and N1 and N2 for company N, a total RI value R3 is calculated by equation (4).
R3 = L1 × L2 + M1 × M2 + N1 × N2    (4)
Note that the total RI value for the fault data with top-priority flag “1” is “9999”.
For example, the total RI value for the data with fault number "A002" in the drawings is calculated in this manner.
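A minimal sketch of the total RI calculation of equation (4), together with the top-priority override, follows; the data layout and the example values are assumptions for this sketch.

    # Sketch of the total RI value of equation (4).
    TOP_PRIORITY_RI = 9999  # total RI value for fault data with top-priority flag "1"

    def total_ri(customer_entries, top_priority=False):
        # customer_entries: (customer-specific profile RI, customer rank) pairs
        # for the customers provided with the faulty function.
        if top_priority:
            return TOP_PRIORITY_RI
        return sum(profile_ri * rank for profile_ri, rank in customer_entries)

    # Hypothetical example with three customers L, M, and N:
    print(total_ri([(4.0, 3), (2.5, 1), (1.5, 2)]))  # 12.0 + 2.5 + 3.0 = 17.5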
In the present embodiment, the total RI value is calculated as described above during the fault data prioritizing process (see steps S151 to S157 in the drawings).
<3.3.2 Operating Procedure>
In step S130, the fault data prioritizing portion 230 determines whether the fault data being read in step S110 is based on required specifications for "custom". For example, the requirement management number is "0002" for the fault data with fault number "A004" in the drawings, and the determination is performed by referencing the record with this requirement management number in the requirement management table 34. If the determination result finds that the fault data is based on the required specifications for "custom", the procedure advances to step S155, or if not, advances to step S140.
In step S140, the fault data prioritizing portion 230 determines whether the fault data being read in step S110 is based on required specifications for “optional”. The determination is performed in a manner similar to the above-described determination for “custom”. If the determination result finds that the fault data is based on the required specifications for “optional”, the procedure advances to step S153, or if not, advances to step S151.
In step S151, the fault data prioritizing portion 230 calculates the sum of the “products of the customer-specific profile RI value and the customer rank” for all customers to obtain a total RI value. In step S153, the fault data prioritizing portion 230 calculates the sum of the “products of the customer-specific profile RI value and the customer rank” for customers with their data stored in the “customer with optional feature” field of the requirement management table 34 to obtain a total RI value. In step S155, the fault data prioritizing portion 230 calculates the sum of the “products of the customer-specific profile RI value and the customer rank” for customers with their data stored in the “customer with custom design” field of the requirement management table 34 to obtain a total RI value. In step S157, the fault data prioritizing portion 230 sets a total RI value of “9999”. After the above steps (steps S151 to S157), the procedure advances to step S160.
In step S160, the fault data prioritizing portion 230 determines whether all records for the fault data stored in the fault table 31 have been completely read. If the determination result finds that all records have been completely read, the procedure advances to step S170, or if not, returns to step S110.
In step S170, the fault data prioritizing portion 230 performs fault data prioritization based on the total RI values calculated in steps S151, S153, S155, and S157. At this time, for example, each fault data piece is assigned a priority in order from the highest total RI value to the lowest, based on the total RI values for the fault data shown in the drawings.
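For illustration, the prioritization of step S170 amounts to sorting the fault records by total RI value in descending order; the record layout, fault numbers, and values below are hypothetical.

    # Sketch of the fault data prioritization of step S170.
    faults = [
        {"fault_number": "A001", "total_ri": 17.5},
        {"fault_number": "A002", "total_ri": 9999},  # top-priority flag "1"
        {"fault_number": "A003", "total_ri": 6.0},
    ]
    ranked = sorted(faults, key=lambda f: f["total_ri"], reverse=True)
    for priority, fault in enumerate(ranked, start=1):
        print(priority, fault["fault_number"])  # A002 first, then A001, then A003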
Next, processes to be performed in the test management system 3 will be described. The processes include: a “test case entry process” for data entry of test case information; and a “test case extraction process” for extracting a test case to be tested in the current phase from among a plurality of test cases based on conditions set by the operator. Note that the test management system 3 performs processes for entering test results and so on, but such processes are not particularly related to the contents of the present embodiment, and therefore any descriptions thereof will be omitted herein. In addition, as with the above-described processes in the fault management system 2, the operator operates the personal computer 8 to execute each process.
<4.1 Test Case Entry Process>
First, the test case entry process will be described. When the operator selects a menu or suchlike for test case entry, the test case entry accepting portion 310 displays a test case entry dialog 540 as shown in the drawings.
When the operator presses the “set” button 548 in the test case entry dialog 540, the test case entry accepting portion 310 imports the contents of entries by the operator, and adds a single record to the test case table 33 based on the imported contents of entries.
<4.2 Test Case Extraction Process>
Next, the test case extraction process will be described.
When the operator selects an intended test project from the test project name list box 551, the number of test specifications included in the test project is displayed in the test specification number display area 552, and the number of test cases included in the test project is displayed in the test case number display area 553. With the test type list box 554, the type of the test to be currently executed is selected from among test types such as "correction confirmation test", "function test", "regression test", and "scenario test". When the operator presses the "thin" button 555, a predetermined dialog is displayed, and the operator sets detailed conditions for narrowing down the test cases via the dialog. When the operator presses the "requisite" button 556, a predetermined dialog is displayed, and the operator sets conditions for the test cases that must be tested via the dialog.
Once the operator presses the "set" button 558 in the test case extraction dialog 550, the procedure advances to step S220, and the test case extracting portion 330 acquires various parameter values (values entered by the operator via the test case extraction dialog 550). Thereafter, the procedure advances to step S230, and the test case extracting portion 330 determines whether the test type selected by the operator via the test case extraction dialog 550 is "correction confirmation test". If the determination result finds that the test type is "correction confirmation test", the procedure advances to step S240, or if not, advances to step S260.
In step S240, the test case extracting portion 330 performs the prioritizing process based on the total RI values for test cases included in the test case table 33 within the database 30. Note that the contents of the process will be described in detail below. After step S240, the procedure advances to step S250.
In step S250, the test case extracting portion 330 extracts test cases in descending order of priority based on the parameter values acquired in step S220. For the test cases extracted in step S250, data “unexecuted” is written into the field for indicating the current test result within the test case table 33. On the other hand, for test cases not extracted in step S250, data “deselected” is written into the field for indicating the current test result within the test case table 33. After step S250, the test case extraction process is completed.
In step S260, the test case extracting portion 330 performs the prioritizing process based on previous (test) performance results for the test cases included in the test case table 33 within the database 30. The test case table 33 contains previous performance results ("pass", "fail", "deselected", "unexecuted", "under test", "untestable") for each test case, and therefore the prioritizing process can be performed based on, for example, the number of "fail" results. For example, the priority applied to each test case in step S260 is written into the field denoted by reference numeral "601" within a temporary table 37 as shown in the drawings.
After step S260, the procedure advances to step S270, and the test case extracting portion 330 performs the prioritizing process based on the function-specific importance of the test cases included in the test case table 33 within the database 30. The priority applied to each test case in step S270 is written into the field denoted by reference numeral "602" in the temporary table shown in the drawings.
In step S290, as in step S250, the test case extracting portion 330 extracts test cases in descending order of priority based on the parameter values acquired in step S220. After step S290, the test case extraction process is completed.
Note that in the present embodiment, steps S210 and S220 constitute a parameter value entry accepting portion (step); steps S240 and S250 constitute a first test case extracting portion (step); and steps S260 to S290 constitute a second test case extracting portion (step). In addition, step S240 constitutes a first test case ranking portion (step), and step S250 constitutes a first extraction portion (step).
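Putting the branch of steps S230 to S290 together, the extraction flow can be sketched as follows. The record layouts, helper names, and in particular the simplified combination of the two priorities of steps S260 to S280 into a single lexicographic sort are assumptions for this sketch.

    # Sketch of the test case extraction flow (steps S230 to S290).
    def prioritize_by_total_ri(cases):
        # Step S240: rank by the total RI value of the associated fault data.
        return sorted(cases, key=lambda c: c["total_ri"], reverse=True)

    def prioritize_by_history_and_importance(cases):
        # Steps S260 to S280, simplified: rank by the number of previous
        # "fail" results, then by the function-specific importance.
        return sorted(cases,
                      key=lambda c: (c["fail_count"], c["function_importance"]),
                      reverse=True)

    def extract_test_cases(cases, test_type, number_to_extract):
        if test_type == "correction confirmation test":
            ranked = prioritize_by_total_ri(cases)                 # step S240
        else:
            ranked = prioritize_by_history_and_importance(cases)   # steps S260-S280
        selected = ranked[:number_to_extract]                      # steps S250/S290
        selected_ids = {id(c) for c in selected}
        for case in cases:  # record the current test result for every case
            case["current_result"] = ("unexecuted" if id(case) in selected_ids
                                      else "deselected")
        return selected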
<4.3 Prioritizing Process Based on the Total RI Value>
After step S310, the procedure advances to step S320, and the total RI value acquired in step S310 is written into, for example, the field denoted by reference numeral "611" in a temporary table 38 as shown in the drawings.
In step S340, the test case data stored in the temporary table 38 shown in the drawings is prioritized based on the total RI values.
Note that in the present embodiment, step S310 constitutes a fault assessment value acquiring portion (step).
<4.4 Prioritizing Process Based on the Function-Specific Importance>
Here, the method for calculating the function-specific importance will be described with reference to the drawings.
After step S410 in the drawings, the procedure advances to step S420.
In step S440, the test case data stored in the temporary table 39 shown in the drawings is prioritized based on the function-specific importance.
According to the software development management system of the present embodiment, three fault assessment items ("importance", "priority", and "probability") are provided for fault data, which is software fault-related information, and assessment is performed for each of the three assessment items in four grades. In addition, the fault data prioritizing portion 230 is provided for fault data prioritization, and the fault data prioritizing portion 230 performs fault data prioritization for each fault data piece based on the assessment values for the three assessment items. Therefore, the fault data prioritization can be performed considering more various factors than in the conventional art in which, for example, only a single severity assessment in three grades is performed. Thus, the priority order of addressing faults can be determined considering various factors.
In addition, the software development management system is provided with the customer profile data entry accepting portion 240 for accepting entries of data (customer profile data) by the operator indicating per customer the intensity of requirement or suchlike concerning the three assessment items. Furthermore, for each fault data piece, the customer-specific profile RI value is calculated, which is a value obtained by reflecting the intensity of requirement by the customer concerning the value for each assessment item. During the fault data prioritizing process, each customer provided with a faulty function is identified based on the requirement management table 34, and the total RI value is calculated to determine final priorities, based on the customer-specific profile RI values for only the identified customers. Therefore, the fault data prioritization can be performed considering the intensity of fault-related requirement by customers. Thus, it is possible to take countermeasures against faults reflecting requirement by customers, thereby increasing the level of customer satisfaction.
Furthermore, the customer profile data contains customer ranks each being a value indicating the importance of a customer to the user. During the fault data prioritizing process, the total RI value is calculated based on values each obtained through multiplication of the customer-specific profile RI value by the customer rank. Accordingly, the fault data prioritization can be performed considering the importance of customers to the user. Thus, for example, it is possible to preferentially address a fault which a customer important to the user desires to be addressed promptly.
In addition, according to the present embodiment, the software development management system is provided with the test case extracting portion 330 for extracting test cases based on the total RI value for fault data. Accordingly, test case extraction can be performed considering various fault-related factors which are the bases for the test cases. Thus, for example, it is possible to preferentially extract any test case corresponding to a fault having a greater impact.
Furthermore, according to the present embodiment, test cases for a fault correction confirmation test are extracted based on the total RI values for fault data, whereas in the case of any test other than the fault correction confirmation test, test case extraction is performed based on the function-specific importance and the previous test results of the test cases. Thus, more appropriate test case extraction can be performed in accordance with the type of test to be executed.
The above-described software development management apparatus 7 is achieved based on the programs 21 to 25 executed by the CPU 10 for creating tables and so on, in the presence of hardware such as the memory 60 and the auxiliary storage device 70. Part or all of the programs 21 to 25 is provided, for example, via a computer-readable recording medium, such as a CD-ROM, on which the programs 21 to 25 are recorded. The user can purchase a CD-ROM as a recording medium of the programs 21 to 25, and load it into a CD-ROM drive (not shown), so that the programs 21 to 25 can be read from the CD-ROM and installed into the auxiliary storage device 70 of the software development management apparatus. As such, each step shown in the drawings is achieved by the CPU 10 executing the corresponding one of the programs 21 to 25.
While the invention has been described in detail, the foregoing description is in all aspects illustrative and not restrictive. It is understood that numerous other modifications and variations can be devised without departing from the scope of the invention.
Note that the present application claims priority to Japanese Patent Application No. 2008-22598, titled “SOFTWARE FAULT MANAGEMENT APPARATUS, TEST MANAGEMENT APPARATUS, AND PROGRAMS THEREFOR”, filed on Feb. 1, 2008, which is incorporated herein by reference.