The present application claims priority from Japanese application JP2021-086680, filed on May 24, 2021, the contents of which are hereby incorporated by reference into this application.
The present invention relates to a plan evaluation apparatus and a plan evaluation method, and is particularly suitable for application to a plan evaluation apparatus and a plan evaluation method that assist planning, evaluate a plan, and extract information from a plan.
In creating operation plans such as a product production plan and an employee shift schedule, it is necessary to create a schedule in consideration of an evaluation index (key performance indicator (KPI)) of the plan, such as maximization of output, maximization of capacity utilization, and minimization of the number of workers, while complying with constraint conditions regarding various items such as time, space, equipment, and resources such as people. Conventionally, employees (workers) having advanced know-how have created such schedules. However, from the viewpoint of a shortage of successors and improvement of operation efficiency, there are increasing cases where a plan creation apparatus using a computer is developed and made to perform automatic planning in consideration of the above conditions.
On the other hand, there are an extremely large number of constraint conditions and KPIs in planning a schedule, and they affect each other in a complicated manner. Therefore, developing the plan creation apparatus sometimes takes a lot of man-hours, and there is a demand for reducing these man-hours. One of the causes of the increase in man-hours is the work time required to evaluate a planned schedule, find a drawback, and examine a revision. Evaluation methods include a quantitative method based on a numerical value of a KPI and the degree of violation of a constraint condition, and a method of visually confirming the schedule to find an unintended bug or a condition that has not been formulated, by comparison with a plan made by a worker. However, the schedule is sometimes extremely large, with several thousand items or more, so that it is difficult to confirm it visually.
With regard to a technique for assisting the evaluation of such a schedule, JP 2019-209796 A describes a method including: calculating an overall evaluation value of a schedule by superimposing a plurality of partial evaluation components; extracting a partial evaluation component having a large contribution degree in a direction of degrading the overall evaluation value; and, when the partial evaluation component is itself calculated by superimposing smaller partial evaluation components, repeating the operation of extracting a partial evaluation component having a similarly large contribution, thereby extracting a component of the schedule that contributes to the overall evaluation value in the most undesirable direction.
Further, as a generic technique for numerically evaluating a schedule, a method of expressing a combination serving as a decision variable in each plan, such as producing or not producing a certain product, using a (0, 1) dummy variable is described in Christian D. Hubbs, Can Li, Nikolaos V. Sahinidis, Ignacio E. Grossmann, John M. Wassick, “A deep reinforcement learning approach for chemical production scheduling,” Computers and Chemical Engineering, 106982, vol. 141, (2020).
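The following is a minimal sketch of such a dummy-variable representation, assuming Python with pandas; the schedule contents, identifier formats, and column names are illustrative assumptions and are not taken from the cited literature.

```python
import pandas as pd

# Three assignment decisions expressed as rows of a small schedule
# (request and employee identifiers are illustrative only).
schedule = pd.DataFrame({
    "request":  ["R1", "R2", "R3"],
    "employee": ["E0001", "E0003", "E0001"],
})

# One (0, 1) dummy column per (request, employee) combination that occurs.
# A full formulation needs a column for every possible combination, so the
# table becomes high-dimensional and sparse for realistic schedule sizes.
pairs = schedule["request"] + "_" + schedule["employee"]
dummies = pd.get_dummies(pairs)
print(dummies)
```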
Further, in the field of AI models such as machine learning, a method for calculating a contribution rate with respect to a prediction value of the AI for each feature expressed as a simple numerical sequence, using sets of a plurality of pieces of perturbation data generated by changing the data to be evaluated and the prediction values obtained by inputting the respective pieces of perturbation data to the AI, is described in Lundberg, Scott M., and Su-In Lee, “A unified approach to interpreting model predictions,” Advances in Neural Information Processing Systems, pp. 4765-4774, (2017).
However, the technique described in JP 2019-209796 A has a problem that only a KPI allowing partial evaluation can be analyzed. The partial evaluation means that evaluation is performed individually after dividing a schedule. Further, it is necessary for an evaluator to designate a schedule division rule for the partial evaluation, which requires additional man-hours. Furthermore, a plan component eventually obtained by this technique is a component that directly degrades the value of the KPI; however, since the components of the schedule affect each other in a complicated manner, there is also a component that indirectly affects the KPI by affecting other plan components without directly changing the KPI. The technique of JP 2019-209796 A has a problem that it is difficult to extract such a component that does not directly change the KPI.
Further, in a case where the schedule is described by the technology described in Christian D. Hubbs, Can Li, Nikolaos V. Sahinidis, Ignacio E. Grossmann, John M. Wassick, “A deep reinforcement learning approach for chemical production scheduling,” Computers and Chemical Engineering, 106982, vol. 141, (2020), it is necessary to create dummy variables for all possible combinations, and there is a problem that the schedule has a high-dimensional and sparse data structure and is very difficult to handle.
Further, the technique described in Lundberg, Scott M., and Su-In Lee, “A unified approach to interpreting model predictions,” Advances in Neural Information Processing Systems, pp. 4765-4774, (2017) can calculate the contribution rate of a feature expressed as a simple numerical sequence with respect to a prediction, but has a problem that it is not directly applicable to schedule evaluation.
The present invention has been made in view of the above points, and aims to propose a plan evaluation apparatus and a plan evaluation method capable of extracting, from a schedule, a component having a large direct or indirect influence degree even on a KPI that does not allow partial evaluation, by expanding a method of calculating an influence degree of a feature in a machine learning model to the evaluation of a schedule.
In order to solve such a problem, the present invention provides a plan evaluation apparatus that evaluates a schedule planned by combining a plurality of plans, the plan evaluation apparatus including: a feature conversion unit that divides the schedule into plan components based on a predetermined conversion rule and converts the divided plan components into features; a model learning unit that uses the features as an input and creates a machine learning model having a key performance indicator (KPI) of the schedule as an objective variable; a contribution rate calculation unit that calculates a contribution rate of each of the features with respect to the machine learning model; and an influence degree calculation unit that calculates, based on the contribution rate of each feature, an influence degree representing the influence of the plan component which is a conversion source of the feature on the KPI of the schedule.
In order to solve such a problem, the present invention further provides a plan evaluation method performed by a plan evaluation apparatus that evaluates a schedule planned by combining a plurality of plans. The plan evaluation method includes: a feature conversion step of dividing, by the plan evaluation apparatus, the schedule into plan components based on a predetermined conversion rule, and converting the divided plan components into features; a model learning step of receiving, by the plan evaluation apparatus, an input of the features converted in the feature conversion step and creating a machine learning model having a key performance indicator (KPI) of the schedule as an objective variable; a contribution rate calculation step of calculating, by the plan evaluation apparatus, a contribution rate of each of the features with respect to the machine learning model created in the model learning step; and an influence degree calculation step of calculating, by the plan evaluation apparatus, based on the contribution rate of each feature calculated in the contribution rate calculation step, an influence degree representing the influence of the plan component which is a conversion source of the feature on the KPI of the schedule.
According to the present invention, in the evaluation of the schedule, it is possible to extract, from the schedule, the component having a large influence degree directly or indirectly on the KPI not allowing the partial evaluation.
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
Hereinafter, only the evaluation of a personnel assignment plan for assigning a request to an appropriate employee will be described, but the plan evaluation method according to the present invention can be widely applied to any system in which it is necessary to create and change a plan by combining various evaluation viewpoints in a complex manner, such as an operation plan for transportation means such as aircraft, buses, and railways, a product manufacturing plan in a factory, and the like.
Further, the respective embodiments to be described hereinafter are merely examples for realizing the present invention and do not limit a technical scope of the present invention. Those skilled in the art can easily understand that specific configurations can be changed without departing from the spirit or gist of the present invention.
In the configurations of the invention to be described hereinafter, the same or similar configurations or functions will be denoted by the same reference signs, and redundant descriptions will be omitted. Further, some or all of the above-described configurations, functions, processing units, processing means, and the like described in the respective embodiments may be realized, for example, by hardware by designing with an integrated circuit and the like. Further, each of the above-described configurations, functions, and the like may also be realized by software by causing a processor to interpret and execute a program for realizing each of the functions. Information such as programs, tables, and files that realize the respective functions can be installed in a recording device such as a memory, a hard disk, and a solid state drive (SSD), or a recording medium such as an IC card, an SD card, and a DVD.
Further, positions, sizes, shapes, ranges, and the like of the respective components illustrated in the drawings and the like do not always indicate actual positions, sizes, shapes, ranges, and the like, in order to facilitate understanding of the invention. In the drawings, control lines and information lines considered to be necessary for the description are illustrated, and not all of the control lines and information lines required in a product are necessarily illustrated. In practice, it may be considered that most of the configurations are connected to each other. Therefore, the invention is not limited to the positions, sizes, shapes, ranges, and the like disclosed in the drawings and the like.
The storage apparatus 1001 is a general-purpose apparatus that permanently stores data, such as a hard disk drive (HDD) and a solid state drive (SSD), and includes plan information 1010 and influence-degree-related information 1020. Note that the storage apparatus 1001 may be configured on a terminal similar to the other apparatuses constituting the plan evaluation apparatus 1000, or may be configured to exist on a cloud or an external server instead of on the same terminal as the other apparatuses described above, and refer to data via a network.
The plan information 1010 includes: operation-related data 1011 serving as an input to generate a schedule made by combining a plurality of plans; a constraint list 1012; an evaluation index list 1013; a schedule 1014 generated by the planning; and KPI data 1015 indicating evaluation results of the schedule 1014.
Examples of the operation-related data 1011 include skill information and request assignable time information of an employee, time information on the date, start, and end of an assigned request, skill condition information, and other master information.
The constraint list 1012 is data storing constraint conditions that the schedule 1014 needs to satisfy, for example, that a request is not assigned to a time at which an employee cannot be assigned, that overtime does not exceed a prescribed value, and the like, and is used to generate a plan.
The influence-degree-related information 1020 includes: a machine learning model 1021 that learns an influence relationship between the schedule 1014 and the KPI; a feature conversion rule 1022; a plan feature table 1023 obtained by converting the schedule 1014 into features; a contribution rate calculation result 1024; and an influence degree calculation result 1025.
The processing apparatus 1002 is, for example, a general-purpose computer including a central processing unit (CPU), a memory, and the like. The processing apparatus 1002 includes a planning-related processing unit 1030, an influence degree evaluation processing unit 1040, a screen output unit 1050, and a data input unit 1060 therein in a form of storing these in a memory as software programs or the like.
The planning-related processing unit 1030 includes: a plan generation unit 1031 that receives an input of data of the storage apparatus 1001 and outputs the schedule 1014; and an evaluation index calculation unit 1032 that evaluates the obtained schedule 1014 and outputs a KPI value.
The influence degree evaluation processing unit 1040 includes a feature conversion unit 1041 that converts the schedule 1014 automatically or based on the feature conversion rule 1022 to obtain the plan feature table 1023, a model learning unit 1042 that generates the machine learning model 1021, a contribution rate calculation unit 1043 that obtains a contribution rate of a feature in the model (machine learning model 1021), and an influence degree calculation unit 1044 that extracts an influence portion of the schedule 1014 on the KPI.
The screen output unit 1050 has a function of outputting information for causing the output apparatus 1004 to display a predetermined screen. Specifically, for example, the screen output unit 1050 generates information for screen display based on the influence degree calculation result 1025 calculated by the influence degree calculation unit 1044, and transmits the information to the output apparatus 1004. As a result, the output apparatus 1004 displays an output screen based on the influence degree calculation result 1025.
The data input unit 1060 has a function of inputting a processing execution instruction and data according to an input operation with respect to the input apparatus 1003 performed by a developer (hereinafter, user), and is utilized when the user makes a change to a schedule or sets a parameter. Specifically, for example, in a case where a user-specific feature conversion rule is designated, the data input unit 1060 receives the feature conversion rule from the input apparatus 1003 operated by the user, and stores the feature conversion rule in the storage apparatus 1001 as the feature conversion rule 1022.
The input apparatus 1003 is a general-purpose input apparatus for a computer, and is, for example, a mouse, a keyboard, a touch panel, or the like.
The output apparatus 1004 is an apparatus such as a display, and displays the output screen representing an evaluation result (for example, the influence degree calculation result 1025) obtained by the processing apparatus 1002 via the screen output unit 1050. In a case where it is unnecessary for a person to confirm an evaluation result obtained by the processing apparatus 1002 (for example, in a case where an evaluation result is directly delivered to a system that automatically makes a plan), the output apparatus 1004 is not necessarily provided in the plan evaluation apparatus 1000.
The schedule 1014 is configured as matrix data (table data), and is divided into fixed plan master information 31 common to all schedules and target items 32 unique to the schedules, respectively. Types of the schedule 1014 will be described later.
The plan master information 31 stores basic information of a schedule such as the date, start time and end time of a request, and a request number. Data of each item of the plan master information 31 is output from the operation-related data 1011 and stored in the schedule 1014.
The target item 32 stores information unique to the schedule. A combination of the target items 32 is determined by the plan generation unit 1031 so as to optimize a value in the evaluation index list 1013 while satisfying a condition in the constraint list 1012. For example, in a case where the plan evaluation apparatus 1000 evaluates a personnel assignment plan, the target item 32 is an employee number, and the evaluation of the personnel assignment plan is a combinatorial problem of determining the assignment of an employee to a predetermined request.
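As a purely illustrative sketch (Python with pandas is assumed, and the column names and values are hypothetical rather than the actual format of the schedule 1014), the table data of a personnel assignment schedule may look as follows.

```python
import pandas as pd

schedule_1014 = pd.DataFrame({
    # plan master information 31 (common to all schedules)
    "date":            ["8/1", "8/1", "8/2"],
    "start_time":      ["09:00", "13:00", "09:00"],
    "end_time":        ["12:00", "17:00", "12:00"],
    "request_number":  ["R1", "R2", "R3"],
    # target item 32 (unique to this schedule, decided by the plan generation unit 1031)
    "employee_number": ["0001", "0003", "0002"],
})
print(schedule_1014)
```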
As described above, any problem in the schedule 1014 for obtaining a combination of certain items is subjected to the evaluation by the plan evaluation apparatus 1000. Further, the plan component described in the present invention refers to one row in the schedule 1014. Then, “calculating an influence degree of a schedule on a KPI” refers to “extracting a plan component that has greatly contributed to a change of a KPI in a certain schedule”.
Note that three types of the schedules 1014 including a target schedule, a historical schedule, and a reference schedule are used in the present embodiment. In all the schedules, the plan master information 31 is common, and the target items 32 contain different or identical items.
The target schedule is a schedule that is subjected to the calculation and analysis of an influence degree, and is generated in step S501 of the influence degree calculation process described later.
The historical schedule is a schedule generated in advance (generated in the past) in a different process from the target schedule. The process of generating the historical schedule may be manual work by a worker, a plan generation algorithm different from that of the target schedule, or the same algorithm as that of the target schedule utilizing its randomness; it suffices that the generated historical schedule has the same plan master information 31 as the target schedule and a target item 32 in the same format as that of the target schedule. The historical schedule is useful for enhancing the accuracy of the calculation of the influence degree on the target schedule.
The reference schedule may be considered as a type of historical schedule, and is a schedule to be clearly compared with the target schedule. The reference schedule corresponds to, for example, a schedule as a sample created by a worker or a schedule created by a plan generation algorithm before performing certain correction work.
The KPI data 1015 is a database including: an index number 41 in which a number assigned to each index is stored; an index name 42 in which a name of an evaluation index related to the index number 41 is stored; and a value 43 of an evaluation result. The number stored in the index number 41 corresponds to the number of the index number 21 in the evaluation index list 1013.
Hereinafter, the influence degree calculation process of calculating an influence degree of a target schedule will be described with reference to the drawings as appropriate.
In the influence degree calculation process, the plan generation unit 1031 first receives inputs of the operation-related data 1011, the constraint list 1012, and the evaluation index list 1013 stored in the storage apparatus 1001, and generates a target schedule (the schedule 1014 to be analyzed) (step S501).
Next, the evaluation index calculation unit 1032 receives inputs of the evaluation index list 1013 and the target schedule (schedule 1014) output in step S501, calculates the KPI value of the target schedule, and stores the KPI data 1015 indicating the calculation result in the storage apparatus 1001 (step S502). In a case where the plan generation unit 1031 and the evaluation index calculation unit 1032 are configured by the same program or the same architecture, the processes of steps S501 and S502 are executed at the same time.
Next, when performing feature conversion to input each schedule to the model learning unit 1042, the user selects whether to introduce the user-specific feature conversion rule 1022 (step S503). If the introduction of the user-specific feature conversion rule 1022 has not been selected (NO in step S503), it is assumed that an automatic rule for automatically converting a schedule into a feature is used in the feature conversion, and the process proceeds to step S505.
On the other hand, if the introduction of the user-specific feature conversion rule 1022 has been selected in step S503 (YES in step S503), the user operates the input apparatus 1003 to describe the feature conversion rule 1022 to be introduced (step S504). Details of a method for describing the feature conversion rule 1022 will be described later.
In step S505, the feature conversion unit 1041 receives inputs of the target schedule generated in step S501, a historical schedule obtained in the past, and the KPI values (KPI data 1015) of the target schedule and the historical schedule, and executes a feature conversion process of performing, a plurality of times, conversion of the target schedule into features using the feature conversion rule 1022 or the automatic rule selected in steps S503 and S504, or a combination of the feature conversion rule 1022 and the automatic rule. A detailed processing procedure of the feature conversion process will be described later; a plurality of plan feature tables 1023 are obtained by this feature conversion process.
Next, in step S506, the influence degree evaluation processing unit 1040 selects one of a plurality of the plan feature tables 1023 obtained in the feature conversion process in step S505, and starts machine learning loop processing.
In the machine learning loop processing, in step S507, the model learning unit 1042 first performs machine learning with the plan feature table 1023 selected in step S506 as an input and the KPI data 1015 (which may be read as a KPI 93 of the plan feature table 1023) corresponding to the plan feature table 1023 as an output (objective variable), thereby creating the machine learning model 1021 that has learned a relationship between each schedule and the KPI data 1015. The machine learning model 1021 created in step S507 is stored in the storage apparatus 1001. Note that the machine learning model 1021 created by the model learning unit 1042 is assumed to be a supervised learning model, for example, a model based on a decision tree or a neural network, which executes processing on input data and outputs a prediction value.
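The following is a minimal sketch of the model learning in step S507, assuming Python with scikit-learn as one possible supervised learning implementation; the plan feature table, the feature names, and the KPI values are synthetic placeholders.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Each row is one schedule expressed as features; the KPI value is the objective variable.
plan_feature_table = pd.DataFrame({
    "per_date_8_1": [0, 1, 1, 0],
    "per_date_8_2": [0, 0, 1, 1],
    "per_date_8_3": [0, 1, 0, 1],
})
kpi = pd.Series([10.0, 7.5, 6.0, 8.0], name="kpi")

# Supervised learning with the features as input and the KPI as objective variable.
model_1021 = RandomForestRegressor(n_estimators=100, random_state=0)
model_1021.fit(plan_feature_table, kpi)
print(model_1021.predict(plan_feature_table))
```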
In the next step S508, the contribution rate calculation unit 1043 calculates a contribution rate of each feature based on the machine learning model 1021 created in step S507, the plan feature table 1023 selected in step S506, and the KPI data 1015 corresponding to the plan feature table 1023, and stores information indicating the calculation result (contribution rate calculation result 1024) in the storage apparatus 1001.
As a method for calculating the contribution rate by the contribution rate calculation unit 1043, it is possible to apply, for example, a method described in Lundberg, Scott M., and Su-In Lee, “A unified approach to interpreting model predictions,” Advances in Neural Information Processing Systems, pp. 4765-4774, (2017). However, the invention is not limited to this method, and any method may be used as long as a contribution rate with respect to prediction of each feature in a machine learning model can be calculated. Here, it is desirable that an initial value used for the contribution rate calculation be a reference schedule.
In a case where there is no conflict in each schedule even after replacing the respective features in the plan feature tables 1023 obtained from each of the target schedule and the reference schedule, the contribution rate calculation unit 1043 may directly calculate the contribution rate (that is, the influence degree) by a method for calculating a Shapley value described in Lundberg, Scott M., and Su-In Lee, “A unified approach to interpreting model predictions,” Advances in Neural Information Processing Systems, pp. 4765-4774, (2017) in step S508 without creating the machine learning model 1021 in step S507.
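A sketch of the contribution rate calculation in step S508 is shown below, assuming the open-source "shap" package as one possible implementation of the Shapley-value-based method cited above; the model, the feature names, and the data are synthetic and only illustrate the interface.

```python
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

plan_feature_table = pd.DataFrame({
    "per_date_8_1": [0, 1, 1, 0],
    "per_date_8_2": [0, 0, 1, 1],
    "per_date_8_3": [0, 1, 0, 1],
})
kpi = [10.0, 7.5, 6.0, 8.0]
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(plan_feature_table, kpi)

# Shapley-value-based contribution rate of each feature for the row that
# corresponds to the target schedule (row 1 here, chosen arbitrarily).
explainer = shap.TreeExplainer(model)
contribution_rates = explainer.shap_values(plan_feature_table.iloc[[1]])
print(dict(zip(plan_feature_table.columns, contribution_rates[0])))
```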
The above-described machine learning loop processing (steps S506 to S508) is repeatedly executed for all of the plurality of plan feature tables 1023 obtained by the feature conversion process in step S505 (step S509), and the process proceeds to step S510 if the processing is completed for all the plan feature tables 1023.
In step S510, the influence degree calculation unit 1044 executes an influence degree information aggregation process of distributing and integrating the contribution rates obtained in step S508 of the above loop processing and calculating an influence degree of each target schedule component on the KPI value. A detailed processing procedure of the influence degree information aggregation process will be described later.
Next, the influence degree calculation unit 1044 creates the influence degree calculation result 1025 by aggregating the respective influence degrees calculated in the process of step S510, and the KPI values and target schedule used to calculate the respective influence degrees, and stores the created influence degree calculation result 1025 in the storage apparatus 1001 (step S511).
Finally, the screen output unit 1050 generates the information for screen display based on the influence degree calculation result 1025 created in step S511, and transmits the information for screen display to the output apparatus 1004. As a result, the output apparatus 1004 displays the output screen based on the influence degree calculation result 1025 (step S512). A specific example of the output screen will be described later.
In a case where a schedule is directly converted into a feature that can be applied to machine learning, it is common to adopt a method of performing conversion into a dummy variable as described in Christian D. Hubbs, Can Li, Nikolaos V. Sahinidis, Ignacio E. Grossmann, John M. Wassick, “A deep reinforcement learning approach for chemical production scheduling,” Computers and Chemical Engineering, 106982, vol. 141, (2020). However, such a conversion into the dummy variable results in a high-dimensional and sparse data structure, which leads to a problem that handling is difficult. Therefore, a feature is obtained by compression and conversion of a schedule using one or more methods in the present embodiment. A compression method may utilize some column information of the schedule or may be randomly determined.
In the feature conversion rule 1022, for example, a division rule designating how a schedule is divided into plan components (for example, per date) and a rule for compressing and converting the divided items into features are described.
In the feature conversion process, the feature conversion unit 1041 first starts feature conversion loop processing (step S701).
In the feature conversion loop processing, the feature conversion unit 1041 first determines whether to apply the feature conversion rule 1022 to the feature conversion, based on the designation by the user in step S503 described above (step S702). If it has been designated to apply the feature conversion rule 1022 (YES in step S702), the feature conversion unit 1041 adopts the feature conversion rule 1022 described by the user as the rule to be used for the conversion (step S703), and the process proceeds to step S705.
On the other hand, if it has not been designated to apply the feature conversion rule 1022 to the conversion of the feature (NO in step S702), the feature conversion unit 1041 performs random division or designates any column of the plan master information 31 in the target schedule, automatically creates a feature conversion rule (automatic rule) for the information of the column (step S704), and proceeds to step S705. Specifically, in step S704, the feature conversion unit 1041 automatically designates the feature conversion rule for the information of the column per category in a case where the information of the designated column is a category, or based on a range such as a maximum value and a minimum value in a case where the information of the designated column is a numerical value. A basic clustering algorithm or conditional branching can be used at the time of designating such a conversion rule.
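A simple sketch of such automatic rule creation is shown below, assuming Python with pandas; the function name, the column contents, and the use of equal-width binning for numerical columns are illustrative assumptions rather than the actual rule format.

```python
import pandas as pd

def make_automatic_rule(column: pd.Series, n_bins: int = 3):
    """Create a division rule from one column of the plan master information."""
    if column.dtype == object:
        # category column: one group per category value
        return {"type": "category", "groups": sorted(column.unique())}
    # numerical column: ranges derived from the minimum and maximum values
    edges = pd.cut(column, bins=n_bins, retbins=True)[1]
    return {"type": "range", "edges": list(edges)}

master = pd.DataFrame({"date": ["8/1", "8/1", "8/2"], "start_hour": [9, 13, 9]})
print(make_automatic_rule(master["date"]))        # per-category rule
print(make_automatic_rule(master["start_hour"]))  # numerical range rule
```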
In step S705, the feature conversion unit 1041 determines whether to extract a difference of the schedule. The difference of the schedule is a difference between the target schedule and the reference schedule. There is a case where the comparison and model learning are facilitated by setting only the difference from the reference schedule as the target item 32 of the target schedule and evaluating this difference. Therefore, it is possible for the user to designate whether to extract the difference of the schedule in the present embodiment. The process proceeds to step S707 through step S706 if the difference of the schedule is to be extracted (YES in step S705), and directly proceeds to step S707 if the difference of the schedule is not to be extracted (NO in step S705).
In step S706, the feature conversion unit 1041 extracts only the components (rows) in which the target item 32 differs between the target schedule and the reference schedule.
In step S707, the feature conversion unit 1041 divides the schedule (target schedule and historical schedule) according to the rule obtained in step S703 or step S704.
Next, the feature conversion unit 1041 performs compression and conversion on each of the schedules divided in step S707 to convert a value of an item (the target item 32) into a “feature” using a category or a numerical value per plan (step S708). As a method for the compression and conversion, conversion into a feature (category or numerical value) unique to each combination of schedules may be performed, or conversion into a numerical value may be performed using an existing distance scale such as a cosine distance. A specific example of the compression and conversion will be described later.
Next, the feature conversion unit 1041 combines the feature per plan that has been obtained by the compression and conversion in step S708 and a corresponding KPI value (the value 43 of the KPI data 1015) to create the plan feature table 1023, and stores the plan feature table 1023 in the storage apparatus 1001 (step S709). A specific example of the plan feature table 1023 will be described later.
Then, the feature conversion unit 1041 repeats the processes of steps S702 to S709 until reaching a predetermined repeat upper limit (step S710). Note that the repeat upper limit may be designated based on the number of the feature conversion rules 1022 designated by the user, or a time or a number of times of conversion (number of loops) may be explicitly designated in advance. However, when the feature conversion rule 1022 has been newly described by the user (see step S504 described above), it is desirable that the conversion using the newly described feature conversion rule 1022 be performed at least once.
As described above, the feature conversion loop processing is performed so that a plurality of times of compression and conversion are performed in the feature conversion process. Thus, the plan evaluation apparatus 1000 can analyze the schedule from various viewpoints, and the reliability of the influence degree to be calculated can also be enhanced.
In the example described here, a reference schedule 81 (plan A), a target schedule 82 (plan B), and a historical schedule 83 (plan C) are each compressed and converted into features according to the division rules of “per date” and “random”.
In the compression and conversion according to the division rule of “per date”, numerical values are assigned to columns per day in the reference schedule 84, the target schedule 86, and the historical schedule 88 after being compressed and converted into features. These numerical values are obtained by compressing and converting employee numbers in the original schedule. Specifically, for example, a compression rule is adopted in which a combination appearing in the original reference schedule 81 of the plan A is set to “0”, and a different combination not appearing in the reference schedule 81 is set to another numerical value such as “1”.
Since the above compression rule is adopted, all values of the respective dates are naturally converted to “0” in the reference schedule 84 after the compression and conversion of the plan A.
Further, in the target schedule 86 after the compression and conversion of the plan B, employee numbers of “8/1” in the target schedule 82 form different combinations from those of the reference schedule 81 and, thus, a value of the date is converted to “1”. Further, employee numbers of “8/2” and “8/3” in the target schedule 82 form the same combinations as those of the reference schedule 81 and, thus, values of the dates are converted to “0”.
In the case of the plan C, an employee number of “8/2” in the historical schedule 83 forms the same combination as that of the reference schedule 81, and employee numbers of “8/1” and “8/3” in the historical schedule 83 form different combinations from those of the reference schedule 81. Here, when the compression and conversion is performed in the same manner as in the above-described plan B, in the historical schedule 88 after the compression and conversion of the plan C, a value of “8/2” is converted to “0”, and values of “8/1” and “8/3” are converted to “1”.
In this manner, the reference schedule 84, the target schedule 86, and the historical schedule 88 after the compression and conversion according to the division rule of “per date” are obtained.
Next, in the compression and conversion according to the division rule of “random”, each schedule is divided not by a clear rule, such as a value of a certain column (for example, date), but into combinations of randomly selected rows, and a compression rule similar to that of the division rule of “per date” is applied to the divided items. Thereby, a combination of employee numbers is converted into a feature expressed as a numerical value.
Specifically, for example, in a case where the “first row and third row” are selected as a combination of rows in the division rule of “random”, the combination of employee numbers of the “first row and third row” in the historical schedule 83 (plan C) before the compression and conversion is “0001” and “0003”. Therefore, the feature conversion unit 1041 can determine a feature (value) of the corresponding item in the historical schedule 89 by comparing this combination of employee numbers with the corresponding combinations of employee numbers in the reference schedule 81 (plan A) and the target schedule 82 (plan B).
Note that the above-described compression and conversion method is merely an example, and the compression and conversion of the feature in the present embodiment is not limited thereto. For example, the above-described compression rule uses a combination in the reference schedule 81 (plan A) as a baseline; however, the baseline is not limited to the reference schedule.
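The “per date” compression rule described above can be sketched as follows (Python with pandas is assumed, and the schedule contents are illustrative): each date is encoded as 0 when its combination of employee numbers matches the reference schedule and as another numerical value such as 1 otherwise.

```python
import pandas as pd

def compress_per_date(schedule: pd.DataFrame, reference: pd.DataFrame) -> dict:
    """Convert a schedule into one feature value per date relative to a reference."""
    feature = {}
    for date, group in schedule.groupby("date"):
        ref_group = reference[reference["date"] == date]
        same = list(group["employee_number"]) == list(ref_group["employee_number"])
        feature[date] = 0 if same else 1
    return feature

reference_81 = pd.DataFrame({"date": ["8/1", "8/1", "8/2", "8/3"],
                             "employee_number": ["0001", "0002", "0001", "0003"]})
target_82 = pd.DataFrame({"date": ["8/1", "8/1", "8/2", "8/3"],
                          "employee_number": ["0002", "0003", "0001", "0003"]})

print(compress_per_date(reference_81, reference_81))  # {'8/1': 0, '8/2': 0, '8/3': 0}
print(compress_per_date(target_82, reference_81))     # {'8/1': 1, '8/2': 0, '8/3': 0}
```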
The plan feature table 1023 configured as described above is incorporated into the machine learning model 1021 in step S507 described above.
The contribution rate calculation result 1024 is configured as table data in which a contribution rate 103 calculated by the contribution rate calculation unit 1043 is stored for each feature.
In the influence degree information aggregation process, the influence degree calculation unit 1044 first starts contribution rate integration loop processing (step S1101).
In the contribution rate integration loop processing, the influence degree calculation unit 1044 first extracts a set of the contribution rate calculation result 1024 and a target schedule (the schedule 1014) corresponding thereto (step S1102).
Next, the influence degree calculation unit 1044 starts contribution rate distribution loop processing of distributing a contribution rate to plan components for each feature whose contribution rate is indicated in the contribution rate calculation result 1024 extracted in step S1102 (step S1103). The contribution rate is an influence degree, with respect to the prediction, of each feature expressed as a simple numerical sequence. However, each feature is a numerical value obtained by converting plan components, and thus it is difficult to obtain the influence degree of a plan component on a KPI from the contribution rate alone. Therefore, in the present invention, the contribution rate distribution loop processing is executed to distribute the contribution rate to the plan components, whereby the contribution rate in the machine learning model 1021 is applied to the schedule.
In the contribution rate distribution loop processing, the influence degree calculation unit 1044 selects one feature from the contribution rate calculation result 1024 extracted in step S1102 (step S1104).
Next, the influence degree calculation unit 1044 extracts plan components before conversion of the feature selected in step S1104 from a corresponding schedule extracted in step S1102 (step S1105).
Next, the influence degree calculation unit 1044 distributes the contribution rate 103 of the row extracted from the contribution rate calculation result 1024 in step S1104 to the plan components extracted in step S1105 (step S1106). A method for this distribution is not particularly limited, and for example, there are a method of evenly allocating the contribution rate to each of the plan components and a method of allocating the value (contribution rate) as it is. Further, for example, an algorithm such as a machine learning model may be used to determine the distribution of the contribution rate.
Further, when a contribution rate in another machine learning model 1021 already exists in the plan component, the influence degree calculation unit 1044 integrates the existing contribution rate and the newly distributed contribution rate in step S1106. A method for this integration is not particularly limited, and any method such as simple addition, weighting, and multiplication may be adopted.
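A minimal sketch of the distribution and integration in step S1106, using the even-allocation and simple-addition options mentioned above, is shown below; the feature names, contribution rates, and conversion-source row indices are hypothetical.

```python
from collections import defaultdict

# Contribution rate of each feature (from one contribution rate calculation result 1024).
contribution_rates = {"per_date_8_1": 0.6, "per_date_8_2": -0.2}
# Plan components (row indices of the target schedule) that are each feature's conversion sources.
conversion_sources = {"per_date_8_1": [0, 1], "per_date_8_2": [2]}

influence_per_component = defaultdict(float)
for feature, rate in contribution_rates.items():
    sources = conversion_sources[feature]
    for row in sources:
        # Even allocation of the contribution rate, integrated across models by simple addition.
        influence_per_component[row] += rate / len(sources)

print(dict(influence_per_component))  # {0: 0.3, 1: 0.3, 2: -0.2}
```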
Then, the influence degree calculation unit 1044 repeats the contribution rate distribution loop processing (steps S1104 to S1106) as many times as the number of types of features indicated in the contribution rate calculation result 1024 extracted in step S1102 (step S1107).
Then, after the contribution rate distribution loop processing ends, the influence degree calculation unit 1044 repeats the contribution rate integration loop processing (steps S1102 to S1107) for all the contribution rate calculation results 1024 (step S1108), and then ends the influence degree information aggregation process.
The influence degree calculation result 1025 is configured as matrix data (table data) similarly to the schedule 1014, and stores an influence degree 1203 calculated for each plan component together with the plan master information and the target item of the target schedule.
Since the influence degree calculation result 1025 is created in such a format that the influence degree 1203 is associated with each plan component, the user can easily grasp which plan component of the target schedule has a large influence on the KPI value.
Furthermore, the influence degree 1203 indicated in the influence degree calculation result 1025 is the influence degree obtained from the result of learning the relationship between the schedule 1014 and the KPI value (KPI data 1015) by the machine learning model 1021. Thus, the plan evaluation apparatus 1000 according to the present embodiment can extract not only a component that directly affects the KPI value but also a component that is likely to indirectly affect the KPI value, which is different from the technique of JP 2019-209796 A in which only a component that directly changes a KPI value can be extracted.
A display example based on the influence degree calculation result 1025 will be described next.
The output screen 1400 displays the plan components of the target schedule while distinguishing between a plan component 1403 having a small influence degree and a plan component 1404 having a large influence degree.
On the output screen 1400, whether to classify each plan component (request number in this example) into the plan component 1403 having a small influence degree or the plan component 1404 having a large influence degree may be determined by any method based on a value of an influence degree in the influence degree calculation result 1025. For example, request numbers indicating a predetermined number of influence degrees from the top may be classified into the plan component 1404 having a large influence degree, or a request number indicating an influence degree exceeding a predetermined threshold may be classified into the plan component 1404 having a large influence degree.
By confirming such an output screen 1400, the user can easily grasp which plan components have a large influence degree on the KPI.
In the above-described influence degree calculation process, the machine learning model 1021 is created and the contribution rate is calculated for each of the plan feature tables 1023, and the obtained contribution rates are then distributed and integrated per plan component. Alternatively, it is also possible to adopt an influence degree calculation process in which a plurality of the machine learning models 1021 are integrated first and the influence degree of each plan component is then calculated from the integrated model. This modified influence degree calculation process will be described below.
In the influence degree calculation process according to this modification, the feature conversion unit 1041 first executes the feature conversion process in the same manner as in step S505 described above, so that a plurality of the plan feature tables 1023 are obtained (step S1501).
Next, the influence degree evaluation processing unit 1040 selects one from the plurality of plan feature tables 1023 obtained in step S1501 and starts machine learning loop processing (step S1502).
In the machine learning loop processing, the model learning unit 1042 first performs machine learning with the plan feature table 1023 as an input and the KPI data 1015 (which may be read as the KPI 93 of the plan feature table 1023) corresponding to the plan feature table 1023 as an output, in the same manner as in step S507 described above, and thereby creates the machine learning model 1021 (step S1503).
Then, the machine learning loop processing is repeatedly executed for all of the plurality of plan feature tables 1023 obtained in step S1501 (step S1504).
In step S1505, the model learning unit 1042 integrates the plurality of machine learning models 1021 created by the machine learning loop processing. The integrated model is referred to as the integration model. As an integration method of the integration model in step S1505, an existing integration method may be adopted, and for example, a weighted averaging of outputs, a stacking method of stacking the machine learning models 1021, or the like can be used.
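The following is a sketch of the integration in step S1505 by weighted averaging of outputs, assuming scikit-learn models trained on the same toy features for simplicity; a stacking implementation is equally possible, and all weights and data are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

class WeightedAverageEnsemble:
    """Integration model: a weighted average of the outputs of several trained models."""
    def __init__(self, models, weights):
        self.models = models
        self.weights = np.asarray(weights, dtype=float) / np.sum(weights)

    def predict(self, X):
        # Stack each model's KPI prediction and combine the columns with the weights.
        preds = np.column_stack([m.predict(X) for m in self.models])
        return preds @ self.weights

# Toy features and KPI values standing in for the plan feature tables.
X = np.array([[0, 1], [1, 0], [1, 1], [0, 0]])
y = np.array([7.5, 8.0, 6.0, 10.0])
models = [DecisionTreeRegressor(max_depth=d, random_state=0).fit(X, y) for d in (1, 2)]

integration_model = WeightedAverageEnsemble(models, weights=[0.5, 0.5])
print(integration_model.predict(X))
```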
Next, the contribution rate calculation unit 1043 calculates a contribution rate of each feature in the integration model integrated in step S1505, and stores information indicating the calculation result (contribution rate calculation result 1024) in the storage apparatus 1001 (step S1506). As a method for calculating the contribution rate, an existing technique may be used in the same manner as in step S508 described above.
Next, the influence degree calculation unit 1044 starts influence degree calculation loop processing per plan component of a target schedule (step S1507). When attention is paid to a certain plan component of the target schedule, the plan component is compressed and converted into one feature for one machine learning model 1021 in the influence degree calculation process described above, whereas the plan component is included as a conversion source in a plurality of features in the integration model.
In the influence degree calculation loop processing, the influence degree calculation unit 1044 first selects one plan component set as a target of the calculation of an influence degree from the target schedule (step S1508).
Next, the influence degree calculation unit 1044 selects, from the integration model, features in which the plan component selected in step S1508 is included as a conversion source (step S1509).
Next, the influence degree calculation unit 1044 calculates a total value of contribution rates of the features selected in step S1509 (step S1510). The total value of the contribution rates calculated in step S1510 is adopted as an approximation of the influence degree of the target plan component. Therefore, as the machine learning models 1021 are generated and integrated from more plan feature tables 1023, more features can be compressed and converted for the target plan component, which leads to calculation of the influence degree of the feature. That is, as the machine learning models 1021 are generated and integrated from more plan feature tables 1023, pieces of information serving as sources of influence degrees increase, which leads to improvement of the reliability of the influence degree (total value of contribution rates) calculated in step S1510.
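Steps S1508 to S1510 can be sketched as follows; the contribution rates and the mapping from features to their conversion-source plan components are hypothetical values used only to show the summation.

```python
# Contribution rate of each feature in the integration model (illustrative values).
contribution_rates = {"per_date_8_1": 0.5, "random_1": 0.25, "per_date_8_2": -0.2}
# Plan components (target schedule rows) that are each feature's conversion sources.
conversion_sources = {"per_date_8_1": {0, 1}, "random_1": {0, 2}, "per_date_8_2": {2}}

plan_component = 0  # plan component of interest selected in step S1508

# Steps S1509 and S1510: total of the contribution rates of every feature whose
# conversion sources include the plan component, used as its influence degree.
influence_degree = sum(
    rate for feature, rate in contribution_rates.items()
    if plan_component in conversion_sources[feature]
)
print(influence_degree)  # 0.5 + 0.25 = 0.75
```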
The above-described influence degree calculation loop processing (steps S1507 to S1510) is repeatedly executed for all the plan components included in the target schedule (step S1511), and the process proceeds to step S1512 when the influence degrees are calculated for all the plan components.
In step S1512, the influence degree calculation unit 1044 creates the influence degree calculation result 1025 by aggregating the respective influence degrees of the plan components calculated in the influence degree calculation loop processing, and KPI values and the target schedule used to calculate the respective influence degrees, and stores the created influence degree calculation result 1025 in the storage apparatus 1001.
Finally, the screen output unit 1050 generates information for screen display based on the influence degree calculation result 1025 created in step S1512, and transmits the information for screen display to the output apparatus 1004. As a result, the output apparatus 1004 displays an output screen based on the influence degree calculation result 1025 (step S1513).
As described above, according to the plan evaluation apparatus 1000 of the first embodiment, the method of calculating the influence degree of the feature in the machine learning model 1021 is expanded to the evaluation of the schedule (evaluation per plan component) and the evaluation results thereof are created and output. Thus, it is possible to extract, from the schedule, the component having a large influence degree directly or indirectly on not only the KPI allowing partial evaluation but also the KPI not allowing partial evaluation.
Further, the method of interpreting and comparing the evaluation results using the plan evaluation apparatus 1000 when one target schedule is set as the target of calculation of the influence degree has been described in the above first embodiment. However, there is a case where a plurality of plans (target schedules) are made at a time, for example, when the plan generation unit 1031 is an algorithm that performs iterative calculation.
The plan evaluation apparatus 1000 according to the present embodiment can be applied even in the case where the plurality of target schedules exist as described above. Specifically, the plan evaluation apparatus 1000 can evaluate and visualize a change of a plan component having a large influence degree or the like by determining one common reference schedule and calculating influence degrees of the respective target schedules. As a result, the user (developer) can have an insight, for example, that a plan part having a large influence degree had been obtained in the middle of the iterative calculation, but has been replaced with another item in the final output.
Further, the influence degree on the single KPI is calculated in the above description of the present embodiment, but the plan evaluation apparatus 1000 can calculate an influence degree of each plan component on a plurality of KPIs by repeating this process. Then, the plan evaluation apparatus 1000 can calculate the influence degree of each plan component on the entire KPI through addition that utilizes weights of the KPIs determined by any method.
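A sketch of this weighted aggregation over a plurality of KPIs is shown below; the KPI names, the per-KPI influence degrees of one plan component, and the weights are illustrative.

```python
# Influence degrees of one plan component calculated separately for each KPI,
# combined into an overall influence degree by a weighted sum.
influence_per_kpi = {"output": 0.5, "capacity_utilization": -0.25, "workers": 0.25}
kpi_weights = {"output": 0.5, "capacity_utilization": 0.25, "workers": 0.25}

overall_influence = sum(influence_per_kpi[k] * kpi_weights[k] for k in influence_per_kpi)
print(overall_influence)  # 0.25
```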
Further, an operation of partially replacing a plan component of a target schedule and calculating an influence degree in the replaced schedule can also be executed by utilizing the data input unit 1060 in the plan evaluation apparatus 1000 according to the present embodiment. When such an operation is performed, it is possible to interactively or automatically perform evaluation when the schedule is revised, which can contribute to reduction of development man-hours.
A second embodiment of the present invention is a development of the first embodiment. In the first embodiment described above, the influence degrees of the plan components are calculated based on the machine learning model 1021, and thus, the plan components include both of the plan component that directly changes the KPI and the plan component that indirectly changes the KPI. That is, no attempt has been made to extract which part of the plan components has an indirect influence relationship in the first embodiment. Therefore, in the second embodiment, an indirect influence relationship between plan components (hereinafter referred to as indirect influence relationship) is further extracted from the influence degrees obtained in the first embodiment.
In the second embodiment, a first approach that is effective when there is randomness in plan generation, and a second approach that is effective when it is difficult to generate a plan a plurality of times can be adopted in order to extract an indirect influence relationship.
First, a process of extracting an indirect influence relationship according to the first approach will be described.
In the indirect influence relationship extraction process according to the first approach, the indirect influence relationship calculation unit 1702 first reads the influence degree calculation result 1025 and the historical schedules (step S1801).
Next, the indirect influence relationship calculation unit 1702 determines one plan component of interest from the influence degree calculation result 1025 read in step S1801 (step S1802). The following processes are performed to extract an indirect influence relationship for this plan component of interest. Therefore, it is preferable that the plan component selected in step S1802 be a plan component for which it is desired to perform detailed analysis, such as one having a large influence degree.
Next, the indirect influence relationship calculation unit 1702 extracts, from the historical schedules read in step S1801, a historical schedule in which the plan component of interest determined in step S1802 appears (step S1803).
Next, the indirect influence relationship calculation unit 1702 starts loop processing for searching for an indirect influence relationship (step S1804).
In the indirect influence search loop processing, the indirect influence relationship calculation unit 1702 first sets one “indirect influence candidate”, which is a candidate for a plan component having an indirect influence relationship with the plan component of interest, from the target schedule (step S1805). Although any method may be used as a method for setting an indirect influence candidate, a plan component that is important in the sense of indirect influence is highly likely to be included in a part having a large influence degree according to the machine learning model 1021. Thus, a method of setting indirect influence candidates in order from a plan component having a larger influence degree other than the plan component of interest and performing the following search is commonly used.
Next, the indirect influence relationship calculation unit 1702 extracts a historical schedule in which the indirect influence candidate set in step S1805 appears from the historical schedules read in step S1801, and determines whether there is a relationship with the historical schedules including the plan component of interest extracted in step S1803 (step S1806). The most common method for determining the relationship is to compare the numbers of the respective historical schedules, but the relationship may also be determined using a certain quantitative index such as a correlation.
In step S1806, if there is a relationship between the two schedules, for example, if the number of historical schedules including the indirect influence candidate is equal to the number of historical schedules including the plan component of interest (YES in step S1806), the indirect influence relationship calculation unit 1702 determines that there is an indirect influence relationship and records the indirect influence candidate set in step S1805 in a predetermined recording destination (step S1807). That is, the indirect influence candidate recorded in step S1807 corresponds to a plan component (indirect influence component) that indirectly affects a change of a KPI in relation to the plan component of interest.
On the other hand, in step S1806, if there is no relationship between the two schedules, for example, if the number of historical schedules including the indirect influence candidate is not equal to the number of historical schedules including the plan component of interest (NO in step S1806), the indirect influence relationship calculation unit 1702 determines that there is no indirect influence relationship and excludes the plan component set in step S1805 from the indirect influence candidates (step S1808).
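The relationship check in step S1806 of the first approach can be sketched as follows, using the count-comparison criterion described above; representing a schedule as a list of (date, employee number) pairs and all the values are simplifications for illustration.

```python
def appears_in(schedule, component):
    # A schedule is represented here as a list of (date, employee_number) plan components.
    return component in schedule

historical_schedules = [
    [("8/1", "0001"), ("8/2", "0003")],
    [("8/1", "0001"), ("8/3", "0002")],
    [("8/2", "0001"), ("8/3", "0002")],
]

component_of_interest = ("8/1", "0001")  # plan component of interest (step S1802)
indirect_candidate = ("8/3", "0002")     # indirect influence candidate (step S1805)

n_interest = sum(appears_in(s, component_of_interest) for s in historical_schedules)
n_candidate = sum(appears_in(s, indirect_candidate) for s in historical_schedules)

# Equal appearance counts -> treated as related and recorded as an indirect
# influence component (step S1807); otherwise the candidate is excluded (step S1808).
print(n_interest, n_candidate, n_interest == n_candidate)  # 2 2 True
```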
Then, after step S1807 or step S1808, the indirect influence relationship calculation unit 1702 repeatedly executes the loop processing from steps S1804 to S1807 (or S1808) until reaching a predetermined search upper limit related to the search for the indirect influence candidate (step S1809), and proceeds to step S1810 after the end of the loop processing. Note that the predetermined search upper limit related to the search for the indirect influence candidate may be determined, for example, using a calculation time or a threshold of an influence degree of a candidate component.
In step S1810, the influence degree calculation result 1025 read in step S1801 is updated by adding a search result (specifically, the indirect influence candidate recorded in step S1807) of the indirect influence relationship by the above processing thereto, thereby ending the indirect influence relationship extraction process.
In the target schedule 1901, a row with the date “8/2” and a row with the date “8/3” are set as indirect influence candidates, paying attention to the plan component in the first row. Here, whether each indirect influence candidate has an indirect influence relationship can be determined by confirming the other historical schedules 1902 and 1903 and comparing the number of historical schedules in which the plan component of interest appears with the number of historical schedules in which the indirect influence candidate appears.
The influence degree calculation result 2000 is configured as matrix data (table data) similarly to the schedule 1014, and is obtained by adding the search result of the indirect influence relationship obtained by the above processing to the influence degree calculation result 1025.
Next, a process of extracting an indirect influence relationship according to the second approach will be described.
In the indirect influence relationship extraction process according to the second approach, the processing from the reading of the influence degree calculation result 1025 and the like (step S1801) to the setting of the indirect influence candidate (step S1805) is first performed in the same manner as in the first approach.
Next, in indirect influence search loop processing, the indirect influence relationship calculation unit 1702 replaces an indirect influence candidate in a target schedule with a target item existing in the same component in a reference schedule (step S2101).
Next, the indirect influence relationship calculation unit 1702 instructs the plan generation control unit 1701 such that a plan component of interest and the indirect influence candidate (that is, the target item of the reference schedule) replaced in step S2101 appear in a final output (that is, a schedule planned by the plan generation unit 1031) of the plan generation unit 1031, and the plan generation control unit 1701 causes the plan generation unit 1031 to make a simulated plan according to the above instruction (step S2102). Note that a control method related to the instruction and planning by the plan generation control unit 1701 depends on an algorithm of the plan generation unit 1031.
Next, the indirect influence relationship calculation unit 1702 confirms the schedule output in step S2102, and determines whether the plan has been made without any problem (step S2103). In the determination in step S2103, specifically, it is confirmed whether an abnormality has occurred, for example, a program error occurring without the schedule being output, the designated components not being output, or a plan being output that does not satisfy a constraint condition which is satisfied in usual outputs.
If the above-described abnormality has not been confirmed in step S2103 and a schedule as usual has been generated (YES in step S2103), it means that there is no problem (no influence) even if the indirect influence candidate replaced in step S2101 is removed, so that the indirect influence relationship calculation unit 1702 excludes the plan component set in step S1805 from indirect influence candidates (step S2104).
On the other hand, if the above-described abnormality has been confirmed in step S2103 (NO in step S2103), it means that the plan component of interest does not appear either when the indirect influence candidate replaced in step S2101 cannot be output, so that the indirect influence relationship calculation unit 1702 determines that there is an indirect influence relationship and records the indirect influence candidate set in step S1805 in a predetermined recording destination (step S2105). That is, the indirect influence candidate recorded in step S2105 corresponds to the plan component (indirect influence component) that indirectly affects the change of the KPI with respect to the plan component of interest.
After step S2104 or step S2105, the indirect influence relationship calculation unit 1702 repeatedly executes the loop processing from steps S1804 to S2104 (or S2105) until reaching a predetermined search upper limit related to the search for the indirect influence candidate (step S2106), and proceeds to step S1810 after the end of the loop processing. Note that the predetermined search upper limit related to the search for the indirect influence candidate may be determined, for example, using a calculation time or a threshold of an influence degree of a candidate component.
In step S1810, the influence degree calculation result 1025 read in step S1801 is updated by adding the search result (specifically, the indirect influence candidate recorded in step S2105) of the indirect influence relationship thereto in the same manner as in step S1810 of the first approach described above, thereby ending the indirect influence relationship extraction process according to the second approach.
The indirect influence relationship extraction processes according to the first approach and the second approach have been described as above. In the second embodiment, it is possible to assist appropriate planning by selectively using these approaches according to a feature of a plan.
That is, in a case where there are many combinations of plan components and a schedule of a different pattern is planned each time a user (developer) presses a plan creation button, there is randomness in plan generation. Even in such a case, it is possible to obtain information for appropriate planning by adopting the first approach and searching for an indirect influence relationship between plan components using plans (historical schedules) that have already been made.
Further, in a case where it takes time to make one schedule due to a complicated configuration or the like, it is difficult to generate a plan a plurality of times. In such a case, the second approach is adopted, and an indirect influence relationship between the plan components (indirect influence relationship) is searched for using a schedule planned based on an explicit instruction, so that it is possible to extract a component essential to generate a plan component of interest (component that should not change randomly) and to contribute to appropriate planning with the analysis of the influence relationship. Note that the above explicit instruction may be given by the user (developer).