1. Field of the Invention
The present invention relates to a computing system, and particularly to a technology that supports a data management task.
2. Background Art
Because virtualization technology promotes system integration, a small number of operation managers must manage a large number of apparatuses. For this reason, there is a growing need for a configuration management database (hereinafter referred to as CMDB) that makes configuration management more efficient. Generally, however, a CMDB does not define a workflow for operating the data in the database. Therefore, one of the following mechanisms must be prepared in advance to keep the data in the CMDB highly accurate.
(1) The workflow of the data operation is defined beforehand, and the operation manager has to perform data operations according to the workflow.
(2) Software (e.g., plug-in software for CMDB) that automatically performs the data operation in response to a configuration change is set up beforehand, and the operation manager performs the data operation through the software.
In an environment where the mechanisms described above are in place, errors in updating the data should be avoidable. However, the amount of data that has to be managed as configuration information increases with the introduction of a new apparatus and with the start of provision of a new service, and there is a high risk of update mistakes until the workflow for operating such data stabilizes. Furthermore, even if the workflow is strictly determined at the time the new apparatus or service is introduced, a workflow that does not suit the actual situation tends to be ignored on site. Consequently, unless the once-determined workflow is reviewed, update mistakes remain highly likely.
Methods have been proposed that generate the workflow from past task histories in order to shorten the working hours spent reviewing the workflow. JP-A-2012-101088 relates to a method by which a new task procedure is generated from the task history of a medical imaging search system and a task template that is equivalent to a workflow defined beforehand. JP-A-2009-116673 relates to a method by which a procedure that has to be added or deleted is extracted as a notice procedure by comparing a model operator's exemplary log with an ordinary operator's performance log. Both methods assume that a series of tasks is performed by one operator and that the task histories can be distinguished for every task. JP-A-11-250153 relates to a method by which new rules are automatically composed from histories of tasks performed in a manner deviating from rules registered beforehand. That invention can support tasks by multiple operators by abstracting the operator, but does not assume that the histories of multiple tasks are mixed.
Even if the methods in the well-known examples described above are applied to an update history of a CMDB, a workflow cannot be recommended with high accuracy.
First, in the virtualized system, there is a case where the tasks relating to multiple requests are performed in parallel at the same time. For example, when construction of a web server for a customer A and construction of a database for a customer B are performed in parallel at the same time, the two types of histories of the data operation with respect to CMDB are present in a mixed manner. In this case, accuracy of the workflow detection is decreased in the well-known examples.
Second, if the system is large, the task relating to one request may be shared among multiple operation managers (for example, server managers and network managers). In this case, the accuracy of workflow detection decreases in a method that analyzes the data operation histories for each operation manager separately.
Finally, the date and time of a data update in the CMDB may differ from the date and time of the corresponding configuration change on the real apparatus. For example, the operation manager may spend several days registering new VM information (an IP address and the like) in the CMDB as it is gradually determined, and finally construct the VM all at once. When such a large gap in operation date and time occurs, the accuracy of workflow detection decreases in the well-known examples.
An object of the present invention is to provide a workflow generation server and a method of generating a workflow, which can extract a workflow of data operations relating to an individual request with high accuracy from histories of the data operations and can shorten task time for reviewing the workflow including the data operations, in a case where the data operations relating to multiple requests are performed on one database by multiple operators.
According to an aspect of the present invention, there is provided a workflow generation server including a processing unit, in which the processing unit quantifies a relation between histories of a data operation with respect to a database as a value, based on an external key to the database, and generates a workflow that is configured from the multiple data operations, using the quantified value.
According to another aspect of the present invention, there is provided a method of generating a workflow by a workflow generation server, including recording histories of a data operation with respect to a database, and causing the workflow generation server to quantify a relation between the recorded histories of the data operation, as a value, based on an external key to the database, and to generate the workflow that is configured from the multiple data operations, using the quantified value.
That is, according to a suitable embodiment of the present invention, when a history database for recording the history of the data operations with respect to the database in the related art is prepared, and an operator operates a target data group in the database through the use of database operation software, the database or the database operation software records the operation history in the history database, and the workflow generation server quantifies the relation between the multiple data operations that are recorded in the operation history, as a value. For this quantification, the workflow generation server uses a data definition of the database and a history of an operation schedule on the data in the database. Then, a new workflow is generated based on the quantified value.
Then, the workflow generation server compares the workflow generated with the method described above and an already-registered workflow in a workflow database, and performs recommendation of the workflow that has to be newly defined and recommendation of an amendment to the existing workflow, based on the result of the comparison. The amendment to the existing workflow is addition of a task of the data operation to the workflow, deletion of the task of the data operation from the workflow, and the like. Moreover, if the method described above is applied to CMDB, the database described above corresponds to the configuration management database, the target data group corresponds to the configuration information data group, and the operator corresponds to the operation manager.
The workflow generation server according to the present invention can extract the workflow of the data operations relating to an individual request with high accuracy and can shorten the task time for providing a new definition, in a case where the data operation relating to multiple requests is performed on one database.
Furthermore, if the histories of the data operations are recorded, not only by the database operation software, but also by a workflow system, because an amendment proposal can be generated for the existing workflow, the task time for making an amendment to the workflow can also be shortened.
Various embodiments according to the present invention are described below referring to the drawings. Moreover, in the present specification, a "relation score" means a numerical value indicating the strength of the relation between the two operation histories included in a pair of operation histories, that is, a value that quantifies the relation between the operation histories. Furthermore, an "operation schedule" means a prior announcement of a data operation with respect to configuration information in a database.
A workflow generation server or a method of generating a workflow according to a first embodiment has a configuration in which the relation between histories of data operations with respect to the database is quantified as a value based on an external key to the database, and a workflow configured from multiple data operations is generated using that value.
The configuration management database 3 according to the present embodiment is defined as being used for configuration management of multiple physical or virtual computing systems. Furthermore, data in the configuration management database 3 is defined as being operated by multiple operation managers. As many administrative terminals 9 as there are operation managers may be present.
The configuration management database 3 is a database that stores the configuration information on the multiple physical or virtual computing systems, and data necessary to manage the configuration information. As the data necessary to manage the configuration information, the database stores data definition 3000 and user data 3100. Furthermore, the configuration management database 3 according to the present embodiment stores the configuration information in the form of a table. However, the present embodiment is not limited to the data in the form of a table. For example, if, even though the data is in a tree structure, the data equivalent to the data definition, described below, is present, the present embodiment can be applied.
The data definition 3000 is data indicating a reference relation between the information items included in each piece of configuration information. If the configuration management database 3 is a relational database management system (RDBMS), the data definition corresponds to part of the definitions created with a CREATE TABLE command.
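As a minimal illustration, assuming a hypothetical schema and SQLite as the RDBMS, the reference relation corresponding to the data definition 3000 could be declared with a FOREIGN KEY clause and read back from the catalog as sketched below; none of the table or column names come from the actual configuration management database 3.

```python
import sqlite3

# Minimal sketch with a hypothetical schema: the FOREIGN KEY clause plays the
# role of the data definition 3000, i.e. "vm_information.customer_id refers
# to customer_information.customer_id".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer_information (customer_id TEXT PRIMARY KEY, name TEXT)")
conn.execute("""
    CREATE TABLE vm_information (
        vm_id TEXT PRIMARY KEY,
        customer_id TEXT,
        FOREIGN KEY (customer_id) REFERENCES customer_information (customer_id)
    )
""")

# A workflow generation server could obtain the reference relation
# (reference source table/column -> reference destination table/column)
# from the catalog instead of parsing the CREATE TABLE statement itself.
for row in conn.execute("PRAGMA foreign_key_list(vm_information)"):
    _id, _seq, ref_table, src_col, ref_col = row[:5]
    print(f"vm_information.{src_col} -> {ref_table}.{ref_col}")
```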
The user data 3100 are data on a user who is allowed to operate data in the configuration management database, that is, an operation manager. The user data 3100 is created in advance by the operation manager.
Customer information 3200 to firewall configuration 3600, which are illustrated in
Furthermore, the customer information 3200 are data relating to customers that use the physical or virtual computing system.
Customer-VM information 3300 are data that indicate a relationship between customer information and VM information.
VM information 3400 are data relating to a virtual machine (VM) that operates on the virtual computing system.
Monitoring configuration 3500 is data on the monitoring configuration for a physical machine or the virtual machine.
Furthermore, the firewall configuration 3600 is data relating to the settings of a firewall set up on the physical or virtual computing system. The customer information 3200 to the monitoring configuration 3500 are connected to one another by some key, but only the firewall configuration 3600 has no key connecting it to the other tables. Because of this, according to the present embodiment, a workflow including the firewall configuration cannot be recommended with high accuracy. A countermeasure against this is described in the last portion of the present embodiment and in a second embodiment.
The workflow database 4 is a database that stores data used by the workflow management server. The database stores workflow data 4000 and node data 4100. According to the present embodiment, a task is illustrated as a node, and the workflow is configured from the multiple nodes and a connection relationship in one direction between the nodes. The connection relationship in which the multiple nodes diverge from one node indicates task divergence or task parallelization.
The workflow data 4000 are data relating to the entire workflow.
The node data 4100 are data relating to the nodes included in the workflow. The upper portion of
Columns 4106 to 4108, illustrated on the lower portion of
The history database 5 in
The operation history 5000 is a history of the data operation with respect to the configuration information. According to the present embodiment, the addition, the update, and the deletion with respect to the configuration information are handled as the data operation. However, the example according to the embodiment of the present invention is not limited to this, and access to all data may be handled as the data operation. For example, reading of data may be handled as the data operation.
A column 5006 of the lower portion of
The recommendation database 6 in
The recommendation workflow data 6000 are data relating to the entire recommendation workflow.
The recommendation node data 6100 are data relating to the nodes included in the recommendation workflow. The nodes are hereinafter referred to as a recommendation node.
Columns 6105 to 6107 on the lower portion of
The workflow management server 7 in
The workflow display program 71 is a program that provides workflow data stored in the workflow database 4 in response to a request from the operation manager. According to the present embodiment, the operation manager reads the workflow data provided by the workflow display program 71 through a Web browser 92.
The workflow execution program 72 is a program that performs configuration change processing on a physical or virtual device, and performs operation on the configuration information in the configuration management database that is entailed by the configuration change processing, in response to the request from the operation manager. According to the present embodiment, the operation manager instructs the workflow execution program 72 to execute the processing, through the web browser 92. Furthermore, when operating the configuration information, the workflow execution program 72 records a history of the data operation in the history database 5. The recording is not essential in the present embodiment, but the workflow generation server can recommend an amendment to the existing workflow, by performing the recording.
The workflow generation server 8 in
The workflow generation program 81 is a program that generates the recommendation workflow, in response to the request from the operation manager, or periodically.
The workflow recommendation program 82 is a program that recommends new addition of the workflow, or amendment to the existing workflow, in response to the demand from the operation manager. According to the present embodiment, the operation manager reads the result of recommendation provided by the workflow recommendation program 82 through the web browser 92.
The administrative terminal 9 in
The database tool 91 is a program that provides the operation manager with a function of reading and editing the data in the configuration management database 3. When the operation manager operates the configuration information, the database tool 91 records the history of the data operation in the history database 5. The recording is essential in the present embodiment.
The web browser 92 is a program that serves as the interface through which the operation manager uses the programs running on the workflow management server and the workflow generation server.
In the flow chart in
Next, the workflow generation program 81 creates combinations of the operation histories obtained in S101 (each combination is hereinafter referred to as a pair of operation histories) (S102). Then, in subsequent S103 to S113, a relation score is calculated for each pair, that is, a numerical value indicating the strength of the relation between the two operation histories included in the pair and quantifying that relation. According to the first embodiment, the data definition is the information that chiefly governs the magnitude of the relation score. The initial value of the relation score is set to RS0.
First, the workflow generation program 81 selects one uninspected pair of operation histories. The selected pair of operation histories is hereinafter referred to as HP. Furthermore, the operation histories included in HP are referred to as H1 and H2 in chronological order of execution date and time (S104). Next, the workflow generation program 81 checks the difference in execution date and time between the two operation histories, the number of other data operations performed between the two operation histories, and the organization of the user who performed the two operation histories. If the elapsed time between H1 and H2 is too long, H1 and H2 are excluded from the subsequent analysis (S105 to S107). The reason for doing this is to avoid generating a workflow that differs from the actual workflow. For example, even if two data operations on the same data are present, if they differ considerably in execution date and time, there is a high likelihood that they belong to different workflows. That is, when quantifying the relation between two data operation histories, the workflow generation program 81 executed by the CPU of the workflow generation server 8 can generate the workflow more accurately by referring to the difference in execution date and time between the two data operations, the number of other data operations performed between them, or the organization of the user who performed them.
Next, the workflow generation program 81 obtains the data definition relating to the relation between a target table of H1 and a target table of H2 from the configuration information database (S108). Specifically, the data definition is retrieved, with a target table name and a target column name, which are included in H1, being assigned to a reference source table and a reference source column name of the data definition, and with a target table name and a target column name, which are included in H2, being assigned to a reference destination table and a reference destination column name of the data definition. Furthermore, the same search is conducted, with H1 and H2 being exchanged with respect to each other. Then, in subsequent S109 to S113, inspection is performed on each data definition (hereinafter referred to as DD) obtained in S108.
The workflow generation program 81 inspects whether the columns indicated by DD are consistent between the post-operation data of H1 and the post-operation data of H2 (S111). At this time, if either of the post-operation data shows "absent," the columns are not considered to be consistent. If both are consistent, it is understood that the record was created or updated in such a manner that the post-operation data of H1 and the post-operation data of H2 are connected with the defined external key. The post-operation data may be compared with the reference destination table and column taken as H1 and the reference source table and column taken as H2, because, when a connection is created, the data that is referred to is normally updated first. However, if the relation between the reference destination and the reference source is a cross-reference, the case where H1 and H2 are reversed may also be inspected.
If both are not consistent, the workflow generation program 81 inspects whether the columns indicated by DD are consistent between the pre-operation data of H1 and the pre-operation data of H2 (S112). At this time, if either of the pre-operation data shows "absent," the columns are not considered to be consistent. If both are consistent, it is understood that the pre-operation data of H1 and the pre-operation data of H2 were connected with the external key, but the record was updated or deleted in such a manner as to break the connection. The pre-operation data may be compared with the reference destination table and column taken as H2 and the reference source table and column taken as H1, because, when a connection is broken, the data that refers to the other is normally updated first. However, if the relation between the reference destination and the reference source is a cross-reference, the case where H1 and H2 are reversed may also be inspected.
If the condition in either of S111 and S112 is met, the workflow generation program 81 adds a relation score RS1 to the pair HP of operation histories, resulting in a relation score of (RS0+RS1) (S113).
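A minimal sketch of this scoring step (S104 to S113) is given below, assuming a simplified, hypothetical record layout for the operation histories and the data definitions; RS0, RS1 and the elapsed-time limit are illustrative values, not those of the actual program.

```python
from datetime import datetime, timedelta

RS0, RS1 = 0, 10                      # illustrative initial score and bonus
MAX_GAP = timedelta(hours=6)          # illustrative elapsed-time limit (S105 to S107)

# Hypothetical layout: each history records the target table/column, the value
# of that column before and after the operation, and the execution time.
def relation_score(h1, h2, data_definitions):
    if h2["executed_at"] - h1["executed_at"] > MAX_GAP:
        return None                   # exclude the pair from further analysis
    score = RS0
    for src_t, src_c, dst_t, dst_c in data_definitions:
        pair_matches = (
            (h1["table"], h1["column"]) == (src_t, src_c) and
            (h2["table"], h2["column"]) == (dst_t, dst_c)
        ) or (
            (h2["table"], h2["column"]) == (src_t, src_c) and
            (h1["table"], h1["column"]) == (dst_t, dst_c)
        )
        if not pair_matches:
            continue
        # S111/S112: the key values agree either after or before both operations.
        post_ok = h1["post"] is not None and h1["post"] == h2["post"]
        pre_ok = h1["pre"] is not None and h1["pre"] == h2["pre"]
        if post_ok or pre_ok:
            score += RS1
    return score

h1 = {"table": "customer_information", "column": "customer_id",
      "pre": None, "post": "C001", "executed_at": datetime(2012, 8, 1, 10, 0)}
h2 = {"table": "vm_information", "column": "customer_id",
      "pre": None, "post": "C001", "executed_at": datetime(2012, 8, 1, 10, 30)}
dds = [("vm_information", "customer_id", "customer_information", "customer_id")]
print(relation_score(h1, h2, dds))    # -> 10
```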
When the calculation of the relation score is completed, the workflow generation program 81 creates a set of the pairs of operation histories whose relation score is threshold T3 or greater (S114). The threshold is provided to reduce the amount of calculation performed by the workflow generation program. Alternatively, a method may be employed in which only a certain number of pairs of operation histories with the highest relation scores are kept, without imposing a threshold.
Then, the workflow generation program 81 creates multiple graphs in which the operation histories are nodes (points) and the pairs of operation histories are edges (lines) (S115). Such a graph is hereinafter referred to as an operation history graph. Each graph created at this time is such that every node included in it can reach every other node through the other nodes and edges included in that graph. Moreover, when considering reachability, the direction of an edge may be ignored. The set that includes all the operation history graphs created here is hereinafter referred to as HGS.
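One way to form HGS (S114 and S115) is sketched below with a plain union-find: pairs whose relation score is at least T3 become edges, and each connected component, with edge direction ignored, becomes one operation history graph. The history identifiers, scores and the value of T3 are made up for illustration.

```python
from collections import defaultdict

T3 = 5  # illustrative threshold
scored_pairs = [("h1", "h2", 10), ("h2", "h3", 8), ("h4", "h5", 12), ("h5", "h6", 3)]

parent = {}
def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

edges = [(a, b, s) for a, b, s in scored_pairs if s >= T3]   # S114
for a, b, _ in edges:                                        # S115: build components
    union(a, b)

components = defaultdict(list)
for a, b, s in edges:
    components[find(a)].append((a, b, s))
HGS = list(components.values())
print(HGS)   # [[('h1', 'h2', 10), ('h2', 'h3', 8)], [('h4', 'h5', 12)]]
```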
According to the data definition in
Next, the workflow generation program 81 performs shaping processing on each operation history graph included in HGS (S116). The shaping processing refers to any processing that changes the shape of an operation history graph into a practical shape as a workflow. For example, the shaping processing includes processing that reduces the number of divergences included in the operation history graph and processing that cancels a loop present in the operation history graph. In the present embodiment, a loop occurs in the operation history graph when the definition of the database is not normalized. That is, to change the operation history graph into a practical shape as a workflow, the workflow generation program 81 executed by the CPU of the workflow generation server 8 performs processing that reduces the number of divergences included in the operation history graph, or processing that cancels a loop included in the operation history graph.
As methods of reducing the number of divergences included in the operation history graph, there are, for example, a method of exchanging edges according to the execution date and time of the operation histories (1003 in
As a method of canceling a loop present in the operation history graph, for example, there is a method of exchanging edges so that, only among the nodes that make up the loop, the nodes are lined up according to execution time. Furthermore, there is a method of deleting some of the edges based on the relation score; for example, edges may be deleted starting from those with the lowest relation score until the loop is canceled, within a range in which no node slips out of the operation history graph.
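As one possible realization of the edge-deletion approach, the sketch below treats the operation history graph as undirected and keeps a maximum spanning tree by relation score, so that the lowest-scored edge of each loop is dropped, every node remains in the graph, and no loop survives. This is an illustrative simplification rather than the exact procedure of the workflow generation program 81.

```python
def cancel_loops(nodes, edges):
    """edges: list of (node_a, node_b, relation_score), treated as undirected."""
    parent = {n: n for n in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    kept = []
    # Consider the strongest relations first (Kruskal); an edge that would close
    # a loop is the lowest-scored edge of that loop and is therefore dropped.
    for a, b, score in sorted(edges, key=lambda e: e[2], reverse=True):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            kept.append((a, b, score))
    return kept

nodes = ["h1", "h2", "h3"]
edges = [("h1", "h2", 10), ("h2", "h3", 8), ("h3", "h1", 4)]
print(cancel_loops(nodes, edges))   # the weakest edge ('h3', 'h1', 4) is removed
```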
Next, in
The post-abstraction operation history is hereinafter referred to as the data operation and the graph configured from the abstracted operation history is hereinafter referred to as the data operation graph.
The abstraction mentioned here is the abstraction of an operation target value and the abstraction of an operation target column. For example, if the operation history "Update the value of the alive monitoring column to 'absent' in a record of the monitoring configuration table" is present, abstracting the operation target value yields the data operation "Update the value of the alive monitoring column of a record of the monitoring configuration table." Furthermore, abstracting the operation target column yields the data operation "Update a record of the monitoring configuration table." Moreover, each pre-abstraction pair of operation histories is maintained as a pair of data operations, and the information on the column that links the data operations is also maintained as information incidental to the pair of data operations. The workflow generation program 81 adds all the data operation graphs generated as described above to a list of data operation graphs (hereinafter referred to as OGL) (S124).
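The abstraction described here might look like the following sketch, in which an operation history is reduced either to "operation, table, column" or further to "operation, table"; the dictionary keys are hypothetical.

```python
def abstract_history(history, drop_column=False):
    """Turn an operation history into a data operation.

    Abstracting the operation target value drops the concrete value;
    abstracting the operation target column additionally drops the column.
    """
    if drop_column:
        return (history["operation"], history["table"])
    return (history["operation"], history["table"], history["column"])

history = {"operation": "update", "table": "monitoring_configuration",
           "column": "alive_monitoring", "value": "absent"}
print(abstract_history(history))                    # ('update', 'monitoring_configuration', 'alive_monitoring')
print(abstract_history(history, drop_column=True))  # ('update', 'monitoring_configuration')
```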
Because the operation histories are all different, the same operation history graph is not present twice in HGS. However, because several data operations that are abstracted operation histories may coincide, the same data operation graph may be present more than once in OGL. Therefore, the workflow generation program 81 removes this redundancy and creates a set of data operation graphs (hereinafter referred to as OGS) (S125). Furthermore, the workflow generation program 81 discovers subgraphs included in two or more data operation graphs in OGS and adds them to OGS (S126). This processing is not essential and is necessary only when it is desired that a data operation commonly performed in multiple workflows be extracted as a workflow. Then, the workflow generation program 81 removes from OGL and OGS every graph whose number of nodes (number of data operations) is threshold T4 or less (S127). The threshold is provided to reduce the amount of calculation performed by the workflow generation program.
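A sketch of the redundancy removal and size filtering (S125 and S127) follows, representing each data operation graph simply by the set of data operations it contains; the common-subgraph discovery of S126 is omitted, and T4 is an illustrative value.

```python
T4 = 1  # illustrative threshold on the number of data operations

# OGL: each data operation graph is represented, for this sketch, as a frozenset
# of abstracted data operations (the edge structure is omitted for brevity).
OGL = [
    frozenset({("insert", "vm_information"), ("update", "monitoring_configuration")}),
    frozenset({("insert", "vm_information"), ("update", "monitoring_configuration")}),
    frozenset({("delete", "customer_vm_information")}),
]

OGS = list(dict.fromkeys(OGL))            # S125: remove duplicates, keep order
OGL = [g for g in OGL if len(g) > T4]     # S127: drop graphs with T4 nodes or fewer
OGS = [g for g in OGS if len(g) > T4]
print(len(OGS))                           # -> 1
```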
In subsequent S131 to S143 illustrated in
Moreover, the workflow generation program 81 selects one uninspected data operation graph from OGS (S133). The selected data operation graph is hereinafter referred to as OG1. In subsequent S134 to S143, each data operation graph included in OGL is compared with OG1, and a value is added to the graph usage score of OG1 according to the result of the comparison.
First, the workflow generation program 81 selects from OGL a data operation graph that has not yet been compared with OG1 (S135). Such a data operation graph is hereinafter referred to as OG2. Then, when OG1 and OG2 are compared, if any of the following conditions is met, a value is added to the graph usage score of OG1 (S136 to S143):
OG1 and OG2 are completely consistent with each other (the score GS1 is added).
A set of the data operations included in OG1 and a set of the data operations included in OG2 are completely consistent with each other (the score GS2 is added).
OG1 is a subgraph of OG2 (the score GS3 is added).
A set of data operations included in OG1 is a subset of a set of data operations included in OG2 (the score GS4 is added).
It is assumed that, as the extent of consistency becomes greater, the value added to the graph usage score becomes greater. That is, the following relationship is assumed to be established: GS1 ≥ GS2 ≥ GS3 ≥ GS4. However, if the order of execution of the workflow (that is, whether or not the consistency is present as the graph) is considered to be important when generating the workflow, the following relationship may be possible: GS1 ≥ GS3 ≥ GS2 ≥ GS4.
Next, the workflow generation program 81 deletes from OGS every data operation graph whose graph usage score is threshold T5 or less (S144). If this leaves too many or too few data operation graphs, the workflow generation program 81 may instead define OGS as a fixed number of data operation graphs with the highest graph usage scores.
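A sketch of the graph usage score calculation (S131 to S144) is shown below, with each graph represented by its set of data operations and its edge set; GS1 to GS4 and T5 are illustrative constants chosen to satisfy GS1 ≥ GS2 ≥ GS3 ≥ GS4, and the subgraph test is simplified to a comparison of node and edge sets.

```python
GS1, GS2, GS3, GS4 = 8, 6, 4, 2   # illustrative scores, GS1 >= GS2 >= GS3 >= GS4
T5 = 5                            # illustrative deletion threshold (S144)

def usage_score(og1, ogl):
    """og1 and every graph in ogl are (data_operations: frozenset, edges: frozenset)."""
    score = 0
    for og2 in ogl:
        if og1 == og2:
            score += GS1                                 # completely consistent as graphs
        elif og1[0] == og2[0]:
            score += GS2                                 # same set of data operations
        elif og1[0] <= og2[0] and og1[1] <= og2[1]:
            score += GS3                                 # og1 is a subgraph of og2
        elif og1[0] <= og2[0]:
            score += GS4                                 # og1's operations are a subset
    return score

a = (frozenset({"op1", "op2"}), frozenset({("op1", "op2")}))
b = (frozenset({"op1", "op2", "op3"}), frozenset({("op1", "op2"), ("op2", "op3")}))
OGL = [a, b]
OGS = [g for g in (a, b) if usage_score(g, OGL) > T5]    # S144: drop low-scored graphs
print(usage_score(a, OGL))   # 8 (matches itself) + 4 (subgraph of b) = 12
```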
Finally, the workflow generation program 81 registers the data operation graphs included in OGS with the recommendation database 6, along with the graph usage scores calculated in S133 to S143 (S145). At this time, information on each data operation graph as a whole is registered with the recommendation workflow data 6000, and information on each node included in the data operation graph is registered with the recommendation node data 6100. Moreover, at the time of registration, the workflow generation program 81 automatically generates a name for the workflow based on the target table, the target column, and the operation content of the recommendation node data. The name is used to show the characteristics of the generated workflow to the operation manager. For example, consider a case in which the workflow is created based on the operation history graph 1003 in
An outline of the processing is described above in which the workflow generation program 81 generates the recommendation workflow data according to the present embodiment.
The workflow recommendation program 82 first obtains all the workflow data and node data from the workflow database 4 (S201). Then, each workflow obtained in S201 is converted into a data operation graph (S202). Because a workflow can also include nodes other than data operations, such nodes are excluded when the conversion to the data operation graph is performed. For example, when a certain workflow contains the relationship that the node A is followed by the node B, which is followed by the node C, and only the nodes A and C are data operations, that workflow is converted into the data operation graph in which the node A is followed directly by the node C. The set of data operation graphs created in this manner is hereinafter referred to as WFGS.
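The conversion mentioned here (S202), in which nodes that are not data operations are bypassed so that their predecessors connect directly to their successors, could be sketched as follows; the node names are hypothetical.

```python
def to_data_operation_graph(edges, is_data_operation):
    """edges: list of (src, dst); non-data-operation nodes are bypassed."""
    edges = list(edges)
    changed = True
    while changed:
        changed = False
        for node in {n for e in edges for n in e}:
            if is_data_operation(node):
                continue
            preds = [s for s, d in edges if d == node]
            succs = [d for s, d in edges if s == node]
            # Reconnect predecessors directly to successors, then drop the node.
            edges = [e for e in edges if node not in e]
            edges += [(p, s) for p in preds for s in succs]
            changed = True
            break
    return edges

# Nodes A and C are data operations; B is, say, a manual approval task.
print(to_data_operation_graph([("A", "B"), ("B", "C")],
                              lambda n: n in {"A", "C"}))   # [('A', 'C')]
```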
Next, the workflow recommendation program 82 obtains all the recommendation workflow data and recommendation node data from the recommendation database 6 (S203). Then, the individual recommendation workflow obtained in S203 is converted into the data operation graph (S204). Such a set of data operation graphs is hereinafter referred to as RGS. Then, in subsequent S205 to S217, the inspection is performed on each data operation graph (hereinafter referred to as RG) included in RGS.
First, the workflow recommendation program 82 inspects whether or not a data operation graph that is consistent with RG is included in WFGS (S207). If such a data operation graph is present, it is not necessary to perform recommendation based on RG, and the process returns to S204. If a data operation graph that is consistent with RG is not included in WFGS, the data operation graphs are taken out of WFGS one by one and compared with RG (S208 to S217). The data operation graph obtained from WFGS is hereinafter referred to as WFG.
Subsequently, the comparison between RG and WFG is performed and the processing diverges out according to the result of the comparison.
First, the workflow recommendation program 82 inspects whether or not the set of the data operations included in WFG and the set of the data operations included in RG are consistent with each other (S210). If the consistency is present, the sorting of the data operations is recommended with respect to the workflow corresponding to WFG (S211).
If the above condition is not met, the workflow recommendation program 82 inspects whether or not WFG is a subgraph of RG, or whether or not the set of data operations included in WFG is a subset of the set of data operations included in RG (S212). If either condition is met, the addition of the data operations that are included in RG but not in WFG is recommended for the workflow corresponding to WFG (S213).
If the above condition is not met either, the workflow recommendation program 82 inspects whether or not RG is a subgraph of WFG, or whether or not the set of data operations included in RG is a subset of the set of data operations included in WFG (S214). If either condition is met, the deletion of the data operations that are included in WFG but not in RG is recommended for the workflow corresponding to WFG (S215).
However, even after RG has been compared with every WFG included in WFGS, there is a case where no recommendation based on RG has been made. This occurs when RG is entirely different from every existing workflow. In such a case, the workflow recommendation program 82 recommends the new creation of a workflow configured from the data operations included in RG (S216 to S217).
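Putting S207 to S217 together, the comparison might be sketched as follows, with each graph reduced to a pair of its data operation set and its edge set; the subgraph checks of S212 and S214 are simplified to subset checks, and the additional node-count condition described in the next paragraph is omitted.

```python
def recommend(rg, wfgs):
    """rg and every wfg are (data_operations: frozenset, edges: frozenset)."""
    if any(wfg == rg for wfg in wfgs):
        return []                                                 # S207: already defined
    recommendations = []
    for wfg in wfgs:
        if wfg[0] == rg[0]:
            recommendations.append(("reorder", wfg))              # S210, S211
        elif wfg[0] < rg[0]:
            recommendations.append(("add", wfg, rg[0] - wfg[0]))      # S212, S213
        elif rg[0] < wfg[0]:
            recommendations.append(("delete", wfg, wfg[0] - rg[0]))   # S214, S215
    if not recommendations:
        recommendations.append(("create", rg[0]))                 # S216, S217
    return recommendations

wf1 = (frozenset({"insert VM", "update monitoring"}), frozenset())
rg = (frozenset({"insert VM", "update monitoring", "update firewall"}), frozenset())
print(recommend(rg, [wf1]))   # recommends adding 'update firewall' to the workflow wf1
```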
Moreover, in S212 and S214, if the number of nodes included in the subgraph or the subset is threshold T9 or less, the determination may be regarded as a failure. By adding such processing, a recommendation workflow that has low similarity to the existing workflows can be recommended in the form of "recommendation of new creation."
In
An outline of the processing is described above in which the workflow recommendation program 82 displays the result of recommending the workflow.
The processing in the case where the data definition is present is described above. However, in addition to the external key that is the data definition retained by the configuration management database 3, the workflow generation program 81 may use a data definition that it generates automatically as the external key. Examples of methods of automatically generating the data definition are described below: the first and second methods use the configuration information actually stored, and the third and fourth methods use a co-occurrence relationship between the data operations included in the existing workflows.
(First method) All the columns in the configuration information are inspected, and a set of the data included in each column is created. The sets of data in the columns are then compared with one another. If the ratio at which the set of data in a certain column A and the set of data in a column B are consistent with each other exceeds a constant value, a data definition indicating that a relationship is present between the columns A and B is automatically generated as an external key.
(Second method) In addition to the sets of the first method, a set of the results of morphologically analyzing the values included in each column is created. The sets of morphological analysis results and the sets of the first method are then compared. If the ratio at which the set of data in a certain column A and the set of results of morphologically analyzing the data in a certain column B are consistent with each other exceeds a constant value, a data definition indicating that a relationship is present between the columns A and B is automatically generated as an external key. For example, if a rule is found that a value of a column "physical host name" in a table C is "host + numerical value" and a value of a column "virtual host name" in a table D is "host + numerical value + numerical value," a data definition stating that the column "virtual host name" refers to the column "physical host name" is automatically generated as an external key by this method.
(Third method) All the workflow data and node data are inspected, and columns that are operated at the same time in one node are defined as having a relationship between them. For example, if an operation on a table A and an operation on a table B appear side by side in one node, a data definition stating that the column in the table B refers to the column in the table A (or vice versa) is automatically generated as an external key.
(Fourth method) All the workflow data and node data are inspected, and columns that are operated in consecutive nodes are defined as having a relationship between them. For example, if the table A is operated in a certain node and the table B is operated in the following node, a data definition stating that the column in the table B refers to the column in the table A (or vice versa) is automatically generated as an external key.
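The first method could be sketched as follows: the set of values stored in every column is compared with every other column's set, and a reference relation is proposed as an external key whenever the overlap ratio exceeds a constant. The stored values and the 0.8 threshold are hypothetical.

```python
from itertools import permutations

# Hypothetical stored configuration information: table -> column -> list of values.
data = {
    "customer_information": {"customer_id": ["C001", "C002", "C003"]},
    "vm_information": {"customer_id": ["C001", "C002", "C002"],
                       "vm_id": ["vm01", "vm02", "vm03"]},
}
OVERLAP_THRESHOLD = 0.8   # illustrative constant value

columns = [(t, c, set(v)) for t, cols in data.items() for c, v in cols.items()]
generated = []
for (t1, c1, v1), (t2, c2, v2) in permutations(columns, 2):
    if not v1:
        continue
    ratio = len(v1 & v2) / len(v1)   # how much of column (t1, c1) also appears in (t2, c2)
    if ratio >= OVERLAP_THRESHOLD:
        generated.append((t1, c1, "refers to", t2, c2))

print(generated)
# [('vm_information', 'customer_id', 'refers to', 'customer_information', 'customer_id')]
```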
As described above, the workflow recommendation program 82 can recommend to the operation manager a workflow that has to be newly created, as well as a workflow to which an amendment has to be made together with a method of making the amendment. Since the recommendation is performed based on the data definition, even when data operations relating to multiple requests are performed on the configuration management database, the accuracy of the recommended workflow is higher than in the related art.
Because of this, the operation manager can quickly create and amend the workflow. Furthermore, by shortening the time it takes to create the workflow and make amendments to it, the workflow can be reviewed frequently, and consequently mistakes in updating the configuration management database are reduced.
A second embodiment is described below referring to FIGS. to 22. A workflow generation server or a method of generating a workflow according to the second embodiment is characterized by a configuration in which the relation between data operation histories is quantified as a value by using an operation schedule history that records the history of operation schedules for the database, or a transaction processing history of the database.
According to the present embodiment, the workflow recommendation program of the workflow generation server can discover the relation from the operation schedule history, also with respect to the configuration information whose relation with other configuration information is not included in the data definition, and can increase the accuracy of the workflow being recommended more than in the related art, by performing the workflow recommendation, based on the discovered relation.
According to the present embodiment, a prior announcement of a data operation with respect to the configuration information is referred to as an operation schedule. The operation schedule is data that indicates which configuration information is to be updated to which value. The operation manager shares the operation schedule relating to a data operation with the other operation managers in advance, in order to prevent the data operation performed by him or her from colliding with data operations performed by the other operation managers. According to the present embodiment, the history relating to the operation schedule is used to improve the accuracy of the recommended workflow. The history relating to the operation schedule is hereinafter referred to as an operation schedule history.
A column 5107 on the lower portion of
The operation schedule history 5100 is created or updated by the database tool 91 or the workflow execution program 72. When the operation manager registers an operation schedule through these programs, a new operation schedule history is added to the operation schedule history 5100. Values in columns 5101 to 5110 are input to the operation schedule history that is added at this time. Then, when the operation manager actually performs the data operation based on that operation schedule history, the operation history relating to that data operation is registered with the operation history 5000, and values in columns 5111 to 5113 of that operation schedule history are input.
The workflow generation program 81 according to the second embodiment uses the operation schedule history in addition to the data definition in order to calculate the relation score, that is, the value that quantifies the relation between the operation histories. Processing S301 to S313, in which the relation score is calculated using the data definition, is therefore almost the same as processing S101 to S113 according to the first embodiment. However, after performing the inspection on the data definition, the workflow generation program 81 according to the second embodiment proceeds to processing step S314 and the subsequent steps, in which the relation score is calculated using the operation schedule history.
First, the workflow generation program 81 obtains the operation schedule history corresponding to H1 (hereinafter referred to as SH1) and the operation schedule history corresponding to H2 (hereinafter referred to as SH2) (S314). The operation schedule history corresponding to H1 is the operation schedule history in which the column 5112 is consistent with the column 5001 of H1. If the operation schedule history is absent for at least one of H1 and H2, the inspection proceeds to the next pair of operation histories (S315). If the operation schedule history is present for both H1 and H2, the difference in date and time of schedule registration between the two operation schedule histories, the difference in estimated date and time of schedule execution, and the difference in date and time of schedule execution are inspected (S316 to S318). The reason for doing this is to avoid generating a workflow that differs from the actual workflow.
If the inspection passes, a relation score RS2 is added to the pair HP of operation histories (S319). Moreover, the score added here may be adjusted according to the differences in date and time inspected in S316 to S318. Introducing such differences into the relation score makes it possible to narrow down the pairs of operation histories when, for example, too many pairs are created in S320 described below.
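A sketch of the schedule-based part of the scoring (S314 to S319) follows, with hypothetical field names and illustrative limits; the optional adjustment mentioned above is shown as a small deduction proportional to the gap in schedule registration date and time.

```python
from datetime import datetime, timedelta

RS2 = 10                                  # illustrative score added in S319
MAX_DIFF = timedelta(hours=12)            # illustrative limit for S316 to S318

def schedule_score(sh1, sh2):
    """sh1/sh2: operation schedule histories corresponding to H1 and H2."""
    if sh1 is None or sh2 is None:
        return 0                          # S315: no schedule, no contribution
    diffs = [abs(sh1[k] - sh2[k]) for k in
             ("registered_at", "estimated_at", "executed_at")]
    if any(d > MAX_DIFF for d in diffs):  # S316 to S318
        return 0
    # Optional refinement: the closer the registrations, the higher the score.
    return RS2 - diffs[0].total_seconds() / MAX_DIFF.total_seconds()

sh1 = {"registered_at": datetime(2012, 8, 1, 9, 0),
       "estimated_at": datetime(2012, 8, 2, 9, 0),
       "executed_at": datetime(2012, 8, 2, 10, 0)}
sh2 = {"registered_at": datetime(2012, 8, 1, 9, 30),
       "estimated_at": datetime(2012, 8, 2, 9, 30),
       "executed_at": datetime(2012, 8, 2, 10, 15)}
print(schedule_score(sh1, sh2))           # slightly below RS2
```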
When the calculation of the relation score is completed, the workflow generation program 81 creates a set of the pairs of operation histories whose relation score is threshold T3 or greater (S320). This processing is the same as S114 according to the first embodiment. Then, the workflow generation program 81 creates multiple operation history graphs in which the operation histories are nodes (points) and the pairs of operation histories are edges (lines) (S321).
Next, the workflow generation program 81 performs the shaping processing on each operation history graph included in HGS (S121). A bi-directional edge occurs more easily in the second embodiment than in the first embodiment. As methods of canceling a loop present in the operation history graph, there are, for example, a method in which a bi-directional edge is changed to a unidirectional edge according to the order of the date and time data (schedule registration date and time, estimated date and time of schedule execution, or schedule execution date and time) included in the operation schedule history (1303 in
As described above, the workflow recommendation program 82 can discover the relation from the operation schedule history, also with respect to the configuration information whose relation with other configuration information is not included in the data definition. The accuracy of the workflow being recommended is increased more than in the related art, by performing the workflow recommendation, based on the discovered relation.
Furthermore, if the data operations in the database are performed as transaction processing, information relating to the transaction processing may be substituted for the operation schedule history. For example, with regard to multiple data operations performed within the same transaction, the database tool 91 creates operation schedule histories in which the values in the columns from the schedule registration date and time (the column 5107) to the schedule execution user (the column 5112) are the same. This method lowers the accuracy compared with the case where the operation manager explicitly creates the schedule, but makes it possible to discover relationships between items of configuration information without depending on the data definition.
The embodiments according to the present invention are described above in detail referring to the drawings, but the specific configuration is not limited to the embodiments and includes a design and the like within a scope not deviating from a gist of the present invention. For example, the embodiments described above are described in detail in order to provide a better understanding of the present invention, but the present invention is not necessarily limited to including the entire configuration as described above.
Furthermore, one part of a configuration of a certain embodiment can be replaced with a configuration of a different embodiment, and furthermore, a configuration of a different embodiment can be added to a configuration of a certain embodiment. Furthermore, one part of a configuration of each embodiment can be added to or replaced with a different configuration, or can be deleted.
Furthermore, each configuration, function, processing unit, and the like is described above with a focus on the case where it is realized with software, for example, by writing a program that realizes some or all of them, but it may also be realized with hardware, for example, by designing an integrated circuit.
Foreign application priority data: 2012-188340, Aug 2012, JP (national).