This application is based upon and claims the benefit of priority of the prior Japanese Patent Application No. 2013-132543, filed on Jun. 25, 2013, the entire contents of which are incorporated herein by reference.
The embodiments discussed herein relate to a method for requesting control and also to an information processing apparatus for the same.
Enterprise information and communications technology (ICT) systems and data centers are operated with software programs designed to automate operations management (e.g., power management of servers) to alleviate the burden of such tasks. The term “management server” will be used herein to refer to a server that offers operations management capabilities by using such automation software, while the servers under the control of this management server will be called “managed servers.” For example, the management server operates the entire system by remotely controlling managed servers in accordance with a specific process defined as a workflow.
With the growing scale of computer systems, the management server takes care of an increasing number of managed servers. The management server may sometimes need to execute two or more workflows at the same time. Such concurrent execution of multiple workflows across a large number of managed servers would overwhelm the management server with an excessive load and thus result in a noticeable delay of its control operations. A computer system may encounter this type of problematic situation when, for example, it is informed that a scheduled power outage is coming soon. The management server then has to stop the operation of many managed servers before the power is lost, but the available time may be too short to shut down all of them.
As one technique to avoid excessive load on a management server, it is proposed to execute a series of processing operations by successively invoking one or more software components deployed in a plurality of servers. This technique includes making some estimates about the total amount of computational load caused by a group of software components, assuming that each execution request of software components is sent to one of possible servers. Based on the estimates, one of those possible servers is selected for each software component, as the destination of an execution request. Also proposed is a computer system configured to perform task scheduling with a consideration of the network distance between computers so as to increase the efficiency of the system as a whole. See, for example, the following
As a possible implementation of a computer system, several servers may be assigned the role of controlling managed servers in the system. The system determines which server will take care of which managed servers. Conventional techniques for this determination, however, do not consider the performance of communication between controlling servers and controlled servers (or managed servers). As a result of this lack of consideration, an inappropriate server could be assigned to a group of managed servers in spite of its slow network link with those managed servers. In other words, there is room for improvement of efficiency in the technical field of automated operations management of servers.
The above examples have assumed that a plurality of managed servers are controlled via networks. It is noted, however, that other kinds of devices may be controlled similarly, and that the same problems discussed above may apply to them as well.
According to one aspect of the embodiments, there is provided a non-transitory computer-readable medium storing a computer program that causes a computer to perform a process including: selecting a controller apparatus from a plurality of controller apparatuses to control a controlled apparatus, based on transmission rates of communication links between the controlled apparatus and each of the plurality of controller apparatuses; and requesting the selected controller apparatus to control the controlled apparatus.
The object and advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention.
Several embodiments will be described below with reference to the accompanying drawings. These embodiments may be combined with each other, unless they have contradictory features.
The controller apparatuses 3 to 5 manipulate the controlled apparatuses 6 to 8 in accordance with requests from the information processing apparatus 10. For example, the controller apparatus [A] 3 may start and stop some particular functions in the controlled apparatus [a] 6. The information processing apparatus 10 causes the controller apparatuses 3 to 5 to execute such manipulations for the controlled apparatuses 6 to 8 in a distributed manner.
One thing to consider here is that the controller apparatuses 3 to 5 may work with the controlled apparatuses 6 to 8 in various combinations, at various distances, and with various communication bandwidths over the network 2. This means that the efficiency of control operations depends on the decision of which controller apparatus to assign to which controlled apparatus. The first embodiment is therefore designed to make an appropriate assignment based on the rule that each controlled apparatus be combined with a controller apparatus that can communicate with it at a higher speed than the others.
The information processing apparatus 10 causes the controller apparatuses 3 to 5 to execute processing operations in a distributed manner, including control operations for some controlled apparatuses. The information processing apparatus 10 thus has to select a controller apparatus that can efficiently execute such control operations. To this end, the information processing apparatus 10 includes a storage unit 11, a data collection unit 12, a selection unit 13, and a requesting unit 14. Each of these components will be described below.
The storage unit 11 stores definition data 11a that defines a process of operations for manipulating a plurality of controlled apparatuses 6 to 8. For example, the definition data 11a gives the following three numbered operations. The first operation (#1) is to control one controlled apparatus [a] 6. The second operation (#2) is to control another controlled apparatus [b] 7. The third operation (#3) is to control yet another controlled apparatus [c] 8.
The data collection unit 12 collects information about transmission rates of communication links between each of the controller apparatuses 3 to 5 and each of the controlled apparatuses 6 to 8. The data collection unit 12 stores the collected information in a memory or other storage devices.
The selection unit 13 selects one of the controller apparatuses 3 to 5 for use with each of the controlled apparatuses 6 to 8, based on the transmission rates of communication links between the controller apparatuses 3 to 5 and the controlled apparatuses 6 to 8. For example, the selection unit 13 selects a particular controller apparatus to control a particular controlled apparatus when that controller apparatus has the fastest communication link to the controlled apparatus. The selection unit 13 may also consult the definition data 11a stored in the storage unit 11 and select controller apparatuses depending on the individual operations defined in the definition data 11a.
The requesting unit 14 requests one of the controller apparatuses that has been selected by the selection unit 13 to control a particular controlled apparatus. For example, the requesting unit 14 follows the order of operations specified in the definition data 11a. With the progress of defined operations, the requesting unit 14 sends an execution request to the next controller apparatus selected by the selection unit 13.
The proposed system enables the controller apparatuses 3 to 5 to efficiently share their work of control operations for a number of controlled apparatuses 6 to 8. For example, operation #1 seen in the first step of the definition data 11a controls one apparatus 6. The information collected by the data collection unit 12 indicates that the controller apparatus [A] 3 has a faster communication link with the controlled apparatus 6 than the other controller apparatuses 4 and 5. Based on this information, the selection unit 13 selects the controller apparatus [A] 3 as the destination of an execution request for operation #1. The requesting unit 14 then sends the execution request to the selected controller apparatus [A] 3. In response, the controller apparatus [A] 3 controls the controlled apparatus 6. The selection unit 13 also handles the other operations #2 and #3 in a similar way, thus selecting the controller apparatuses 4 and 5 for the controlled apparatuses 7 and 8, respectively, as being the fastest in terms of transmission rate. The requesting unit 14 sends an execution request for operation #2 to the controller apparatus [B] 4, as well as an execution request for operation #3 to the controller apparatus [C] 5. The controller apparatuses 3 to 5 thus execute the series of operations defined in the definition data 11a efficiently in a distributed fashion.
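By way of illustration only, the selection rule described above may be sketched as follows. The transmission-rate figures, apparatus labels, and data structures in this sketch are hypothetical and merely illustrate the rule of choosing, for each controlled apparatus, the controller apparatus with the fastest communication link.

```python
# Hypothetical transmission rates (B/s) of the links between each controlled
# apparatus (a, b, c) and each controller apparatus (A, B, C).
rates = {
    "a": {"A": 10_000_000, "B": 1_000_000,  "C": 500_000},
    "b": {"A": 800_000,    "B": 12_000_000, "C": 600_000},
    "c": {"A": 400_000,    "B": 900_000,    "C": 9_000_000},
}

# Definition data: ordered operations, each manipulating one controlled apparatus.
definition_data = [("#1", "a"), ("#2", "b"), ("#3", "c")]

def select_controller(controlled):
    """Return the controller apparatus with the fastest link to the controlled apparatus."""
    return max(rates[controlled], key=lambda controller: rates[controlled][controller])

for operation, controlled in definition_data:
    controller = select_controller(controlled)
    # The requesting unit would now send an execution request to the selected controller.
    print(f"operation {operation}: request controller [{controller}] to control [{controlled}]")
```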
The information processing apparatus 10 may be configured to put a plurality of operations into a single group when they form a continuous series of operations for which the selection unit 13 has selected a common controller apparatus. The requesting unit 14 then collectively requests the selected common controller apparatus to execute all the operations in the group. This grouping feature reduces the frequency of communication events between the information processing apparatus 10 and the controller apparatuses, thus contributing to more efficient execution of operations.
The definition data may include two or more operation sequences that are allowed to run in parallel with one another, each made up of a plurality of operations to be executed sequentially. It is more efficient if such parallel operation sequences are delegated to different controller apparatuses so as to take advantage of distributed processing. The information processing apparatus 10 thus subjects these operation sequences to the grouping mentioned above by, for example, producing one group from each different operation sequence. The information processing apparatus 10 issues execution requests to multiple controller apparatuses to initiate efficient distributed execution of parallel groups of operations.
The definition data may further include another kind of operation sequences each made up of a plurality of operations to be executed sequentially. Unlike the parallel ones discussed above, one of these operation sequences is selectively executed according to the decision of a conditional branch. The information processing apparatus 10 seeks a group in such conditional operation sequences as follows. For example, the information processing apparatus 10 checks the beginning part of one operation sequence to find one or more operations whose selected controller apparatuses are identical to the controller apparatus selected for a preceding operation immediately before the conditional branch. When such a match is found at the beginning part of the operation sequence, the information processing apparatus 10 then forms a group from the found operations and the preceding operation immediately before the conditional branch. This feature makes it possible to produce a larger group of operations and further reduce communication events between the information processing apparatus 10 and controller apparatuses.
Each issued execution request for a specific operation is supposed to reach its intended controller apparatus. The controller apparatus may, however, happen to be down at the time of arrival of a request due to some problem. In this situation, the selection unit 13 may reselect a controller apparatus for the failed operation, as well as for each of other pending operations subsequent thereto. The controller apparatus that has failed to receive the execution request is excluded from the reselection. The requesting unit 14 then requests the reselected controller apparatuses to execute the failed operation and other pending operations, respectively. In spite of the problem with a controller apparatus during the course of processing operations, the noted features of the selection unit 13 and requesting unit 14 permit the information processing apparatus 10 to perform a quick fail-over of controller apparatuses and continue the execution as defined in the definition data.
It is noted that the information processing apparatus 10 may itself be a controller apparatus. That is, the information processing apparatus 10 may include the functions for manipulating the controlled apparatuses 6 to 8. Suppose, for example, the case in which the information processing apparatus 10 has a faster communication link to a particular controlled apparatus than any of the controller apparatuses. In this case, the information processing apparatus 10 is advantageous over the controller apparatuses because of its shorter time of communication with that controlled apparatus. For an enhanced efficiency of processing, it is therefore a better choice to use the information processing apparatus 10 as a controller apparatus, rather than transferring the control to other controller apparatuses.
The information processing apparatus 10 is, for example, a computer having a processor and a memory. The above-described data collection unit 12, selection unit 13, and requesting unit 14 may be implemented as part of the functions of the processor in the information processing apparatus 10. Specific processing steps executed by the data collection unit 12, selection unit 13, and requesting unit 14 are encoded in the form of computer programs. The processor executes these programs to provide the functions of the information processing apparatus 10. The foregoing storage unit 11, on the other hand, may be implemented as part of the memory in the information processing apparatus 10.
It is also noted that the lines interconnecting functional blocks in
Cloud computing is widely used today, and the second embodiment discussed below provides a solution for operations management of an ICT system in this cloud age. The conventional operations management methods use a management server to manage a set of servers connected to a single network or deployed in a single data center. In other words, the conventional methods assume a moderately-sized system of servers.
ICT systems in the cloud age are, however, made up of various environments depending on their purposes, such as public cloud systems, private cloud systems, and on-premises computing systems. With the trend of globalization, data centers deploy their managed servers across the world. Constant effort has also been made to unify the management functions and enhance the efficiency of overall system operations. These technological trends result in a growing number of managed servers per system, which could exceed the capacity that a single management server can handle. An overwhelming load of operations management makes it difficult for the management server to ensure the quality of processing at every managed server.
One solution for the difficulties discussed above is that the management server delegates a part of its operations management tasks of an automated flow to a plurality of execution servers. This solution may, however, not always work well in a cloud computing environment in which a large number of networks are involved to connect servers located at dispersed places. The tasks of operations management are executed in an automated way by manipulating managed servers on the basis of workflow definitions. The managed servers respond to such manipulations, but their responsiveness may vary with their respective network distances, as well as with the performance of the intervening networks, which could spoil the stability of operations management services. For this reason, the above-noted solution of partial delegation of management tasks, when implemented, has to take into consideration the physical distance of each execution server from the managed servers.
More specifically, a workflow is made up of a plurality of tasks (individual units of processing operations), and each of these tasks manipulates one of various managed servers scattered in different sites. While it is possible to distribute the entire workflow to execution servers, some of the execution servers could consume a long time in manipulating assigned managed servers if their communication links to the managed servers perform poorly. For example, collecting log files from each managed server is one of the manipulations performed as part of a workflow. Execution servers may work together to execute log file collection, but the conventional way of distributed processing based on the load condition of CPU and memory resources would not suffice for this type of manipulation because its performance heavily depends on the bandwidth of communication links.
In view of the above, the second embodiment is configured to determine which server is to execute each particular processing operation for manipulating managed servers, with a total consideration of network distances between the servers, including: the distance between the management server and each managed server, the distance between the management server and each execution server, and the distances between execution servers and managed servers.
The management server 100 is a computer configured to control a process of operations management based on automated flows. Automated flows are pieces of software each representing a sequence of operations in workflow form. Every single unit of operation in an automated flow is expressed as a node, and the operations of these nodes may be executed by different servers. The following description will use the term “process definition” to refer to a data structure defining an automated flow. The following description will also use the term “manipulation component” to refer to a software program for implementing an operation corresponding to a specific node.
The management server 100 determines which servers to assign for the nodes in an automated flow so as to efficiently execute the automated flow as a whole. Possible servers for this assignment of node operations include the management server 100 itself and execution servers 200, 200a, 200b, 200c, and so on.
The execution servers 200, 200a, 200b, 200c, and so on are computers configured to execute operations of the nodes that the management server 100 specifies from among those in a given automated flow. The execution servers 200, 200a, 200b, 200c, and so on remotely manipulate managed servers via network links in accordance with programs corresponding to the specified nodes. The managed servers 41, 41a, . . . , 42, 42a, . . . , 43, 43a, . . . , 44, 44a, and so on are devices under the management of an automated flow.
In operation of the system of
It is noted that the management server 100 is an example of the information processing apparatus 10 discussed in
The memory 102 serves as a primary storage device of the management server 100. Specifically, the memory 102 is used to temporarily store at least some of the operating system (OS) programs and application programs that the processor 101 executes, in addition to other various data objects that it manipulates at runtime. The memory 102 may be formed from, for example, random access memory (RAM) devices or other volatile semiconductor memory devices.
Other devices on the bus 109 include a hard disk drive (HDD) 103, a graphics processor 104, an input device interface 105, an optical disc drive 106, a peripheral device interface 107, and a network interface 108.
The HDD 103 writes and reads data magnetically on its internal platters. The HDD 103 serves as a secondary storage device in the management server 100 to store program and data files of the operating system and applications. Flash memory and other non-volatile semiconductor memory devices may also be used for the purpose of secondary storage.
The graphics processor 104, coupled to a monitor 21, produces video images in accordance with drawing commands from the processor 101 and displays them on a screen of the monitor 21. The monitor 21 may be, for example, a cathode ray tube (CRT) display or a liquid crystal display.
The input device interface 105 is connected to input devices such as a keyboard 22 and a mouse 23 and supplies signals from those devices to the processor 101. The mouse 23 is a pointing device, which may be replaced with other kinds of pointing devices, such as a touchscreen, tablet, touchpad, or trackball.
The optical disc drive 106 reads out data encoded on an optical disc 24 by using laser light. The optical disc 24 is a portable data storage medium whose recorded data is read as the presence or absence of reflected light. The optical disc 24 may be a digital versatile disc (DVD), DVD-RAM, compact disc read-only memory (CD-ROM), CD-Recordable (CD-R), or CD-Rewritable (CD-RW), for example.
The peripheral device interface 107 is a communication interface used to connect peripheral devices to the management server 100. For example, the peripheral device interface 107 may be used to connect a memory device 25 and a memory card reader/writer 26. The memory device 25 is a data storage medium having a capability to communicate with the peripheral device interface 107. The memory card reader/writer 26 is an adapter used to write data to or read data from a memory card 27, which is a data storage medium in the form of a small card. The network interface 108 is linked to a network 30 so as to exchange data with other computers (not illustrated).
The processing functions of the second embodiment may be realized with the above hardware structure of
The management server 100 and execution servers 200, 200a, 200b, 200c, . . . provide various processing functions of the second embodiment by executing programs stored in computer-readable storage media. These processing functions are encoded in the form of computer programs, which may be stored in a variety of media. For example, the management server 100 may store program files in its HDD 103. The processor 101 loads the memory 102 with at least part of the programs stored in the HDD 103 and executes the programs on the memory 102. Such programs for the management server 100 may be stored in an optical disc 24, memory device 25, memory card 27, or other kinds of portable storage media. Programs stored in a portable storage medium are installed in the HDD 103 under the control of the processor 101, so that they are ready to execute upon request. It may also be possible for the processor 101 to execute program codes read out of a portable storage medium, without installing them in its local storage devices.
The following description will now explain each functional component implemented in the management server 100 and execution servers 200, 200a, 200b, 200c, and so on.
The configuration data collection unit 110 communicates with execution servers or managed servers to collect information about their total system configuration, which is referred to as “configuration data.” The configuration data collection unit 110 stores this configuration data in a configuration management database (CMDB) 120. The CMDB 120 is a database configured to manage system configuration data. For example, this CMDB 120 may be implemented as part of storage space of the memory 102 or HDD 103.
The process definition storage unit 130 stores process definitions. For example, this process definition storage unit 130 may be implemented as part of the storage space of the memory 102 or HDD 103. The analyzing unit 140 parses a process definition to determine how to organize the nodes into groups and produces grouping information that describes such groups of nodes. The analyzing unit 140 then calculates communication performance of each server that may be able to execute operations of a particular node group. The execution control unit 150 determines which servers to use for execution of operations, based on the data of communication performance calculated by the analyzing unit 140. The flow execution unit 160 executes the operation of a specified node in an automated flow when so commanded by the execution control unit 150.
The execution server 200 illustrated in
The configuration data collection unit 210 collects configuration data of managed servers that can be reached from the execution server 200 and sends the collected data to the management server 100. The process definition storage unit 220 stores process definitions. For example, this process definition storage unit 220 may be implemented as part of the storage space of a memory or HDD in the execution server 200. The flow execution unit 230 executes the operation of a specified node in an automated flow when so commanded by the execution control unit 150 in the management server 100.
As seen in
It is noted that the process definition storage unit 130 is an example of the storage unit 11 discussed in
The following section will now provide more details of process definitions.
The execution of the above automated flow 51 starts from its start node 51a, and each defined processing operation is executed along the connection path of nodes until the end node 51g is reached.
The configuration data stored in the CMDB 120 is updated before starting execution of such an automated flow in process definitions.
(Step S101) The configuration data collection unit 110 in the management server 100 collects configuration data of execution servers. For example, the configuration data collection unit 110 communicates with the configuration data collection unit in each execution server to collect configuration data of managed servers. The collected configuration data is entered to the CMDB 120 in the management server 100. The collected data includes, for example, the host names and Internet Protocol (IP) addresses of managed servers and execution servers. Also included is information about the transmission rates (B/s) between each combination of execution and managed servers.
For example, the configuration data collection unit 210 in the execution server 200 measures transmission rates by sending appropriate commands (e.g., ping) to each managed server on the same network where the execution server 200 resides. The configuration data collection unit 210 similarly measures transmission rates with remote managed servers that are reached via two or more networks. When ping commands are used for the measurement, the following process enables calculation of transmission rates:
<Step-1> The execution server issues a ping command addressed to a particular managed server as in:
<Step-2> The execution server repeats the ping command of Step-1 five times and calculates their average response time.
<Step-3> The execution server performs the following calculation, thus obtaining a transmission rate value.
65000×2/(average response time)=transmission rate (B/s)
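The calculation in Steps 1 to 3 can be sketched as follows. The sketch assumes that the round-trip times of the five ping replies have already been measured in seconds and that the ping payload is about 65,000 bytes; the sample values are hypothetical.

```python
PAYLOAD_BYTES = 65_000   # approximate size of the ping payload used for the measurement

def transmission_rate(response_times_s):
    """Step-2 and Step-3: average the response times and estimate the rate in B/s."""
    average = sum(response_times_s) / len(response_times_s)
    return PAYLOAD_BYTES * 2 / average    # the payload travels to the server and back

# Hypothetical round-trip times (seconds) of five ping replies from one managed server.
samples = [0.021, 0.019, 0.023, 0.020, 0.022]
print(f"{transmission_rate(samples):,.0f} B/s")
```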
(Step S102) The configuration data collection unit 110 measures transmission rates of communication links between the management server 100 and each managed server. For example, the configuration data collection unit 110 starts measuring the transmission rates (B/s) of communication links between the management server 100 and managed servers, upon entry of the above-described configuration data to the CMDB 120 at step S101. The configuration data collection unit 110 uses the same measurement method discussed above for the execution servers. The resulting measurement data is entered to the CMDB 120. More specifically, the configuration data collection unit 110 populates the CMDB 120 with records of transmission rates, each of which corresponds to a particular managed server and indicates the transmission rates measured from its communication with the management server 100 and each execution server. In each such record, the configuration data collection unit 110 sorts the list of management server and execution servers in descending order of transmission rates.
The configuration data collection unit 110 further measures transmission rates between the management server 100 and each execution server and enters the records into the CMDB 120.
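The per-managed-server records described above might look as follows; the host names and rate figures are hypothetical.

```python
# Hypothetical rates (B/s) measured between one managed server and the
# management server and each execution server.
measured = {"management server": 2_000_000, "execsv01": 12_000_000, "execsv02": 600_000}

# Record entered into the CMDB: sorted in descending order of transmission rate.
record = sorted(measured.items(), key=lambda item: item[1], reverse=True)
# -> [('execsv01', 12000000), ('management server', 2000000), ('execsv02', 600000)]
```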
The above-described updating of configuration data may be performed at regular intervals (e.g., once every day) depending on the system's operation schedules. This regular update keeps the CMDB 120 in the latest state, with an enhanced accuracy of information content. Alternatively, the CMDB 120 may be updated at the time of addition, removal, or other changes in the managed servers and network devices.
The data structure of the CMDB 120 will now be described below.
Element Name represents the name of a stored element (referred to herein as “the element in question”). Parent Element indicates the name of a parent element of the element in question. When this parent element name is different from the element name noted above, the element in question is a child element of the named parent element. A child element has information about its parent element. When the parent element name is identical with the child element name, then it means that the element in question is at the highest level.
Element Description is a character string that explains the element in question. For example, the description may read: “Server node information,” “Network performance,” and “Performance data.”
Component Name indicates the name of information (component) contained in the element in question. One element may include a plurality of components, and those components may include child elements.
Component Type indicates the type of a component. For example, the component type field takes a value of “Attribute” meaning that the component is a piece of attribute information on a pertinent element, or “Element” meaning that the component is a child element.
Component Description contains a character string that describes the component. The description may read, for example, “Unique identifier,” “Host name,” “Representative IP address,” or “Server-to-server performance.”
Data Type indicates the type of component data. For example, this data type field takes a value of “String” to indicate that the data in question is a character string.
The symbol “#” denotes “the number of,” and the value of the “# of” field in the CMDB 120 indicates how many pieces of data are registered for the component.
The CMDB 120 stores data of components constituting each managed element in the way described above. This CMDB 120 permits the management server 100 to obtain, for example, the host name, IP address, communication performance, and other items of a specific execution server. Such data is stored in the format of, for example, the Extensible Markup Language (XML).
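As a rough illustration of such XML-formatted configuration data, the fragment below shows how one element carrying a host name, an IP address, and server-to-server performance figures might be stored and read back. The tag names, attribute names, and values are assumptions made for illustration; the embodiment specifies only the kinds of components, not a concrete schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical CMDB entry for one execution server.
sample = """
<server id="sv-0001" hostname="execsv01" ip="192.0.2.11">
  <networkPerformance>
    <performance peer="managedsv21" rate="12000000"/>
    <performance peer="managedsv22" rate="800000"/>
  </networkPerformance>
</server>
"""

root = ET.fromstring(sample)
print(root.get("hostname"), root.get("ip"))
for perf in root.iter("performance"):
    print("  link to", perf.get("peer"), "->", perf.get("rate"), "B/s")
```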
The next section will now describe how the management server 100 executes an automated flow.
(Step S111) The analyzing unit 140 parses a process definition. For example, the analyzing unit 140 consults the process definition storage unit 130 to retrieve a process definition for execution of an automated flow. The analyzing unit 140 then sorts the nodes of the automated flow into groups, depending on the content of the retrieved process definition, as well as based on the data of server-to-server transmission rates which is stored in the CMDB 120. The details of this parsing step will be described later with reference to
(Step S112) The analyzing unit 140 conducts a performance analysis, assuming that the load of node operations in the automated flow is distributed across multiple servers. Details of this performance analysis will be described later with reference to
(Step S113) The execution control unit 150 determines which server (e.g., management server or execution server) to assign for each node in the automated flow so as to attain a higher performance. The execution control unit 150 is configured to select a single server to execute operations of multiple nodes when these nodes belong to the same group. Details of this server assignment will be described later with reference to
(Step S114) The execution control unit 150 executes operations defined in the automated flow. Details of this execution will be described later with reference to
The above description of
(a) Process Definition Parsing
(Step S121) The analyzing unit 140 obtains a process definition from the process definition storage unit 130 and identifies which managed server is to be manipulated at each node. For example, process definitions may include an IP address or a host name associated with each node, which indicates a managed server to be manipulated at the node. The analyzing unit 140 uses such IP addresses or host names of nodes to determine which managed server to manipulate at each node.
(Step S122) The analyzing unit 140 obtains a list of execution servers that are capable of communicating with the managed server identified above. For example, the analyzing unit 140 searches the CMDB 120 for managed servers by using the obtained IP addresses or host names (see S121) as search keys. When pertinent managed servers are found, the analyzing unit 140 then retrieves their configuration data from the CMDB 120, which includes a list of execution servers capable of remotely manipulating those managed servers, as well as information about transmission rates of links between each execution server and managed servers.
(Step S123) The above step S122 has identified execution servers as being capable of remotely manipulating pertinent managed servers. The analyzing unit 140 now selects one of those execution servers that has the highest communication performance with a particular managed server to be manipulated and associates the selected execution server with that managed server. For example, the analyzing unit 140 compares different active execution servers in the list obtained at step S122 in terms of the transmission rates of their links to a particular managed server. The analyzing unit 140 then singles out the execution server with the highest transmission rate for the managed server and registers their combination in a node-vs-server management table.
(Step S124) The analyzing unit 140 sorts the nodes into groups. Details of this node grouping will be described later with reference to
The above steps permit the analyzing unit 140 to parse a process definition. Step S123 in this course produces a node-vs-server management table discussed below.
The node name field of each record contains the name of a node included in an automated flow, and the execution server field contains the name of an execution server supposed to execute the operation of that node. The nodes in an automated flow may include those that manipulate managed servers and those that do not. Included in the latter group is, for example, a node that processes data given as a result of some other operation. Since any servers can execute such nodes, the execution server field of these nodes is marked with a special symbol (e.g., asterisk in
The node type field describes the type of a node, which may be, for example, “Start,” “End,” “Manipulation component,” and “Multiple conditional branch.” Start node is a node at which the automated flow starts. End node is a node at which the automated flow terminates. Manipulation component is a node that causes a server to execute a certain processing operation. Multiple conditional branch is a manipulation component that tests conditions for choosing a subsequent branch.
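A minimal sketch of steps S121 to S123, which build such a node-vs-server management table, is given below. The process-definition layout, CMDB excerpt, and host names are hypothetical simplifications introduced for illustration.

```python
# Simplified process definition: (node name, node type, managed server or None).
process_definition = [
    ("Start",           "Start",                  None),
    ("Collect logs",    "Manipulation component", "managedsv21"),
    ("Restart service", "Manipulation component", "managedsv22"),
    ("End",             "End",                    None),
]

# CMDB excerpt: transmission rates (B/s) from each execution server to each managed server.
cmdb_rates = {
    "managedsv21": {"execsv01": 12_000_000, "execsv02": 600_000},
    "managedsv22": {"execsv01": 800_000,    "execsv02": 9_500_000},
}

node_vs_server = []
for name, node_type, managed in process_definition:
    if managed is None:
        server = "*"                                  # any server may execute this node
    else:
        reachable = cmdb_rates[managed]               # S122: execution servers reaching the managed server
        server = max(reachable, key=reachable.get)    # S123: the fastest link wins
    node_vs_server.append({"node name": name, "execution server": server, "node type": node_type})

print(node_vs_server)
```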
The above node-vs-server management table 141 is used in node grouping. Think of, for example, an environment where servers are dispersed over the network as in a cloud computing system. Since it is desirable in such an environment to reduce the frequency of server communication as much as possible, the analyzing unit 140 sorts the nodes into groups so that a group of nodes can be executed by a single server. Processing operations of the nodes are then assigned to execution servers on a group basis, so that the managing node communicates with execution servers less frequently.
For example, process definitions include information about the order of execution of nodes in an automated flow. The analyzing unit 140 examines this information to extract nodes that can be executed successively by a single server, and puts them all into a single group. The following description provides a detailed process of node grouping, assuming that the nodes are numbered in the order of their execution, as in “node (n)” denoting the nth node and “node (n+1)” denoting the (n+1)th node, where n is an integer greater than zero.
To discover which nodes are groupable in this automated flow 52, node (n) is compared with node (n+1) in terms of their associated execution servers. When the two nodes' execution servers coincide with each other, or when one of the two nodes does not care about selection of its execution server, node (n) and node (n+1) are put into a group. The group is then associated with the common server of node (n) and node (n+1).
For example, the aforementioned group management table may have an existing record of group for node (n). In that case, node (n+1) is added to the same record as a new member of the group. When that is not the case, a new group is produced from node (n) and node (n+1) and added to the group management table.
When grouping nodes in this automated flow 53, the analyzing unit 140 consults information about node (n+1) to determine to which group node (n) belongs. Referring to the example of
When the synchronization node 54c is encountered as node (n) in the course of node grouping, the analyzing unit 140 does not put the synchronization node 54c into any group for the following reasons. The synchronization node 54c is preceded by a plurality of nodes (n−1), i.e., nodes 54a and 54b in the present example of
Which branch route to take in the illustrated automated flow 55 depends on the result of an operation at the preceding node 55a. Taking this into consideration, the analyzing unit 140 performs the following things when grouping the nodes in the automated flow 55.
The analyzing unit 140 first obtains information about node (n−1) and nodes (n+1) from the node-vs-server management table 141. It is noted here that there are a plurality of nodes (n+1) to which the process flow may branch from node (n). When the group management table has an existing group including node (n−1) as a member, the analyzing unit 140 obtains information about the execution server associated with that existing group and compares the obtained information with information about each execution server associated with nodes (n+1). When a match is found with one of those nodes (n+1), the analyzing unit 140 puts node (n) and that node (n+1) to the existing group of node (n−1). When no match is found, the analyzing unit 140 abandons the current grouping attempt and proceeds to each node (n+1) to seek new groups for them.
When, on the other hand, the group management table includes no existing groups for node (n−1), the analyzing unit 140 obtains information about the execution server associated with node (n−1) and compares the obtained information with information about each execution server associated with nodes (n+1). When a match is found with one of those nodes (n+1), the analyzing unit 140 produces a new group from node (n−1), node (n) and that node (n+1). When no match is found, the analyzing unit 140 abandons the current grouping attempt and proceeds to each node (n+1) to seek new groups for them.
Then after the branching, the analyzing unit 140 tests each successive node as to whether the node in question is to be executed by the same server as its preceding node. When they are found to share the same server, the analyzing unit 140 adds the node in question to the group of the preceding node. This is similar to the foregoing case of successive manipulation components.
The above-described process of node grouping may be presented in a flowchart described below.
(Step S131) The analyzing unit 140 initializes a variable n to one, thus beginning the grouping process at the start node of the given automated flow.
(Step S132) The analyzing unit 140 retrieves information about node (n) from the node-vs-server management table 141.
(Step S133) The analyzing unit 140 checks the node type field value of node (n) in the retrieved information. The analyzing unit 140 now determines whether node (n) is a start node. When node (n) is found to be a start node, the process skips to step S142. Otherwise, the process advances to step S134.
(Step S134) The analyzing unit 140 also determines whether node (n) is a synchronization node. A synchronization node permits a plurality of parallel operations in different branch routes to synchronize with each other and join together into a single route of operations. When node (n) is found to be a synchronization node, the process skips to step S142. Otherwise, the process advances to step S135.
(Step S135) The analyzing unit 140 further determines whether node (n) is a manipulation component. When node (n) is found to be a manipulation component, the process proceeds to step S136. Otherwise, the process advances to step S137.
(Step S136) For the manipulation component node (n), the analyzing unit 140 calls a grouping routine for manipulation component nodes. Details of this grouping routine will be described later with reference to
(Step S137) The analyzing unit 140 further determines whether node (n) is a parallel branch node. When node (n) is found to be a parallel branch node, the process proceeds to step S138. Otherwise, the process advances to step S139.
(Step S138) For the parallel branch node (n), the analyzing unit 140 calls a grouping routine for parallel branch. Details of this grouping routine will be described later with reference to
(Step S139) The analyzing unit 140 further determines whether node (n) is a conditional branch node. When node (n) is found to be a conditional branch node, the process proceeds to step S140. Otherwise, the process advances to step S141.
(Step S140) For the conditional branch node (n), the analyzing unit 140 calls a grouping routine for conditional branch. Details of this grouping routine will be described later with reference to
(Step S141) The analyzing unit 140 further determines whether node (n) is an end node. When node (n) is found to be an end node, the grouping process of FIG. 15 is closed. Otherwise, the process proceeds to step S142.
(Step S142) The analyzing unit 140 increments n by one and moves the process back to step S132 to process the next node.
The above steps permit the node grouping process to invoke appropriate routines depending on the type of nodes. The following description will provide details of each node type-specific grouping routine.
Described in the first place is a grouping routine for manipulation component nodes.
(Step S151) The analyzing unit 140 obtains information about node (n+1) from the node-vs-server management table 141.
(Step S152) The analyzing unit 140 compares node (n) with node (n+1) in terms of their associated execution servers. When these two execution servers are identical, the process advances to step S153. When they are different servers, the process skips to step S156. It is noted that when either or both of the two nodes do not care about selection of their execution servers, the analyzing unit 140 behaves as if their execution servers are identical.
(Step S153) Since the two nodes are found to share the same execution server, the analyzing unit 140 now consults the group management table to determine whether there is an existing group including node (n) as a member. When such an existing group is found, the process advances to step S154. When no such groups are found, the process proceeds to step S155.
(Step S154) The analyzing unit 140 adds node (n+1) to the group of node (n) in the group management table. The process then proceeds to step S156.
It is noted here that node (n) may belong to two or more groups in some cases. See, for example,
(Step S155) The analyzing unit 140 produces a group from node (n) and node (n+1) and registers this new group with the group management table. The process then proceeds to step S156.
(Step S156) The analyzing unit 140 increments n by one and exits from the grouping routine for manipulation component nodes.
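The grouping routine for manipulation component nodes (steps S151 to S156) can be sketched as follows. Here, table is assumed to map node numbers to rows of the node-vs-server management table, groups maps group IDs to their member nodes and associated server, and “*” marks a node that does not care which server executes it; all of these structures are illustrative assumptions.

```python
def servers_match(a, b):
    """True when two execution servers coincide or when either is a do-not-care entry ("*")."""
    return a == "*" or b == "*" or a == b

def group_pair(n, n_next, table, groups):
    """Steps S151-S156: put node (n) and its succeeding node into one group when possible."""
    server_n = table[n]["execution server"]
    server_next = table[n_next]["execution server"]
    if not servers_match(server_n, server_next):           # S152: different servers, no grouping
        return
    for group in groups.values():                          # S153: is node (n) already in a group?
        if n in group["nodes"]:
            group["nodes"].append(n_next)                  # S154: extend that existing group
            return
    server = server_n if server_n != "*" else server_next
    groups[f"G{len(groups) + 1}"] = {"nodes": [n, n_next], "server": server}   # S155: new group

# Tiny usage example with hypothetical data.
table = {1: {"execution server": "execsv01"}, 2: {"execution server": "execsv01"}}
groups = {}
group_pair(1, 2, table, groups)
print(groups)   # {'G1': {'nodes': [1, 2], 'server': 'execsv01'}}
```

For brevity, the sketch adds the succeeding node only to the first matching group, whereas the routine described above allows a node to belong to two or more groups.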
The following description will now explain another grouping routine called to handle parallel branches.
(Step S161) The analyzing unit 140 assigns the value of n+1 to another variable m, where m is an integer greater than zero.
(Step S162) The analyzing unit 140 substitutes m for n.
(Step S163) There is a plurality of nodes (n) originating different branch routes, which are subject to the following steps. At step S163, the analyzing unit 140 selects one of these pending branch routes. For example, the analyzing unit 140 is configured to select such routes in ascending order of the number of nodes included in their parallel sections. The analyzing unit 140 consults the node-vs-server management table 141 to obtain information about node (n) in the selected route.
(Step S164) The analyzing unit 140 subjects node (n) in the selected route to a process of node grouping by calling the foregoing grouping routine for manipulation component nodes.
(Step S165) The analyzing unit 140 determines whether it has finished processing of the last node in the selected route. When the last node is done, the process advances to step S166. When there remains a pending node in the selected route, the process goes back to step S164.
(Step S166) The analyzing unit 140 determines whether there is any other parallel branch route to select. When a pending route is found, the process goes back to step S162. When there are no pending routes, the analyzing unit 140 exits from the grouping routine for parallel branch.
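Continuing the previous sketch, the grouping routine for a parallel branch (steps S161 to S166) might be outlined as follows, with routes assumed to be a list of branch routes given as node-number sequences.

```python
def group_parallel_branch(routes, table, groups):
    """Steps S161-S166: group each parallel branch route independently."""
    for route in sorted(routes, key=len):                 # S163: shorter parallel sections first
        for n, n_next in zip(route, route[1:]):           # S164-S165: walk along the selected route
            group_pair(n, n_next, table, groups)          # reuse the manipulation-component routine
```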
The following description will now explain yet another grouping routine called to handle conditional branches.
(Step S171) The analyzing unit 140 assigns the current value of n to another variable m.
(Step S172) The analyzing unit 140 obtains information about node (n−1), i.e., the node immediately before the conditional branch node, from the node-vs-server management table 141. This node is referred to herein as “node W.”
(Step S173) There is a plurality of routes following the conditional branch node, which are subject to the following steps. At step S173, the analyzing unit 140 selects one of these pending routes. For example, the analyzing unit 140 is configured to select such routes in ascending order of the number of nodes included in them.
(Step S174) The analyzing unit 140 consults the node-vs-server management table 141 to obtain information about node (n+1) in the selected route. It is noted that the nodes concerned in this step S174 include, not only the nodes right on the selected route, but also the rejoining node (e.g., node 59m in
(Step S175) The analyzing unit 140 determines whether the group management table has an existing group including node W as its member. When such a group is found, the process advances to step S176. Otherwise, the process proceeds to step S180.
(Step S176) The analyzing unit 140 consults the group management table to retrieve information about the execution server associated with the existing group of node W.
(Step S177) The analyzing unit 140 compares the execution server associated with the group of node W with the execution server associated with node (n+1). When these two execution servers are identical, the process advances to step S178. When they are different servers, the process proceeds to step S184. It is noted that when either or both of the compared group and node (n+1) do not care about selection of their execution servers, the analyzing unit 140 behaves as if their execution servers are identical, and thus advances the process to step S178. It is further noted that the analyzing unit 140 determines that their execution servers are different when step S174 has failed to obtain information about node (n+1).
(Step S178) The analyzing unit 140 adds node (n+1) to the group of node W as a new member.
(Step S179) The analyzing unit 140 increments n by one and moves the process back to step S174.
(Step S180) Since there is no existing group that includes node W, the analyzing unit 140 consults the group management table to retrieve information about the execution server associated with node W.
(Step S181) The analyzing unit 140 compares the execution server associated with node W with the execution server associated with node (n+1). When these two execution servers are identical, the process advances to step S182. When they are different servers, the process proceeds to step S184. It is noted that when either or both of the two nodes do not care about selection of their execution server, the analyzing unit 140 behaves as if their execution servers are identical. It is further noted that the analyzing unit 140 determines that they are different servers when step S174 has failed to obtain information about node (n+1).
(Step S182) The analyzing unit 140 produces a group from node W and node (n+1).
(Step S183) The analyzing unit 140 increments n by one and moves the process back to step S174.
(Step S184) This step has been reached because the execution server associated with node W or the group including node W is found to be different from the one associated with node (n+1). The analyzing unit 140 increments n by one and advances the process to step S185.
(Step S185) The analyzing unit 140 determines whether it has finished processing as to the selected route. For example, the analyzing unit 140 determines whether node (n) is the last node of the selected route. If this test returns true, it means that all processing of the route has been finished. When this is the case, the process advances to step S187. Otherwise, the process proceeds to step S186.
(Step S186) The analyzing unit 140 subjects the nodes in the selected route to a grouping routine for manipulation component nodes (see
(Step S187) The analyzing unit 140 determines whether it has finished all routes derived from the conditional branch node. When all routes are done, the analyzing unit 140 exits from the current grouping routine. When there is a pending route, the process proceeds to step S188.
(Step S188) The analyzing unit 140 substitutes m for n, thus resetting node (n) to the conditional branch node. The process then returns to step S173.
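The grouping routine for a conditional branch (steps S171 to S188) is sketched below in condensed form, again reusing servers_match() and group_pair() from the earlier sketch. node_w is the node immediately before the branch, branch_node is the conditional branch node itself, and routes lists the post-branch routes (including the rejoining node) as node-number sequences. The sketch omits some details of the full routine, for example adding the branch node itself to an already existing group of node W.

```python
def find_group_of(node, groups):
    """Return the ID of a group that contains the node, or None."""
    for group_id, group in groups.items():
        if node in group["nodes"]:
            return group_id
    return None

def group_conditional_branch(node_w, branch_node, routes, table, groups):
    for route in sorted(routes, key=len):                              # S173: shorter routes first
        group_id = find_group_of(node_w, groups)                       # S175
        base_server = (groups[group_id]["server"] if group_id
                       else table[node_w]["execution server"])         # S176 / S180
        i = 0
        # S177/S181: absorb leading nodes of the route whose server matches node W (or its group)
        while i < len(route) and servers_match(base_server, table[route[i]]["execution server"]):
            if group_id is None:                                       # S182: new group from node W,
                group_id = f"G{len(groups) + 1}"                       # the branch node and node (n+1)
                groups[group_id] = {"nodes": [node_w, branch_node], "server": base_server}
            groups[group_id]["nodes"].append(route[i])                 # S178
            i += 1
        for n, n_next in zip(route[i:], route[i + 1:]):                # S185/S186: group the rest of
            group_pair(n, n_next, table, groups)                       # the route as usual
```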
The above-described grouping routines permit the analyzing unit 140 to sort the nodes in an automated flow into groups. The results of these routines are compiled into a group management table and saved in the memory 102 or the like.
The group ID field contains a group ID for uniquely identifying a particular group, and the node name field enumerates the nodes constituting the group. The execution server field contains the name of an execution server that is associated with these member nodes.
Referring now to
The analyzing unit 140 in this case calls the foregoing grouping routine for manipulation component nodes (see
The analyzing unit 140 calls the foregoing grouping routine for manipulation component nodes (see
Operations of three nodes 58d to 58f are executed by an execution server [A]. Operations of two nodes 58g and 58h are executed by another execution server [B]. Operations of two nodes 58i and 58k are executed by yet another execution server [C].
Since the automated flow 58 includes two parallel branch routes in the middle of its execution, the analyzing unit 140 calls the process of
Operations of three nodes 59b and 59d to 59g are executed by an execution server [A]. Operations of two nodes 59j and 59k are executed by another execution server [B]. Operations of four nodes 59h, 59i, 59l and 59n are executed by yet another execution server [C].
The above automated flow 59 makes a conditional branch in the middle of its execution, and which route the automated flow 59 takes is not known until the branch node is reached at runtime. In view of this, the analyzing unit 140 compares the execution server of the pre-branch node 59b with the execution server of each post-branch node 59d, 59g, and 59j. If a match is found, the analyzing unit 140 produces a group from the pertinent nodes.
Referring to the example of
Referring again to the second route, two nodes 59h and 59i share the same execution server and thus form their own group [G8]. Similarly, two nodes 59j and 59k on the third route form their own group [G9] since they share the same execution server.
The node 59n next to the rejoining point of the above three routes is assigned an execution server [C], which matches with the execution server of the last node 59i of the second route. Accordingly, the analyzing unit 140 adds this node 59n to the same group [G8] of the last node 59i of the second route. The execution server of the node 59n also matches with that of the last node 59l on the third route. Accordingly, the analyzing unit 140 further produces a group [G10] from these two nodes 59l and 59n. It is noted that the node 59n belongs to two groups [G8] and [G10]. The rejoining node 59m may be included in, for example, the group of its subsequent node 59n.
The resultant groups in the example of
As can be seen from the above description, the proposed management server 100 is configured to produce a group from nodes that are assigned the same server for their execution. The execution servers work more efficiently because of their less frequent communication with the management server 100.
(b) Performance Analysis
This section provides the details of performance analysis.
(Step S201) The analyzing unit 140 obtains a communication count value with respect to each manipulation component in a specific automated flow. The analyzing unit 140 assigns the obtained count value to a variable i.
For example, the analyzing unit 140 has a communication count previously defined for each type of manipulation components to indicate the number of communication events between the management server 100 and a managed server pertinent to the processing operation of that manipulation component. The definitions are previously stored in, for example, the memory 102 or HDD 103 of the management server 100. The analyzing unit 140 identifies the type of a manipulation component in question and retrieves its corresponding communication count value from the memory 102 or the like. The user is allowed to set up the communication counts specific to manipulation component types. For example, the user may create his or her own manipulation components and define their corresponding communication counts.
(Step S202) The analyzing unit 140 consults the CMDB 120 to obtain data of transmission rates at which the management server 100 communicates with managed servers to manipulate them in an automated flow. These transmission rates, Sa, are used when the management server 100 is assigned to managed servers for their manipulation.
(Step S203) The analyzing unit 140 consults the CMDB 120 to obtain data of transmission rates at which execution servers communicate with managed servers to manipulate them in an automated flow. These transmission rates, Sb, are used when execution servers are assigned to managed servers for their manipulation.
(Step S204) The analyzing unit 140 further obtains data from the CMDB 120 as to the transmission rates Sc of links between the management server and execution servers.
(Step S205) For each node and each group, the analyzing unit 140 calculates communication performance in the case where the management server 100 directly manipulates managed servers. This calculation applies to all managed servers to be manipulated in the automated flow. More specifically, the following formulas give the communication performance:
(i) Performance of Solitary Node
Solitary nodes are nodes that are not included in any groups. Let X represent the length of processing packets of a manipulation component corresponding to such a node. The communication performance of this solitary node in question is then calculated with the following formula (1):
X/Sa×i (1)
where Sa represents the transmission rate of a communication link between the management server and the managed server to be manipulated at the node in question.
(ii) Performance of Node Group
Let k, an integer greater than zero, be the number of nodes in a group, and {X1, X2, . . . , Xk} represent the lengths of processing packets of manipulation components corresponding to k nodes in the group in question. The communication performance of the group is calculated with the following formula (2):
{(X1/Sa)×i}+{(X2/Sa)×i}+ . . . +{(Xk/Sa)×i} (2)
where Sa represents the transmission rates of communication links between the management server and the managed servers to be manipulated in the group of nodes in question.
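A minimal Python sketch of formulas (1) and (2), assuming that packet lengths and transmission rates are given in consistent units (for example, bytes and bytes per second) and that the function names are hypothetical:

    def solitary_node_time_via_management(x, sa, i):
        # Formula (1): (X/Sa) x i, the time spent exchanging i packets of
        # length X with the managed server at transmission rate Sa.
        return (x / sa) * i

    def group_time_via_management(packet_lengths, sa, i):
        # Formula (2): the sum of (Xj/Sa) x i over the k nodes of the group.
        return sum((x / sa) * i for x in packet_lengths)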
(Step S206) The analyzing unit 140 calculates communication performance in the case where execution servers manipulate managed servers after receiving the control to execute the automated flow. This calculation is performed for every combination of an execution server and a managed server. More specifically, the following formulas give the communication performance:
(i) Performance of Solitary Node
Let Y be the length of a flow execution request packet to an execution server associated with a node, and Z be the length of a flow completion packet from the execution server back to the management server. X represents, as before, the length of processing packets of the manipulation component corresponding to the node. The communication performance of the node is then calculated with the following formula (3):
Y/Sc+(X/Sb)×i+Z/Sc (3)
where Sb represents the transmission rate of a communication link between the execution server associated with the node in question and the managed server to be manipulated at that node, and Sc represents the transmission rate of a communication link between the management server and the noted execution server.
(ii) Performance of Node Group
The communication performance of a group is calculated with the following formula (4):
{Y/Sc}+{(X1/Sb)×i}+{(X2/Sb)×i}+ . . . +{(Xk/Sb)×i}+{Z/Sc} (4)
where Sb represents the transmission rates of communication links between the execution server associated with the group in question and managed servers to be manipulated in that group, and Sc represents the transmission rate of a communication link between the management server and the noted execution server.
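A corresponding sketch for formulas (3) and (4), under the same assumptions and with hypothetical function names:

    def solitary_node_time_via_execution_server(x, sb, i, y, z, sc):
        # Formula (3): request to the execution server (Y/Sc), the manipulation
        # itself ((X/Sb) x i), and the completion notice back (Z/Sc).
        return (y / sc) + (x / sb) * i + (z / sc)

    def group_time_via_execution_server(packet_lengths, sb, i, y, z, sc):
        # Formula (4): one request and one completion notice bracket all k
        # manipulations performed by the single execution server.
        return (y / sc) + sum((x / sb) * i for x in packet_lengths) + (z / sc)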
The nodes in a group are executed by a single execution server to which the management server passes its control. This means that the management server has only to send one flow execution request to the execution server and receive one flow completion notice from the same, in spite of multiple operations executed during the flow.
The above-described calculation of communication performance uses packet length parameters that have previously been measured and recorded. For example, packet lengths may be measured from the following packets: processing packets specific to each manipulation component, flow execution request packets to execution servers, and flow completion packets from execution servers to the management server. It is also possible to update existing records of packet lengths with new values measured during the course of manipulations performed by the system. This dynamic in-service update of packet lengths improves the accuracy of performance analysis.
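The embodiment does not prescribe how the records are refreshed; one simple possibility, shown here only as an assumption, is to blend each newly measured packet length into the stored value:

    def update_recorded_length(recorded, measured, weight=0.2):
        # Exponentially weighted update: recent measurements gradually refresh
        # the stored packet length without discarding past observations.
        return (1.0 - weight) * recorded + weight * measured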
The above description has explained how the communication performance is calculated. Referring to formulas (1) to (4) discussed above, smaller output values of the formulas suggest higher communication performance. The calculated values of communication performance are recorded in the memory 102 in the form of, for example, a communication performance management table.
(c) Server Assignment
With the communication performance calculated by the analyzing unit 140, the execution control unit 150 determines which server to assign for execution of each node or group of nodes in the automated flow of interest.
(Step S301) The execution control unit 150 consults the communication performance management table 143 to obtain data of communication performance.
(Step S302) The execution control unit 150 selects either the management server 100 or the execution server, whichever exhibits the higher communication performance. The execution control unit 150 makes this selection for each node or group listed in the communication performance management table 143 and assigns the selected server to the pertinent node or group. The assigned servers will execute the processing operations of nodes or groups and are thus referred to herein as the “operation-assigned servers.”
For example, the execution control unit 150 compares the management server 100 with the execution server associated with the node or group in question in terms of their communication performance. When the execution server has a higher communication performance (or smaller calculated value), the execution control unit 150 assigns the execution server to the node or group as its operation-assigned server. When the management server 100 has a higher communication performance (or smaller calculated value), the execution control unit 150 assigns the management server 100 to the node or group as its operation-assigned server. The execution control unit 150 then records this determination result in the memory 102 in the form of, for example, an operation-assigned server management table.
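The selection at step S302 therefore reduces to picking whichever calculated value is smaller. A sketch, assuming the two values computed with the formulas above are available for the node or group in question (the function name is hypothetical):

    def assign_operation_server(time_via_management, time_via_execution_server):
        # A smaller calculated value means higher communication performance.
        if time_via_execution_server < time_via_management:
            return "execution server"
        return "management server"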
(d) Execution of Automated Flow
This section describes a process of executing an automated flow. The execution of an automated flow may actually be either or both of: (1) execution by the execution control unit 150 in the management server 100, and (2) execution by execution servers to which the management server 100 has passed the control.
(Step S401) The execution control unit 150 consults the process definition storage unit 130 to obtain information about the next node it executes in the automated flow.
(Step S402) The execution control unit 150 determines whether the node in question is an end node. When it is an end node, the current process of automated flow execution is closed. Otherwise, the process advances to step S403.
(Step S403) The execution control unit 150 consults the operation-assigned server management table 144 and group management table 142 to obtain information about the operation-assigned server assigned to the node in question. The action taken at this step depends on whether the node in question is a solitary node or a member of a group. In the former case, the execution control unit 150 consults the operation-assigned server management table 144 to see the execution server or management server specified in the operation-assigned server field relevant to the node. The latter case (i.e., when the next node belongs to a group) is known to the execution control unit 150 through the group management table 142. That is, the execution control unit 150 identifies the group containing the node and obtains its group ID from the group management table 142. The execution control unit 150 then uses the group ID to look up its associated operation-assigned server (management server or execution server) in the operation-assigned server management table 144.
(Step S404) The execution control unit 150 currently owns the control in the process of automated flow execution. The execution control unit 150 now determines whether to pass this control to an execution server. For example, the execution control unit 150 determines to pass the control to an execution server when that server is assigned to the node as its operation-assigned server, in which case the process advances to step S406. When, on the other hand, the management server 100 is assigned as the operation-assigned server, the execution control unit 150 keeps the control and advances the process to step S405.
(Step S405) Inside the management server 100, the execution control unit 150 requests the flow execution unit 160 to execute operation corresponding to the node. In the case where the node belongs to a group, the execution control unit 150 causes the flow execution unit 160 to execute all nodes in that group. The flow execution unit 160 executes such node operations as instructed by the execution control unit 150. The process then returns to step S401.
(Step S406) The execution control unit 150 requests the assigned execution server to execute the operation of the current node or group. For example, the execution control unit 150 inserts an extra node next to the current node or group of nodes in the automated flow, so that the execution server will return the control to the management server 100 when the requested operation is finished. The execution control unit 150 requests the execution server to execute from the current node until the added control node is reached.
(Step S407) The execution control unit 150 determines whether the communication with the execution server at step S406 was successful. When the communication is successful, the process advances to step S408. When the execution control unit 150 is unable to reach the execution server, the process branches to step S410.
(Step S408) The execution control unit 150 waits for a completion notice from the execution server.
(Step S409) The execution control unit 150 receives a completion notice from the execution server. The process then goes back to step S401.
(Step S410) Unable to reach the execution server, the execution control unit 150 performs the foregoing process definition analysis again.
(Step S411) The execution control unit 150 performs the foregoing performance analysis again.
(Step S412) The execution control unit 150 performs the foregoing server assignment again.
The above-described steps permit the proposed system to execute operations of each node in a given automated flow by using efficient servers. The management server 100 delegates some node operations to execution servers when they are expected to perform more efficiently, and the assigned execution servers execute requested operations accordingly.
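Steps S401 through S412 can be pictured as the control loop sketched below. The helper callables are hypothetical stand-ins for the units described above, not actual interfaces of the embodiment:

    def run_automated_flow(flow, helpers):
        # 'helpers' bundles hypothetical callables standing in for the process
        # definition storage unit, the flow execution unit, and related tables.
        node = helpers.next_node(flow)                       # step S401
        while not helpers.is_end_node(node):                 # step S402
            server = helpers.assigned_server(node)           # step S403 (node or its group)
            if server == "management server":                # step S404
                helpers.execute_locally(node)                # step S405
            elif helpers.request_execution(server, node):    # steps S406-S407
                helpers.wait_for_completion(server)          # steps S408-S409
            else:
                # Steps S410-S412: the execution server is unreachable, so the
                # analysis and server assignment are redone before continuing.
                helpers.reanalyze_and_reassign(flow)
            node = helpers.next_node(flow)                   # back to step S401
        # An end node has been reached; the automated flow is complete.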
(Step S421) The flow execution unit 230 receives an execution request from the management server 100. This request includes information about a special node that defines a process of returning the control back to the management server 100. This returning node has been inserted in the automated flow, and its information includes the position of the inserted node. The flow execution unit 230 keeps this information in its local memory.
(Step S422) The flow execution unit 230 reads a pertinent automated flow out of the process definition storage unit 220 and locates a node to be executed. The flow execution unit 230 executes the operation of that node.
(Step S423) Upon execution of one node at step S422, the flow execution unit 230 determines whether the next node is to return the control to the management server 100. When the node in question is found to be the returning node, the process skips to step S426. Otherwise, the process advances to step S424.
(Step S424) The flow execution unit 230 determines whether the next node is a conditional branch node (i.e., a node at which the work flow takes one of a plurality of routes). When the node in question is a conditional branch node, the process advances to step S425. Otherwise, the process returns to step S422.
(Step S425) The flow execution unit 230 determines which destination node the conditional branch node chooses. When the chosen destination node is to be executed by the execution server 200 itself, the process returns to step S422. Otherwise, the process advances to step S426.
(Step S426) The execution server 200 is here at step S426 because the returning node has been reached at step S423, or because the destination node of the conditional branch is found at step S425 to be executed by some other server. The flow execution unit 230 thus returns the control to the management server 100 by, for example, sending a completion notice for the requested operation to the management server 100.
The completion notice may include a unique identifier of the destination node of the conditional branch when this is the case, thereby informing the management server 100 which node in the automated flow has returned the control. The unique identifier may be, for example, an instance ID, which has previously been assigned for use during execution of the automated flow.
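On the execution server side, steps S421 through S426 could be sketched as follows, again with hypothetical helper names passed in as callables:

    def execute_delegated_nodes(request, helpers):
        # 'helpers' bundles hypothetical callables standing in for the flow
        # execution unit 230 and the process definition storage unit 220.
        returning_node = request["returning_node"]            # step S421
        node = request["first_node"]
        while True:
            helpers.execute_node(node)                        # step S422
            nxt = helpers.next_node_after(node)
            if nxt == returning_node:                         # step S423
                break
            if helpers.is_conditional_branch(nxt):            # step S424
                nxt = helpers.chosen_destination(nxt)         # step S425
                if not helpers.executed_by_this_server(nxt):
                    break
            node = nxt                                        # continue at step S422
        # Step S426: return the control; the notice may carry the identifier
        # of the destination node when the return happens at a branch.
        helpers.send_completion_notice()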
(e) Advantages of Second Embodiment
As can be seen from the above description, the proposed system executes an automated flow in an efficiently distributed manner. This is because the second embodiment determines which servers are to execute processing operations by considering not only the transmission rates between the management server 100 and the execution servers, but also the transmission rates between the execution servers and the managed servers.
The enhancement of efficiency based on consideration of transmission rates is expected to work well, particularly when a large amount of data (e.g., log files) has to be transported over networks. In such cases, the throughput of the system may vary depending on its communication performance.
The total processing time of an automated flow may be reduced by distributing its workload across multiple servers. Given the significance of communication performance noted above, however, the expected effect of such load distribution is limited if it takes into consideration only the load conditions of CPU and memory resources in the servers. The second embodiment therefore distributes the processing workload with the communication performance of servers taken into account, so that the processing operations in an automated flow can be executed efficiently even when they involve the transfer of massive data.
The second embodiment is also capable of combining a plurality of successive nodes into a single group and requesting execution of these nodes collectively, when it is efficient for a single execution server to execute such node operations. This feature reduces communication events between the management server and execution servers, thus contributing to more efficient processing operations.
The completion notice from the execution server 200a causes the management server 100 to advance to a subsequent node 56d in the automated flow 56, at which the management server 100 sends an execution request for the node 56d to another execution server 200b. The requested execution server 200b then executes operations defined at the node 56d, including manipulation of a managed server 45c, and returns a completion notice to the management server 100.
The completion notice from the execution server 200b causes the management server 100 to advance the execution to a subsequent node 56e in the automated flow 56, at which the management server 100 sends an execution request for group [G2] to yet another execution server 200c to delegate the operations of two nodes 56e and 56f. The requested execution server 200c then executes operations including manipulation of managed servers 45d and 45e and returns a completion notice to the management server 100.
As can be seen from the above, the management server 100 sends an execution request to each of the execution servers 200a, 200b, and 200c and receives a completion notice from each of them, once per delegated set of processing operations. Without node grouping, the illustrated automated flow 56 would produce five execution requests and five completion notices transmitted back and forth. The node grouping of the second embodiment reduces them to three execution requests and three completion notices. The reduced transmissions lead to a shorter total processing time of the automated flow.
The second embodiment is designed to rebuild the server assignment schedule automatically upon detection of a connection failure with an assigned execution server. This feature helps in the case where, for example, a distributed processing system is scheduled to run at nighttime. That is, even if some of the originally scheduled execution servers have failed, the system would still be able to finish the automated flow by the next morning.
Various embodiments and their variations have been discussed above. According to one aspect of the embodiments, the proposed techniques enable more efficient control of devices.
All examples and conditional language provided herein are intended for the pedagogical purposes of aiding the reader in understanding the invention and the concepts contributed by the inventor to further the art, and are not to be construed as limitations to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although one or more embodiments of the present invention have been described in detail, it should be understood that various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.
Number | Date | Country | Kind
2013-132543 | Jun. 2013 | JP | national