The present invention relates to a technique for optimizing a model variable serving as a machine learning target.
In recent years, extraction of meaningful information from data by using a machine learning framework such as deep learning has been actively performed. In such a framework, a model is usually trained after the obtained data are collected in one place.
However, data used for learning cannot always be collected in one place. (1) For example, when the data used for learning is highly private and confidential, such as data related to medical care, and cannot be provided to the outside, the data cannot be collected in one place. Further, (2) for example, when the number of devices storing the data used for learning is very large, as with data held in cars or smartphones, the network may be congested by the data transfer, and in this case the data cannot be collected in one place. Furthermore, (3) there are examples in which strict regulations are imposed on the handling of data, such as the General Data Protection Regulation (GDPR) in the EU, and in some cases the data cannot be collected in one place. That is, from the viewpoints of privacy protection, increase in the amount of data, and legal regulation, an era in which learning is performed in a distributed situation is expected to come.
Therefore, at present, concepts such as edge computing, which enables learning even when data cannot be collected in one place, have been studied (see NPL 1). In edge computing, a model is learned in a state where the data used for learning remain accumulated in computers (also referred to as nodes) distributed over the network. Basic requirements for edge computing are the following.
(1) A model equivalent to a model learned after all data are collected in one place (hereinafter referred to as a global model) is obtained.
In this case, it is required for the edge computing to satisfy the following three requirements.
(2) An arbitrary network structure can be used so that the scale of data processing can be extended arbitrarily. Examples of the network structure include a distributed network provided with a server and a P2P-type communication network.
(3) Even if statistically biased data (referred to as non-uniform data) is accumulated in each node, learning is stably performed.
(4) Learning is stably performed without synchronous communication among all nodes constituting the network. What is communicated in this case is not the data accumulated in each node but auxiliary information such as a model update difference.
Learning in edge computing will be described below. Consider a network in which n nodes (n is an integer of 2 or more) are connected. Since the network structure can be arbitrary, it is described by using a graph, and a network for performing edge computing is represented as a graph G(N, ε). Here, N={1, 2, . . . , n} is an index set representing the nodes constituting the network, and ε={1, 2, . . . , E} (E is an integer of 1 or more) is an index set representing the edges constituting the network. Note that each node is connected to one or more nodes; that is, it is assumed that there is no isolated node. Also, εi={j∈N|(i, j)∈ε, j≠i} represents the index set of nodes to which the i-th node is connected.
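For concreteness, the following is a minimal sketch, in Python, of the graph bookkeeping described above; the node count, the ring topology, and all variable names are illustrative assumptions, not part of the specification.

```python
from collections import defaultdict

n = 4                                     # number of nodes (assumed example)
edges = [(1, 2), (2, 3), (3, 4), (4, 1)]  # assumed ring topology

nodes = set(range(1, n + 1))              # N = {1, ..., n}
neighbors = defaultdict(set)              # epsilon_i: neighbor index set of node i
for i, j in edges:
    neighbors[i].add(j)
    neighbors[j].add(i)

# Each node must be connected to one or more nodes (no isolated node).
assert all(neighbors[i] for i in nodes)
```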
Hereinafter, the model variable learned at the i-th node is defined as wi. Further, a learning data set which is a set of learning data accumulated in the i-th node is defined as xi.
From the requirement (1), the model to be optimized has the same structure and dimension in all nodes. That is, the model variables wi (i=1, 2, . . . , n) are learned so as to satisfy wi=wj for arbitrary i, j (i≠j). Also, from the requirement (3), the learning data set xi and the learning data set xj (i≠j) generally differ in the number of data and in the statistical properties of the data.
It is assumed that the calculation capability of each node may differ. However, in the following, for simplicity of description, it is assumed that, in learning, auxiliary information such as a variable update difference is exchanged once over each edge incident to a specific node while the variable is updated K times at that node. Hereinafter, this unit of a series of update and exchange processing is referred to as a round. The exchange timing, however, may be random. It is assumed that the round is executed R times.
Hereinafter, a set representing the number of times of execution of the round is represented as {1, 2, . . . , R}, and a set representing the number of times of execution of the update processing is represented as {1, 2, . . . , K}. Further, εir,k (r∈{1, 2, . . . , R}, k∈{1, 2, . . . , K}) represents the index set of nodes with which the i-th node communicates in the k-th update processing in the r-th round.
In the learning by edge computing, a model minimizing a cost function f is searched for under the condition that wi=wj is satisfied for any i, j (i≠j), that is, that the model variables of all nodes match.
Hereinafter, the cost function will be described. It is assumed that the neural network structure and the definition of the cost function are common to all nodes; thus, if fi is the cost function in the i-th node, fi=fj is satisfied for arbitrary i and j (i≠j). In addition, the parameter of the cost function fi is the model variable wi to be learned. The cost function fi is appropriately designed for each application field, such as image classification, noise removal, image generation, voice recognition, and abnormality detection, and an arbitrary differentiable function can generally be used as the cost function fi. That is, as long as the cost function fi is differentiable, it may be either a convex function or a non-convex function.
Here, as a specific example, a cost function using deep learning will be described. Cost functions designed using deep learning have been widely used in recent years, and such a cost function is a differentiable non-convex function. This will be described by way of an example of image classification. An image classification model is a model (that is, a function) for obtaining an output value yi from an input image xi through a neural network (that is, a combination of multi-layer linear transformations and non-linear transformations). When there are C classes to be classified, the output value yi is a C-dimensional one-hot vector representing the existence probability of each class. Note that, in a one-hot vector, an element value close to 0 indicates that the input does not belong to the corresponding class, an element value close to 1 indicates that it does, and the class having the maximum value is normally used as the classification result. Then, a function outputting a scalar evaluation value is defined as an evaluation function, and the function obtained by combining the evaluation function with the function of the image classification model is defined as the cost function fi. In classification problems including image classification, a cross entropy function is often used as the evaluation function.
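As an illustration of such a cost function, the following sketch combines a linear model, a softmax output, and the cross entropy evaluation function. The function and variable names are hypothetical, and a real image classifier would use a multi-layer network rather than the single linear transformation assumed here.

```python
import numpy as np

def cost_fi(W, b, x, y_onehot):
    """Cross-entropy cost of a single-layer classifier for one sample.

    W, b: model variables; x: input vector; y_onehot: C-dimensional one-hot label.
    """
    logits = W @ x + b                 # linear transformation
    p = np.exp(logits - logits.max())  # numerically stable softmax
    p /= p.sum()                       # existence probability for each class
    return -np.sum(y_onehot * np.log(p + 1e-12))  # cross entropy
```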
w=[w1T, . . . , wnT]T is set, and the cost function f is defined by the following expression.
In addition, when gi=∇fi(wi) is set, gi is a function representing the gradient of the cost function fi. When the gradient of the cost function fi is calculated on a mini-batch selected as a part of the learning data set xi accumulated in the i-th node, the gradient becomes a random variable. Therefore, gi may be referred to as a stochastic gradient of the cost function fi. In deep learning, a mini-batch is selected to improve generalization for the non-convex function, and learning is advanced with a small step size while the gradient fluctuates.
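A stochastic gradient of this kind can be sketched as follows; `grad_fi`, the batch size, and the data layout are assumptions made only for illustration.

```python
import numpy as np

def stochastic_gradient(w, data_i, grad_fi, batch_size=32, rng=None):
    """Gradient of f_i averaged over a mini-batch drawn from the local data set x_i."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(data_i), size=batch_size, replace=False)
    return np.mean([grad_fi(w, data_i[k]) for k in idx], axis=0)
```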
A method of solving an optimization problem for minimizing the cost of the following expression will be described below.
As methods for solving this optimization problem, there are, for example, methods called (1) average consensus building and (2) the stochastic variance reduced gradient method (SVRG). Examples of average consensus building include DSGD, Gossip SGD, and FedAvg. Examples of the stochastic variance reduced gradient method include SCAFFOLD and GT-SVR.
First, the DSGD will be described as an example of average consensus building.
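The specification's own DSGD expressions are given as figures and are not reproduced here; the following is a hedged sketch of one DSGD update based on the standard algorithm in the literature, in which each node averages its model with those of its neighbors and then takes a local stochastic-gradient step. The mixing weights and step size are assumptions.

```python
def dsgd_step(w, grads, neighbors, mixing, mu=0.001):
    """One DSGD update for all nodes.

    w[i]: model of node i; grads[i]: stochastic gradient g_i;
    mixing[i][j]: averaging weight of node j at node i (assumed doubly stochastic).
    """
    new_w = {}
    for i in w:
        # Consensus averaging over the node itself and its neighbors,
        # followed by a local stochastic-gradient step.
        avg = mixing[i][i] * w[i] + sum(mixing[i][j] * w[j] for j in neighbors[i])
        new_w[i] = avg - mu * grads[i]
    return new_w
```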
Next, as an example of the stochastic variance reduced gradient method, a description will be given of the SCAFFOLD.
Here, −ci is a global control variable, which is an expected value of the stochastic gradient over the i-th node and the node group connected to that node, and ci is a local control variable, which is an expected value of the stochastic gradient of the i-th node; they are defined by the following expressions, respectively.
That is, the expression (1) corrects the direction of learning toward the global model by adding the difference between the global control variable −ci and the local control variable ci to the stochastic gradient gi.
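In code, the corrected step of expression (1) can be sketched as follows; this is a sketch under the definitions above, with the step size and variable names as assumptions (the global control variable −ci is written c_bar_i).

```python
def corrected_step(w_i, g_i, c_bar_i, c_i, mu=0.001):
    """SCAFFOLD-style update: add the difference between the global control
    variable (c_bar_i) and the local control variable (c_i) to the stochastic
    gradient g_i, steering learning toward the global model."""
    return w_i - mu * (g_i + (c_bar_i - c_i))
```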
Average consensus building such as DSGD operates stably when the learning data sets accumulated in the nodes are approximately statistically homogeneous. On the other hand, there is a problem in that it often does not operate well when (1) there is a statistical deviation among the learning data sets accumulated in the nodes or (2) communication between nodes is asynchronous and sparse. This problem arises because the stochastic gradient takes a considerably different value at each node, so that the direction of learning is not corrected so as to approach the global model.
In addition, since a stochastic variance reduced gradient method such as SCAFFOLD updates the variables so as to reduce the variance of the stochastic gradient, it is more resistant than average consensus building to (1) statistical deviation among the learning data sets accumulated in the nodes and (2) the case where communication between nodes is asynchronous and sparse. On the other hand, due to the update rules of the two control variables, the expected value of the stochastic gradient may not be estimated with high accuracy, and in such a case there is a problem in that the learning cannot be stably advanced and the global model cannot be learned.
Then, an object of the present invention is to provide a technique for stably optimizing a model variable so as to conform to the learning data sets in the case where the learning data sets distributed and accumulated in a plurality of nodes have a statistical deviation or the communication between nodes is asynchronous and sparse.
An aspect of the present invention is a variable optimization system that is constituted by n (n is an integer of 2 or more) nodes and optimizes a model variable by using learning data sets accumulated in the respective nodes, in which N={1, . . . , n} is an index set of nodes, i∈N is set, wi is a model variable in the i-th node, xi is a learning data set in the i-th node, fi(wi) is a cost function in the i-th node, εi is an index set of nodes to which the i-th node is connected, −ci is a global control variable in the i-th node, ci|i is a local control variable in the i-th node, yi|j and zi|j (j∈εi) are dual variables in the i-th node corresponding to the j-th node, respectively, and Ai|j (j∈εi) is a parameter matrix defined by the following expression,
R and K are integers of 1 or more, respectively, {1, 2, . . . , R} is a set representing the number of times of execution of a round, {1, 2, . . . , K} is a set representing the number of times of execution of update processing, and εir,k (r∈{1, 2, . . . , R}, k∈{1, 2, . . . , K}) is an index set of nodes with which the i-th node communicates in the k-th update processing in the r-th round, and the variable optimization system includes a model variable update unit that updates a value of the model variable wi in the i-th node by the following expression,
An aspect of the present invention is a variable optimization system that is constituted by n (n is an integer of 2 or more) nodes and optimizes a model variable by using learning data sets accumulated in the respective nodes, in which N={1, . . . , n} is an index set of nodes, i∈N is set, wi is a model variable in the i-th node, xi is a learning data set in the i-th node, fi(wi) is a cost function in the i-th node, εi is an index set of nodes to which the i-th node is connected, yi|j and zi|j (j∈εi) are dual variables in the i-th node corresponding to the j-th node, respectively, and Ai|j (j∈εi) is a parameter matrix defined by the following expression,
R and K are integers of 1 or more, respectively, {1, 2, . . . , R} is a set representing the number of times of execution of a round, {1, 2, . . . , K} is a set representing the number of times of execution of update processing, and εir,k (r∈{1, 2, . . . , R}, k∈{1, 2, . . . , K}) is an index set of nodes with which the i-th node communicates in the k-th update processing in the r-th round, and the variable optimization system includes a model variable update unit that updates a value of the model variable wi in the i-th node by the following expression,
An aspect of the present invention is a variable optimization system that is constituted by n (n is an integer of 2 or more) nodes and optimizes a model variable by using learning data sets accumulated in the respective nodes, in which N={1, . . . , n} is an index set of nodes, i∈N is set, wi is a model variable in the i-th node, xi is a learning data set in the i-th node, fi(wi) is a cost function in the i-th node, εi is an index set of nodes to which the i-th node is connected, yi|j and zi|j (j∈εi) are dual variables in the i-th node corresponding to the j-th node, respectively, and Ai|j (j∈εi) is a parameter matrix defined by the following expression,
R and K are integers of 1 or more, respectively, {1, 2, . . . , R} is a set representing the number of times of execution of a round, {1, 2, . . . , K} is a set representing the number of times of execution of update processing, and εir,k (r∈{1, 2, . . . , R}, k∈{1, 2, . . . , K}) is an index set of nodes with which the i-th node communicates in the k-th update processing in the r-th round, and the variable optimization system includes a model variable update unit that updates a value of the model variable wi in the i-th node by the following expression,
According to the present invention, even when there is a statistical deviation among the learning data sets distributed and accumulated in a plurality of nodes, or communication between nodes is asynchronous and sparse, a model variable can be stably optimized so as to conform to the learning data sets.
The following describes an embodiment of the present invention in detail. Note that constituent units having the same function are denoted with the same number, and overlapping descriptions thereof are omitted.
A notation method used in this specification will be described before each embodiment is described.
A "{circumflex over ( )}" (caret) indicates a superscript. For example, x{circumflex over ( )}y_z indicates that y_z is a superscript to x. In addition, "_" (underscore) indicates a subscript. For example, x_y{circumflex over ( )}z indicates that y{circumflex over ( )}z is a subscript to x.
Superscripts “{circumflex over ( )}” and “˜” for a certain letter x, as in “{circumflex over ( )}x” and “˜x”, should originally be written directly above “x”, but are written as “{circumflex over ( )}x” and “˜x” due to the restrictions of the descriptive notation of the specification.
As methods for solving the optimization problem, there are also methods using a primal-dual form update rule, in addition to average consensus building and the stochastic variance reduced gradient method. Examples of methods using the primal-dual form update rule include PDMM and Edge-Consensus Learning (ECL). Here, the ECL will be described with reference to NPL 1.
The optimized model variables should conform as much as possible to the learning data sets of all nodes, and it is preferable that a consensus regarding the model variables is reached between the nodes. That is, the model variables are learned so as to reduce the cost function while also being learned so as to substantially coincide with each other. Therefore, the optimization problem to be solved with respect to the model variables can be formulated as a cost minimization problem with linear constraints such as the following expression.
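The expression itself is provided in the specification as a figure; following the ECL literature, the formulation can be written, for example, as

$$\min_{w_1,\dots,w_n} \sum_{i=1}^{n} f_i(w_i) \quad \text{subject to} \quad A_{i|j}\,w_i + A_{j|i}\,w_j = 0 \quad \text{for all connected pairs } (i,j),$$

where each A_{i|j} is a signed identity matrix, so that the constraint on every edge forces w_i = w_j.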
Note that the sign of the identity matrix Ai|j in the expression (3) may be expressed as sign(Ai|j).
When the cost function f is a differentiable non-convex function, the following optimization problem in which the cost function f of the expression (2) is replaced by an upper bound function q in a quadratic form may be solved instead of solving the optimization problem of the expression (2).
Here, μ is a step size, and in the case of deep learning it is set to a sufficiently small value such as 0.001. Further, wir,k represents the value of the model variable wi in the k-th update processing in the r-th round.
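The upper bound function q is given in the specification as a figure; one standard quadratic upper bound consistent with these definitions (an assumption, not the specification's exact formula) is

$$q(w_i) = f_i(w_i^{r,k}) + \nabla f_i(w_i^{r,k})^{\mathsf{T}}(w_i - w_i^{r,k}) + \frac{1}{2\mu}\left\|w_i - w_i^{r,k}\right\|_2^2,$$

which upper-bounds f_i around w_i^{r,k} for a sufficiently small step size μ.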
Solving a dual problem instead of solving the optimization problem of the expression (4) will be considered.
Specifically, a dual variable is defined as λ=[λ1T, . . . , λnT]T (where λi=[λi|ε_i(1)T, . . . , λi|ε_i(E_i)T]T, and λi|j=λj|i for arbitrary i and j), and the dual problem related to the dual variable λ of the following expression is solved.
All the elements of the permutation matrix P are 0 or 1, and PP=I is satisfied.
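Concretely, from the constraint λi|j=λj|i above, P can be read as the permutation that exchanges the dual variables held at the two ends of each edge:

$$(P\lambda)_{i|j} = \lambda_{j|i}, \qquad PP = I .$$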
Note that q* represents the convex conjugate function of the function q, and ιker(I-P) represents an indicator function.
Derivation of a variable update rule for solving the optimization problem of the expression (5) will be considered. The dual variable λ satisfying the expression (5) is obtained when the subdifferential of the cost q*(JTATλ)+ιker(I-P)(λ) includes 0.
When the dual variables y and z of the same dimension as the dual variable λ are introduced and the expression (6) is transformed by using monotone operator splitting, the following variable update rule can be obtained.
The expression (7) is the update rule of the model variable w, the expression (8) is the update rule of the dual variable y, and the expression (9) is the update rule of the dual variable z. Also, in the expression (9), PDMM-SGD is the variable update rule obtained based on Peaceman-Rachford (P-R) type monotone operator splitting (PRS), and ADMM-SGD is the variable update rule obtained based on Douglas-Rachford (D-R) type monotone operator splitting (DRS).
When the variable update rules of the expressions (7), (8), and (9) are split into variable update rules for each node, the algorithm shown in the figure is obtained.
Since a primal-dual form update rule such as that of the ECL minimizes the cost function while constraining the models of the nodes to be equivalent, it is, like the stochastic variance reduced gradient method, highly resistant to (1) the case where there is a statistical deviation among the accumulated learning data sets and (2) the case where communication between nodes is asynchronous and sparse. However, if the step size and the intensity of the regularization term are not appropriately selected, the learning may not converge well. In fact, the ECL has a problem in that these values are determined empirically, so that the learning cannot be stably advanced.
Therefore, in order to stably advance learning by the ECL, the following methods are used.
The above three methods can be introduced into the ECL independently or in combination of two or more.
Note that, here, the local control variable is defined as ci|i.
In addition, the value of the global control variable −ci is updated by the following expression.
This expression indicates that the value of the global control variable is updated by using the values of the variables received from the nodes connected to the i-th node. Note that {i, εi} represents the union of the set {i} and the set εi. In addition, |εi| represents the number of elements of the set εi, that is, the cardinality of the set εi.
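The update expression itself is a figure; from the description above (an update using the node itself and its connected nodes, with the cardinality |εi| appearing in the expression), one consistent reading, offered here only as an assumption rather than the specification's exact formula, is

$$\bar{c}_i \leftarrow \frac{1}{|\varepsilon_i| + 1} \sum_{j \in \{i\} \cup \varepsilon_i} c_{j|j}.$$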
Further, the value of the local control variable ci|i is updated by the following expression.
This equation indicates that the value of the local control variable ci|i is updated by using the value of the variable of its own node.
The value of the stochastic gradient gi is calculated by using the mini-batch xi,MB.
The ratio πi|j is used as the weight βi|j in the i-th node corresponding to the j-th node, and the value of the model variable wi is updated. That is, whereas the ECL updates the value of the model variable wi by simple averaging without weighting, the method (3) updates the value of the model variable wi by weighted averaging using πi|j. Further, when the method (3) is used in combination with the method (1), the value of the global control variable −ci is also updated by using πi|j.
As described above, two or more of the methods (1) to (3) can be combined; an example of the combination is shown in the figure.
By appropriately introducing the methods (1) to (3) into the ECL, it was confirmed by numerical experiments that, compared with the plain ECL, learning can be performed stably without being affected by differences in the number of elements and the statistical properties of the learning data sets accumulated in the nodes. In particular, it was experimentally confirmed that the effects of the methods (1) and (2) were high.
The variable optimization system 10 will be described below with reference to the drawings.
N={1, . . . , n} is defined as an index set of nodes, and i∈N is assumed. wi is a model variable in the i-th node, xi is a learning data set in the i-th node, fi(wi) is a cost function in the i-th node, and εi is an index set of nodes to which the i-th node is connected. −ci is a global control variable in the i-th node, and ci|i is a local control variable in the i-th node. yi|j and zi|j (j∈εi) are dual variables in the i-th node corresponding to the j-th node, respectively, and Ai|j (j∈εi) is a parameter matrix defined by the following expression.
R and K are integers of 1 or more, respectively, {1, 2, . . . , R} is a set representing the number of times of execution of the round, {1, 2, . . . , K} is a set representing the number of times of execution of the update processing, and εir,k (r∈{1, 2, . . . , R}, k∈{1, 2, . . . , K}) is an index set of nodes with which the i-th node communicates in the k-th update processing in the r-th round. Hereinafter, r is referred to as a round execution number counter, and k is referred to as an update processing execution number counter. Note that r and k may also be simply referred to as counters.
An operation of the node 100 will be described in accordance with the flowchart.
In S120, the variable optimization unit 120 optimizes the model variable wi by a predetermined procedure by using the learning data set xi, and outputs the result as an output value. At that time, the variable optimization unit 120 appropriately receives predetermined data from the j-th node (where j satisfies j∈εi) by using the transmission/reception unit 180, and optimizes the model variable wi. The data received by the i-th node from the j-th node will be described later.
Hereinafter, the variable optimization unit 120 will be described with reference to the drawings.
The operation of the variable optimization unit 120 of the i-th node will be described in accordance with the flowchart.
In S121, the initialization unit 121 performs initialization processing required for optimizing the model variable wi. The contents of the initialization processing will be described below. The initialization unit 121 initializes the counters r and k by setting r←1 and k←1. In addition, the initialization unit 121 initializes the model variable wi, the temporary variable ui|j, and the dual variable zi|j (j∈εi) in the i-th node corresponding to the j-th node, using, for example, random numbers to set their initial values. Further, the initialization unit 121 initializes the global control variable −ci, the local control variable ci|i, and the local control variable ci|j in the i-th node corresponding to the j-th node, again using, for example, random numbers to set their initial values.
In S1221, the model variable update unit 1221 updates the value of the model variable wi by the following expression.
Also, when the distribution ψ of the types of learning data accumulated in the n nodes (that is, the ratio of each class) is known in advance, a mini-batch generated from the learning data set xi in accordance with the distribution ψ can be used as the mini-batch xi,MB.
The weight βi|j may be set by the initialization unit 121 in S121, for example. For example, βi|j=1/|εi| may simply be set. In addition, when the number of data di of the learning data set xi is known in advance, the weight βi|j may be set to the ratio πi|j of the number of data accumulated in the j-th node to the total number of data accumulated in the i-th node and all nodes connected to the i-th node; πi|j is calculated by the following expression.
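The expression is given in the specification as a figure; reconstructed from the description above (with d_j denoting the number of data in the j-th node), it reads

$$\pi_{i|j} = \frac{d_j}{d_i + \sum_{j' \in \varepsilon_i} d_{j'}} .$$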
In S1222, the first dual variable update unit 1222 updates the value of the dual variable yi|j, for each index j satisfying j∈εi, by the following expression.
In S1223, the second dual variable update unit 1223 receives the value of the model variable wj and the value of the dual variable yj|i, for each index j satisfying j∈εir,k, from the j-th node, updates the value of the dual variable zi|j by the following expression, and updates the value of the global control variable −ci by the following expression.
Here, the expression used by the second dual variable update unit 1223 for updating the value of the dual variable zi|j is the following.
The constant α may be set by the initialization unit 121 in S121, for example.
In S1224, when the update processing in the r-th round has been executed for the K-th time (that is, when the counter k satisfies k=K), the local control variable update unit 1224 updates, by the following expressions, the value of the local control variable ci|i and the value of the temporary variable ui|i in the i-th node.
In S123, when the counter k satisfies k=K, the counter update unit 123 initializes the counter k (that is, sets k←1) and increments the counter r by 1 (that is, sets r←r+1); otherwise, the counter update unit 123 increments the counter k by 1 (that is, sets k←k+1).
In S124, when the counter r has reached a predetermined update count R (that is, when the counter r satisfies r=R), the end condition judgement unit 124 outputs the value of the model variable wi at that time as an output value and ends the processing; otherwise, the processing returns to S1221. That is, the variable optimization unit 120 outputs the value of the model variable wi at that time when a predetermined end condition is satisfied, and otherwise repeats the processing of S1221 to S124.
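The control flow of S121 to S124 can be summarized by the following structural sketch. The concrete update expressions are given in the specification as figures, so they appear only as assumed callables; only the round/counter logic is taken from the text.

```python
def optimize_node_i(init, update_model, update_y, update_z, update_local_c, R, K):
    """Per-node optimization loop of the first embodiment (structure only)."""
    state = init()                       # S121: initialize w_i, u_i|j, z_i|j, control variables
    for r in range(1, R + 1):            # rounds (counter r)
        for k in range(1, K + 1):        # update processing (counter k)
            update_model(state, r, k)    # S1221: update model variable w_i
            update_y(state, r, k)        # S1222: update dual variables y_i|j
            update_z(state, r, k)        # S1223: receive w_j, y_j|i; update z_i|j etc.
            if k == K:
                update_local_c(state)    # S1224: update c_i|i and u_i|i
        # S123: k resets to 1 and r is incremented (folded into the loop bounds)
    return state["w_i"]                  # S124: output w_i when r reaches R
```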
According to the embodiment of the present invention, even when there is a statistical deviation among the learning data sets distributed and accumulated in a plurality of nodes, or communication between nodes is asynchronous and sparse, the model variable can be stably optimized so as to conform to the learning data sets.
In the first embodiment, the variable optimization unit 120 updates the model variable by using the value obtained by adding the difference between the two control variables to the stochastic gradient; however, the variable optimization unit 120 may update the model variable by simply using the value of the stochastic gradient. Such an embodiment will be described below. The first embodiment and the second embodiment differ only in the configuration and operation of the variable optimization unit 120.
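In other words, where the first embodiment uses the corrected gradient sketched earlier, the second embodiment's update direction reduces to the following (a sketch; the names and step size are assumptions):

```python
def plain_step(w_i, g_i, mu=0.001):
    """Second-embodiment update direction: the stochastic gradient g_i alone,
    without the control-variable correction (c_bar_i - c_i)."""
    return w_i - mu * g_i
```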
Hereinafter, the variable optimization unit 120 will be described with reference to the drawings.
An operation of the variable optimization unit 120 of the i-th node will be described in accordance with the flowchart.
In S121, the initialization unit 121 performs initialization processing required for optimizing the model variable wi. The initialization unit 121 initializes the counters r and k. In addition, the initialization unit 121 initializes the model variable wi, the temporary variable ui|j, and the dual variable zi|j (j∈εi) in the i-th node corresponding to the j-th node.
In S2221, the model variable update unit 2221 updates the model variable wi by the following expression.
Also, when the distribution ψ of the types of learning data accumulated in the n nodes (that is, the ratio of each class) is known in advance, a mini-batch generated from the learning data set xi in accordance with the distribution ψ can be used as the mini-batch xi,MB.
The weight βi|j may be set by the initialization unit 121 in S121, for example. For example, βi|j=1/|εi| may simply be set. In addition, when the number of data di of the learning data set xi is known in advance, the weight βi|j may be set to the ratio πi|j of the number of data accumulated in the j-th node to the total number of data accumulated in the i-th node and all nodes connected to the i-th node; πi|j is calculated by the following expression.
In S2222, the first dual variable update unit 2222 updates the value of the dual variable yi|j, for each index j satisfying j∈εi, by the following expression.
In S2223, the second dual variable update unit 2223 receives the value of the model variable wj and the value of the dual variable yj|i, for each index j satisfying j∈εir,k, from the j-th node, updates the value of the dual variable zi|j by a predetermined expression, and updates the value of the temporary variable ui|j by the following expression.
Here, the expression used by the second dual variable update unit 2223 for updating the value of the dual variable zi|j is the following.
The constant α may be set by the initialization unit 121 in S121, for example.
In S123, when the counter k satisfies k=K, the counter update unit 123 initializes the counter k (that is, sets k←1) and increments the counter r by 1 (that is, sets r←r+1); otherwise, the counter update unit 123 increments the counter k by 1 (that is, sets k←k+1).
In S124, when the counter r has reached a predetermined update count R (that is, when the counter r satisfies r=R), the end condition judgement unit 124 outputs the value of the model variable wi at that time as an output value and ends the processing; otherwise, the processing returns to S2221. That is, the variable optimization unit 120 outputs the value of the model variable wi at that time when a predetermined end condition is satisfied, and otherwise repeats the processing of S2221 to S124.
According to the embodiment of the present invention, even when there is a statistical deviation among the learning data sets distributed and accumulated in a plurality of nodes, or communication between nodes is asynchronous and sparse, it is possible to stably optimize the model variable so as to conform to the learning data sets.
Here, examples to which each embodiment of the present invention can be applied will be described.
In an environment in which automobiles are connected to each other or to infrastructure, as represented by connected cars, the automobiles and the infrastructure are regarded as the nodes of the embodiments. Information from various sensors mounted on the automobiles or the infrastructure, such as images, acoustic signals, and acceleration, is accumulated in each node. Each embodiment of the present invention may be used when the accumulated data are used as learning data and one cost function is optimized. In this case, the cost function can be designed by using an index corresponding to the purpose, for example, minimizing the arrival time, minimizing the total amount of energy used, or keeping the physical distance between nodes equal to or greater than a certain value.
In a situation where a plurality of digital twins affect each other, the digital twins are regarded as the nodes of the embodiments. Learning data are accumulated in a form distributed over the digital twins. Each embodiment of the present invention may be used when one cost function is optimized without sharing the accumulated learning data with other nodes. For example, when the food loss problem is to be addressed by using digital twins, individual persons and individual stores can be configured as digital twins, and a cost function having a cost term for minimizing the food amount of all stores can be used. Further, by using index values expressing individual happiness, the cost function may be designed to minimize the food amount in all stores while maximizing the total sum of the index values.
The device of the present invention includes, for example, as a single hardware entity, an input unit to which a keyboard or the like can be connected, an output unit to which a liquid crystal display or the like can be connected, a communication unit to which a communication device (for example, a communication cable) capable of communicating with the outside of the hardware entity can be connected, a central processing unit (CPU; which may also include a cache memory, registers, etc.), a RAM or ROM which is a memory, an external storage device which is a hard disk, and a bus that connects the input unit, the output unit, the communication unit, the CPU, the RAM, the ROM, and the external storage device such that data can be exchanged therebetween. Also, as necessary, the hardware entity may be provided with a device (drive) capable of reading and writing a recording medium such as a CD-ROM. A general-purpose computer or the like is an example of a physical entity including such hardware resources.
The external storage device of the hardware entity stores a program that is needed to realize the above-mentioned functions and data needed for the processing of this program (not limited to the external storage device, and for example, the program may also be stored in a ROM, which is a read-only storage device). Also, the data and the like obtained through the processing of these programs is appropriately stored in a RAM, an external storage device, or the like.
In the hardware entity, each program stored in the external storage device (or the ROM, etc.) and the data needed for the processing of each program are loaded into the memory as needed, and the CPU interprets, executes, and processes them as appropriate. As a result, the CPU realizes predetermined functions (the constituent units expressed above as . . . unit, . . . means, and the like).
The present invention is not limited to the above-described embodiments, and appropriate changes can be made without departing from the spirit of the present invention. Further, the processing described in the embodiments is not only executed in time series in the described order, but may also be executed in parallel or individually according to the processing capability of the device that executes the processing or as necessary.
As described above, when the processing function in the hardware entity (device of the present invention) described in the above-described embodiments is realized by the computer, the processing contents of the function to be included in the hardware entity are described by the program. Then, by executing this program on the computer, the processing function in the above-described hardware entity is realized on the computer.
The program describing the processing contents can be recorded in a computer-readable recording medium. Any computer-readable recording medium may be used, such as a magnetic recording device, an optical disc, a magneto-optical recording medium, or a semiconductor memory. Specifically, for example, a hard disk device, a flexible disk, a magnetic tape, or the like can be used as the magnetic recording device, a digital versatile disc (DVD), a DVD-random access memory (DVD-RAM), a compact disc read only memory (CD-ROM), CD-recordable/rewritable (CD-R/RW), or the like can be used as the optical disc, a magneto-optical disc (MO) or the like can be used as the magneto-optical recording medium, and an electronically erasable and programmable-read only memory (EEP-ROM) or the like can be used as the semiconductor memory.
In addition, the program is distributed, for example, by sales, transfer, or lending of a portable recording medium such as a DVD or a CD-ROM on which the program is recorded. Further, the distribution of the program may be performed by storing the program in advance in a storage device of a server computer and transferring the program from the server computer to another computer via a network.
The computer executing such a program first stores, for example, the program recorded on the portable recording medium or the program transferred from the server computer in its own storage device. When executing the processing, the computer reads the program stored in its own storage device and executes the processing in accordance with the read program. As other execution forms of the program, the computer may directly read the program from the portable recording medium and execute processing in accordance with the program, or each time the program is transferred from the server computer to the computer, processing in accordance with the received program may be executed sequentially. In addition, the above-mentioned processing may be executed by a so-called application service provider (ASP) type service, which does not transfer the program from the server computer to the computer and realizes the processing function only by execution instructions and result acquisition. Note that the program in this form includes information that is to be used for processing by the computer and is equivalent to a program (data that is not a direct command to the computer but has the property of defining the processing of the computer, etc.).
Further, although the hardware entity is configured by a predetermined program being executed on the computer in the present embodiment, at least a part of the processing contents of the hardware entity may be realized in hardware.
The above description of the embodiments of the present invention is presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings. The embodiments were chosen in order to provide the best illustration of the principle of the present invention and in order for those skilled in the art to be able to use the present invention in various embodiments and with various modifications suitable for the contemplated practical use. All such modifications and variations are within the scope of the present invention as defined by the appended claims when interpreted in accordance with the breadth to which they are fairly, legally, and equitably entitled.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2021/020341 | 5/28/2021 | WO |