The present invention relates to a neural network system, a neural network learning method, and a neural network learning program.
A neural network is, for example, configured to multiply a plurality of inputs by respective weights, input the value obtained by adding together the plurality of resultant multiplication values into an activation function of a neuron in an output layer, and output the output of the activation function. Such a neural network having a simple configuration is called a simple perceptron. Furthermore, a neural network that has a plurality of layers of the above simple configuration and inputs the output of one layer to another layer is called a multilayer perceptron. Also, a deep neural network has a plurality of hidden layers between the input layer and the output layer, as a multilayer perceptron. Hereinafter, neural network will be abbreviated to NN (Neural Network).
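A minimal sketch of the simple-perceptron computation described above is given below. The step activation function, the specific input values, and the weights are illustrative assumptions and are not taken from the embodiment.

```python
import numpy as np

def simple_perceptron(x, w, b):
    """Multiply the inputs by their respective weights, add the products
    together, and pass the sum through the activation function of the
    output-layer neuron (a step function here)."""
    u = np.dot(x, w) + b
    return 1.0 if u >= 0 else 0.0

# Illustrative values only.
x = np.array([0.5, -1.0, 2.0])   # inputs
w = np.array([0.8, 0.2, -0.3])   # weights (parameters adjusted by learning)
print(simple_perceptron(x, w, b=0.1))   # -> 0.0
```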
A NN optimizes parameters such as the aforementioned weights by learning using a large amount of training data. While the accuracy of a model can be enhanced by using more training data, there is a problem in that this increases the number of learning iterations and increases the computation time required for learning.
As a method of shortening the computation time required for learning by a NN, data-parallel distributed learning that divides the training data and executes computational operations for learning in parallel with a plurality of computational nodes has been proposed. Data-parallel distributed learning is described, for example, in the following patent literatures.
Patent Literature 1: Japanese Patent Application Publication No. 2018-120470
Patent Literature 2: Japanese Patent Application Publication No. 2012-79080
In data-parallel distributed learning, training data is divided by the number of computational nodes, and the plurality of computational nodes execute computational operations for learning based on respective training data to calculate the gradient of an error function of the output of the NN for the parameters of the NN, and calculate update amounts of the parameters obtained by multiplying the gradient by a learning rate. Thereafter, the computational nodes calculate the average of the update amounts of the nodes, and all the computational nodes update the parameters with the average of the update amounts. Due to the plurality of computational nodes performing computational operations for learning in parallel with respective training data, the computation time required for learning can be shortened compared with a single computational node performing learning computations with a single piece of training data.
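Expressed as formulas, and as a hedged illustration only (K computational nodes are assumed, the document's convention of adding the averaged update amount to the parameter is followed, and the sign of the learning-rate term is left to the implementation):

$$\Delta w_k = \eta\,\Delta E_k \quad (k = 1,\ldots,K), \qquad w \;\leftarrow\; w + \frac{1}{K}\sum_{k=1}^{K}\Delta w_k$$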
However, in order to calculate the average of the update amounts respectively calculated by the plurality of computational nodes, it is necessary to aggregate the update amounts calculated by each of the plurality of computational nodes through addition of the update amounts, and to share the aggregate addition value with the plurality of computational nodes. At the time of aggregation and the time of sharing, data communication processing is performed between the plurality of computational nodes.
As a result, although data-parallel distributed learning shortens the computation time required for learning by performing computational operations on training data in parallel, the effect of shortening the computation time is reduced by the time taken for the communication processing between the computational nodes that is performed in every learning iteration.
A first aspect of the disclosed embodiment is a neural network system including a memory and a plurality of processors configured to access the memory, wherein, in each of a plurality of iterations of learning, the plurality of processors each execute a computational operation of a neural network based on an input of training data and a parameter within the neural network to calculate an output of the neural network, and calculate a gradient of a difference between the calculated output and supervised data of the training data, or an update amount based on the gradient; in a first case in which a cumulative of the gradient or update amount is not less than a threshold value, the plurality of processors execute first update processing for transmitting, to the other processors among the plurality of processors, the cumulatives of the plurality of gradients or update amounts respectively calculated thereby to aggregate the cumulatives of the plurality of gradients or update amounts, receiving the aggregated cumulatives of the gradients or update amounts, and updating the parameter with the aggregated cumulatives of the gradients or update amounts; and in a second case in which the cumulative of the gradient or update amount is less than the threshold value, the plurality of processors execute second update processing for updating the respective parameters with the gradients or update amounts that the plurality of processors respectively calculate, without aggregating the cumulatives of the plurality of gradients or update amounts through transmission.
The object and advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention.
[Neural Network System of Present Embodiment]
Furthermore, the NN system 1 has auxiliary storage devices 20 to 26 that are mass storages, and the auxiliary storage devices store an NN learning program 20, an NN program 22, training data 24 and a parameter w 26. The NN learning program 20 is executed by the processors 10 and 14 to perform processing for learning using training data. Also, the NN program 22 is executed by the processors 10 and 14 to execute computational operations of an NN model. The training data 24 is plural pieces of data each having an input and a label which is supervised data. The parameter 26 is a plurality of weights within the NN optimized by learning, for example.
The main processor 10 executes the NN learning program 20 to cause the plurality of sub-processors 14 to execute the computational operations for NN learning that uses the plural pieces of training data in a distributed and parallel manner. The four sub-processors 14 are computational nodes composed of processor chips and are configured to be communicable with one another via a bus 28.
The NN system 1 is able to provide a NN platform to client terminals 30 and 32 via the network NET. Other than the configuration of
[Learning NN]
The following is an outline of the learning processing of the NN. The processor 10 or 14 reads one piece of training data having input data and a label from the training data 24 in the storage, and inputs the input data to the input layer IN_L. The processor executes the NN learning program to execute a computational operation of the first neuron layer NR_L1 using the input training data, and inputs the computation result to the second neuron layer NR_L2. Furthermore, the processor executes a computational operation of the second neuron layer NR_L2 using that computation result and inputs the computation result to the third neuron layer NR_L3. Finally, the processor executes a computational operation of the third neuron layer NR_L3 using that computation result and outputs the computation result. The processing in which the three neuron layers NR_L1 to NR_L3 execute their respective computational operations in order on the input of the input layer IN_L is called forward propagation processing FW.
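The following is a minimal sketch of such forward propagation through three neuron layers. The layer sizes, the tanh activation, and the random parameter values are assumptions for illustration, not the embodiment's actual network.

```python
import numpy as np

def dense_layer(x, W, b, act=np.tanh):
    """One neuron layer: a weighted sum of its inputs followed by an activation."""
    return act(W @ x + b)

def forward_propagation(x, params):
    """Forward propagation FW: the three neuron layers NR_L1 to NR_L3 are
    evaluated in order, each taking the previous layer's output as input."""
    h1 = dense_layer(x, params["W1"], params["b1"])                      # first neuron layer
    h2 = dense_layer(h1, params["W2"], params["b2"])                     # second neuron layer
    return dense_layer(h2, params["W3"], params["b3"], act=lambda u: u)  # output layer

# Illustrative parameters: a 4-input network with a 10-value output.
rng = np.random.default_rng(0)
params = {"W1": rng.standard_normal((8, 4)),  "b1": np.zeros(8),
          "W2": rng.standard_normal((8, 8)),  "b2": np.zeros(8),
          "W3": rng.standard_normal((10, 8)), "b3": np.zeros(10)}
print(forward_propagation(rng.standard_normal(4), params).shape)  # (10,)
```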
On the other hand, the processor computes a difference E3 between the label (correct data) of the training data and the output of the third neuron layer NR_L3, and differentiates the difference E3 with respect to a parameter w in the neuron layer NR_L3 to obtain a gradient ΔE3. The processor calculates a difference E2 on the second neuron layer NR_L2 from the difference E3, and differentiates the difference E2 with respect to a parameter w in the neuron layer NR_L2 to obtain a gradient ΔE2. The processor then calculates the difference E1 on the first neuron layer NR_L1 from the difference E2, and differentiates the difference E1 with respect to a parameter w in the neuron layer NR_L1 to obtain a gradient ΔE1. Computing the gradients ΔE3, ΔE2 and ΔE1 at the layers NR_L3, NR_L2 and NR_L1 in order, while propagating the difference E3 between the output of the third neuron layer NR_L3, which is the output layer, and the label of the correct value back to the second and first neuron layers, is called back propagation processing BW.
Generally, in the computational processing of each neuron layer, the respective computational operations are performed on the input to the layer and a plurality of parameters w. In supervised learning, the plurality of parameters w are updated by a gradient method in each learning iteration, such that the difference between the output estimated by the NN based on the input data of the training data and the label (supervised data) is minimized. The parameter update amount is calculated by differentiating the difference E by the parameter w and multiplying the derived gradient ΔE by a learning rate η.
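Written out as a formula (the embodiment states that the update amount is the gradient multiplied by the learning rate η; the minus sign of ordinary gradient descent, which drives the difference E toward a minimum, is made explicit here as an assumption):

$$\Delta E = \frac{\partial E}{\partial w}, \qquad \Delta w = -\,\eta\,\Delta E, \qquad w \;\leftarrow\; w + \Delta w$$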
[Data Partition-based Distributed Learning]
With NNs, especially deep NNs (hereinafter referred to as “DNNs”), the accuracy of the NN or DNN can be improved by more training data being used and by the iterations of learning using training data being increased. However, as the amount of training data increases, the learning time of the NN system increases accordingly. In view of this, data-parallel distributed learning, which is performed by distributing learning processing among a plurality of computational nodes, is effective in shortening the learning time.
Data-parallel distributed learning causes a plurality of computational nodes to execute learning respectively using plural pieces of training data in a distributed manner. That is, a plurality of computational nodes execute forward propagation processing FW and back propagation processing BW using the respective training data to calculate the gradients ΔE, corresponding to the parameters w, of the differences E of the NN of the respective computational nodes. The plurality of nodes then calculate the respective gradients ΔE or the parameter update amounts Δw obtained by multiplying the gradients by the learning rate, and share the calculated gradients ΔE or parameter update amounts Δw among the plurality of nodes. Furthermore, the plurality of nodes acquire the average of the gradients ΔE or the average of the update amounts Δw, and update the respective parameters w of the NN with the average of the update amounts Δw.
In the above learning method, the processing in which a plurality of nodes each execute forward propagation processing and back propagation processing using one piece of training data to calculate a gradient or update amount, and the parameter w of each node is updated based on the average of the update amounts, corresponds to a mini-batch method in which a number of pieces of training data equal to the number of nodes constitutes one mini-batch. The case where a plurality of nodes each execute forward propagation and back propagation processing in a plurality of processes, using as many pieces of training data as there are processes, corresponds to a mini-batch method in which the mini-batch size is the number of nodes multiplied by the number of processes of each node.
The above processing for calculating the average of the gradients or update amounts includes Reduce processing and Allreduce processing, which are included in MPI (Message Passing Interface), a standard specification for parallel computing. In the Reduce processing and Allreduce processing, the respective values held by the plurality of computational nodes are aggregated (e.g. added), and all of the plurality of computational nodes acquire the aggregate value. This processing requires data communication between the plurality of computational nodes.
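As a hedged illustration of how such an aggregation is typically written with MPI, the following uses the mpi4py Python bindings; the variable names, the placeholder per-node value, and the use of a simple sum are assumptions for the sketch and do not represent the embodiment's actual implementation.

```python
# Run with e.g.: mpiexec -n 4 python allreduce_example.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each computational node holds its locally computed update amount.
local_update = float(rank + 1)            # placeholder value per node

# Allreduce: aggregate (add) the values of all nodes and share the result
# with every node in a single collective operation.
total_update = comm.allreduce(local_update, op=MPI.SUM)

# Every node can now form the average used to update its parameters.
average_update = total_update / comm.Get_size()
print(f"node {rank}: average update = {average_update}")
```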
The four computational nodes correspond to the four sub-processors 14 in
The four computational nodes ND_1 to ND_4 perform the following processing by executing the NN learning program. Initially, each of the four computational nodes ND_1 to ND_4 inputs data corresponding thereto from data D1 to D4 of the training data (S10). The data D1 to D4 are respectively input to the first neuron layer NR_L1 of the computational nodes. The computational nodes then execute forward propagation processing FW and execute the computational operations of the neuron layers (S11). In
Next, the computational nodes respectively calculate differences E1 to E4 between the output OUT of the NN and the label LB, which is the supervised data. In the computational nodes ND_1 and ND_2, the outputs OUT are "5" and "3" and the labels LB are "6" and "2". Here, the differences E1 to E4 are the sum of squares of the differences between the output OUT and the label LB. More specifically, the NN is a model for estimating handwritten numbers, and the third neuron layer, which is the output layer, outputs the respective probabilities that the number in the input data corresponds to each of the numbers 0 to 9. In the supervised data, the probability of the number indicated by the label LB is "1" and the probabilities of the other numbers are "0". The computational node then calculates the sum of squares of the differences between the respective probabilities as the difference E.
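A minimal sketch of this sum-of-squares difference between the ten output probabilities and the one-hot supervised data follows; the probability values shown are illustrative assumptions.

```python
import numpy as np

def sum_of_squares_difference(probs, label):
    """Difference E between the ten output probabilities and the supervised
    data: the label's digit has probability 1, every other digit 0."""
    target = np.zeros(10)
    target[label] = 1.0
    return np.sum((probs - target) ** 2)

# Illustrative output of the third neuron layer for one training sample.
probs = np.array([0.01, 0.02, 0.05, 0.10, 0.05, 0.60, 0.07, 0.04, 0.03, 0.03])
print(sum_of_squares_difference(probs, label=6))
```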
Each computational node then back-propagates the respective difference E through the neuron layers (S13), and differentiates the propagated difference E by the parameter w of each neuron layer to calculate the gradient ΔE. Furthermore, each computational node multiplies the gradient ΔE by the learning rate η to calculate the update amount Δw of the parameter w (S14).
Here, the four computational nodes ND_1 to ND_4 communicate the respective update amounts Δw with one another via the bus 28 between the computational nodes, and a given computational node performs Reduce processing for aggregating the update amounts Δw1 to Δw4 of the parameters w1 to w4 of all the nodes. The aggregation is addition (or extraction of the maximum value), for example. The four computational nodes then perform Allreduce processing for receiving an aggregate addition value Δw_ad via the bus 28 and sharing the aggregate addition value with all the computational nodes (S15).
Next, each computational node calculates an average value Δw_ad/4 of the update amounts Δw by dividing the aggregate addition value Δw_ad by the number of computational nodes "4", and adds the average value to the existing parameters w1 to w4 to update the respective parameters (S16). One learning iteration by mini-batching thereby ends. When this learning iteration ends, the parameters of the NN of all the computational nodes have been updated to the same value. The computational nodes then return to the processing of S10 and execute the next iteration of learning.
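The iteration described in steps S10 to S16 can be sketched as a single-process simulation. The toy quadratic error standing in for the NN computation, the node count, and all numeric values are assumptions for illustration only.

```python
import numpy as np

def local_update_amount(w, data, lr=0.1):
    """Stub for S11-S14: forward propagation, difference, back propagation,
    and multiplication of the gradient by the learning rate.  A toy quadratic
    error (w - data)**2 stands in for the real NN here."""
    grad = 2.0 * (w - data)       # gradient of the difference
    return -lr * grad             # update amount delta-w

nodes = 4
w = np.full(nodes, 0.5)                   # parameter w of each computational node
batch = np.array([0.9, 0.1, 0.4, 0.6])    # data D1..D4 of one mini-batch

dw = np.array([local_update_amount(w[k], batch[k]) for k in range(nodes)])  # S10-S14
dw_ad = dw.sum()                  # S15: Reduce + Allreduce (aggregate by addition)
w += dw_ad / nodes                # S16: update every node with the average
print(w)                          # all nodes now hold the same parameter value
```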
Next, Reduce processing and Allreduce processing will be described.
Next, in the Allreduce processing, the computational node ND_1 transmits the aggregate value f(y1,y2,y3,y4) to the other computational nodes ND_2 to ND_4 via the bus 28 and shares the aggregate value with all the computational nodes.
In the example in
Also, as an alternative processing method, in the case where the four computational nodes ND_1 to ND_4 respectively possess array data (w1,x1,y1,z1), (w2,x2,y2,z2), (w3,x3,y3,z3), and (w4,x4,y4,z4), the computational nodes respectively transmit data w, data x, data y and data z to the computational node ND_1, the computational node ND_2, the computational node ND_3 and the computational node ND_4. Then, the computational node ND_1 calculates an aggregate value f(w1,w2,w3,w4)=w1+w2+w3+w4,
the computational node ND_2 calculates an aggregate value f(x1,x2,x3,x4)=x1+x2+x3+x4,
the computational node ND_3 calculates an aggregate value f(y1,y2,y3,y4)=y1+y2+y3+y4, and
the computational node ND_4 calculates an aggregate value f(z1,z2,z3,z4)=z1+z2+z3+z4.
The processing to this point is the Reduce processing. Next, the computational nodes transmit the respectively calculated aggregate values to the other computational nodes via the bus 28, and all the computational nodes acquire and share the aggregate values. This processing is the Allreduce processing.
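The alternative method above corresponds to what is commonly called a reduce-scatter followed by an allgather. A minimal single-process sketch follows; the node indices and the array data values are illustrative assumptions.

```python
# Array data held by the four computational nodes ND_1..ND_4.
node_data = [
    [1.0, 2.0, 3.0, 4.0],   # (w1, x1, y1, z1) on ND_1
    [5.0, 6.0, 7.0, 8.0],   # (w2, x2, y2, z2) on ND_2
    [9.0, 1.0, 2.0, 3.0],   # (w3, x3, y3, z3) on ND_3
    [4.0, 5.0, 6.0, 7.0],   # (w4, x4, y4, z4) on ND_4
]

# Reduce: element k of every node is sent to node ND_(k+1), which adds them.
partial_sums = [sum(node_data[n][k] for n in range(4)) for k in range(4)]

# Allreduce: each node broadcasts its aggregate value so that every node
# ends up holding the full aggregated array.
shared = [list(partial_sums) for _ in range(4)]
print(shared[0])   # [19.0, 14.0, 18.0, 22.0] on every node
```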
Next, in the data partition-based distributed learning, the computational nodes ND_1 to ND_4 each divide the addition value Δw_ad by the number of computational nodes “4” to calculate an average value Δw_av of the update amount Δw in averaging processing Average. The computational nodes ND_1 to ND_4 then respectively add the average value Δw_av of the update amount to the existing parameters w1 to w4 in update processing Update.
In the mini-batch method according to the above data-parallel distributed learning, the plural pieces of training data of a mini-batch are distributed to a plurality of computational nodes, the plurality of computational nodes perform computational operations in parallel up to calculating the gradients ΔE or update amounts Δw of the parameters, and the parameters w of the computational nodes are updated with the average value of the plurality of update amounts Δw1 to Δw4 respectively calculated with the plural pieces of training data. Accordingly, even if the update amount Δw calculated with a given piece of training data is an exceptional value that deviates greatly from the update amounts calculated with the other pieces of training data, the parameters w of the NN of all the computational nodes are updated with the average value of the plurality of update amounts, thus enabling the adverse effect on learning caused by the exceptional update amount to be suppressed.
On the other hand, because Reduce processing and Allreduce processing, which include communication processing between the computational nodes, are executed in each learning iteration, a problem arises in that the processing time of learning increases due to this communication processing.
[Data-Parallel Distributed Learning According to First Embodiment]
Here again it is assumed that the four computational nodes each perform learning using one piece of training data, and each computational node optimizes one parameter w within the NN. Although a NN typically has a large number of parameters w, description will first be given with an example of one parameter w of a NN, before later describing how processing is performed on a plurality of parameters w in a NN.
In this distributed learning, Reduce processing and Allreduce processing are not executed in every learning iteration for the gradients ΔE or update amounts Δw respectively calculated by the plurality of computational nodes. That is, if the gradients or update amounts calculated by the plurality of computational nodes are less than a threshold value, the computational nodes do not perform Reduce processing or Allreduce processing, and each computational node updates its parameter w using the gradient or update amount calculated by that computational node. Conversely, if the gradients or update amounts calculated by the plurality of computational nodes are not less than the threshold value, the computational nodes perform Reduce processing and Allreduce processing, and update the parameter w of each computational node using the average value of the aggregated gradients or update amounts.
The NN system of the present embodiment thereby shortens the overall processing time of learning by occasionally skipping the communication processing of Reduce processing and Allreduce processing, while suppressing adverse effects caused by the gradients or update amounts of exceptional values according to the mini-batch method.
Note that when Reduce processing and Allreduce processing are performed and the parameters w of the NN of the plurality of computational nodes are updated with the average update amount after learning iterations in which Reduce processing and Allreduce processing were not executed have been performed in succession, the parameters w of all the computational nodes need to be reset to the same value. In view of this, each computational node accumulates the gradients or update amounts of the learning iterations in which Reduce processing and Allreduce processing are not executed, and the parameters of the computational nodes are respectively updated with the cumulative value of those update amounts when Reduce processing and Allreduce processing are performed. Thus, each computational node stores a cumulative ΔEr or Δwr of the gradient ΔE or update amount Δw calculated each time learning is performed.
In the following description, the gradient or update amount is simplified to the update amount Δw. Note that Reduce processing and Allreduce processing may be performed for the gradient ΔE instead of the update amount Δw.
The learning processing of the computational nodes will now be described in line with the flowchart in
Next, the computational nodes determine whether the respective cumulative update amounts Δwr1 to Δwr4 are less than a threshold value TH (S21). If the respective cumulative update amounts Δwr1 to Δwr4 for all the computational nodes are less than the threshold value TH (YES in S21), the computational nodes update the parameters by respectively adding the update amounts Δw1 to Δw4 to the parameters w1 to w4 (S16A). As a result, the computational nodes update the respective parameters w1 to w4 with the respectively calculated update amounts Δw1 to Δw4.
On the other hand, if the cumulative update amounts Δwr1 to Δwr4 of the computational nodes are not all less than the threshold value TH (NO in S21), the computational nodes ND_1 to ND_4 transmit/receive the respectively calculated cumulative update amounts Δwr1 to Δwr4, aggregate the cumulative update amounts Δwr1 to Δwr4 of all the computational nodes through addition or the like (Reduce processing), and share an aggregate cumulative update amount Δwr_ad with all the nodes (Allreduce processing) (S15). Specifically, one of the computational nodes ND_1 to ND_4, such as the computational node ND_1, for example, receives and adds the cumulative update amounts Δwr2 to Δwr4 from the other computational nodes ND_2 to ND_4, and transmits the added aggregate cumulative update amount Δwr_ad to the other computational nodes ND_2 to ND_4. NO in the above determination S21 means that not all the cumulative update amounts Δwr1 to Δwr4 are less than the threshold value TH, that is, at least one cumulative update amount is not less than the threshold value TH. YES in the above determination S21 means that all the cumulative update amounts Δwr1 to Δwr4 are less than the threshold value TH.
The computational nodes ND_1 to ND_4 then each divide the aggregated cumulative update amount Δwr_ad by the number of computational nodes "4" to derive an average value Δwr_ad/4 of the cumulative update amounts, and add the average value Δwr_ad/4 of the cumulative update amounts to the values of the parameters w1 to w4 from before accumulation of the update amounts was started, to update the parameters to a common value (S16). Along with this, the computational nodes reset the respective cumulative update amounts Δwr1 to Δwr4 to 0 (S22).
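A compact single-process sketch of this first-embodiment decision (S20 to S22 and S15 to S16A) follows. The toy gradient function, the threshold value, the node count, and the random data are assumptions for illustration only.

```python
import numpy as np

TH = 0.05        # threshold for the cumulative update amounts (assumed value)
nodes = 4
w = np.full(nodes, 0.5)        # parameters w1..w4 of the computational nodes
w_base = w.copy()              # parameter values from before accumulation started
dwr = np.zeros(nodes)          # cumulative update amounts dwr1..dwr4

def update_amount(wk, data, lr=0.1):
    """Toy stand-in for S10-S14 (forward/back propagation and gradient scaling)."""
    return -lr * 2.0 * (wk - data)

for data in np.random.default_rng(0).uniform(0.3, 0.7, size=(20, nodes)):
    dw = np.array([update_amount(w[k], data[k]) for k in range(nodes)])
    dwr += dw                                            # S20: accumulate update amounts
    if np.all(np.abs(dwr) < TH):                         # S21: all cumulatives below TH?
        w += dw                                          # S16A: local update, no communication
    else:
        dwr_ad = dwr.sum()                               # S15: Reduce + Allreduce
        w = w_base + dwr_ad / nodes                      # S16: common update from w_base
        w_base = w.copy()
        dwr[:] = 0.0                                     # S22: reset cumulative update amounts
print(w)
```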
The computational nodes repeat the above learning processing for the duration that the total number of learning iterations is less than N (S23).
According to the above learning processing, if the cumulative update amounts Δwr1 to Δwr4 of the parameters calculated by the computational nodes ND_1 to ND_4 are all less than the threshold value (YES in S21), the computational nodes ND_1 to ND_4 do not perform Reduce processing or Allreduce processing, thus eliminating the time required for communication between the computational nodes in the Reduce and Allreduce processing and enabling the overall time required for learning to be shortened. While the computational nodes successively update the parameters w1 to w4 with their respective update amounts Δw1 to Δw4 without performing Reduce processing and Allreduce processing, the computational nodes calculate and record the cumulative update amounts Δwr1 to Δwr4.
Then, if the cumulative update amounts Δwr1 to Δwr4 of the parameters calculated by the computational nodes ND_1 to ND_4 are not all less than the threshold value (NO in S21), the computational nodes aggregate the respective cumulative update amounts Δwr1 to Δwr4, share the aggregate cumulative update amount Δwr_ad, and update the pre-accumulation parameters w1 to w4 with the average value Δwr_ad/4 thereof. If the cumulative update amounts of all the computational nodes are less than the threshold value, the variability of the cumulative update amounts among the computational nodes will be relatively small, and, even if Reduce processing and Allreduce processing are omitted, the values of the parameters will not deviate greatly between the computational nodes. However, if the cumulative update amounts are not all less than the threshold value, that is, at least one cumulative update amount is greater than or equal to the threshold value, the deviation of the parameter values increases. Reduce processing and Allreduce processing are therefore performed to aggregate the cumulative update amounts of the computational nodes, the pre-accumulation parameters of all the computational nodes are updated with the average value of the aggregate cumulative update amount, and the cumulative update amounts Δwr are reset to 0.
The above learning is premised on the computational nodes each optimizing one parameter w within the NN. In this case, the cumulative update amounts Δwr1 to Δwr4 of the parameters w1 to w4 of the computational nodes are each compared with a threshold value. However, because the update amounts Δw1 to Δw4 of the parameters may be positive or negative, it is desirable that the absolute values of the cumulative update amounts Δwr1 to Δwr4 of the parameters be compared with a given threshold value TH (TH being positive).
Next, how the plurality of parameters w of an NN are optimized will be described. As a first method, in the determination step S21, the computational nodes individually compare respective cumulative update amounts of the plurality of parameters of the NN with the threshold value TH, and determine whether the respective cumulative update amounts of all the parameters of the NN are less than the threshold value TH, together with determining whether these cumulative update amounts are all less than the threshold value TH in all the computational nodes.
As a second method, the computational nodes group the plurality of parameters of the NN into a plurality of parameters w1, w2, . . . wn of each layer within the NN, and determine, in the determination step S21, whether the maximum value of the absolute values of the cumulative update amounts of the plurality of parameters w1, w2, . . . wn of each layer is less than the threshold value TH. The computational nodes then determine whether this determination S21 yields less than the threshold value TH for all the layers of the NN, together with determining whether it yields less than the threshold value TH in all the computational nodes. Because only the maximum value is compared with the threshold value TH, the throughput of the determination step S21 is improved.
As a third method, the computational nodes group the plurality of parameters of the NN into a plurality of parameters w1, w2, . . . wn of each layer within the NN, and, in the determination step S21, determine whether an Lp norm (p is a positive integer) of the absolute values of the cumulative update amounts of the plurality of parameters w1, w2, . . . wn of each layer is less than the threshold value TH. The computational nodes then determine whether this determination yields less than the threshold value TH for all the layers of the NN, together with determining whether it yields less than the threshold value TH in all the computational nodes.
For example, as shown in the following formulas, the L1 norm is the sum of the absolute values of the cumulative update amounts of the plurality of parameters w1, w2, . . . wn, and the L2 norm is the square root of the sum of squares of the absolute values of the cumulative update amounts of the plurality of parameters w1, w2, . . . wn. The Lp norm is the pth root of the sum of the pth powers of the absolute values of the cumulative update amounts of the plurality of parameters w1, w2, . . . wn.
$$\Delta w_r = (\Delta w_{r1}, \Delta w_{r2}, \ldots, \Delta w_{rn})$$
$$\text{L1 norm of } \Delta w_r = |\Delta w_{r1}| + |\Delta w_{r2}| + \cdots + |\Delta w_{rn}|$$
$$\text{L2 norm of } \Delta w_r = \sqrt{|\Delta w_{r1}|^2 + |\Delta w_{r2}|^2 + \cdots + |\Delta w_{rn}|^2} \qquad \text{[Math. 1]}$$
In the third method, values obtained by converting the cumulative update amounts of the plurality of parameters of each layer into L1 norm, L2 norm and so on are compared with a threshold value, thus improving throughput of the determination step S21.
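The three determination methods can be sketched as follows; the cumulative update amounts of one layer and the threshold value are illustrative assumptions, and in practice a different threshold may be chosen for each method.

```python
import numpy as np

dwr = np.array([0.012, -0.030, 0.004, 0.021])   # cumulative update amounts of one layer
TH = 0.05

# First method: compare the absolute value of every cumulative update amount.
ok_each = np.all(np.abs(dwr) < TH)

# Second method: compare only the maximum absolute value of the layer.
ok_max = np.max(np.abs(dwr)) < TH

# Third method: compare an Lp norm of the layer's cumulative update amounts.
ok_l1 = np.sum(np.abs(dwr)) < TH                 # L1 norm
ok_l2 = np.sqrt(np.sum(dwr ** 2)) < TH           # L2 norm

print(ok_each, ok_max, ok_l1, ok_l2)
```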
[Data-Parallel Distributed Learning According to Second Embodiment]
Generally, immediately after the start of the learning process, the gradients ΔE of the parameters are large and the update amounts Δw1 to Δw4 are also large. On the other hand, near the end of the learning process, the gradients ΔE of the parameters are small and the update amounts Δw1 to Δw4 are also small. Thus, in the data-parallel distributed learning of the first embodiment, immediately after the start of the learning process, it may be determined every time in the determination step S21 that the cumulative update amounts are not less than the threshold value TH, and Reduce processing and Allreduce processing may be performed every time. On the other hand, as the end of the learning process approaches, it may be determined every time in the determination step S21 that the cumulative update amounts are less than the threshold value TH, and Reduce processing and Allreduce processing may not be performed at all.
In order to alleviate the above problems, in the second embodiment, the following processing is performed.
(1) Up to the (D−1)th iteration (D is a positive integer) from the beginning of the N learning iterations in total, the computational nodes update the respective parameters of the NN with their respective update amounts, without performing Reduce processing or Allreduce processing, regardless of the comparison determination with the threshold value TH.
(2) From the Dth iteration to the (U−1)th iteration (U is a positive integer greater than D), Reduce processing and Allreduce processing are not performed in the case where all of the cumulative update amounts Δwr1 to Δwr4 are less than the threshold value TH, as in the first embodiment, whereas Reduce processing and Allreduce processing are performed and the respective parameters of the NN are updated with the average value of the cumulative update amounts in the case where at least one of the cumulative update amounts Δwr1 to Δwr4 is not less than the threshold value TH.
(3) Furthermore, when Reduce processing and Allreduce processing have not been performed in succession up to the Uth iteration because the cumulative update amounts Δwr1 to Δwr4 remained less than the threshold value TH, the computational nodes perform Reduce processing and Allreduce processing in the Uth iteration regardless of the comparison determination with the threshold value TH, and the respective parameters of the NN are updated with the average value of the cumulative update amounts.
(4) The computational nodes repeat the update cycle of the parameters of (1) to (3) described above until a total number of learning iterations N is reached.
According to the above processing, firstly, even immediately after the start of the learning process, the computational nodes do not perform Reduce processing or Allreduce processing in the initial D−1 iterations within the update cycle of (1) to (3), thus enabling the communication frequency to be reduced. Also, in or after the Dth iteration within the update cycle, based on the comparison determination between the cumulative update amounts of the parameters and the threshold value TH, the number of successive iterations in which Reduce processing and Allreduce processing are not performed increases as the cumulative update amounts become smaller, and, conversely, decreases as the cumulative update amounts become larger.
On the other hand, secondly, when the end of the learning process approaches and the number of learning iterations in which Reduce processing and Allreduce processing have not been performed in succession reaches the Uth iteration within the update cycle of (1) to (3), the computational nodes are, in a sense, forced to perform Reduce processing and Allreduce processing, and all the corresponding parameters of the NN of all the computational nodes are updated to the same value with the average value of the same cumulative update amounts.
The flowchart in
In the second embodiment, all the computational nodes count a common learning iteration counter value i and a successive non-communication counter value j. Also, similarly to the first embodiment, the computational nodes cumulatively add the update amounts of the parameters calculated every time learning is performed and store the cumulative update amounts Δwr1 to Δwr4. Also, the computational nodes execute the processing of S30, S31 to S32, and S33, in addition to the processing of the flowchart in
As initialization processing, the computational nodes respectively reset the learning iteration counter value i and the successive non-communication counter value j to "0", and reset the cumulative update amounts Δwr1 to Δwr4 of the respective parameters to "0". Next, the computational nodes perform data input of the training data, forward propagation processing and back propagation processing to calculate the update amounts Δw1 to Δw4 of the respective parameters (S10-S14). The computational nodes then respectively add one to the counter values i and j, and respectively add the calculated update amounts Δw1 to Δw4 to the cumulative update amounts Δwr1 to Δwr4 of the parameters to update the cumulative update amounts (S31).
(1) If the successive non-communication counter value j is less than a first reference frequency D (YES in S32), the computational nodes respectively update the parameters w1 to w4 with the respective update amounts Δw1 to Δw4 (S16A). The computational nodes repeat the processing of S10 to S14, S31 to S32, and S16A until the successive non-communication counter value j is no longer less than the first reference frequency D. If the first reference frequency D is "2", for example, the computational nodes always skip Reduce processing and Allreduce processing in the first iteration of learning in the update cycle of (1) to (3).
(2) When the successive non-communication counter value j is no longer less than the first reference frequency D (NO in S32), the computational nodes determine whether the cumulative update amounts Δwr1 to Δwr4 of the parameters are all less than the threshold value TH in all the computational nodes (S21).
If less than the threshold value TH (YES in S21) and if the successive non-communication counter value j is less than a second reference frequency U (>D) (YES in S33), the computational nodes respectively update the parameters w1 to w4 with the respective update amounts Δw1 to Δw4 (S16A).
If not less than the threshold value TH (NO in S21), the computational nodes ND_1 to ND_4 perform Reduce processing and Allreduce processing (S15), and respectively update the parameters w1 to w4 with the average value Δwr_ad/4 of the cumulative update amounts (S16). The computational nodes then reset the successive non-communication count value j to "0" and respectively reset the cumulative update amounts Δwr1 to Δwr4 to "0" (S34A). In this case, the update cycle of (1) to (3) is reset.
(3) If less than the threshold value TH (YES in S21), and when the successive non-communication counter value j is no longer less than the second reference frequency U (>D) (NO in S33), the computational nodes execute Reduce processing and Allreduce processing (S15), update the parameters with the average value of the cumulative update amounts (S16), and reset the successive non-communication count value j and the cumulative update amounts to “0” (S22A). The update cycle is thus reset.
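The update cycle of steps S10 to S34A described above can be sketched as a single-process simulation. The values of D, U, N and TH, the node count, and the toy stand-in for the NN computation are assumptions for illustration, not the embodiment's actual settings.

```python
import numpy as np

D, U, N, TH = 2, 8, 100, 0.05     # reference frequencies, total iterations, threshold (assumed)
nodes = 4
w = np.full(nodes, 0.5)           # parameters w1..w4
w_base = w.copy()                 # parameter values from before accumulation started
dwr = np.zeros(nodes)             # cumulative update amounts dwr1..dwr4
i = j = 0                         # learning-iteration counter and successive non-communication counter

def update_amount(wk, data, lr=0.1):
    """Toy stand-in for S10-S14 (forward/back propagation and gradient scaling)."""
    return -lr * 2.0 * (wk - data)

rng = np.random.default_rng(0)
while i < N:                                     # S23: repeat until N iterations in total
    data = rng.uniform(0.3, 0.7, size=nodes)     # S10: input training data
    dw = np.array([update_amount(w[k], data[k]) for k in range(nodes)])
    i += 1; j += 1; dwr += dw                    # S31: advance counters, accumulate
    if j < D:                                    # S32: always skip communication early in the cycle
        w += dw                                  # S16A: local update only
    elif np.all(np.abs(dwr) < TH) and j < U:     # S21 and S33
        w += dw                                  # S16A: local update only
    else:                                        # S15-S16: Reduce/Allreduce and common update
        w = w_base + dwr.sum() / nodes
        w_base = w.copy(); dwr[:] = 0.0; j = 0   # S22A / S34A: reset the update cycle
print(w)
```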
In the case of updating the plurality of parameters w of the NN of the computational nodes, similarly to the first embodiment, the computational nodes may, in the determination step S21, perform a determination such as whether the absolute value of the cumulative update amount of each parameter w is less than a threshold value TH, whether the maximum value of the cumulative update amounts of the plurality of parameters of each layer is less than a threshold value TH, or whether the L1 norm or L2 norm of the absolute values of the cumulative update amounts of the plurality of parameters of each layer is less than a threshold value TH.
Update processing UP2 in
Update processing UP3 in
In the example in
According to the second embodiment, the computational nodes do not execute Reduce processing and Allreduce processing CM in every iteration of learning, thus enabling the overall computation time of learning to be reduced by the amount of the processing CM that is not executed.
In the above embodiments, the cumulative update amounts Δwr of the parameters are aggregated in the Reduce processing and Allreduce processing, and the parameters of the NN are updated with the average value Δwr_ad/4. However, the Reduce processing and Allreduce processing may be performed for the gradients ΔE of the differences instead of the update amounts of the parameters. This is because the update amounts Δw of the parameters are calculated by multiplying the gradients ΔE of the differences by the learning rate η, and, accordingly, the cumulative update amounts Δwr can be calculated by multiplying the cumulative gradients by the learning rate. In that case, while the computational nodes do not perform Reduce processing and Allreduce processing, the cumulative gradients ΔEr of the differences are updated; when Reduce processing and Allreduce processing are performed, the cumulative gradients ΔEr of the computational nodes are aggregated, the aggregate value (addition value) of the cumulative gradients ΔEr is shared with all the computational nodes, and the pre-accumulation parameters w are updated with the average value of the cumulative update amounts, which is obtained by multiplying the average value ΔEr_ad/4 of the aggregate value (addition value) of the cumulative gradients ΔEr by the learning rate η.
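In formula form (following the document's convention that the update amount is the learning rate times the gradient), aggregating the cumulative gradients and multiplying their average by η gives the same parameter update as aggregating the cumulative update amounts:

$$\Delta w_{r,k} = \eta\,\Delta E_{r,k} \quad\Longrightarrow\quad \frac{1}{4}\sum_{k=1}^{4}\Delta w_{r,k} \;=\; \eta\cdot\frac{1}{4}\sum_{k=1}^{4}\Delta E_{r,k} \;=\; \eta\cdot\frac{\Delta E_{r\_ad}}{4}$$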
The above embodiments described an example in which each computational node executes computational operations of an NN for one piece of training data in each learning iteration. However, each computational node may perform computational operations of an NN for plural pieces of training data in each learning iteration. In that case, the number of pieces of training data per batch is obtained by multiplying the number of pieces of training data of each computational node by the number of computational nodes (4 in the above example). The computational nodes then respectively update the parameters of the NN, using the average value of the update amounts Δw of the parameters or the gradients ΔE of the plurality of differences E respectively calculated in the plurality of processes. Also, in Reduce processing and Allreduce processing, the plurality of computational nodes aggregate the cumulative values of the update amounts or gradients, share the average of the aggregate values with all the computational nodes, and respectively update the parameters of the NN with the average of the aggregate value.
The above embodiments can be applied to learning of a NN such as a simple perceptron or a multilayer perceptron, a deep NN, which is a NN with a deep hierarchy, or the like. Deep NNs include, for example, convolutional NNs having a plurality of convolutional layers, pooling layers and fully connected layers, autoencoder NNs in which the input layer and output layer have nodes of the same size, and recurrent NNs.
According to the first aspect, the throughput of data-parallel distributed learning is improved.
All examples and conditional language provided herein are intended for the pedagogical purposes of aiding the reader in understanding the invention and the concepts contributed by the inventor to further the art, and are not to be construed as limitations to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although one or more embodiments of the present invention have been described in detail, it should be understood that the various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.
This application is a continuation application of International Application Number PCT/JP2020/000644 filed on Jan. 10, 2020 and designated the U.S., the entire contents of which are incorporated herein by reference.