This application is based upon and claims the benefit of priority of the prior Japanese Patent application No. 2021-174063, filed on Oct. 25, 2021, the entire contents of which are incorporated herein by reference.
The embodiments discussed herein are related to a computer-readable recording medium having stored therein a machine learning program, a method for machine learning, and an information processing apparatus.
NNs (Neural Networks), which are used for AI (Artificial Intelligence) tasks such as image processing, tend to achieve high performance (e.g., high inference accuracy) with complex configurations. On the other hand, the complex configurations of NNs may increase the number of times of calculation in executing the NNs by calculators and the size of memory used in executing the NNs by the calculators.
As a method for reducing the number of times of calculation, in other words, shortening calculation durations (speeding up), and for reducing the size of memory, in other words, downsizing machine learning models of NNs, “pruning” has been known.
The pruning is a method for reducing the data size of the machine learning models and for reducing the calculation durations and communication durations by reducing (pruning) at least one type of elements among edges (weights), nodes, and channels of NNs.
Excessive pruning causes degradation of inference accuracy of NNs. Therefore, it is important to perform pruning of NNs while maintaining the inference accuracy or while keeping the degradation of inference accuracy within a predetermined level.
For example, in pruning, a known method selects a layer that does not significantly affect the inference accuracy of NNs. This method, for example, determines a channel of a convolutional layer to be pruned based on parameters used in a Batch Normalization (BN) layer that follows a convolutional layer.
[Patent Document 1] U.S. Patent Publication No. 2019/0057308
[Patent Document 2] U.S. Patent Publication No. 2018/0232640
[Patent Document 3] Japanese Laid-open Patent Publication No. 2021-47854
[Patent Document 4] Japanese Laid-open Patent Publication No. 2020-123269
[Patent Document 5] Japanese Laid-open Patent Publication No. 2021-22050
[Patent Document 6] U.S. Patent Publication No. 2019/0080238
[Patent Document 7] U.S. Patent Publication No. 2019/0205759
[Patent Document 8] U.S. Patent Publication No. 2020/0184333
The method for selecting the layer that does not significantly affect the inference accuracy of NNs is applied to the convolutional layer to which the BN layer is connected, but is not assumed to be applied to other layers such as the convolutional layers to which no BN layer is connected or fully connected layers.
According to an aspect of the embodiments, a computer-readable recording medium has stored therein a machine learning program for causing a computer to execute a process including: calculating thresholds of errors in tensors between before and after reduction one for each element of a plurality of layers in a trained model of a neural network including the plurality of layers; selecting reduction ratio candidates to be applied one to each of the plurality of layers based on a plurality of the thresholds and errors in tensors between before and after reduction in cases where the elements are reduced by each of a plurality of reduction ratio candidates in each of the plurality of layers; and determining reduction ratios to be applied one to each of the plurality of layers based on inference accuracy of the trained model and inference accuracy of a reduced model after machine learning, the reduced model being obtained by reducing each element of the plurality of layers in the trained model according to the reduction ratio candidates to be applied.
The object and advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention, as claimed.
Hereinafter, an embodiment of the present disclosure will now be described with reference to the drawings. However, the embodiment described below is merely illustrative and there is no intention to exclude the application of various modifications and techniques that are not explicitly described in the embodiment. For example, the present embodiment can be variously modified and implemented without departing from the scope thereof. In the drawings used in the following description, the same reference numerals denote the same or similar parts unless otherwise specified.
As depicted in
The calculator executes scaling 102 for the multiple channels 112 (#1 to #n). For example, in the scaling 102, in accordance with the following equation (2), the calculator multiplies each of the multiple channels 112 by the scaling factor γ, and adds a bias β to the multiplication result to output multiple channels 113 (#1 to #n) that represent distribution scaled by the parameters γ and β. In the following equation (2), zout represents the channels 113. The parameters γ and β may be optimized by machine learning.
[Equation 2]

zout=γzmid+β (2)
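Purely as an illustration, and not as the embodiment's implementation, the scaling of equation (2) can be sketched as follows; the array shapes and the names z_mid, gamma, and beta are assumptions.

```python
# Hedged sketch of equation (2): per-channel scaling gamma and bias beta applied
# to the normalized tensor z_mid. Shapes and names are illustrative assumptions.
import numpy as np

def scale_channels(z_mid: np.ndarray, gamma: np.ndarray, beta: np.ndarray) -> np.ndarray:
    """z_mid: (N, C, H, W) normalized activations; gamma, beta: (C,) learned parameters."""
    # Broadcast the per-channel parameters over the batch and spatial dimensions.
    return gamma[None, :, None, None] * z_mid + beta[None, :, None, None]

z_mid = np.random.randn(2, 4, 8, 8)        # toy normalized activations
gamma = np.array([1.0, 0.5, 0.0, 2.0])     # a gamma of 0.0 almost eliminates that channel's output
beta = np.zeros(4)
z_out = scale_channels(z_mid, gamma, beta)
```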
At this step, the output is almost eliminated for the channel 113 (channel #n in the example of
For example, the calculator searches for a small (diminishing) γ by applying L1 regularization learning to γ. The L1 regularization learning is a machine learning technique known to be capable of making a parameter to be learned "sparse" by performing machine learning while adding an L1 regularization term to the loss function calculated at the output of the NN.
As illustrated in
[Equation 3]

L=Σ(x,y) l(f(x,W),y)+λΣγ∈Γ g(γ) (3)
The L1 regularization learning causes each parameter of the vector 123 to indicate (dichotomize) whether each parameter of the vector 121 becomes zero or non-zero. By using such L1 regularization learning, the calculator can identify a channel(s) in which γ becomes zero (close to zero) as the channel of the pruning target.
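As a hedged illustration of this identification step (not the embodiment's code), the loss of equation (3) with g(γ)=|γ| and the selection of channels whose learned γ is close to zero may look as follows; the model details, the task loss, and the coefficient lam are assumptions.

```python
# Hedged sketch of equation (3) with g(gamma) = |gamma|: the task loss plus an
# L1 penalty on the scaling factors, which drives some gammas toward zero.
import numpy as np

def l1_regularized_loss(task_loss: float, gammas: np.ndarray, lam: float) -> float:
    """task_loss: sum of l(f(x, W), y) over the batch; gammas: all scaling factors; lam: lambda."""
    return task_loss + lam * np.sum(np.abs(gammas))

# After L1 regularization learning, channels whose gamma is (close to) zero
# become candidates for the pruning target (channel index 2 in this toy example).
gammas = np.array([0.80, 0.02, 0.00, 1.10])
prune_candidates = np.where(np.abs(gammas) < 1e-3)[0]
```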
The identification of the pruning target using the L1 regularization learning depicted in
In view of the above, one embodiment describes a method for realizing downsizing of an NN by determining a pruning rate for each layer regardless of the type of layers.
<1-1> Example of Functional Configuration of Server According to One Embodiment
The memory unit 11 is an example of a storage area, and stores various data to be used by the server 1. As illustrated in
The obtaining unit 12 obtains the untrained model 11a and the data 11b for machine learning, and stores them in the memory unit 11. For example, the obtaining unit 12 may generate one of or both the untrained model 11a and the data 11b for machine learning in the server 1, or may receive them from a computer outside the server 1 via a non-illustrated network.
The untrained model 11a may be a model of the NN including the untrained parameters before machine learning. The NN may include various layers and may be, for example, a DNN (Deep NN). The NN may include, for example, a convolutional layer to which no BN layer is connected or a fully connected layer, or may include a convolutional layer to which a BN layer is connected, and may be, as an example, the NN 130 illustrated in
The data 11b for machine learning may be, for example, a data set for training to be used for machine learning (training) of the untrained model 11a. For example, when machine learning is performed on an NN for realizing image processing, the data 11b for machine learning may include, for example, multiple pairs of labeled training data that includes training data such as image data and a ground truth label for the training data.
In the machine learning phase, the machine learning unit 13 executes a machine learning process that performs machine learning on the untrained model 11a based on the data 11b for machine learning. For example, the machine learning unit 13 may generate the trained model 11c by the machine learning process of the untrained model 11a. The trained model 11c may be an NN model including a trained parameter(s).
The trained model 11c may be obtained by updating a parameter included in the untrained model 11a, and may be regarded as, for example, a model as a result of a change from the untrained model 11a to the trained model 11c through the machine learning process. The machine learning process may be implemented by various known techniques.
The calculating unit 14 calculates the pruning rates 11d by executing a pruning rate calculation process for the trained model 11c, and stores them into the memory unit 11.
For example, the calculating unit 14 may include a threshold calculating unit 14a that calculates a threshold for selecting one of pruning rate candidates for each layer, and a determining unit 14b that determines, based on inference accuracy of the model pruned at the pruning rate candidates, the pruning rates 11d to be adopted.
The outputting unit 15 outputs output data based on the pruning rates 11d generated (obtained) by the calculating unit 14. The output data may include, for example, the pruning rates 11d themselves, the down-sized model 11e, or both.
The down-sized model 11e is data of a down-sized model of the trained model 11c, which is obtained by execution of pruning on the trained model 11c based on the pruning rates 11d. For example, in cooperation with the machine learning unit 13, the outputting unit 15 may acquire the down-sized model 11e by execution of pruning and re-learning on the trained model 11c while applying the pruning rates 11d, and may store the acquired model into the memory unit 11. The down-sized model 11e may be, for example, generated separately from the trained model 11c, or may be the updated data of the trained model 11c obtained through pruning and re-learning.
In outputting the output data, the outputting unit 15 may, for example, transmit (provide) the output data to another non-illustrated computer, or may store the output data into the memory unit 11 and manage the output data to be acquirable from the server 1 or another computer. Alternatively, in outputting the output data, the outputting unit 15 may display information indicating the output data on an output device of the server 1, or may output the output data in various other manners.
<1-2> Example of Pruning Rate Calculation Process
Next, an example of the pruning rate calculation process by the calculating unit 14 of the server 1 will be described. In the following description, a calculation target of the pruning rate is assumed to be a weight matrix W which is an example of a parameter of a layer.
The calculating unit 14 determines the pruning rate regardless of the type of layers by using errors in tensors for each layer, which errors are generated by pruning. As an example, the calculating unit 14 may calculate the pruning rate according to the following procedures (i) to (iii).
(i) The calculating unit 14 (threshold calculating unit 14a) determines (calculates), for each layer, the pruning rate that can guarantee the accuracy.
The term “guarantee the accuracy” means, for example, to guarantee that accuracy of inference (inference accuracy) using the down-sized model 11e obtained by pruning the trained model 11c exceeds a predetermined criterion.
Here, the pruning rate is an example of a ratio for reducing (reduction ratio) an element(s) of a layer and indicates a ratio for rendering the pruning target in the trained model 11c “sparse”. In the example of
As illustrated in
For example, the threshold calculating unit 14a obtains an error in tensors between before and after pruning in cases where the pruning is performed for each pruning rate candidate, and determines the maximum pruning rate candidate among the pruning rate candidates with errors smaller than a threshold Tw. In the example of
The threshold Tw is a threshold of the error in the tensors between before and after the pruning, and is an upper limit of the pruning error that can guarantee the accuracy. For example, the threshold calculating unit 14a may calculate the threshold Tw for each layer by expressing the loss function at the time of pruning the pruning target by an approximate expression such as a first-order Taylor expansion. The details of the method for calculating the threshold Tw will be described later.
The pruning rate calculated in (i) may be regarded as a “provisionally calculated” pruning rate in relation to processes of (ii) and (iii).
As described above, the threshold calculating unit 14a calculates the thresholds T of the errors in the tensors between before and after the reduction one for each element of the multiple layers in the trained model 11c of the NN including the multiple layers. The threshold calculating unit 14a selects the reduction ratio candidates to be applied one to each of the multiple layers based on the multiple thresholds T and the errors in the tensors between before and after the reduction in the cases where the elements are reduced by each of the multiple reduction ratio candidates in each of the multiple layers.
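A minimal sketch of this selection in step (i), under assumed names: for each layer, the largest pruning rate candidate whose pruning error stays below that layer's threshold is provisionally selected. The function prune_error is a placeholder for the L1-norm-based error described in section <1-3>, not the embodiment's API.

```python
# Hedged sketch of step (i): provisionally select, per layer, the largest pruning
# rate candidate whose error in tensors (before vs. after pruning) is below the
# threshold T of that layer.
import numpy as np
from typing import Callable, Sequence

def select_rate_for_layer(weight: np.ndarray,
                          candidates: Sequence[float],
                          threshold: float,
                          prune_error: Callable[[np.ndarray, float], float]) -> float:
    best = 0.0                                   # a 0% pruning rate always satisfies the threshold
    for rate in sorted(candidates):
        if prune_error(weight, rate) < threshold:
            best = max(best, rate)
    return best
```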
(ii) The calculating unit 14 (determining unit 14b) determines the pruning rate based on the accuracy of the machine learning model pruned (downsized) by using the pruning rate determined in (i) and the accuracy of the machine learning model that has not undergone pruning.
For example, the determining unit 14b considers the error caused by the approximate expression (first-order Taylor expansion), and compares the sum of accuracy Accp of the model pruned at the pruning rate determined in (i) for each layer and an accuracy margin Accm with accuracy Accwo of an unpruned model. The accuracy margin Accm is a margin for which the inference accuracy is allowed to be degraded, and may be set by a designer. The margin may be “0”, and in this case, the determining unit 14b may compare the accuracy Accp with the accuracy Accwo of the unpruned model.
If the sum Accp+Accm of the accuracy is equal to or higher than the accuracy Accwo, the determining unit 14b determines to adopt the pruning rates determined in (i). For example, the determining unit 14b stores the pruning rates determined in (i) as the pruning rates 11d into the memory unit 11.
On the other hand, if the sum Accp+Accm of the accuracy is lower than the accuracy Accwo, the determining unit 14b determines to discard the pruning rates determined in (i). For example, the determining unit 14b discards the pruning rates determined in (i) and determines to adopt the pruning rates 11d determined in the latest (ii) (or initial pruning rates 11d).
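The acceptance test of (ii) therefore reduces to a single comparison; the following sketch uses made-up accuracy values for illustration.

```python
# Hedged sketch of step (ii): adopt the provisionally calculated pruning rates
# only when Accp + Accm >= Accwo; otherwise discard them.
def accept_rates(acc_pruned: float, acc_margin: float, acc_unpruned: float) -> bool:
    return acc_pruned + acc_margin >= acc_unpruned

print(accept_rates(0.71, 0.01, 0.72))   # True: adopt and store the pruning rates 11d
print(accept_rates(0.69, 0.01, 0.72))   # False: discard and keep the previously adopted rates
```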
(iii) The calculating unit 14 (determining unit 14b) repeatedly applies (i) and (ii) multiple times to search for maximum pruning rates that can guarantee the accuracy.
As illustrated in
In the second time searching (see reference numeral 146), in (i), the threshold calculating unit 14a is assumed to calculate (update) the threshold Tw and to determine that, based on the updated threshold Tw, the pruning rates for the layers 131 to 133 are to be “20%, 20%, 40%” from “0%, 0%, 0%”. For example, in (ii), if the determining unit 14b determines Accp+Accm≥Accwo in comparing the inference accuracy, the determining unit 14b adopts “20%, 20%, 40%” and stores them as the pruning rates 11d into the memory unit 11.
In the third time searching (see reference numeral 147), in (i), the threshold calculating unit 14a is assumed to calculate (update) the threshold Tw and to determine that, based on the updated threshold Tw, the pruning rates for the layers 131 to 133 are to be “20%, 40%, 40%” from “20%, 20%, 40%”. For example, in (ii), if the determining unit 14b determines Accp+Accm≥Accwo in comparing the inference accuracy, the determining unit 14b adopts “20%, 40%, 40%” and stores (updates) them as the pruning rates 11d into the memory unit 11.
The determining unit 14b may search for the pruning rates over a predetermined number of times, for example, a preset number of times.
As described above, the determining unit 14b determines the reduction ratios to be applied one to each of the multiple layers based on the inference accuracy of the trained model 11c and the inference accuracy of the reduced model after the machine learning, which is obtained by reducing each element of the multiple layers in the trained model 11c according to the reduction ratio candidates to be applied.
Next, description will be made in relation to a specific example of the pruning rate calculation process described above.
The threshold calculating unit 14a performs first-order Taylor expansion on the loss function in the pruning to calculate the threshold of the pruning rate that can guarantee the accuracy for each layer. For example, assuming that: the error in the tensors for each layer, which error is generated by pruning, is Δw; the loss function in the pruning is L(w+Δw); the loss function of the model of the pruning target is L(w); and the loss function (Lideal) without the pruning is Lwo+Lm, the threshold of the pruning rate that can guarantee the accuracy is calculated by the following equation (4). It should be noted that Lwo is the loss function of the unpruned model, and Lm is a margin of the loss function set by a designer.
The left side of the above equation (4) (see the dashed line box in
As described above, the threshold calculating unit 14a calculates the thresholds T based on the values of the loss functions of the trained model 11c at the time of reducing elements of each of the multiple layers and the weight gradients of each of the multiple layers.
Rearranging the above equation (4) can derive, as expressed by the following equation (5), a condition of the “error in pruning”, which satisfies the limitation for the loss function in the pruning to be smaller than the ideal loss function. In other words, it is possible to derive the upper limit (threshold) of the error caused by the pruning, which guarantees the accuracy (loss function). The threshold calculating unit 14a sets the right side of the following equation (5) to be the threshold T.
As illustrated in
As an example, in accordance with the following equation (6), the threshold calculating unit 14a may determine, for each layer of the pruning target, the pruning rate that causes a pruning error (left side) to be equal to or smaller than the threshold (right side). In the following equation (6), “∥ΔW∥1” is the L1 norm of the weight to be regarded as the pruning target and “n” is the number of elements of the weight of the layer in the pruning target.
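Equations (4) to (6) themselves are not reproduced above; the following LaTeX sketch is one plausible reconstruction consistent with the surrounding description (first-order Taylor expansion, the ideal loss Lwo+Lm, and the per-element L1 norm of the pruned weights), not the literal equations of the embodiment.

```latex
% Hedged reconstruction (assumed forms), following the symbols used in the text.
\begin{align}
  L(w + \Delta w) \approx L(w) + \nabla_{w} L(w)^{\top} \Delta w
    &\le L_{wo} + L_{m}
    && \text{(cf.\ equation (4))}\\
  \|\Delta w\| \le \frac{L_{wo} + L_{m} - L(w)}{\|\nabla_{w} L(w)\|}
    &= T
    && \text{(cf.\ equation (5))}\\
  \frac{\|\Delta W\|_{1}}{n}
    &\le T
    && \text{(cf.\ equation (6))}
\end{align}
```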
As illustrated in the above equation (6), the threshold T is a parameter derived by approximation. To prevent mistakes in determining the pruning rate due to an approximation error, an upper limit (hereinafter referred to as a "trust radius") may be set for the threshold T (see
For example, in accordance with the comparison result of the accuracy in the process of (ii) by the determining unit 14b, the threshold calculating unit 14a may update, in addition to the pruning rates, the trust radius (e.g., by multiplying it by a constant factor or the like). The initial value of the trust radius may be set by, for example, a designer or the like.
As an example, if the sum Accp+Accm of the accuracy is equal to or higher than the accuracy Accwo, the threshold calculating unit 14a may multiply the trust radius by a constant K (“K>1.0”), and if the sum Accp+Accm of the accuracy is lower than the accuracy Accwo, the threshold calculating unit 14a may multiply the trust radius by a constant k (“0<k<1.0”).
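A minimal sketch of this trust radius update, assuming example values for the constants K and k (which are set by a designer):

```python
# Hedged sketch: enlarge the trust radius by K (> 1.0) when the accuracy is kept,
# shrink it by k (0 < k < 1.0) otherwise. The constant values are assumptions.
def update_trust_radius(trust_radius: float, acc_p: float, acc_m: float, acc_wo: float,
                        K: float = 2.0, k: float = 0.5) -> float:
    return trust_radius * (K if acc_p + acc_m >= acc_wo else k)
```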
<1-3> Explanation According to Type of Pruning Target
Next, description will be made in relation to examples of a method for pruning and a method for calculating the pruning error according to the type of the pruning target. The type of the pruning target may be, for example, channel pruning, node pruning, weight pruning, etc. According to the type of the pruning target, the calculating unit 14 may determine the pruning target and the pruning error by using the weight corresponding to the pruning target.
<1-3-1> Example of Channel Pruning
(Example of Method for Determining Channel to be Pruned)
When the type of the pruning target is the channel, the calculating unit 14 calculates the L1 norm in units of kernels corresponding to the channels of the output data. For example, the calculating unit 14 calculates, as illustrated by “before pruning” in
Next, as illustrated by “after pruning” in
(Example of Calculating Pruning Error)
As illustrated in
The calculating unit 14 may obtain the pruning error by dividing the calculated L1 norm by the number of elements of all kernels before the pruning.
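A hedged numpy sketch of the channel pruning and its error described above; the kernel layout (C_out, C_in, kH, kW) and the function name are assumptions.

```python
# Hedged sketch: L1 norm per output-channel kernel, prune the channels with the
# smallest norms, and take the pruning error as the L1 norm of the pruned kernels
# divided by the number of elements of all kernels before pruning.
import numpy as np

def channel_prune(weight: np.ndarray, rate: float):
    """weight: (C_out, C_in, kH, kW) convolution kernel; rate: pruning rate, e.g. 0.25."""
    l1_per_channel = np.abs(weight).sum(axis=(1, 2, 3))      # L1 norm in units of kernels
    num_pruned = int(len(l1_per_channel) * rate)
    pruned_idx = np.argsort(l1_per_channel)[:num_pruned]     # channels with small L1 norms
    error = l1_per_channel[pruned_idx].sum() / weight.size   # pruning error
    kept = np.delete(weight, pruned_idx, axis=0)             # kernels remaining after pruning
    return kept, pruned_idx, error
```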
<1-3-2> Example of Node Pruning
(Example of Method for Determining Node to be Pruned)
When the type of the pruning target is the node, the calculating unit 14 calculates the L1 norm in units of weights connected to the output node. In the example of “before pruning” in
Next, as illustrated by “after pruning” in
(Example of Calculating Pruning Error)
As illustrated in
The calculating unit 14 may acquire the pruning error by dividing the calculated L1 norm by the number of elements of all weights before the pruning. In the example of “after pruning” in
<1-3-3> Example of Weight Pruning
(Example of Method for Determining Weight to be Pruned)
When the type of the pruning target is the weight, the calculating unit 14 calculates the L1 norms for all of the weights in units of elements. In the example of “before pruning” in
Next, as illustrated by “after pruning” in
(Example of Calculating Pruning Error)
As illustrated in
The calculating unit 14 may acquire the pruning error by dividing the calculated L1 norm by the number of elements of all weights before the pruning. In the example of “after pruning” in
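Analogously, a hedged numpy sketch of the element-wise weight pruning and its error; names and shapes are assumptions.

```python
# Hedged sketch: prune the individual weights with the smallest absolute values,
# and take the pruning error as the L1 norm of the pruned weights divided by the
# number of elements of all weights before pruning.
import numpy as np

def weight_prune(weight: np.ndarray, rate: float):
    flat = np.abs(weight).ravel()
    num_pruned = int(flat.size * rate)
    pruned_idx = np.argsort(flat)[:num_pruned]       # elements with small absolute values
    error = flat[pruned_idx].sum() / flat.size       # pruning error
    mask = np.ones(flat.size, dtype=bool)
    mask[pruned_idx] = False                         # zero out the pruned elements
    return (weight.ravel() * mask).reshape(weight.shape), error
```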
<1-4> Operation Example
Next, with reference to
As illustrated in
The calculating unit 14 calculates the inference accuracy (recognition rate) Accwo in cases where the pruning is not performed (Step S2).
The threshold calculating unit 14a sets the initial value of the trust radius (Step S3).
The threshold calculating unit 14a calculates the threshold T for each layer and the pruning error for each layer to be used for setting the pruning rates (Step S4), and determines whether or not the L2 norm of the thresholds T of all layers is larger than the trust radius (Step S5). If the L2 norm of the thresholds T of all layers is equal to or smaller than the trust radius (NO in Step S5), the process proceeds to Step S7.
If the L2 norm of the thresholds T of all layers is larger than the trust radius (YES in Step S5), the threshold calculating unit 14a scales (updates) the thresholds such that the L2 norm of the thresholds T of all layers becomes equal to the trust radius (Step S6), and the process proceeds to Step S7.
In Step S7, the threshold calculating unit 14a provisionally calculates the pruning rate for each layer.
For example, the threshold calculating unit 14a provisionally sets the pruning rate for each layer from among the set pruning rate candidates. Steps S4 to S7 are examples of the process of (i) described above.
The machine learning unit 13 prunes the trained model 11c at the pruning rates provisionally calculated by the threshold calculating unit 14a, and executes machine learning again on the model after the pruning. The calculating unit 14 calculates the inference accuracy Accp of the model after the re-executed machine learning (Step S8).
The determining unit 14b determines whether or not the inference accuracy Accp+margin Accm is equal to or higher than the inference accuracy Accwo (Step S9). The evaluation of the inference accuracy (recognition rate) can compensate for mistakes in selecting the pruning rates due to the approximation error.
If the inference accuracy Accp+the margin Accm is equal to or higher than the inference accuracy Accwo (YES in Step S9), the determining unit 14b determines to prune the trained model 11c at the provisionally calculated pruning rates (Step S10), and stores, as the pruning rates 11d, the provisionally calculated pruning rates into the memory unit 11. Further, the threshold calculating unit 14a increases the trust radius by multiplying the trust radius by a constant factor (Step S11), and the process proceeds to Step S14.
On the other hand, if the inference accuracy Accp+margin Accm is lower than the inference accuracy Accwo (NO in Step S9), the determining unit 14b discards the provisionally calculated pruning rates (Step S12). The threshold calculating unit 14a decreases the trust radius by multiplying the trust radius by a constant factor (Step S13), and the process proceeds to Step S14. Steps S8 to S13 are examples of the process of (ii) described above.
In Step S14, the determining unit 14b determines whether or not the search (processes of Steps S4 to S13) has been performed predetermined times, in other words, whether or not the predetermined condition is satisfied regarding the execution times of the processes including the threshold calculation, the pruning rate candidate selection, and the pruning rate determination. If the search has not been performed the predetermined times (NO in Step S14), the process moves to Step S4.
If the search has been performed the predetermined times (YES in Step S14), the outputting unit 15 outputs the determined pruning rates 11d (Step S15), and the process ends. Step S14 is an example of the process of (iii) described above.
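Condensing Steps S1 to S15, the search may be sketched as below; every helper (thresholds_fn, error_fn, evaluate) and the candidate list are placeholders standing in for the embodiment's machine learning and error measurement, not its actual implementation.

```python
# Hedged sketch of Steps S1 to S15 (helper functions are assumed placeholders; the
# real embodiment trains and evaluates an NN at the corresponding steps).
import numpy as np

CANDIDATES = [0.0, 0.1, 0.2, 0.4, 0.6, 0.8]          # assumed pruning rate candidates

def search_pruning_rates(layers, acc_wo, acc_margin, evaluate, thresholds_fn, error_fn,
                         num_searches=10, trust_radius=1.0, K=2.0, k=0.5):
    adopted = {name: 0.0 for name in layers}                          # start unpruned (S1-S3)
    for _ in range(num_searches):                                     # predetermined times (S14)
        th = thresholds_fn()                                          # threshold T per layer (S4)
        norm = np.linalg.norm(list(th.values()))
        if norm > trust_radius:                                       # S5
            th = {n: t * trust_radius / norm for n, t in th.items()}  # scale thresholds (S6)
        provisional = {n: max([0.0] + [r for r in CANDIDATES if error_fn(n, r) < th[n]])
                       for n in layers}                               # provisional rates (S7)
        acc_p = evaluate(provisional)                                 # prune, re-learn, measure (S8)
        if acc_p + acc_margin >= acc_wo:                              # S9
            adopted = provisional                                     # adopt as rates 11d (S10)
            trust_radius *= K                                         # increase trust radius (S11)
        else:
            trust_radius *= k                                         # discard, decrease (S12, S13)
    return adopted                                                    # output (S15)
```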
As described above, by the threshold calculating unit 14a, the server 1 according to the one embodiment calculates the errors in the tensors used for the NN, which errors are generated by the pruning, and generates the thresholds from the values of the loss functions and the gradients obtained by the backpropagation of the NN. Further, the threshold calculating unit 14a compares the calculated errors in the pruning with the thresholds to provisionally calculate the pruning rates. Furthermore, the determining unit 14b compares the inference accuracy of the model after re-learning at the calculated pruning rates with the inference accuracy of the unpruned model, and determines the pruning rate for each layer. At this time, if the inference accuracy of the case with the pruning is determined to be deteriorated as compared to the inference accuracy of the case without the pruning, the threshold calculating unit 14a resets the upper limit of the threshold such that the thresholds are decreased, and searches for the pruning rates again.
Thus, the server 1 according to the one embodiment can determine the pruning rate for each layer regardless of the type of the layers. For example, the server 1 can determine the pruning rates to be applied to the trained model 11c that includes a convolutional layer to which no BN layer is connected, a fully connected layer, and the like.
The example of
As illustrated in
As illustrated in
As described above, by the method according to the one embodiment, since the pruning rate can be determined and outputted for each layer, it is possible to realize the downsizing of the NN including various multiple layers.
<1-5> Modifications
Next, modifications according to the one embodiment will be described. The following description assumes, for simplicity, that the margin Accm of the inference accuracy is “0”, in other words, in comparing the inference accuracy, it is determined whether or not the inference accuracy Accp is equal to or higher than the inference accuracy Accwo.
<1-5-1> First Modification
In the method according to the one embodiment, the number of times of searches for the pruning rates (the number of attempts of the process (iii)) is a hyperparameter manually set by, for example, a designer. As a result, for example, if the number of times of searches is set to be small, the trained model 11c may be insufficiently downsized, and if the number of times of searches is set to be large, the trained model 11c may be sufficiently downsized, but search durations may become longer.
As illustrated in
As such, when the trust radius is multiplied by the constant K or the constant k, the update amount of the threshold is limited by the trust radius, so that the same pruning rate candidates may be adopted in multiple searches. Searching the same combination of pruning rates multiple times increases the number of searches for the pruning rates without the pruning of the model being sufficiently attempted.
In view of this, a first modification describes, by focusing on the update on the trust radius, a method for shortening (decreasing) the search durations (the times of searches) for the pruning rates appropriate to downsize the NN.
The calculating unit 14A searches for a different combination of pruning rates in each search. The state where the selected combination has a pruning rate of "0%" for all of the layers represents that the calculating unit 14A determines not to search for the pruning rates any further. Under such a premise, the calculating unit 14A (determining unit 14b′) terminates the searching when the combination in which the pruning rate is "0%" for all of the layers is selected.
In accordance with the comparison result of the inference accuracy by the determining unit 14b′, the threshold calculating unit 14a′ measures, for each layer i (i is an integer equal to or greater than 1), an absolute value "Ediff,i" of a different amount between the threshold and either the error at the pruning rate one size larger than the searched pruning rate or the error at the searched pruning rate.
For example, when the inference accuracy Accp is equal to or higher than the inference accuracy Accwo, the threshold calculating unit 14a′ measures the absolute value “Ediff,i” of the different amount between the threshold and the error in the pruning rate one size larger than the searched pruning rate.
On the other hand, when the inference accuracy Accp is lower than the inference accuracy Accwo, the threshold calculating unit 14a′ measures the absolute value “Ediff,i” of the different amount between the threshold and the error in the searched pruning rate.
As illustrated by the following equation (7), the threshold calculating unit 14a′ acquires the smallest value (different amount) “Ediff” from the calculated absolute values “Ediff,i” of the different amounts of all layers.
Ediff=min(Ediff,1,Ediff,2, . . . ,Ediff,i) (7)
In accordance with the comparison result of the inference accuracy by the determining unit 14b′, the threshold calculating unit 14a′ updates the trust radius by adopting, of the trust radius multiplied by a constant factor and the sum of (or the difference between) the trust radius and the different amount "Ediff", whichever has the larger variation.
For example, when the inference accuracy Accp is equal to or higher than the inference accuracy Accwo, the threshold calculating unit 14a′ adopts, of the trust radius multiplied by the constant K and the sum of the trust radius and the different amount "Ediff", whichever has the larger variation, and consequently updates the trust radius so as to increase it.
On the other hand, when the inference accuracy Accp is lower than the inference accuracy Accwo, the threshold calculating unit 14a′ adopts, of the trust radius multiplied by the constant k and the difference between the trust radius and the different amount "Ediff", whichever has the larger variation, and consequently updates the trust radius so as to decrease it.
In this manner, the threshold calculating unit 14a′ updates the trust radius such that the combinations of the pruning rate candidates of the multiple layers differ in each execution of selecting (in other words, searching) the pruning rate candidates.
Then, the threshold calculating unit 14a′ determines (updates) the trust radius at the “m+1”th (next) time according to the following equation (8).
(Trust radius at “m+1”th time)=max((Trust radius at “m”th time·Constant K),(Trust radius at “m”th time+Ediff)) (8)
As a result, at least a value equal to or greater than the "sum of the trust radius and the different amount" is selected as the trust radius at the "m+1"th time, so that, at the "m+1"th time, a pruning rate different from that at the "m"th time is calculated.
In the example of
Then, the threshold calculating unit 14a′ determines (updates) the trust radius at the “m+1”th (next) time according to the following equation (9).
(Trust radius at “m+1”th time)=max((Trust radius at “m”th time·Constant factor),(Trust radius at “m”th time−Ediff)) (9)
As a result, at least a value equal to or greater than the "difference between the trust radius and the different amount" is selected as the trust radius at the "m+1"th time, so that, at the "m+1"th time, a pruning rate different from that at the "m"th time is calculated.
In the example of
When the above equations (8) and (9) are generalized, the trust radius at the next time can be expressed by the following equation (10).
Trust radius at next time=Current trust radius*max(Constant factor, Qscale_min) (10)
In the above equation (10), the constant factor is K or k, “Qscale_min” is “Qscale” represented by the following equation (11), and “Qscale” is represented by the following equation (12).
Qscale_min=min(Qscale calculated in all quantization target vectors) (11)
Qscale=1+Qdiff/Qth (12)
In the above equation (12), “Qdiff” is the “different amount between the threshold and the quantization error in a bit width one size narrower than the provisionally calculated bit width (pruning ratio)”, and “Qth” is the threshold.
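A hedged sketch of the first modification's update, combining equations (7) to (9); the accuracy flag and the names are assumptions.

```python
# Hedged sketch: Ediff is the smallest different amount over all layers (equation (7));
# the next trust radius is the larger of "constant times current radius" and
# "current radius plus/minus Ediff" (equations (8) and (9)).
def next_trust_radius(trust_radius: float, e_diff_per_layer, accuracy_kept: bool,
                      K: float = 2.0, k: float = 0.5) -> float:
    e_diff = min(e_diff_per_layer)                                 # equation (7)
    if accuracy_kept:                                              # Accp >= Accwo
        return max(trust_radius * K, trust_radius + e_diff)        # equation (8)
    return max(trust_radius * k, trust_radius - e_diff)            # equation (9)
```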
Next, referring to
In Step S21, the threshold calculating unit 14a′ increases the trust radius by using the larger one of the trust radius multiplied by the constant K and the sum of the trust radius and the different amount "Ediff", and the process proceeds to Step S23.
In Step S22, the threshold calculating unit 14a′ decreases the trust radius by using the larger one of the trust radius multiplied by the constant k and the difference between the trust radius and the different amount "Ediff", and the process proceeds to Step S23.
In Step S23, the determining unit 14b′ determines whether or not the pruning rates 11d of all layers are “0%”, in other words, whether or not the pruning rates satisfy the predetermined condition. If the pruning rate 11d of at least one layer is not “0%” (NO in Step S23), the process moves to Step S4.
If the pruning rates 11d of all layers are “0%” (YES in Step S23), the outputting unit 15 outputs the determined pruning rates 11d (Step S15), and the process ends.
As described above, the first modification differs from the one embodiment in the method for updating the trust radius by the threshold calculating unit 14a′ and in the end condition for determining the end of searching by the determining unit 14b′. Thus, the server 1A can search for the pruning rates appropriate for sufficiently downsizing the NN in the shortest durations (the least number of times). In addition, it is possible to omit the setting (designation) of the times of searches by the designer or the like.
<1-5-2> Second Modification
In the methods according to the one embodiment and the first modification, the initial value of the trust radius is a hyperparameter set by a designer or the like.
As such, depending on the initial value of the trust radius, the final model size and the times of searches for the pruning rates may vary; in other words, the performance of the servers 1 and 1A may vary.
Therefore, a second modification describes a method for suppressing variation in the performance of the servers 1 and 1A.
In pruning a model, it is known that gradually pruning the model by using low pruning rates can maintain accuracy and compress the model at a high compression rate as compared with pruning the model at once by using high pruning rates.
As illustrated in the above equation (5), since the threshold T is set according to the reciprocal of the gradient, layers with large thresholds T represent layers with small gradients. The layers with small gradients have a small effect on the accuracy even when pruned.
Therefore, the server 1B (threshold calculating unit 14a″) sets, for example, the initial value of the trust radius to be a value such that the pruning rate in the first search becomes the minimum. For this, the threshold calculating unit 14a″ may, for example, set the initial value of the trust radius to be a value that causes, among all layers, the layer where the threshold T is the maximum to be pruned and the remaining layer(s) to be unpruned (such that the pruning rates become “0%”).
By setting the initial value of the trust radius as described above, the server 1B can further compress the model size or maintain the accuracy as compared to the case where the initial value of the trust radius is manually set, for example, to be large.
As illustrated in
Th represents a vector according to the threshold T1, T2, . . . for each layer, and in the example of
Next, using the measured threshold and the error, the threshold calculating unit 14a″ sets the initial value of the trust radius according to the following equation (13). In the following equation (13), “∥Th∥2” is the L2 norm of the thresholds of all layers.
Using the calculated initial value of the trust radius, the threshold calculating unit 14a″ sets the thresholds T1 and T2 such that the minimum pruning rate "10%" is selected as the pruning rate of the layer having the maximum threshold (layer 2) and the pruning rate "0%" is selected for the remaining layer (layer 1).
Thus, as illustrated in the lower part of
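As a hedged sketch (one plausible reading of equation (13), which is not reproduced literally here), the initial trust radius can be chosen so that, after the thresholds are scaled to the trust radius, only the layer with the maximum threshold admits its minimum pruning rate candidate; the function and argument names are assumptions.

```python
# Hedged sketch: pick the initial trust radius so that only the maximum-threshold
# layer reaches the error of its minimum pruning rate candidate after scaling.
import numpy as np

def initial_trust_radius(thresholds, min_rate_errors) -> float:
    """thresholds[i]: threshold T of layer i; min_rate_errors[i]: pruning error of the
    minimum pruning rate candidate (e.g. 10%) in layer i."""
    th = np.asarray(thresholds, dtype=float)
    i_max = int(np.argmax(th))                       # layer where the threshold is the maximum
    # Scale so that T_max * r / ||Th||_2 just reaches that layer's minimum-rate error.
    return float(min_rate_errors[i_max] / th[i_max] * np.linalg.norm(th))
```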
The function of the threshold calculating unit 14a″ other than the process of setting the initial value of the trust radius may be similar to the function of at least one of the threshold calculating unit 14a according to the one embodiment and the threshold calculating unit 14a′ according to the first modification. The determining unit 14b″ may be similar to at least one of the determining unit 14b according to the one embodiment and the determining unit 14b′ according to the first modification.
That is, the method according to the second modification may be realized by a combination of one of or both the one embodiment and the first modification.
Next, referring to
In Step S31, after calculating the threshold for each layer in Step S4, the threshold calculating unit 14a″ determines whether or not the search is the first time. When the search is not the first time (NO in Step S31), the process proceeds to Step S5.
When the search is the first time (YES in Step S31), the threshold calculating unit 14a″ sets the initial value of the trust radius based on the threshold and the minimum pruning rate error in the layer where the threshold is the maximum (Step S32), and the process proceeds to Step S5.
Steps S33, S34, and S35 may be either Steps S11, S13, and S14 illustrated in
As described above, the second modification uses a method for setting the initial value of the trust radius by the threshold calculating unit 14a″ that differs from the methods of the one embodiment and the first modification. Thus, the server 1B can suppress variation in the final model size and in the times of searches for the pruning rates, and can consequently suppress variation in the performance of the servers 1 and 1A.
In contrast, using the method according to the second modification, by setting the initial value of the trust radius based on the threshold and the minimum pruning rate error in the layer where the threshold is the maximum, the model size at the initial stage of the searching, in other words, the pruning rates can be reduced to the same extent as the case where the initial value of the trust radius is set to be small (see the dash-dot line).
Thus, the server 1B can suppress manual setting of the initial value (hyperparameter) of the trust radius by a designer or the like, and can dynamically set the initial value of the trust radius according to the layers of the trained models 11c. Therefore, appropriate pruning rates can be set for each model, and regardless of the model, the variation in the final model size and the times of searches for the pruning rates can be suppressed, so that variation in the performance of the servers 1 and 1A can be suppressed.
<1-6> Example of Hardware Configuration
The servers 1, 1A, and 1B according to the one embodiment and the first and second modifications may each be a virtual machine (VM; Virtual Machine) or a physical machine. The functions of the servers 1, 1A, and 1B may be realized by one computer or by two or more computers. At least some of the functions of the servers 1, 1A, and 1B may be implemented using HW (Hardware) resources and NW (Network) resources provided by cloud environments.
As illustrated in
The processor 10a is an example of an arithmetic processing device that performs various controls and calculations. The processor 10a may be connected to each block in the computer 10 via a bus 10i so as to be mutually communicable. The processor 10a may be a multi-processor including multiple processors or a multi-core processor having multiple processor cores, or may be configured to have multiple multi-core processors.
The processor 10a may be, for example, an integrated circuit (IC; Integrated Circuit) such as a CPU (Central Processing Unit), an MPU (Micro Processing Unit), a GPU (Graphics Processing Unit), an APU (Accelerated Processing Unit), a DSP (Digital Signal Processor), an ASIC (Application Specific IC), or an FPGA (Field-Programmable Gate Array).
As the processor 10a, a combination of two or more of the integrated circuits described above may be used. As an example, the computer 10 may include first and second processors 10a. The first processor 10a is an example of a CPU that executes a program 10g (machine learning program) that realizes all or a part of various functions of the computer 10. For example, based on the program 10g, the first processor 10a may realize the functions of the obtaining unit 12, the calculating unit 14, 14A or 14B, and the outputting unit 15 of the server 1, 1A or 1B (see
The memory 10b is an example of an HW that stores various data and programs. The memory 10b may be, for example, at least one of a volatile memory such as a DRAM (Dynamic Random Access Memory) and a nonvolatile memory such as a PM (Persistent Memory).
The storing unit 10c is an example of an HW that stores information such as various data and programs. The storing unit 10c may be, for example, a magnetic disk device such as an HDD (Hard Disk Drive), a semiconductor drive device such as an SSD (Solid State Drive), or various storage devices such as nonvolatile memories. The non-volatile memory may be, for example, a flash memory, an SCM (Storage Class Memory), a ROM (Read Only Memory), or the like.
The storing unit 10c may store the program 10g. For example, the processor 10a of the servers 1, 1A, and 1B can realize functions as the controlling unit 16 of the servers 1, 1A, and 1B (see
The memory unit 11 illustrated in
The IF unit 10d is an example of a communication IF that controls the connection and communication with the network. For example, the IF unit 10d may include an adapter compatible with a LAN (Local Area Network) such as Ethernet (registered trademark), an optical communication such as FC (Fibre Channel), or the like. The adapter may be adapted to a communication scheme of at least one of a wireless scheme and a wired scheme. For example, the servers 1, 1A, and 1B may be connected to a non-illustrated computer via the IF unit 10d so as to be mutually communicable. One or both of the functions of the obtaining unit 12 and the outputting unit 15 illustrated in
The IO unit 10e may include one of an input device and an output device, or both. The input device may be, for example, a keyboard, a mouse, or a touch panel. The output device may be, for example, a monitor, a projector, or a printer. For example, the outputting unit 15 illustrated in
The reader 10f is an example of a reader that reads out information on the data and programs recorded on the recording medium 10h. The reader 10f may include a connection terminal or a device to which the recording medium 10h can be connected or inserted. The reader 10f may be, for example, an adapter compatible with a USB (Universal Serial Bus) or the like, a drive device that accesses a recording disk, a card reader that accesses a flash memory such as an SD card, etc. The recording medium 10h may store the program 10g, or the reader 10f may read the program 10g from the recording medium 10h and store it into the storing unit 10c.
The recording medium 10h may illustratively be a non-transitory computer-readable recording medium such as a magnetic/optical disk or a flash memory. The magnetic/optical disk may illustratively be a flexible disk, a CD (Compact Disc), a DVD (Digital Versatile Disc), a Blu-ray disk, an HVD (Holographic Versatile Disc), or the like. The flash memory may illustratively be a solid state memory such as a USB memory or an SD card.
The HW configuration of the computer 10 described above is merely illustrative. Thus, the HW of the computer 10 may appropriately undergo increase or decrease (e.g., addition or deletion of arbitrary blocks), division, integration in arbitrary combinations, and addition or deletion of the bus. For example, the servers 1, 1A, and 1B may omit at least one of the IO unit 10e and the reader 10f.
The above-described technique according to the embodiment and the first and second modifications can be modified and implemented as follows.
For example, the obtaining unit 12, the machine learning unit 13, the calculating unit 14, 14A or 14B, and the outputting unit 15 included in the server 1, 1A or 1B illustrated in
For example, the server 1, 1A, or 1B illustrated in
As one aspect, the present disclosure can realize downsizing of a neural network including multiple layers.
Throughout the descriptions, the indefinite article “a” or “an”, or adjective “one” does not exclude a plurality.
All examples and conditional language recited herein are intended for the pedagogical purposes of aiding the reader in understanding the invention and the concepts contributed by the inventor to further the art, and are not to be construed limitations to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although one or more embodiments of the present inventions have been described in detail, it should be understood that the various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.