The present invention relates to a neural network circuit.
As a circuit configuration method for neurons in a neural network, a device as disclosed in JP H4-51384 A (PTL 1) is known. In PTL 1, weight data is approximated by a single power of two or a sum of a plurality of powers of two. PTL 1 illustrates an example in which the power-of-two operations are implemented by a bit shift circuit and the operation results are added by an adder, so that the multiplication of input data by weight data is approximated with a small-scale circuit.
PTL 1: JP H4-51384 A
The deep neural network is one method of machine learning. A neuron, which is the basic unit of a neural network, multiplies a plurality of pieces of input data by corresponding weight factors, adds the multiplication results, and outputs the sum. Thus, in the case of implementation with a logic circuit such as an FPGA, a large number of multipliers are required, so that an increase in circuit scale becomes a problem.
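For reference, a minimal software sketch of such a neuron (illustrative Python, not part of the patent) makes the cost visible: every term of the sum needs one full multiplication, which corresponds to one hardware multiplier in a direct logic-circuit implementation.

```python
def neuron(inputs, weights):
    """Basic neuron: multiply each input by its weight factor and sum.

    In a direct FPGA implementation, each product below occupies one
    full hardware multiplier, which is the circuit-scale problem the
    shift-addition scheme described here addresses.
    """
    acc = 0
    for x, w in zip(inputs, weights):
        acc += x * w  # one full multiplication per input
    return acc
```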
Therefore, a problem to be solved is to implement a neural network with a small-scale circuit by simplifying the multiplication of input data by weight data.
The present invention has been made in view of such circumstances. As an example, a neural network circuit according to the present invention is configured from: a means for multiplying input data by a rounded value of the mantissa part of weight data; a means for shifting the multiplication result by the number of bits of the rounded value; a means for adding the shifted result to the original input data; and a means for shifting the addition result by the number of bits of the exponent part of the weight.
According to the present invention, it is possible to implement a neural network using a small-scale circuit by simplifying the multiplication of the input data by weight data.
Hereinafter, an embodiment of the present invention will be described with reference to the drawings.
For example, when the output of the neuron 1 at the top of the output layer corresponds to the character “A”, that neuron is configured to output a larger value than the other neurons 1 of the output layer. Likewise, when the output of the second neuron 1 from the top corresponds to the character “B”, it is configured to output a larger value than the other neurons of the output layer. In this manner, an identification result of the character is obtained depending on which neuron outputs the maximum value.
In order to process the neural network circuit described above, the input data must be multiplied by a weight factor at every neuron, which requires a correspondingly large number of multiplications.
Therefore, one purpose of the present embodiment is to implement a neural network circuit with a small-scale circuit while maintaining performance, by performing such multiplication with a combination of addition and bit-shift operations.
Next, in the above Formula a, R is a rounded value of a weight, and is an integer in the range 0 ≤ R < 2^m. Further, K is a bit shift number corresponding to the exponent of the weight and is an integer.
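Formula a itself is not reproduced in this text. From the parameters defined above and the tables 401 to 403 described later, its form can be inferred (as an assumption, not a quotation of the patent formula) as follows; the second identity shows why multiplying by Wa needs only a small m-bit multiplier, two bit shifts, and one addition.

```latex
% Inferred form of Formula a (assumption based on tables 401-403):
W_a = \left(1 + R \cdot 2^{-m}\right) \cdot 2^{K},
\qquad 0 \le R < 2^m,\ K \in \mathbb{Z}

% Multiplication by W_a then reduces to shift-and-add:
x \cdot W_a = \left( (x \cdot R) \cdot 2^{-m} + x \right) \cdot 2^{K}
```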
The shift addition 101 implements this operation, and is configured as follows.
A multiplier 203 is a means for multiplying an output of the sign conversion 202 by the mantissa rounded value, which is an output from the weight factor storage unit 201.
A mantissa shift 204 is a means for bit-shifting an output of the multiplier 203 according to a mantissa shift number which is an output from the weight factor storage unit 201. If the mantissa shift number is positive, the shift is performed to the left; if it is negative, the shift is performed to the right.
An adder 205 is a means for adding an output of the mantissa shift 204 and an output of the sign conversion 202.
An exponent shift 206 is a means for bit-shifting an output of the adder 205 according to an exponent shift number which is an output from the weight factor storage unit 201. If the exponent shift number is positive, the shift is performed to the left; if it is negative, the shift is performed to the right.
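A bit-accurate software model of this datapath may help; the following is a sketch only, assuming fixed-point integer data, a stored mantissa shift number equal to −m, and illustrative names that do not appear in the patent.

```python
def shift_add_term(x, sign_bit, R, mantissa_shift, exponent_shift):
    """Software model of the shift addition 101 (elements 202-206).

    x              : fixed-point input data as a Python int
    sign_bit       : S, 0 for a positive weight, 1 for a negative weight
    R              : mantissa rounded value, 0 <= R < 2**m
    mantissa_shift : assumed to be -m, shifting the product right by m bits
    exponent_shift : K, the exponent shift number
    """
    def bit_shift(v, n):
        # Positive shift number -> left shift; negative -> right shift,
        # matching the shifters described above.
        return v << n if n >= 0 else v >> -n

    s = -x if sign_bit else x             # sign conversion 202
    p = s * R                             # multiplier 203 (small m-bit multiplier)
    p = bit_shift(p, mantissa_shift)      # mantissa shift 204
    a = p + s                             # adder 205: add the sign-converted input
    return bit_shift(a, exponent_shift)   # exponent shift 206

# Example: x = 100, S = 0, R = 3, m = 2, K = -1 models the weight
# (1 + 3/4) * 2**-1 = 0.875; the result 87 approximates 100 * 0.875.
print(shift_add_term(100, 0, 3, -2, -1))  # -> 87
```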
The weight factor according to the above Formula a is reliably applied from the time of the first learning that obtains the weight factor of each neuron in the neural network.
Although the present embodiment is directed to fixed-point operation, an example is illustrated in which the weight factor used for the shift addition processing is easily obtained from a weight factor in the floating-point format. This is advantageous in a case where, for example, the learning that obtains the weight factors is performed by floating-point operations on a computer, and the obtained weight factors are then converted into the weight factors of the present invention so that the neural network can be implemented by a small-scale logic circuit.
In the weight factor storage unit 201, the sign bit S is the same as the sign bit of the weight factor storage unit 301 in the floating-point format.
An exponent shift number K is generated by an exponent conversion 302 based on the exponent part data of the weight factor storage unit 301. Specifically, the exponent value stored in the floating-point format is the true exponent with 127 added as an offset. Therefore, the value obtained by subtracting 127 is set as the exponent shift number, in 2's complement notation, by the exponent conversion 302.
The mantissa rounded value R is generated by a mantissa conversion 303 based on mantissa part data of the weight factor storage unit 301. Specifically, the upper m bits of the mantissa part data are used as the mantissa rounded value.
The mantissa shift number m is generated by the mantissa conversion 303 based on the mantissa part data of the weight factor storage unit 301. Specifically, the number of bits of the mantissa rounded value R is set as m.
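A minimal sketch of the conversions 302 and 303, assuming the weight is stored in IEEE 754 single-precision format (sign bit, 8-bit exponent with offset 127, 23-bit mantissa); the function name is hypothetical.

```python
import struct

def float_weight_to_shift_add(w, m=2):
    """Sketch of conversions 302/303: IEEE 754 single-precision -> (S, K, R).

    S : sign bit, copied unchanged from the floating-point weight
    K : exponent shift number = stored exponent - 127 (offset removal, 302)
    R : mantissa rounded value = upper m bits of the 23-bit mantissa (303)
    """
    bits = struct.unpack(">I", struct.pack(">f", w))[0]
    S = bits >> 31                      # sign bit
    K = ((bits >> 23) & 0xFF) - 127     # exponent conversion 302
    R = (bits & 0x7FFFFF) >> (23 - m)   # mantissa conversion 303: upper m bits
    return S, K, R
```

For example, w = 0.875 (that is, 1.75 × 2^−1) yields S = 0, K = −1, and R = 3, and the inferred Formula a reproduces (1 + 3/4) × 2^−1 = 0.875 exactly.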
First, a table 401 illustrates the values of the weight factor Wa in the case where the weight range W is from 2 to 1 and the mantissa shift number is m = 2, together with Formula 1 used to obtain the values. According to Formula 1, R can take the four integer values from 0 to 3, and thus the weight factor Wa takes the four values from 1.0 to 1.75 in increments of 0.25.
Next, a table 402 illustrates the values of the weight factor Wa in the case where the weight range W is from 1.0 to 0.5 and the mantissa shift number is m = 2, together with Formula 2 used to obtain the values. Formula 2 is obtained by multiplying Formula 1 by 2^−1, that is, by setting the exponent shift number K = −1.
Next, a table 403 illustrates the values of the weight factor Wa in the case where the weight range W is from 0.5 to 0.25 and the mantissa shift number is m = 2, together with Formula 3 used to obtain the values. Formula 3 is obtained by multiplying Formula 1 by 2^−2, that is, by setting the exponent shift number K = −2.
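Under the inferred form of Formula a given earlier, a few lines of Python reproduce these table values as a consistency check (the table numbers and ranges are from the text; the formula remains an assumption).

```python
# Reproduce tables 401-403 under the inferred Formula a:
# Wa = (1 + R * 2**-m) * 2**K, with m = 2.
m = 2
for K, label in [(0,  "table 401, W from 2 to 1"),
                 (-1, "table 402, W from 1.0 to 0.5"),
                 (-2, "table 403, W from 0.5 to 0.25")]:
    print(label, [(1 + R * 2**-m) * 2**K for R in range(2**m)])
# table 401 -> [1.0, 1.25, 1.5, 1.75]
# table 402 -> [0.5, 0.625, 0.75, 0.875]
# table 403 -> [0.25, 0.3125, 0.375, 0.4375]
```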
As described above, a feature of the present embodiment is that, based on the weight factor value obtained by Formula 1, the bit shift is performed according to the power-of-two range that contains the weight factor, so that a weight value close to 0 is not rounded to 0.
Next, in the above Formula b, R is a rounded value of a weight, and is an integer in the range 0 ≤ R < 2^m.
Further, K is a bit shift number corresponding to the exponent of the weight and is an integer. A shift addition 101 implements the operation of the above Formula b.
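Formula b is likewise not reproduced in this text. Because the datapath of this embodiment omits the adder stage of the first embodiment, its form can be inferred (again an assumption, not a quotation) as:

```latex
% Inferred form of Formula b (assumption based on the datapath 701-704,
% which has no adder stage):
W_b = R \cdot 2^{-m} \cdot 2^{K},
\qquad 0 \le R < 2^m,\ K \in \mathbb{Z}
```

The components that realize this operation are described below.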
A sign conversion 701 is a function of converting the input data into either positive or negative data. The input data is output directly if the sign bit is 0, and is multiplied by −1 before being output if the sign bit is 1.
A multiplier 702 is a means for multiplying an output of the sign conversion 701 by the mantissa rounded value, which is an output from the weight factor storage unit 201.
A mantissa shift 703 is a means for bit-shifting an output of the multiplier 702 according to a mantissa shift number which is an output from the weight factor storage unit 201. If the mantissa shift number is positive, the shift is performed to the left; if it is negative, the shift is performed to the right.
An exponent shift 704 is a means for bit-shifting an output of the mantissa shift 703 according to an exponent shift number. If the exponent shift number is positive, the shift is performed to the left; if it is negative, the shift is performed to the right.
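The corresponding software model differs from the first embodiment's sketch only in the missing adder stage (same assumptions: fixed-point integers, a stored mantissa shift number of −m, illustrative names).

```python
def shift_term(x, sign_bit, R, mantissa_shift, exponent_shift):
    """Software model of the second embodiment's datapath (701-704).

    Identical to the first embodiment's model except that there is no
    adder stage, so no term for the original input is added back in.
    """
    def bit_shift(v, n):
        return v << n if n >= 0 else v >> -n

    s = -x if sign_bit else x             # sign conversion 701
    p = s * R                             # multiplier 702
    p = bit_shift(p, mantissa_shift)      # mantissa shift 703
    return bit_shift(p, exponent_shift)   # exponent shift 704
```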
The weight factor according to the above Formula b is reliably applied from the time of the first learning that obtains the weight factor of each neuron in the neural network.
In the present embodiment, the formula to be calculated is determined depending on which power-of-two range contains the weight value. A specific numerical example will be described hereinafter.
First, a table 801 illustrates the values of the weight factor Wb in the case where the weight range W is from 1 to 0 and the mantissa shift number is m = 2, together with Formula 1 used to obtain the values. According to Formula 1, R can take the four integer values from 0 to 3, and thus the weight factor Wb takes four values between 1 and 0 in increments of 0.25.
Next, a table 802 illustrates the values of the weight factor Wb in the case where the weight range W is from 0.5 to 0 and the mantissa shift number is m = 2, together with Formula 2 used to obtain the values. Formula 2 is obtained by multiplying Formula 1 by 2^−1, that is, by setting the exponent shift number K = −1.
Next, a table 803 illustrates the values of the weight factor Wb in the case where the weight range W is from 0.25 to 0 and the mantissa shift number is m = 2, together with Formula 3 used to obtain the values. Formula 3 is obtained by multiplying Formula 1 by 2^−2, that is, by setting the exponent shift number K = −2.
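The same kind of consistency check can be run for the inferred Formula b; its exact endpoints depend on the precise form of Formula b, which is not reproduced in this text, so the values below are illustrative only.

```python
# Enumerate the inferred Formula b: Wb = R * 2**-m * 2**K, with m = 2.
m = 2
for K in (0, -1, -2):  # corresponding to tables 801, 802, 803
    print(K, [R * 2**-m * 2**K for R in range(2**m)])
# K =  0 -> [0.0, 0.25, 0.5, 0.75]
# K = -1 -> [0.0, 0.125, 0.25, 0.375]
# K = -2 -> [0.0, 0.0625, 0.125, 0.1875]
```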
As described above, a feature of the present embodiment is that, based on the weight factor obtained by Formula 1, the bit shift is performed according to the power-of-two range that contains the weight factor, so that a weight value close to 0 is not rounded to 0.
According to the respective embodiments described above, it is possible to keep the number of weight factor values in each range from 2^n to 2^(n−1) constant, to prevent a small weight factor value from being rounded to 0, and to implement the neural network with a small-scale circuit while maintaining performance. In addition, the circuit system is generalized, so it is easy to adjust the balance between performance and circuit scale in accordance with the application target of the deep neural network (DNN).
Although the embodiments of the present invention have been described above, the present invention is not limited to the above-described embodiments and includes various modifications. For example, a part of each embodiment can be added to, converted, or deleted within a range where the effects of the present invention are exhibited. In addition, a part of one embodiment can be replaced with a part of another.
That is, the above-described embodiments have been described in detail in order to facilitate the understanding of the present invention, and the present invention is not necessarily limited to embodiments including all of the configurations described above.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2017/000367 | 1/10/2017 | WO |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2018/131059 | 7/19/2018 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
6243490 | Mita | Jun 2001 | B1 |
9268529 | Willson, Jr. | Feb 2016 | B2 |
20170103321 | Henry | Apr 2017 | A1 |
20180189640 | Henry | Jul 2018 | A1 |
20180189651 | Henry | Jul 2018 | A1 |
Number | Date | Country |
---|---|---|
H4-051384 | Feb 1992 | JP |
H6-259585 | Sep 1994 | JP |
2006-155102 | Jun 2006 | JP |
Entry |
---|
International Search Report with English translation and Written Opinion issued in corresponding application No. PCT/JP2017/000367 dated Mar. 21, 2017. |
Number | Date | Country | |
---|---|---|---|
20190325311 A1 | Oct 2019 | US |