The invention relates to an operation method for an artificial neural network, and particularly relates to an operation method for an artificial neural network capable of improving operation efficiency and extending a memory service life.
In a conventional technical field, it is a common practice to perform operations of an artificial neural network through convolution operations. In order to reduce an operation time of the artificial neural network, a fine-tuned artificial neural network is widely used to avoid consuming too much time in the training phase.
As the scale of the artificial neural network increases, the demand for memory capacity used to store the information required by the operation process also increases. Therefore, how to reduce the access latency and the power consumption of the memory during the operation process is an important issue for those skilled in the art.
A system architecture diagram of a conventional artificial neural network is shown in the accompanying drawings.
The invention is directed to a plurality of operation methods for an artificial neural network, which are capable of reducing an operation amount, reducing a required memory space and providing a wear-leveling function.
The invention provides an operation method for an artificial neural network including: receiving input information and kernel information; dividing the input information into a plurality of sub-input information, and expanding the kernel information to generate expanded kernel information; performing a fast Fourier transform (FFT) on the sub-input information and the expanded kernel information to respectively generate a plurality of frequency domain sub-input information and frequency domain expanded kernel information; respectively performing a multiplying operation on the frequency domain expanded kernel information and the frequency domain sub-input information to respectively generate a plurality of sub-feature maps; and performing an inverse FFT on the sub-feature maps to provide a plurality of converted sub-feature maps for executing a feature extraction operation of the artificial neural network.
The invention provides another operation method for an artificial neural network including: receiving input information and kernel information; performing a multiplying operation on the input information and the kernel information to generate a feature map; writing the kernel information into a first block of a non-volatile memory according to a first address; performing an adjustment operation on the kernel information; and, in the adjustment operation, receiving adjustment information, changing a plurality of most significant bits of the kernel information in the first address of the non-volatile memory according to the adjustment information, and keeping a plurality of least significant bits of the kernel information in the first address unchanged.
Based on the above description, in the operation method of the artificial neural network of the embodiment of the invention, by dividing the input information into a plurality of sub-input information, each sub-input information and the expanded kernel information are multiplied in a frequency domain to generate a plurality of sub-feature maps. By making a size of the sub-input information close to a size of the kernel information, and by operating in the frequency domain, an operation amount and a required memory space may be effectively reduced, which improves operation efficiency. In the embodiment of the invention, in an updating operation of the kernel information, by only updating a part of the bits, the access operation of the non-volatile memory may achieve a wear-leveling function, so as to increase the service life of the memory.
The accompanying drawings are included to provide a further understanding of the invention, and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
Referring to the figure, in step S210, input information INF and kernel information KINF are received. In step S220, the input information INF is divided into a plurality of sub-input information SINF11-SINF33, and an expansion operation is performed on the kernel information KINF to generate expanded kernel information EKINF.
In detail, a size of each of the sub-input information SINF11-SINF33 may be set to be closest to a size of the kernel information KINF, and the size of each of the sub-input information SINF11-SINF33 needs to be set to 2^N×2^N, where N is a positive integer. In the embodiment, the size of each of the sub-input information SINF11-SINF33 is, for example, 8×8, i.e., N=3.
It should be noted that a combined size of the sub-input information SINF11-SINF33 may be slightly larger than or equal to the size of the input information INF, so that the input information INF may be effectively divided into a plurality of 2^N×2^N sub-input information SINF11-SINF33. In the embodiment, if the size of the input information INF is less than 24×24, the insufficient part may be filled with 0, so that the input information INF may be divided into 3×3 (=9) sub-input information SINF11-SINF33 of 8×8.
In addition, regarding the expansion operation of the kernel information KINF, a plurality of elements of 0 are added to the kernel information KINF to generate the expanded kernel information EKINF, and a size of the expanded kernel information EKINF and the size of each of the sub-input information SINF11-SINF33 are the same. In the embodiment, the size of the kernel information KINF is, for example, 5×5, and it is required to add 39 elements of 0 to generate the expanded kernel information EKINF with the size of 8×8.
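As an illustrative, non-limiting sketch of the division and expansion operations described above, the following NumPy code pads a 24×24 input and cuts it into 8×8 sub-input blocks, and pads a 5×5 kernel with 39 zeros to reach 8×8. The function names `split_input` and `expand_kernel` are illustrative only and do not appear in the specification.

```python
import numpy as np

def split_input(inf, tile=8):
    """Zero-pad the input to a multiple of the tile size, then cut it
    into tile x tile sub-input blocks (SINF11-SINF33 in the text)."""
    h, w = inf.shape
    ph = -h % tile  # rows of zero padding needed
    pw = -w % tile  # columns of zero padding needed
    padded = np.pad(inf, ((0, ph), (0, pw)))
    return [padded[r:r + tile, c:c + tile]
            for r in range(0, padded.shape[0], tile)
            for c in range(0, padded.shape[1], tile)]

def expand_kernel(kinf, tile=8):
    """Append zero elements to the kernel (e.g. 5x5 -> 8x8, which adds
    64 - 25 = 39 zeros) so that it matches the sub-input size."""
    kh, kw = kinf.shape
    return np.pad(kinf, ((0, tile - kh), (0, tile - kw)))

# Example: a 24x24 input and a 5x5 kernel give 9 sub-inputs of 8x8.
subs = split_input(np.ones((24, 24)))
ek = expand_kernel(np.ones((5, 5)))
assert len(subs) == 9 and ek.shape == (8, 8)
```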
Then, in step S230, a fast Fourier transform (FFT) is respectively performed on the sub-input information SINF11-SINF33 and the expanded kernel information EKINF to respectively generate a plurality of frequency domain sub-input information SINF11′-SINF33′ and frequency domain expanded kernel information EKINF′.
In step S240, a multiplying operation MUX is respectively performed on the frequency domain expanded kernel information EKINF′ and the frequency domain sub-input information SINF11′-SINF33′ to respectively generate a plurality of sub-feature maps FM11-FM33. The multiplying operation MUX multiplies a plurality of first elements in the frequency domain expanded kernel information EKINF′ with a plurality of second elements at the corresponding positions in each of the frequency domain sub-input information SINF11′-SINF33′ to generate each of the sub-feature maps FM11-FM33.
Finally, in step S250, an inverse FFT (IFFT) is performed on the sub-feature maps FM11-FM33 to generate a converted feature map CFM composed of a plurality of converted sub-feature maps. The converted feature map CFM may be provided to execute a feature extraction operation of the artificial neural network 200.
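The following is a minimal NumPy sketch of steps S230-S250 as described, reusing `subs` and `ek` from the sketch above. Note that with same-sized 8×8 blocks, the element-wise product in the frequency domain realizes a circular convolution of each sub-input with the kernel; the function name `fft_feature_maps` is illustrative only.

```python
import numpy as np

def fft_feature_maps(subs, ek):
    """Steps S230-S250: transform each sub-input and the expanded
    kernel to the frequency domain, multiply element-wise, and
    transform back to obtain the converted sub-feature maps."""
    ekf = np.fft.fft2(ek)                       # EKINF'
    cfms = []
    for s in subs:
        sf = np.fft.fft2(s)                     # SINFxy'
        fm = sf * ekf                           # element-wise product (MUX)
        cfms.append(np.real(np.fft.ifft2(fm)))  # converted sub-feature map
    return cfms

cfms = fft_feature_maps(subs, ek)  # building blocks of the map CFM
```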
It should be noted that in the embodiment, the kernel information KINF does not need to be expanded to the same size as the input information INF, and under the condition that the input information INF is divided into multiple small-sized sub-input information SINF11-SINF33, an operation amount of the multiplication operation may be effectively reduced, and accuracy of the multiplication operation may also be effectively improved.
It should be noted that when the FFT is performed on the expanded kernel information EKINF and the sub-input information SINF11-SINF33, each of the generated frequency domain expanded kernel information EKINF′ and the frequency domain sub-input information SINF11′-SINF33′ has real part information and imaginary part information. In the embodiment, the multiplying operation MUX is therefore performed as an element-wise complex multiplication over the real part information and the imaginary part information.
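As a short sketch of this element-wise complex multiplication, assuming the real and imaginary parts are stored as separate arrays (a storage layout assumed here for illustration), each output element follows (ar + j·ai)(br + j·bi) = (ar·br − ai·bi) + j(ar·bi + ai·br):

```python
import numpy as np

def complex_elementwise_mul(ar, ai, br, bi):
    """Element-wise complex product from separately stored real and
    imaginary planes: (ar + j*ai) * (br + j*bi)."""
    real = ar * br - ai * bi
    imag = ar * bi + ai * br
    return real, imag
```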
It should be noted that, in the embodiment of the invention, the information required for the operation process of the artificial neural network 100 may all be stored in a non-volatile memory, such as a phase-change memory (PCM). Compared with a dynamic random access memory, the PCM has advantages of small size and low standby power consumption, which may optimize the performance of the artificial neural network 100.
After the feature extraction operation, a linear rectification operation RELU may further be applied to the operation result. The linear rectification operation RELU may be executed through a function R(X) that outputs X when X is greater than 0 and outputs 0 otherwise.
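A one-line sketch of the function R(X) just described:

```python
import numpy as np

def relu(x):
    """R(X): pass positive values through, clamp negative values to 0."""
    return np.maximum(0, x)
```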
In the embodiment of the invention, since the non-volatile memory is used as a storage medium for operation information, in order to maintain a service life of the non-volatile memory, the invention provides an embodiment that may implement a wear-leveling function for the access of the non-volatile memory. Referring to the figure, in step S610, input information and kernel information are received, and a multiplying operation is performed on the input information and the kernel information to generate a feature map.
In step S620, the kernel information is written into a first block of a non-volatile memory according to a first address. In step S630, an adjustment operation is performed on the kernel information. In the adjustment operation, through step S640, adjustment information is received to change a plurality of most significant bits of the kernel information in the first address of the non-volatile memory according to the adjustment information, and keep a plurality of least significant bits of the kernel information unchanged.
In the embodiment, in each adjustment operation of the kernel information, by updating only the most significant bits of the kernel information, the frequency with which memory cells of the non-volatile memory are written may be effectively reduced. Because the least significant bits of the kernel information have little effect on the operation result of the artificial neural network, keeping the least significant bits unchanged does not affect the operation result of the artificial neural network.
In the embodiment, a bit number of the least significant bits may be 8 bits or 16 bits.
In the adjustment operation, only certain bits among the most significant bits of the kernel information, such as the bits BA1 and BA2, are changed according to the adjustment information, while the remaining bits are kept unchanged. The number of the bits BA1 and BA2 that need to be changed may be one or plural, which is not limited by the invention.
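The following is a minimal sketch of this MSB-only update, assuming (for illustration only) that each piece of kernel information is a 32-bit word and that the lowest 8 bits are preserved; the function name `adjust_word` and the parameters are hypothetical:

```python
def adjust_word(stored, adjustment, lsb_bits=8, width=32):
    """Merge the most significant bits taken from the adjustment value
    with the untouched least significant bits already in the cell."""
    lsb_mask = (1 << lsb_bits) - 1
    msb_mask = ((1 << width) - 1) & ~lsb_mask
    return (adjustment & msb_mask) | (stored & lsb_mask)

# Example: only the upper 24 bits of a 32-bit word are rewritten.
old = 0x12345678
new = adjust_word(old, 0xAABBCCDD, lsb_bits=8)
assert new == 0xAABBCC78  # the lowest 8 bits (0x78) are kept
```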
Referring to
Referring to
In the embodiment, the number of changes to the kernel information may be recorded, and when the number of changes to the kernel information is greater than a predetermined threshold, the first address a may be shifted by an offset b, and the kernel information is integrally written into the first block BLK1 according to a new address a+b.
In addition, based on the above description, when the number of changes to the kernel information is again greater than the predetermined threshold at the new address, the address a may be shifted by a shift value (8 bits) to generate a new address a′, and the kernel information is integrally written into a second block BLK2 according to the new address a′.
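A minimal sketch of this counting-and-shifting scheme, assuming a hypothetical non-volatile memory interface `nvm` with `write` (integral rewrite) and `write_msb` (MSB-only update) operations; all names and the class structure are illustrative, not part of the specification:

```python
class KernelStore:
    """Track adjustments and shift the write address once a threshold
    is exceeded, spreading wear across the memory (names hypothetical)."""

    def __init__(self, base_addr, threshold, offset):
        self.addr = base_addr        # first address a
        self.threshold = threshold   # predetermined threshold
        self.offset = offset         # shift value b
        self.changes = 0

    def adjust(self, nvm, kernel_bytes):
        self.changes += 1
        if self.changes > self.threshold:
            # Shift to a new address and rewrite the kernel integrally.
            self.addr += self.offset
            self.changes = 0
            nvm.write(self.addr, kernel_bytes)
        else:
            # Otherwise only the most significant bits are updated in
            # place (see the masking sketch above).
            nvm.write_msb(self.addr, kernel_bytes)
```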
From the above description, it is known that the kernel information may be written into different regions of the non-volatile memory as the number of adjustments increases, so as to effectively implement the wear-leveling function.
Referring to the figure, the operation system of the artificial neural network includes a processor 1110, a non-volatile memory 1130, a firmware program 1140 and a storage device 1150.
The firmware program 1140 has a reading function 1141 and a programming function 1142. The firmware program 1140 is provided to the processor 1110 for execution to implement the reading function 1141 or the programming function 1142 on the non-volatile memory 1130.
In the embodiment, the storage device 1150 may be any form of storage device, such as a conventional hard disk drive or a solid state drive, which is not limited by the invention. The storage device 1150 may provide a supply medium for the input information of the artificial neural network, and may be used to store static or dynamic information required for the artificial neural network operations.
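A minimal sketch of the described roles of the firmware program 1140, assuming a hypothetical non-volatile memory interface; the method names `read` and `program` are illustrative stand-ins for the reading function 1141 and the programming function 1142:

```python
class Firmware:
    """Hypothetical sketch of the firmware program 1140 executed by the
    processor 1110 to access the non-volatile memory 1130 (e.g., PCM)."""

    def __init__(self, nvm):
        self.nvm = nvm  # non-volatile memory 1130

    def read(self, addr, length):   # reading function 1141
        return self.nvm.read(addr, length)

    def program(self, addr, data):  # programming function 1142
        self.nvm.write(addr, data)
```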
In summary, in the invention, by dividing the input information into a plurality of sub-input information, each sub-input information and the expanded kernel information are multiplied in a frequency domain to generate a plurality of sub-feature maps, which effectively reduces an operation amount and a demand for memory capacity. In addition, the embodiment of the invention provides the non-volatile memory as the access medium for the kernel information. The embodiment of the invention further provides an adjustment method of the kernel information, which may effectively achieve the wear-leveling function of the non-volatile memory and improve the service life of the system.
This application claims the priority benefit of U.S. provisional application Ser. No. 62/939,688, filed on Nov. 25, 2019, and U.S. provisional application Ser. No. 62/939,689, filed on Nov. 25, 2019. The entirety of each of the above-mentioned patent applications is hereby incorporated by reference herein and made a part of this specification.