Multiplier-Less Sparse Deep Neural Network

Information

  • Patent Application
  • Publication Number
    20230081531
  • Date Filed
    March 28, 2022
  • Date Published
    March 16, 2023
Abstract
Deep neural networks (DNNs) have been used for various applications to provide inference, regression, classification, and prediction. Although the high potential of DNNs has been successfully demonstrated in the literature, most DNNs require high computational complexity and high-power operation for real-time processing due to a large number of multiply-accumulate (MAC) operations. The present invention provides a way to realize a hardware-friendly MAC-less DNN framework with round-accumulate (RAC) operations based on power-of-two (PoT) weights. The method and system are based on the realization that rounding-aware training for powers-of-two expansion can eliminate the need for multiplier components from the system without causing any performance loss. In addition, the method and system provide a way to reduce the number of PoT weights based on a knowledge distillation using a progressive compression of an over-parameterized DNN. It can realize high compression, leading to power-efficient inference for resource-limited hardware implementation such as microprocessors and field-programmable gate arrays. A compacting rank is optimized with an additional DNN model in a reinforcement learning framework. A rounding granularity is also successively decremented, and mixed-order PoT weights are obtained for low-power processing. Another student model is also designed in parallel for a knowledge distillation to find a Pareto-optimal trade-off between performance and complexity.
Description
FIELD OF THE INVENTION

The present invention is related to a machine learning and inference system based on an artificial neural network, and more particularly to a multiplier-less method and system of a deep neural network with round-accumulate operation and weight pruning.


BACKGROUND & PRIOR ART

Machine learning based on a deep neural network (DNN) has been applied to various problems including, but not limited to, image classification, speech recognition, computational sensing, data analysis, feature extraction, signal processing, and artificial intelligence. There are many DNN architectures such as multi-layer perceptron, recurrent network, convolutional network, transformer network, attention network, long short-term memory, generative adversarial network, auto-encoder, U-shaped network, residual network, reversible network, loopy network, clique network, and variants thereof. Although the high performance of DNNs has been demonstrated in many practical systems, it is still a challenge to train and deploy a DNN on resource-limited hardware with small memory and processing power for real-time applications. This is partly because a typical DNN has a large number of trainable parameters such as weights and biases, with many artificial neurons across deep hidden layers, which also require a large number of arithmetic multiplications and floating-point operations.


In the prior art, some memory-efficient DNN methods and systems were proposed to resolve part of this issue. One type of approach includes pruning, sparsification, or compaction of weights and nodes. Another type of approach is based on quantization of weights and activations. Both are known as network distillation techniques. Weight pruning forces some DNN weights to zero, and the corresponding edges are eliminated to realize more compact DNN architectures. For example, L1/L2 regularization is used to obtain sparse weights that compress the network size. Recently, another simple approach for weight pruning was proposed based on the lottery ticket hypothesis (LTH). LTH pruning uses a trained DNN to create a pruning mask, and rewinds the weights of unpruned edges to their values at early epochs. It was shown that LTH pruning can provide better performance with more compact DNNs.


Weight quantization compresses the DNN weights by converting high-precision values into low-precision values. For example, the DNN is first trained with double-precision floating-point weights, which typically require 64 bits to represent, and then the weights are quantized to half precision of 16 bits or integer values of 8 bits. This quantization can reduce the DNN memory requirement by 4 or 8 fold for deployment, leading to low-power systems. Even weight binarization or ternarization has been developed to realize low-memory DNNs. Binary and ternary DNNs show reasonable performance for some specific datasets, while they show a performance loss on most real-world datasets. Other quantization techniques include rounding, vector quantization, codebook quantization, stochastic quantization, random rounding, adaptive codebooks, incremental quantization, additive powers-of-two, and power-of-two quantization. Besides weight quantization, activations and gradients are quantized in some systems. In order to reduce the performance degradation due to such network distillation, there are many training methods based on quantization/pruning-aware training, besides dynamic quantization.


However, most distillation techniques generally suffer from significant performance degradation due to the resource limitation. In addition, there is no integrated way to realize a sparse and quantized DNN without losing performance. Accordingly, there is a need to develop a method and a system for a low-power and low-complexity deployment of a DNN applicable to high-speed real-time processing.


SUMMARY OF THE INVENTION

A deep neural network (DNN) has been recently investigated for various applications including telecommunications, speech recognition, image processing, and so on. Although the high potential of DNNs has been successfully demonstrated in many applications, a DNN generally requires high computational complexity and high-power operation for real-time processing due to a large number of multiply-accumulate (MAC) operations. This invention provides a hardware-friendly DNN framework with round-accumulate (RAC) operations to realize multiplier-less operation based on powers-of-two quantization or a variant thereof. We demonstrate that quantization-aware training (QAT) for additive powers-of-two (APoT) weights can eliminate multipliers without causing visible performance loss. In addition, we introduce weight pruning based on a progressive version of the lottery ticket hypothesis (LTH) to sparsify the over-parameterized DNN weights for further complexity reduction. It is demonstrated that the invention can prune most weights, leading to power-efficient inference for resource-limited hardware implementation such as microprocessors or field-programmable gate arrays (FPGAs).


The method and system are based on the realization that rounding-aware training for powers-of-two expansion can eliminate the need for multiplier components from the system without causing significant performance loss. In addition, the method and system provide a way to reduce the number of PoT weights based on a knowledge distillation using a progressive compression of an over-parameterized DNN. It can realize high compression, leading to power-efficient inference for resource-limited hardware implementation. A compacting rank is optimized with an additional DNN model in a reinforcement learning framework. A rounding granularity is also successively decremented, and mixed-order PoT weights are obtained for low-power processing. Another student model is also designed in parallel for a knowledge distillation to find a Pareto-optimal trade-off between performance and complexity.


The invention provides a way to realize a low-complexity, low-power DNN design that does not require multipliers. Our method uses progressive updates of LTH pruning and quantization-aware training. Hyperparameters are automatically optimized with Bayesian optimization. Multiple solutions are obtained by multi-seed training to find a Pareto-optimal trade-off between performance and complexity. The pruning ranking is optimized with an additional DNN model. The quantization granularity is also successively decremented, and mixed-order APoT weights are obtained for low-power processing. Another student model derived from a parent model is also designed in parallel for knowledge distillation. Embodiments include digital pre-distortion, MIMO equalization, nonlinear turbo equalization, and so on.


Some embodiments of the present invention provide a hardware-friendly DNN framework for nonlinear equalization. Our DNN realizes multiplier-less operation based on powers-of-two quantization. We demonstrate that quantization-aware training (QAT) for additive powers-of-two (APoT) weights can eliminate multipliers without causing visible performance loss. In addition, we introduce weight pruning based on the lottery ticket hypothesis (LTH) to sparsify the over-parameterized DNN weights for further complexity reduction. We verify that progressive LTH pruning can prune most of the weights, yielding power-efficient inference in real-time processing.


In order to reduce the computational complexity of DNN computation, we integrate APoT quantization into a DeepShift framework. In the original DeepShift, DNN weights are quantized into a signed PoT as w = ±2^u,


where u is an integer to train. Note that the PoT weights can eliminate multiplier operations from DNN equalizers as they can be realized with bit shifting for fixed-point precision or an addition operation for floating-point precision. The present invention further improves it with APoT weights: w = ±(2^u ± 2^v), where we use another trainable integer v < u. This requires an additional summation but no multiplication at all. In our QAT updating, we used a straight-through rounding to find u and v after each epoch iteration once the pre-training phase is done. For some embodiments, higher-order additive PoT is used to further reduce quantization errors.
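As an illustrative, non-limiting sketch of this rounding, the following Python snippet greedily rounds a real-valued weight to the second-order APoT form ±(2^u ± 2^v) with v < u; the function name, the exponent range, and the greedy strategy are assumptions made here for illustration and do not represent the trained straight-through procedure itself.

import numpy as np

def quantize_apot(w, min_exp=-8):
    """Greedily round w to sign*(2^u +/- 2^v), v < u (illustrative sketch only)."""
    if w == 0.0:
        return 0.0, None, None
    sign, mag = np.sign(w), abs(w)
    u = int(np.clip(np.round(np.log2(mag)), min_exp, 0))        # dominant PoT term
    residual = mag - 2.0 ** u
    if abs(residual) < 2.0 ** min_exp:
        return sign * 2.0 ** u, u, None                          # plain PoT is enough
    v = int(np.clip(np.round(np.log2(abs(residual))), min_exp, u - 1))
    return sign * (2.0 ** u + np.sign(residual) * 2.0 ** v), u, v

# Example: 0.3 is rounded to 2^-2 + 2^-4 = 0.3125
print(quantize_apot(0.3))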


Even though our DNN equalizer does not require any multipliers, it still needs a relatively large number of addition operations. We introduce a progressive version of the LTH pruning method to realize low-power sparse DNN implementation. It is known that an over-parameterized DNN can be significantly sparsified without losing performance and that sparsified DNN can often outperform the dense DNN. In the progressive LTH pruning according to the invention, we first train the dense DNN via QAT for APoT quantization, starting from random initial weights. We then prune a small percentage of the edges based on the trained weights. We re-train the pruned DNN after rewinding the weights to the early-epoch weights for non-pruned edges. Rewinding, QAT updating, and pruning are repeated with a progressive increase of the pruning percentages. We use late rewinding of the first-epoch weights.
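By way of a non-limiting example, the pruning mask used in each round may be generated from the magnitudes of the trained weights, as in the following sketch; the per-tensor thresholding and the helper name are illustrative assumptions rather than the claimed procedure.

import torch

def magnitude_pruning_mask(weights, prune_fraction):
    """Return a {0, 1} mask that zeros the smallest-magnitude fraction of weights."""
    flat = weights.abs().flatten()
    k = int(prune_fraction * flat.numel())
    if k == 0:
        return torch.ones_like(weights)
    threshold = flat.kthvalue(k).values           # k-th smallest magnitude
    return (weights.abs() > threshold).float()

w = torch.randn(100, 100)
mask = magnitude_pruning_mask(w, prune_fraction=0.2)
print(mask.mean())                                # roughly 0.8 of the weights survive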


Accordingly, the zero-multiplier sparse DNN can be used for many applications including high-speed optical communications employing probabilistic shaping. The APoT quantization can achieve floating point arithmetic performance without multipliers and progressive LTH pruning can eliminate 99% of the weights for power-efficient implementation in some embodiments.


According to some embodiments of the present invention, a computer-implemented method is provided for training a set of artificial neural networks. The method may be performed by one or more computing processors in association with a memory storing computer-executable programs. The method comprises the steps of: (a) initializing a set of trainable parameters of an artificial neural network, wherein the set of trainable parameters comprise a set of trainable weights and a set of trainable biases; (b) training the set of trainable parameters using a set of training data; (c) generating a pruning mask based on the trained set of trainable parameters; (d) rewinding the set of trainable parameters; (e) pruning a selected set of trainable parameters based on the pruning mask; and (f) repeating the above steps from (b) to (e) for a specified number of times to generate a set of sparse neural networks having an incremental sparsity.


Further, some embodiments of the present invention can provide a computer-implemented method for testing an artificial neural network. The method can be performed by one or more computing processors and comprises the steps of: feeding a set of testing data into a plurality of input nodes of the artificial neural network; propagating the set of testing data across the artificial neural network according to a set of pruning masks; and generating a set of output values from a plurality of output nodes of the artificial neural network.


Yet further, some embodiments provide a system deployed for an artificial neural network. The system includes at least one computing processor; at least one memory bank; at least one interface link; and at least one trained set of trainable parameters of the artificial neural network; and causes the processors to execute a training method and a testing method based on the at least one trained set of trainable parameters, comprising: (a) initializing a set of trainable parameters of an artificial neural network, wherein the set of trainable parameters comprise a set of trainable weights and a set of trainable biases; (b) training the set of trainable parameters using a set of training data; (c) generating a pruning mask based on the trained set of trainable parameters; (d) rewinding the set of trainable parameters; (e) pruning a selected set of trainable parameters based on the pruning mask; and (f) repeating the above steps from (b) to (e) for a specified number of times to generate a set of sparse neural networks having an incremental sparsity.


Accordingly, the embodiments can realize multiplier-less sparse DNN computation with no performance degradation. It can be used for ultra high-speed real-time applications requiring low-power and limited-resource deployment.





BRIEF DESCRIPTION OF THE DRAWING

The accompanying drawings, which are included to provide a further understanding of the invention, illustrate embodiments of the invention and, together with the description, explain the principle of the invention.



FIG. 1 shows an exemplar system employing a sparse deep neural network according to some embodiments;



FIG. 2 shows an exemplar signal diagram to denoise with a sparse deep neural network according to some embodiments;



FIG. 3A and FIG. 3B show an exemplar deep neural network based on forward multi-layer perceptron architecture using different loss functions according to some embodiments;



FIG. 4 shows an exemplar performance with deep neural network using different loss functions according to some embodiments;



FIG. 5 shows an exemplar performance with deep neural network using different architecture according to some embodiments;



FIG. 6 shows a multiplier-less affine transform with additive powers-of-two weights for quantized deep neural network according to some embodiments;



FIG. 7A, FIG. 7B and FIG. 7C show exemplar operations for quantized affine transforms according to some embodiments;



FIG. 8 shows an exemplar schematic of quantization-aware optimization for multiplier-less affine transform with additive powers-of-two weights for quantized deep neural network according to some embodiments;



FIG. 9 shows an exemplar performance of quantized deep neural network with different order of additive powers-of-two according to some embodiments;



FIG. 10 shows an exemplar schematic of progressive pruning for incremental sparsification according to some embodiments;



FIG. 11 shows an exemplar performance of sparse deep neural network according to some embodiments; and



FIG. 12 shows an exemplar system employing sparse quantized neural network according to some embodiments.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Various embodiments of the present invention are described hereafter with reference to the figures. It should be noted that the figures are not drawn to scale and that elements of similar structures or functions are represented by like reference numerals throughout the figures. It should also be noted that the figures are only intended to facilitate the description of specific embodiments of the invention. They are not intended as an exhaustive description of the invention or as a limitation on the scope of the invention. In addition, an aspect described in conjunction with a particular embodiment of the invention is not necessarily limited to that embodiment and can be practiced in any other embodiments of the invention.


Some embodiments of the present disclosure provide a multiplier-less deep neural network (DNN) to mitigate fiber-nonlinear distortion of shaped constellations. The novel DNN achieves an excellent performance-complexity trade-off with progressive lottery ticket hypothesis (LTH) weight pruning and additive powers-of-two (APoT) quantization. Some other embodiments include, but are not limited to, inference and prediction for digital pre-distortion, channel equalization, channel estimation, nonlinear turbo equalization, speech recognition, image processing, biosignal sensing, and so on.


DNN Equalization for Probabilistic Amplitude Shaping (PAS) QAM Systems

The optical communications system 100 under consideration is depicted in FIG. 1. The figure is an exemplar system employing a sparse deep neural network (DNN) according to some embodiments of the present invention. A transmitter side may use a distribution matcher (DM) 11 and forward error correction (FEC) coding 12 such as low-density parity-check (LDPC) or turbo codes. Eleven-channel dual-polarization quadrature-amplitude modulations (DP-QAM) 13 at 34 GBaud with 35 GHz spacing are multiplexed (MUX) 14 to be sent over fiber plants towards coherent receivers. In some embodiments, we consider N spans of dispersion-unmanaged links 15 with 80 km standard single-mode fiber (SSMF) 16. The SSMF 16 may typically have a dispersion parameter of D=17 ps/nm/km, a nonlinear factor of γ=1.2/W/km, and an attenuation of 0.2 dB/km. The span loss is compensated by Erbium-doped fiber amplifiers (EDFA) 17 with a noise figure of 5 dB. We may use digital root-raised cosine filters with 2% rolloff at both transmitter and receiver for Nyquist band-limited pulse-shaping. The receiver employs standard digital signal processing with symbol synchronization, carrier-phase recovery, dispersion compensation, and polarization recovery with 61-tap linear equalization (LE) 19 after demultiplexing (DEMUX) 18. The residual distortion is compensated by a DNN equalization 20, which generates log-likelihood ratio (LLR) values for the FEC decoder 23 followed by the inverse DM 23. The LLR values are also used to calculate loss values such as the binary cross-entropy loss 21 to train the DNN parameters. For some embodiments, the FEC decoding output is further fed back into the DNN equalization as a turbo iteration to improve the performance.


Due to fiber nonlinearity, residual distortion after LE will limit the achievable information rates and degrade the bit error rate performance. FIG. 2 shows an exemplar signal diagram to denoise with a sparse deep neural network according to some embodiments. The figure shows the distorted DP-64QAM constellation after least-squares LE 19 for 1-/10-/20-span transmissions. Here, we compare uniform QAM and shaped QAM, which uses the distribution matcher (DM) 11 for probabilistic amplitude shaping (PAS) following a Maxwell-Boltzmann distribution, Pr(x_i) ∝ exp(−λ|x_i|^2) with λ=2. We can observe that the shaped constellation is more distinguishable as the Euclidean distance is increased because of a reduced entropy (11.51 b/s/4D symbol).
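As an illustrative, non-limiting sketch, the Maxwell-Boltzmann prior above can be evaluated over the per-rail amplitude levels of 64QAM as follows; the unit-power normalization of the levels is an assumption made here for illustration, and only λ = 2 is taken from the description.

import numpy as np

# Maxwell-Boltzmann prior Pr(x) ∝ exp(-lambda*|x|^2) over the 8 PAM levels
# that form one I/Q rail of 64QAM.
levels = np.arange(-7, 8, 2, dtype=float)        # ±1, ±3, ±5, ±7
levels /= np.sqrt(np.mean(levels ** 2))          # assumed unit-power normalization
lam = 2.0
p = np.exp(-lam * levels ** 2)
p /= p.sum()
entropy_per_rail = -np.sum(p * np.log2(p))       # bits per amplitude rail
print(np.round(p, 4), entropy_per_rail)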



FIG. 3A and FIG. 3B show an exemplar deep neural network based on a forward multi-layer perceptron architecture using different loss functions according to some embodiments. FIG. 3A uses a single-label nonbinary cross-entropy loss 310, and FIG. 3B uses a multi-label binary cross-entropy loss 360. To mitigate nonlinear distortion, some embodiments use a DNN architecture which is configured with multiple layers of artificial neurons. Specifically, input layers 302, 352, first hidden layers 303, 353, second hidden layers 304, 354, and output layers 305, 355 may be used to configure the DNN. The input layers 302, 352 may take a few adjacent symbols using a finite-length window of received sequences through a delay line 301, 351. Depending on the loss function, a post-processing of the output values may be required. For example, the DNN 300 using nonbinary cross entropy requires a symbol-to-bit LLR convertor 320, while the DNN 350 using binary cross entropy can use the output layer as a direct LLR value 370 without using a convertor.
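A minimal, non-limiting sketch of such a windowed multi-layer perceptron equalizer (in the style of FIG. 3B, with direct bit-wise LLR outputs) is given below in PyTorch; the window length, hidden width, and bit count are illustrative assumptions rather than the disclosed configuration.

import torch
import torch.nn as nn

class WindowedMLPEqualizer(nn.Module):
    """Sketch of a delay-line MLP equalizer with direct LLR outputs (FIG. 3B style)."""
    def __init__(self, half_window=5, hidden=100, bits_per_symbol=8):
        super().__init__()
        in_dim = 2 * (2 * half_window + 1)          # I/Q per tap of the delay line
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),   # first hidden layer
            nn.Linear(hidden, hidden), nn.ReLU(),   # second hidden layer
            nn.Linear(hidden, bits_per_symbol),     # direct LLR outputs
        )

    def forward(self, x):
        return self.net(x)

eq = WindowedMLPEqualizer()
llrs = eq(torch.randn(32, 22))                      # a batch of 32 symbol windows
print(llrs.shape)                                   # torch.Size([32, 8])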


To train the DNN, various loss functions can be used, including but not limited to softmax cross entropy, binary cross entropy, distance, mean-square error, mean absolute error, connectionist temporal classification loss, negative log-likelihood, Kullback-Leibler divergence, margin loss, ranking loss, embedding loss, hinge loss, Huber loss, and so on. Specifically, the trainable parameters of the DNN architecture are trained by the following steps: feeding a set of training data into the input nodes of the DNN; propagating the training data across the layers of the DNN; pruning trainable parameters according to pruning masks; generating output values from the output nodes of the DNN; calculating loss values for the training data; updating the trainable parameters based on the loss values through a backward message passing; and repeating those steps for a specified number of iteration times.
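As an illustrative, non-limiting sketch of these training steps with a multi-label binary cross-entropy loss, consider the following PyTorch routine; the convention of re-applying the pruning masks after every optimizer step, the stand-in linear model, and the helper names are assumptions made for illustration.

import torch
import torch.nn as nn

def train_epochs(model, masks, data, labels, epochs=10, lr=1e-3):
    """Feed, propagate, compute loss, backpropagate, update, and re-mask."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()                # multi-label binary cross entropy
    for _ in range(epochs):
        opt.zero_grad()
        logits = model(data)                        # forward propagation
        loss = loss_fn(logits, labels)              # loss values for the training data
        loss.backward()                             # backward message passing
        opt.step()                                  # update trainable parameters
        with torch.no_grad():
            for name, p in model.named_parameters():
                if name in masks:
                    p.mul_(masks[name])             # keep pruned weights at zero
    return loss.item()

model = nn.Linear(22, 8)                            # stand-in for the DNN equalizer
masks = {"weight": torch.ones(8, 22)}
x, y = torch.randn(64, 22), torch.randint(0, 2, (64, 8)).float()
print(train_epochs(model, masks, x, y, epochs=3))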



FIG. 4 shows an exemplar performance with deep neural network using different loss functions according to some embodiments. For post nonlinearity compensation, binary cross entropy loss can outperform nonbinary cross entropy loss to train the DNN, especially for high-order modulation cases.


To compensate for the residual nonlinear distortion, we use DNN-based equalizers, which directly generate bit-wise soft-decision log-likelihood ratios (LLRs) for the decoder. FIG. 5 shows an exemplar performance with deep neural networks using different architectures according to some embodiments. The figure shows the achievable rate of DP-256QAM across SSMF spans for various DNN equalizers: a residual multi-layer perceptron (6-layer 100-node ResMLP), a residual convolutional neural network (4-layer kernel-3 ResCNN), and a bidirectional long short-term memory (2-layer 100-memory BiLSTM). The binary cross-entropy loss is minimized via adaptive momentum (Adam) with a learning rate of 0.001 for 2,000 epochs over 2^16 training symbols to evaluate 2^14 distinct testing symbols. For some embodiments, besides Adam, the optimization algorithms are based on stochastic gradient descent, Nesterov gradient, resilient backpropagation, root-mean-square propagation, the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, adaptive momentum optimization, adaptive delta, and so on. The optimizers have hyperparameters including the learning rate and its decay, which are controlled by a parameter scheduling method such as exponential decay or step functions. For ResMLP, it is seen that the constellation shaping can achieve a reach extension of 29% for a target rate of 10 b/s. We found that the use of more hidden layers for CNN and LSTM architectures will further improve the training performance, while degrading the testing performance due to over-fitting. This suggests that an even larger training dataset is required to offer better performance for deeper models.
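By way of a non-limiting example, the learning-rate scheduling mentioned above can be expressed with standard PyTorch schedulers as sketched below; the decay factor, the toy model, and the tiny loop are illustrative assumptions.

import torch
import torch.nn as nn

model = nn.Linear(22, 8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Exponential decay of the learning rate: lr <- lr * gamma after every epoch.
# (torch.optim.lr_scheduler.StepLR would give the step-function alternative.)
scheduler = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.999)

for epoch in range(3):
    opt.step()                           # training iterations would go here
    scheduler.step()                     # advance the decay schedule once per epoch
    print(epoch, scheduler.get_last_lr())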


Zero-Multiplier DNN with Additive Powers-of-Two (APoT) Quantization


In order to reduce the computational complexity of the DNN for real-time processing, the present invention provides a way to integrate APoT quantization into a DeepShift framework. In the original DeepShift, DNN weights are quantized into a signed PoT as w = ±2^u, where u is an integer to train. Note that the PoT weights can fully eliminate multiplier operations from DNN equalizers as they can be realized with bit shifting for fixed-point precision or an addition operation for floating-point (FP) precision.
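As an illustrative, non-limiting sketch, a fixed-point multiplication by a PoT weight reduces to a bit shift and a sign flip, as follows; the truncating behaviour of the right shift is a simplification made for illustration.

def pot_multiply_fixed_point(x_int, sign, u):
    """Compute x_int * (sign * 2**u) with shifts only (no multiplier)."""
    shifted = x_int << u if u >= 0 else x_int >> -u   # left or arithmetic right shift
    return -shifted if sign < 0 else shifted

# 13 * (+2^-2) -> 3 (truncated), 13 * (-2^3) -> -104
print(pot_multiply_fixed_point(13, +1, -2), pot_multiply_fixed_point(13, -1, 3))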



FIG. 6 shows a multiplier-less affine transform with additive powers-of-two (APoT) weights for a quantized deep neural network according to some embodiments. Most DNNs, including linear layers and convolutional layers, require affine transforms 610 to propagate values across layers. Typically, the affine transform requires a matrix-vector multiplication, which may be implemented with multiply-accumulate operations. When the affine weight matrix is represented with APoT 615, the matrix-vector multiplication for the affine transform 620 can be realized by a balanced ternary DNN 630, which does not require multiplications but uses bit-plane expansion 650 with bit-shifting and accumulation operations. More importantly, the balanced ternary DNN 630 has a potential to reduce the number of addition operations 640 because of the increased degrees of freedom.



FIG. 7A, FIG. 7B and FIG. 7C show exemplar operations for quantized affine transforms according to some embodiments. The typical affine transform requires multiply-and-accumulate operations as shown in FIG. 7A, while the PoT used in the DeepShift framework can eliminate the multiplication operations by shift-and-accumulate operations 720. We further improve the DeepShift by using APoT weights, i.e., w = ±(2^u ± 2^v), where we use another trainable integer v < u. This requires an additional summation as shown in FIG. 7C, but no multiplication, as with PoT. Using APoT weights, we can significantly decrease the residual quantization error of the conventional PoT, as depicted in FIG. 7C compared to FIG. 7B. Note that the original APoT uses a deterministic non-trainable look-up table, whereas our invention extends it as an improved DeepShift with trainable APoT weights through the use of QAT. For some embodiments, the weight quantization is based on power-of-two, additive powers-of-two, power-of-three, additive powers-of-three, table look-up, vector quantization, and so on. The multiplication-less operations can realize a high-speed, low-power, real-time DNN operation. The order of additive powers-of-two, i.e., how many shift operations per weight value, can be adapted according to the accuracy requirement for some embodiments.
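A minimal, non-limiting sketch of such a shift-and-accumulate affine transform with second-order APoT weights is shown below; storing each weight as two signs and two non-positive exponents is an assumed layout for illustration, and the remaining sign factors amount to sign flips rather than general multiplications.

import numpy as np

def apot_matvec(x_int, signs1, exps1, signs2, exps2):
    """y = W @ x for APoT weights w = s1*2^u + s2*2^v using shifts and adds only.

    x_int is an integer (fixed-point) vector; the four weight arrays share the
    shape (out_dim, in_dim) and the exponents are assumed non-positive, so each
    2^u term becomes an arithmetic right shift of the input sample.
    """
    out_dim, in_dim = exps1.shape
    y = np.zeros(out_dim, dtype=np.int64)
    for i in range(out_dim):
        acc = np.int64(0)
        for j in range(in_dim):
            acc += signs1[i, j] * (x_int[j] >> -exps1[i, j])   # first PoT term
            acc += signs2[i, j] * (x_int[j] >> -exps2[i, j])   # second PoT term
        y[i] = acc
    return y

x = np.array([100, -50, 25], dtype=np.int64)
s1 = np.array([[1, -1, 1]]); e1 = np.array([[-1, -2, 0]])
s2 = np.array([[1,  1, -1]]); e2 = np.array([[-3, -4, -2]])
print(apot_matvec(x, s1, e1, s2, e2))                          # -> [90]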



FIG. 8 shows an exemplar schematic of quantization-aware optimization for a multiplier-less affine transform with additive powers-of-two weights for a quantized deep neural network according to some embodiments. FIG. 8 includes (a) the multiplier-less DNN 800 and (b) the quantization-aware training (QAT) process. For the DNN 800, the signal is propagated forward to calculate the loss and backward to update the weights with the loss gradient. In the QAT updating, we use a straight-through rounding to find the dual bit-shift integers u and v after each epoch iteration for some embodiments. Specifically, the DNN output is calculated through a forward propagation 820, and the loss gradient is backpropagated 830. After updating the trainable parameters such as weights and biases through an optimization step like Adam 840, the updated parameters are quantized with the APoT operation 810. For some embodiments, we may use a pre-training phase before the QAT fine-tuning in order to stabilize the DNN learning. For yet another embodiment, the DNN uses a hypernetwork and feature-wise linear modulation (FiLM) to control the affine transform for online adaptation, and those hypernetworks are also quantized in the same QAT manner. In some embodiments, the gradient values in backward propagation are further quantized with APoT to reduce the computational complexity of QAT.
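The following PyTorch sketch illustrates, in a non-limiting way, one such QAT epoch: full-precision forward/backward passes and Adam updates, followed by re-quantization of the updated weight tensors to APoT. The greedy element-wise rounding, the exponent range, and the choice to leave biases in full precision are assumptions made for illustration, not the disclosed straight-through procedure itself.

import torch
import torch.nn as nn

def apot_round_(t, min_exp=-8):
    """In-place greedy rounding of a tensor to +/-(2^u +/- 2^v) (illustrative)."""
    sign, mag = t.sign(), t.abs().clamp_min(2.0 ** min_exp)
    u = mag.log2().round().clamp(min_exp, 0)                    # dominant exponent
    resid = mag - 2.0 ** u
    v = torch.minimum(resid.abs().clamp_min(2.0 ** min_exp).log2().round(), u - 1)
    second = torch.where(resid.abs() > 2.0 ** min_exp,
                         resid.sign() * 2.0 ** v, torch.zeros_like(resid))
    t.copy_(sign * (2.0 ** u + second))

def qat_epoch(model, batches, opt, loss_fn):
    """One FIG. 8 style epoch: FP updates, then APoT re-quantization of weights."""
    for x, y in batches:
        opt.zero_grad()
        loss_fn(model(x), y).backward()           # forward 820 and backward 830
        opt.step()                                # optimization step 840 (e.g., Adam)
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() > 1:                       # quantize weight matrices only
                apot_round_(p)                    # APoT operation 810

model = nn.Linear(22, 8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
batches = [(torch.randn(16, 22), torch.randn(16, 8))]
qat_epoch(model, batches, opt, nn.MSELoss())
print(model.weight.abs().unique()[:5])            # magnitudes are now (sums of) powers of two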



FIG. 9 shows an exemplar performance of a quantized deep neural network with different orders of additive powers-of-two according to some embodiments. The figure illustrates the achievable rate across launch power at the 22nd span for the 6-layer 100-node ResMLP. PoT quantization has a small degradation of 0.11 b/s compared to the FP-precision DNN equalizer for shaped DP-64QAM, whereas a considerable loss of 0.33 b/s is seen for shaped DP-256QAM. Notably, our multiplier-less DNN with APoT quantization has no degradation (but a slight improvement) from FP precision. This is a great advantage in practice for real-time fiber nonlinearity compensation as there is no performance loss, yet no multipliers are required.


Lottery Ticket Hypothesis (LTH) Pruning for Sparse DNN

Even though our DNN equalizer does not require any multipliers, it still needs a relatively large number of addition operations due to the over-parameterized DNN architecture having a huge number of trainable weights. We introduce a progressive version of the LTH pruning method to realize a low-power sparse DNN implementation. It is known that an over-parameterized DNN can be significantly sparsified without losing performance and that the sparsified DNN can often outperform the dense DNN in LTH. FIG. 10 shows an exemplar schematic of progressive pruning 1000 for incremental sparsification according to some embodiments. The figure illustrates the progressive LTH pruning. We first train the dense DNN via QAT for APoT quantization, starting from random initial weights 1010 over a finite number of epochs 1020 towards the last update 1030. We then prune a small percentage of the edges 1040 based on the trained weights to generate a pruning mask. We re-train the pruned DNN after rewinding the weights to the early-epoch weights for non-pruned edges according to the pruning mask 1050. Rewinding 1060, QAT updating 1070 over multiple epochs 1080, and pruning 1090 are repeated with a progressive increase of the pruning percentages. For some embodiments, we use late rewinding of the later-epoch weights. For some embodiments, the control of the incremental sparsification is adjusted by another auxiliary neural network trained by multiple training episodes through a deep reinforcement learning framework. The process is formulated as the steps of: initializing trainable weights and biases of the DNN; training those trainable parameters using training data; generating a pruning mask based on the trained parameters; rewinding the trainable parameters; pruning selected parameters based on the pruning mask; and repeating those steps for a specified number of times to generate a set of sparse DNNs having an incremental sparsity.
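As an illustrative, non-limiting sketch of this loop, the routine below alternates QAT training, magnitude-based mask generation, and rewinding of the unpruned weights to an early-epoch snapshot. The train_one_round callable, the cumulative pruning-fraction schedule, and the per-tensor thresholding are assumptions made for illustration; train_one_round is assumed to run the QAT epochs and return the early-epoch (rewind) parameter snapshot as a name-to-tensor dictionary.

import torch

def progressive_lth(model, train_one_round, prune_fractions=(0.2, 0.36, 0.49, 0.59)):
    """Progressive LTH pruning sketch: train -> mask -> rewind -> re-train."""
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    for frac in prune_fractions:                               # growing sparsity
        rewind_state = train_one_round(model, masks)           # QAT updating 1070
        for n, p in model.named_parameters():                  # pruning 1090
            scores = (p * masks[n]).abs().flatten()
            k = max(int(frac * scores.numel()), 1)
            thr = scores.kthvalue(k).values
            masks[n] = (p.abs() > thr).float() * masks[n]
        with torch.no_grad():                                  # rewinding 1060
            for n, p in model.named_parameters():
                p.copy_(rewind_state[n] * masks[n])
    return model, masks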



FIG. 11 is an exemplar performance of a sparse deep neural network according to some embodiments. The figure shows the trade-off between the achievable rate and the number of non-zero weights. For dense DNNs, more hidden nodes and more hidden layers can improve the performance in general, at the cost of computational complexity. In consequence, a moderate depth such as a 4-layer DNN can be best in the Pareto sense of the performance-complexity trade-off in low-complexity regimes. The LTH pruning can significantly improve the trade-off, i.e., the sparse DNNs can achieve more than 50% complexity reduction over the dense DNNs to achieve a target rate of 10 b/s. Using the progressive pruning, we can prune more than 99% of the weights of the 6-layer 100-node ResMLP while maintaining 10 b/s. Consequently, the sparse DNNs can be 73% and 87% less complex than the best dense DNNs for shaped 64QAM and 256QAM, respectively.


As discussed above, various DNN equalizers are provided for nonlinear compensation in optical fiber communications employing probabilistic amplitude shaping. We then proposed a zero-multiplier sparse DNN equalizer based on a trainable version of APoT quantization and LTH pruning techniques. We showed that APoT quantization can achieve floating-point arithmetic performance without using any multipliers, whereas the conventional PoT quantization suffers from a severe penalty. We also demonstrated that the progressive LTH pruning can eliminate 99% of the weights, enabling a highly power-efficient implementation of DNN equalization for real-time fiber-optic systems.



FIG. 12 shows an exemplar system 500 employing a sparse quantized neural network according to some embodiments. The system 500 may be referred to as a sparse quantized neural network system. The system 500 may include at least one computing processor 120, at least one memory bank 130, and at least one interface link 105 that may include an HMI 110, an NIC 150, or the combination of the HMI 110 and the NIC 150. Further, the system 500 may include a storage 140 that stores a computer-executable program and at least one set of trainable parameters of the artificial neural network, which includes reconfigurable DNNs 141, hyperparameters 142, scheduling criteria 143, forward/backward data 144, a temporary cache 145, a pruning algorithm 146, and a quantization algorithm 147.


When the computer-executable program is performed by the at least one computing processor 120, the program causes the at least one processor to execute a training method and a testing method based on the at least one trained set of trainable parameters, comprising: (a) initializing a set of trainable parameters of an artificial neural network, wherein the set of trainable parameters comprise a set of trainable weights and a set of trainable biases; (b) training the set of trainable parameters using a set of training data; (c) generating a pruning mask based on the trained set of trainable parameters; (d) rewinding the set of trainable parameters; (e) pruning a selected set of trainable parameters based on the pruning mask; and (f) repeating the above steps from (b) to (e) for a specified number of times to generate a set of sparse neural networks having an incremental sparsity.


The above-described embodiments of the present invention can be implemented in any of numerous ways. For example, the embodiments may be implemented using hardware, software or a combination thereof. When implemented in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers. Such processors may be implemented as integrated circuits, with one or more processors in an integrated circuit component. Though, a processor may be implemented using circuitry in any suitable format.


Also, the embodiments of the invention may be embodied as a method, of which an example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.


Use of ordinal terms such as “first” and “second” in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed; such terms are used merely as labels to distinguish one claim element having a certain name from another element having the same name (but for use of the ordinal term).


Although the invention has been described by way of examples of preferred embodiments, it is to be understood that various other adaptations and modifications can be made within the spirit and scope of the invention.


Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.

Claims
  • 1. A computer-implemented method for training a set of artificial neural networks, performed by at least one computing processor, wherein the method uses the at least one processor coupled with a memory storing instructions implementing the method, wherein the instructions, when executed by the at least one processor, carry out steps of the method, comprising: (a) initializing a set of trainable parameters of an artificial neural network, wherein the set of trainable parameters comprise a set of trainable weights and a set of trainable biases;(b) training the set of trainable parameters using a set of training data;(c) generating a pruning mask based on the trained set of trainable parameters;(d) rewinding the set of trainable parameters;(e) pruning a selected set of trainable parameters based on the pruning mask; and(f) repeating the above steps from (b) to (e) for a specified number of times to generate a set of sparse neural networks having an incremental sparsity.
  • 2. The method of claim 1, wherein the training further comprises the steps of: feeding the set of training data into a plurality of an input node of the artificial neural network;propagating the set of training data across the artificial neural network according to the set of pruning masks and the trainable parameters; andgenerating a set of output values from a plurality of an output node of the artificial network;calculating a set of loss values for the set of training data based on the set of output values;updating the set of trainable parameters based on the set of loss values through a backward message passing; andrepeating the above steps for a specified number of iteration times.
  • 3. The method of claim 2, wherein the updating the set of trainable parameters is based on stochastic gradient descent, resilient backpropagation, root-mean-square propagation, Broyden-Fletcher-Goldfarb-Shanno algorithm, adaptive momentum optimization, adaptive subgradient, adaptive delta, or a variant thereof.
  • 4. The method of claim 1, wherein the set of loss values is based on mean-square error, mean absolute error, cross entropy, connectionist temporal classification loss, negative log-likelihood, Kullback-Leibler divergence, margin loss, ranking loss, embedding loss, hinge loss, Huber loss, or a variant thereof.
  • 5. The method of claim 2, wherein the updating the set of trainable parameters further comprises the step of rounding the trainable weights to quantize values based on power-of-two, additive powers-of-two, power-of-three, additive powers-of-three, or a variant thereof.
  • 6. The method of claim 1, wherein the incremental sparsity is controlled by an auxiliary neural network trained with a reinforcement learning.
  • 7. The method of claim 5, wherein the rounding the trainable weights is controlled by a rounding neural network trained with a reinforcement learning.
  • 8. A computer-implemented method for testing an artificial neural network, performed by at least one computing processor, wherein the method uses the at least one processor coupled with a memory storing instructions implementing the method, wherein the instructions, when executed by the at least one processor, carry out steps of the method, comprising: feeding a set of testing data into a plurality of an input node of the artificial neural network;propagating the set of testing data across the artificial neural network according to a set of pruning masks; andgenerating a set of output values from a plurality of an output node of the artificial network.
  • 9. The method of claim 8, wherein the propagating further comprises the steps of: transforming values of neuron nodes according to a trained set of trainable parameters of the artificial neural network;modifying the transformed values of neuron nodes according to a set of activation functions; andrepeating the above steps across a plural of a neuron layer of the artificial neural network.
  • 10. The method of claim 9, wherein the transforming values of neuron nodes is based on sign flipping, bit shifting, accumulation, biasing, or a variant thereof, according to a quantized set of trainable parameters with power-of-two, additive powers-of-two, power-of-three, additive powers-of-three, or a variant thereof.
  • 11. A system deployed for an artificial neural network comprises: at least one interface link;at least one computing processor;at least one memory bank configured to store at least one trained set of trainable parameters of the artificial neural network and a training method and a testing method, wherein the training method and the testing method use the at least one processor coupled with the at least one memory storing instructions implementing the training and testing methods, wherein the instructions, when executed by the at least one processor, carry out steps of the method, comprising;causing the at least one processor to execute the training method and the testing method based on the at least one trained set of trainable parameters;(a) initializing a set of trainable parameters of an artificial neural network, wherein the set of trainable parameters comprise a set of trainable weights and a set of trainable biases;(b) training the set of trainable parameters using a set of training data;(c) generating a pruning mask based on the trained set of trainable parameters;(d) rewinding the set of trainable parameters;(e) pruning a selected set of trainable parameters based on the pruning mask; and(f) repeating the above steps from (b) to (e) for a specified number of times to generate a set of sparse neural networks having an incremental sparsity.
  • 12. The system of claim 11, wherein the artificial neural network is parallelized and serialized.
  • 13. The system of claim 11, wherein the artificial neural network uses a set of pruning masks to reduce arithmetic operations.
  • 14. The system of claim 11, wherein the artificial neural network uses a set of quantized parameters to reduce arithmetic multiplication operations using power-of-two, additive powers-of-two, power-of-three, additive powers-of-three, or a variant thereof.
Provisional Applications (1)
Number Date Country
63242636 Sep 2021 US