The present invention relates to a Deep Neural Network (DNN), and more particularly, to a method for reconfiguring a DNN model and an associated electronic device.
Large-scale Deep Neural Networks have achieved remarkable results on cutting-edge tasks in the fields of computer vision, image recognition, and speech recognition. Thanks to intensive computational power and large amounts of data and memory storage, deep learning models have become bigger and deeper, enabling them to better learn from scratch. However, these highly computation-intensive models cannot be deployed on resource-limited end-user devices with low memory storage and computing capabilities, such as mobile phones and embedded devices. Moreover, learning from scratch is not feasible for end-users because of their limited datasets; that is, end-users cannot develop customized deep learning models based on a very limited dataset.
One of the objectives of the present invention is therefore to provide a self-tuning model compression methodology for reconfiguring a deep neural network, and an associated electronic device.
According to an embodiment of the present invention, a methodology for reconfiguring a Deep Neural Network is disclosed, including two components: (1) a pre-trained DNN model and a dataset, wherein the pre-trained DNN model consists of a number of stacked layers, each including a plurality of neurons. These stacked layers extract low-, middle-, and high-level feature maps and produce the results on the dataset. (2) A self-tuning model compression framework that compresses the pre-trained DNN model into a smaller DNN model with acceptable computational complexity and accuracy loss, using a limited dataset. The compressed smaller DNN model can be applied in an end-user application.
According to an embodiment of the present invention, an electronic device is disclosed, comprising: a storage device arranged to store a program code, and a processor arranged to execute the program code. When the processor loads and executes the program code, the program code instructs the processor to execute the following steps: (1) receive a pre-trained DNN model and a dataset; and (2) compress the pre-trained DNN model into a smaller DNN model according to the dataset, with acceptable computational complexity and accuracy loss.
These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.
Certain terms are used throughout the description and following claims to refer to particular components. As one skilled in the art will appreciate, manufacturers may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function. In the following description and in the claims, the terms “include” and “comprise” are used in an open-ended fashion, and thus should not be interpreted as a close-ended term such as “consist of”. Also, the term “couple” is intended to mean either an indirect or direct electrical connection. Accordingly, if one device is coupled to another device, that connection may be through a direct electrical connection, or through an indirect electrical connection via other devices and connections.
The idea of artificial neural networks has existed for a long time; nevertheless, limited computational ability of hardware has been an obstacle to related research. Over the last decade, there has been significant progress in computational capabilities of processors and algorithms of machine learning. Only recently has an artificial neural network that can generate reliable judgments become possible. Gradually, artificial neural networks are being experimented with in many fields such as autonomous vehicles, image recognition, natural language understanding, and data mining.
Neurons are the basic computation units in a brain. Each neuron receives input signals from its dendrites and produces output signals along its single axon (usually provided to other neurons as input signals). The typical operation of an artificial neuron can be modeled as:

y = f(Σ(w · x) + b)
wherein x represents the input signal and y represents the output signal. Each dendrite multiplies its input signal x by a weight w; this parameter is used to simulate the strength of influence of one neuron on another. The symbol b represents a bias contributed by the artificial neuron itself. The symbol f represents a specific nonlinear function and is generally implemented as a sigmoid function, hyperbolic tangent function, or rectified linear function in practical computation.
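As an illustration (not part of the claimed subject matter), the neuron operation described above can be sketched in a few lines, here assuming a sigmoid as the nonlinear function f:

```python
import math

def neuron(x, w, b):
    """One artificial neuron: weighted sum of inputs plus bias,
    passed through a nonlinear activation (sigmoid here)."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation f

# A neuron with two dendrites: inputs scaled by weights w, shifted by bias b.
y = neuron(x=[1.0, 2.0], w=[0.5, -0.25], b=0.1)
```

In practice the hyperbolic tangent or rectified linear function could be substituted for the sigmoid without changing the structure of the computation.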
For an artificial neural network, the relationship between its input data and final judgment is in effect defined by the weights and biases of all the artificial neurons in the network. In an artificial neural network adopting supervised learning, training samples are fed to the network. Then, the weights and biases of artificial neurons are adjusted with the goal of finding out a judgment policy where the judgments can match the training samples. In an artificial neural network adopting unsupervised learning, whether a judgment matches the training sample is unknown. The network adjusts the weights and biases of artificial neurons and tries to find out an underlying rule. No matter which kind of learning is adopted, the goals are the same—finding out suitable parameters (i.e. weights and biases) for each neuron in the network. The determined parameters will be utilized in future computation.
Currently, most artificial neural networks are designed with a multi-layer structure. Layers serially connected between the input layer and the output layer are called hidden layers. The input layer receives external data and does not perform computation. In a hidden layer or the output layer, input signals are the output signals generated by its previous layer, and each artificial neuron included therein respectively performs computation according to the aforementioned equation. Each hidden layer and output layer can respectively be a convolutional layer or a fully-connected layer. The main difference between a convolutional layer and a fully-connected layer is that neurons in a fully connected layer have full connections to all neurons in its previous layer, whereas neurons in a convolutional layer are only connected to a local region of its previous layer. Many artificial neurons in a convolutional layer share parameters.
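The effect of parameter sharing in convolutional layers versus full connectivity can be illustrated by counting parameters; the layer sizes below are hypothetical examples chosen only for illustration:

```python
def fc_param_count(n_in, n_out):
    # Fully-connected: every neuron connects to every input, plus one bias each.
    return n_in * n_out + n_out

def conv_param_count(kernel_h, kernel_w, c_in, c_out):
    # Convolutional: neurons share one small kernel per output channel,
    # so the count is independent of the spatial input size.
    return kernel_h * kernel_w * c_in * c_out + c_out

# A 32x32x3 input mapped to 64 outputs: fully-connected vs. a 3x3 convolution.
fc = fc_param_count(32 * 32 * 3, 64)   # 196,672 parameters
conv = conv_param_count(3, 3, 3, 64)   # 1,792 parameters
```

The shared-kernel connectivity is what keeps convolutional layers far smaller than fully-connected layers over the same input.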
Currently, neural networks can have a variety of network structures. Each structure has its unique combination of convolutional layers and fully-connected layers. Taking the AlexNet structure proposed by Alex Krizhevsky et al. in 2012 as an example, the network includes 650,000 artificial neurons that form five convolutional layers and three fully-connected layers connected in series.
As the number of layers increases, an artificial neural network can simulate a more complicated function (i.e. a more complicated judgment policy). The number of artificial neurons required in the network will swell significantly, however, introducing a huge burden in hardware cost. These highly computation-intensive models therefore cannot be deployed on resource-limited end-user devices with low memory storage and computing capabilities, such as mobile phones and embedded devices. Besides, a network of this large scale is generally not an optimal solution for an end-user application. For example, the aforementioned AlexNet structure might be used for the recognition of hundreds of objects, but the end-user application might only require a network for the recognition of two objects. A large-scale pre-trained model will not be the optimal solution for the end-user. The present invention provides a method for reconfiguring the DNN and an associated electronic device to solve the aforementioned problem.
Step 202: receive a DNN model and a dataset.
As mentioned above, a large-scale pre-trained model (for example, the AlexNet, VGG16, ResNet, or MobileNet structure) is not applicable to the end-user terminal. In order to satisfy the end-user's requirements, inspired by the transfer-learning technique, we apply the pre-trained model to the end-user terminal for an end-user application via the proposed self-tuning model compression technology. In this way, the pre-trained DNN model can learn customized features from the limited measurement dataset.
Step 204: compress the DNN model into a reconfigured model according to the dataset.
In this step, the DNN model is compressed into the reconfigured model which is applicable for the end-user terminal according to the provided dataset. As mentioned above, the DNN model comprises an input layer, at least one hidden layer and an output layer, wherein a neuron is the basic computation unit in each layer. In one embodiment, the compression operation removes a plurality of neurons from the DNN model to form the reconfigured model, so that the number of neurons comprised in the reconfigured model is less than the number of neurons comprised in the pre-trained DNN model. This is not a limitation of the present invention, however. As mentioned above, the typical operation of an artificial neuron can be modeled as:

y = f(Σ(w · x) + b)
To implement the above model, each neuron may be implemented by a logic circuit which comprises at least one multiplier and/or at least one adder. The compression operation is dedicated to simplifying the models of the neurons comprised in the pre-trained model. For example, the compression operation may remove at least one logic circuit from the pre-trained model to reduce the hardware complexity and form the reconfigured model. In other words, the total number of logic circuits in the reconfigured model is less than that in the pre-trained DNN model.
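One simple way to realize this hardware simplification, sketched here purely as an illustration (the threshold value is a hypothetical choice), is to zero out small-magnitude weights: every zeroed weight corresponds to a multiply-accumulate circuit that no longer needs to exist in the reconfigured model.

```python
def prune_weights(weights, threshold):
    """Zero out small-magnitude weights; in hardware, a zero weight means
    the corresponding multiplier/adder can be removed entirely."""
    return [w if abs(w) >= threshold else 0.0 for w in weights]

w = [0.8, -0.02, 0.003, -0.6, 0.01]
pruned = prune_weights(w, threshold=0.05)     # [0.8, 0.0, 0.0, -0.6, 0.0]
removed = sum(1 for v in pruned if v == 0.0)  # 3 multiply-accumulate units saved
```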
Step 206: execute the self-tuning compression methodology on a user terminal for an end-user application.
After the pre-trained DNN model is compressed by the proposed methodology, the reconfigured model is applicable for the end-user application and executed on the end-user terminal. The end-user application, in this embodiment, can be used for image recognition or speech recognition, although this is not a limitation of the present invention. Through the compression operation, the large-scale pre-trained model is compressed into the reconfigured model which is applicable for the end-user application.
Step 302: analyze a sparsity of the DNN model to generate an analysis result.
To exploit redundancies within parameters and feature maps for the pre-trained DNN model, the sparsity of the pre-trained DNN model is analyzed, and the analysis result is accordingly generated.
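A minimal sketch of such a sparsity analysis, assuming the analysis result is simply the fraction of near-zero parameters per layer (the tolerance value is a hypothetical choice):

```python
def sparsity(layer_weights, tol=1e-6):
    """Fraction of near-zero parameters in a layer: a rough measure of
    how much redundancy the compression step can exploit."""
    flat = [w for row in layer_weights for w in row]
    zeros = sum(1 for w in flat if abs(w) < tol)
    return zeros / len(flat)

# A toy 2x4 weight matrix: half of its entries are (near) zero.
layer = [[0.9, 0.0, -0.4, 0.0],
         [0.0, 0.7, 0.0, 0.2]]
s = sparsity(layer)  # 0.5
```

A layer with high sparsity is a natural candidate for aggressive pruning in the following step.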
Step 304: prune and quantize the DNN model to remove network redundancy.
In this step, in order to find the best rank of the filters, we first exploit the redundancies of the neural network by applying pruning and quantization techniques to compress the network. After that, we apply a low-rank approximation method to the hidden layers and the output layer to reduce redundancy in the pre-trained DNN model according to the analysis result. As mentioned above, the pre-trained DNN model comprises a plurality of neurons, each neuron corresponding to multiple parameters, e.g. the weight w and the bias b. Among these parameters, some are redundant and do not contribute much to the output. If the neurons in the network could be ranked according to their contribution, the low-ranking neurons could be removed from the network to generate a smaller and faster network, i.e. the reconfigured model. For example, the ranking can be done according to the L1/L2 norm of the neuron weights, the mean activations, or the number of times a neuron is non-zero on some validation set, etc. It should be noted that the reconfigured model can still be fine-tuned (or retrained) based on the provided dataset in order to construct the base model describing the common features of the end-user application. As this is a well-known technique for those skilled in the art, the detailed description is omitted here for brevity.
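The contribution-based ranking described above can be sketched as follows, here using the L1 norm of each neuron's weights as the ranking criterion (one of the several criteria mentioned; the layer values are illustrative only):

```python
def rank_and_prune(neuron_weights, keep):
    """Rank neurons by the L1 norm of their weights (one proxy for
    contribution) and keep only the indices of the top-`keep` neurons."""
    norms = [(i, sum(abs(w) for w in ws)) for i, ws in enumerate(neuron_weights)]
    ranked = sorted(norms, key=lambda t: t[1], reverse=True)
    return sorted(i for i, _ in ranked[:keep])

layer = [[0.1, -0.1],   # L1 = 0.2  (low contribution)
         [0.9, 0.8],    # L1 = 1.7
         [0.0, 0.05],   # L1 = 0.05 (lowest contribution)
         [-0.6, 0.7]]   # L1 = 1.3
survivors = rank_and_prune(layer, keep=2)  # [1, 3]
```

The surviving neurons would then be fine-tuned on the provided dataset to recover any accuracy lost by the removal.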
Briefly summarized, by compressing the large-scale pre-trained DNN model to remove its redundancy, a reconfigured model with a customized model size and acceptable computational complexity is generated.
Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.