The success of Deep Neural Networks (DNNs) in a wide range of video applications, such as semantic classification, target detection/recognition, target tracking, and video quality enhancement, creates a need for compressing DNN models. Therefore, the Moving Picture Experts Group (MPEG) is actively working on the Coded Representation of Neural Networks (NNR) standard, which encodes DNN models to save both storage and computation.
According to embodiments, a method of neural network model compression is performed by at least one processor and includes receiving an input neural network and an input mask, and reducing parameters of the input neural network, using a deep neural network that is trained by selecting pruning micro-structure blocks to be pruned, from a plurality of blocks of input weights of the deep neural network that are masked by the input mask, pruning the input weights, based on the selected pruning micro-structure blocks, selecting unification micro-structure blocks to be unified, from the plurality of blocks of the input weights masked by the input mask, and unifying multiple weights in one or more of the plurality of blocks of the pruned input weights, based on the selected unification micro-structure blocks, to obtain pruned and unified input weights of the deep neural network. The method further includes obtaining an output neural network with the reduced parameters, based on the input neural network and the pruned and unified input weights of the deep neural network.
According to embodiments, an apparatus for neural network model compression includes at least one memory configured to store program code, and at least one processor configured to read the program code and operate as instructed by the program code. The program code includes receiving code configured to cause the at least one processor to receive an input neural network and an input mask, and reducing code configured to cause the at least one processor to reduce parameters of the input neural network, using a deep neural network that is trained by selecting pruning micro-structure blocks to be pruned, from a plurality of blocks of input weights of the deep neural network that are masked by the input mask, pruning the input weights, based on the selected pruning micro-structure blocks, selecting unification micro-structure blocks to be unified, from the plurality of blocks of the input weights masked by the input mask, and unifying multiple weights in one or more of the plurality of blocks of the pruned input weights, based on the selected unification micro-structure blocks, to obtain pruned and unified input weights of the deep neural network. The program code further includes obtaining code configured to cause the at least one processor to output an output neural network with the reduced parameters, based on the input neural network and the pruned and unified input weights of the deep neural network.
According to embodiments, a non-transitory computer-readable medium stores instructions that, when executed by at least one processor for neural network model compression, cause the at least one processor to receive an input neural network and an input mask, and reduce parameters of the input neural network, using a deep neural network that is trained by selecting pruning micro-structure blocks to be pruned, from a plurality of blocks of input weights of the deep neural network that are masked by the input mask, pruning the input weights, based on the selected pruning micro-structure blocks, selecting unification micro-structure blocks to be unified, from the plurality of blocks of the input weights masked by the input mask, and unifying multiple weights in one or more of the plurality of blocks of the pruned input weights, based on the selected unification micro-structure blocks, to obtain pruned and unified input weights of the deep neural network. The instructions, when executed by the at least one processor, further cause the at least one processor to obtain an output neural network with the reduced parameters, based on the input neural network and the pruned and unified input weights of the deep neural network.
This disclosure is related to neural network model compression. To be more specific, methods and apparatuses described herein are related to neural network model compression with micro-structured weight pruning and weight unification.
Embodiments described herein include a method and an apparatus for compressing a DNN model by using a micro-structured weight pruning regularization in an iterative network retraining/finetuning framework. A pruning loss is jointly optimized with the original network training target through the iterative retraining/finetuning process.
The embodiments described herein further include a method and an apparatus for compressing a DNN model by using a structured unification regularization in an iterative network retraining/finetuning framework. A weight unification loss includes a compression rate loss, a unification distortion loss, and a computation speed loss. The weight unification loss is jointly optimized with the original network training target through the iterative retraining/finetuning process.
The embodiments described herein further include a method and an apparatus for compressing a DNN model by using a micro-structured joint weight pruning and weight unification regularization in an iterative network retraining/finetuning framework. A pruning loss and a unification loss are jointly optimized with the original network training target through the iterative retraining/finetuning process.
There exist several approaches for learning a compact DNN model. The target is to remove unimportant weight coefficients, under the assumption that the smaller the weight coefficients are in value, the less important they are, and the less removing them impacts the prediction performance. Several network pruning methods have been proposed to pursue this goal. For example, unstructured weight pruning methods add sparsity-promoting regularization terms into the network training target and obtain unstructurally distributed zero-valued weights, which can reduce the model size but cannot reduce the inference time. Structured weight pruning methods deliberately enforce entire weight structures, such as rows or columns, to be pruned. The removed rows or columns do not participate in the inference computation, so both the model size and the inference time can be reduced. However, removing entire weight structures like rows and columns may cause a large performance drop of the original DNN model.
Several network pruning methods add sparsity-promoting regularization terms into the network training target. Unstructured weight pruning methods add such terms and obtain unstructurally distributed zero-valued weights, whereas structured weight pruning methods deliberately enforce selected weight structures, such as rows or columns, to be pruned. From the perspective of compressing DNN models, after learning a compact network model, the weight coefficients can be further compressed by quantization followed by entropy coding. Such further compression processes can significantly reduce the storage size of the DNN model, which is important for model deployment on mobile devices, chips, etc.
Embodiments described herein include a method and an apparatus for micro-structured weight pruning that aim at reducing the model size and accelerating inference computation, with little sacrifice of the prediction performance of the original DNN model. An iterative network retraining/refining framework is used to jointly optimize the original training target and the weight pruning loss. Weight coefficients are pruned according to small micro-structures that align with the underlying hardware design, so that the model size can be largely reduced, the original target prediction performance can be largely preserved, and the inference computation can be largely accelerated. The method and the apparatus can be applied to compress an original pretrained dense DNN model. They can also be used as an additional processing module to further compress a sparse DNN model pre-pruned by other unstructured or structured pruning approaches.
The embodiments described herein further include a method and an apparatus for a structured weight unification regularization that aims at improving the compression efficiency of a later compression process. An iterative network retraining/refining framework is used to jointly optimize the original training target and the weight unification loss, which includes the compression rate loss, the unification distortion loss, and the computation speed loss, so that the learned network weight coefficients preserve the original target performance, are suitable for further compression, and can speed up computation using the learned weight coefficients. The method and the apparatus can be applied to compress the original pretrained DNN model. They can also be used as an additional processing module to further compress any pruned DNN model.
The embodiments described herein include a method and an apparatus for joint micro-structured weight pruning and weight unification that aim at improving the compression efficiency of a later compression process as well as accelerating inference computation. An iterative network retraining/refining framework is used to jointly optimize the original training target, the weight pruning loss, and the weight unification loss. Weight coefficients are pruned or unified according to small micro-structures, and the learned weight coefficients preserve the original target performance, are suitable for further compression, and can speed up computation using the learned weight coefficients. The method and the apparatus can be applied to compress an original pretrained dense DNN model. They can also be used as an additional processing module to further compress a sparse DNN model pre-pruned by other unstructured or structured pruning approaches.
As shown in
The user device 110 includes one or more devices capable of receiving, generating, storing, processing, and/or providing information associated with platform 120. For example, the user device 110 may include a computing device (e.g., a desktop computer, a laptop computer, a tablet computer, a handheld computer, a smart speaker, a server, etc.), a mobile phone (e.g., a smart phone, a radiotelephone, etc.), a wearable device (e.g., a pair of smart glasses or a smart watch), or a similar device. In some implementations, the user device 110 may receive information from and/or transmit information to the platform 120.
The platform 120 includes one or more devices as described elsewhere herein. In some implementations, the platform 120 may include a cloud server or a group of cloud servers. In some implementations, the platform 120 may be designed to be modular such that software components may be swapped in or out. As such, the platform 120 may be easily and/or quickly reconfigured for different uses.
In some implementations, as shown, the platform 120 may be hosted in a cloud computing environment 122. Notably, while implementations described herein describe the platform 120 as being hosted in the cloud computing environment 122, in some implementations, the platform 120 may not be cloud-based (i.e., may be implemented outside of a cloud computing environment) or may be partially cloud-based.
The cloud computing environment 122 includes an environment that hosts the platform 120. The cloud computing environment 122 may provide computation, software, data access, storage, etc. services that do not require end-user (e.g., the user device 110) knowledge of a physical location and configuration of system(s) and/or device(s) that hosts the platform 120. As shown, the cloud computing environment 122 may include a group of computing resources 124 (referred to collectively as “computing resources 124” and individually as “computing resource 124”).
The computing resource 124 includes one or more personal computers, workstation computers, server devices, or other types of computation and/or communication devices. In some implementations, the computing resource 124 may host the platform 120. The cloud resources may include compute instances executing in the computing resource 124, storage devices provided in the computing resource 124, data transfer devices provided by the computing resource 124, etc. In some implementations, the computing resource 124 may communicate with other computing resources 124 via wired connections, wireless connections, or a combination of wired and wireless connections.
As further shown in
The application 124-1 includes one or more software applications that may be provided to or accessed by the user device 110 and/or the platform 120. The application 124-1 may eliminate a need to install and execute the software applications on the user device 110. For example, the application 124-1 may include software associated with the platform 120 and/or any other software capable of being provided via the cloud computing environment 122. In some implementations, one application 124-1 may send/receive information to/from one or more other applications 124-1, via the virtual machine 124-2.
The virtual machine 124-2 includes a software implementation of a machine (e.g., a computer) that executes programs like a physical machine. The virtual machine 124-2 may be either a system virtual machine or a process virtual machine, depending upon use and degree of correspondence to any real machine by the virtual machine 124-2. A system virtual machine may provide a complete system platform that supports execution of a complete operating system (“OS”). A process virtual machine may execute a single program, and may support a single process. In some implementations, the virtual machine 124-2 may execute on behalf of a user (e.g., the user device 110), and may manage infrastructure of the cloud computing environment 122, such as data management, synchronization, or long-duration data transfers.
The virtualized storage 124-3 includes one or more storage systems and/or one or more devices that use virtualization techniques within the storage systems or devices of the computing resource 124. In some implementations, within the context of a storage system, types of virtualizations may include block virtualization and file virtualization. Block virtualization may refer to abstraction (or separation) of logical storage from physical storage so that the storage system may be accessed without regard to physical storage or heterogeneous structure. The separation may permit administrators of the storage system flexibility in how the administrators manage storage for end users. File virtualization may eliminate dependencies between data accessed at a file level and a location where files are physically stored. This may enable optimization of storage use, server consolidation, and/or performance of non-disruptive file migrations.
The hypervisor 124-4 may provide hardware virtualization techniques that allow multiple operating systems (e.g., “guest operating systems”) to execute concurrently on a host computer, such as the computing resource 124. The hypervisor 124-4 may present a virtual operating platform to the guest operating systems, and may manage the execution of the guest operating systems. Multiple instances of a variety of operating systems may share virtualized hardware resources.
The network 130 includes one or more wired and/or wireless networks. For example, the network 130 may include a cellular network (e.g., a fifth generation (5G) network, a long-term evolution (LTE) network, a third generation (3G) network, a code division multiple access (CDMA) network, etc.), a public land mobile network (PLMN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a telephone network (e.g., the Public Switched Telephone Network (PSTN)), a private network, an ad hoc network, an intranet, the Internet, a fiber optic-based network, or the like, and/or a combination of these or other types of networks.
The number and arrangement of devices and networks shown in
A device 200 may correspond to the user device 110 and/or the platform 120. As shown in
The bus 210 includes a component that permits communication among the components of the device 200. The processor 220 is implemented in hardware, firmware, or a combination of hardware and software. The processor 220 is a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), a microprocessor, a microcontroller, a digital signal processor (DSP), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or another type of processing component. In some implementations, the processor 220 includes one or more processors capable of being programmed to perform a function. The memory 230 includes a random access memory (RAM), a read only memory (ROM), and/or another type of dynamic or static storage device (e.g., a flash memory, a magnetic memory, and/or an optical memory) that stores information and/or instructions for use by the processor 220.
The storage component 240 stores information and/or software related to the operation and use of the device 200. For example, the storage component 240 may include a hard disk (e.g., a magnetic disk, an optical disk, a magneto-optic disk, and/or a solid state disk), a compact disc (CD), a digital versatile disc (DVD), a floppy disk, a cartridge, a magnetic tape, and/or another type of non-transitory computer-readable medium, along with a corresponding drive.
The input component 250 includes a component that permits the device 200 to receive information, such as via user input (e.g., a touch screen display, a keyboard, a keypad, a mouse, a button, a switch, and/or a microphone). Additionally, or alternatively, the input component 250 may include a sensor for sensing information (e.g., a global positioning system (GPS) component, an accelerometer, a gyroscope, and/or an actuator). The output component 260 includes a component that provides output information from the device 200 (e.g., a display, a speaker, and/or one or more light-emitting diodes (LEDs)).
The communication interface 270 includes a transceiver-like component (e.g., a transceiver and/or a separate receiver and transmitter) that enables the device 200 to communicate with other devices, such as via a wired connection, a wireless connection, or a combination of wired and wireless connections. The communication interface 270 may permit the device 200 to receive information from another device and/or provide information to another device. For example, the communication interface 270 may include an Ethernet interface, an optical interface, a coaxial interface, an infrared interface, a radio frequency (RF) interface, a universal serial bus (USB) interface, a Wi-Fi interface, a cellular network interface, or the like.
The device 200 may perform one or more processes described herein. The device 200 may perform these processes in response to the processor 220 executing software instructions stored by a non-transitory computer-readable medium, such as the memory 230 and/or the storage component 240. A computer-readable medium is defined herein as a non-transitory memory device. A memory device includes memory space within a single physical storage device or memory space spread across multiple physical storage devices.
Software instructions may be read into the memory 230 and/or the storage component 240 from another computer-readable medium or from another device via the communication interface 270. When executed, software instructions stored in the memory 230 and/or the storage component 240 may cause the processor 220 to perform one or more processes described herein. Additionally, or alternatively, hardwired circuitry may be used in place of or in combination with software instructions to perform one or more processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software.
The number and arrangement of components shown in
Methods and apparatuses for neural network model compression with micro-structured weight pruning and weight unification will now be described in detail.
As shown in
The parameter reduction module 310 reduces a set of parameters of an input neural network, to obtain an output neural network. The neural network may include the parameters and an architecture as specified by a deep learning framework.
For example, the parameter reduction module 310 may sparsify (set weights to zero) and/or prune away connections of the neural network. In another example, the parameter reduction module 310 may perform matrix decomposition on parameter tensors of the neural network into a set of smaller parameter tensors. The parameter reduction module 310 may perform these methods in cascade, for example, may first sparsify the weights and then decompose a resulting matrix.
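For illustration, the sketch below shows one hypothetical way such a cascade could look, assuming a 2D weight matrix, a simple magnitude threshold for sparsification, and truncated SVD as the decomposition; the threshold and rank values are arbitrary and are not specified by this disclosure.

    import numpy as np

    def sparsify(weights, threshold=1e-2):
        # Set weights whose magnitude falls below the threshold to zero.
        return np.where(np.abs(weights) >= threshold, weights, 0.0)

    def decompose(matrix, rank=16):
        # Low-rank factorization via truncated SVD: matrix ~ left @ right.
        u, s, vt = np.linalg.svd(matrix, full_matrices=False)
        left = u[:, :rank] * s[:rank]
        right = vt[:rank, :]
        return left, right

    # Cascade: sparsify first, then decompose the resulting matrix.
    w = np.random.randn(256, 512).astype(np.float32)
    left, right = decompose(sparsify(w))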
The parameter approximation module 320 applies parameter approximation techniques on parameter tensors that are extracted from the output neural network that is obtained from the parameter reduction module 310. For example, the techniques may include any one or any combination of quantization, transformation and prediction. The parameter approximation module 320 outputs first parameter tensors that are not modified by the parameter approximation module 320, second parameter tensors that are modified or approximated by the parameter approximation module 320, and respective metadata to be used to reconstruct original parameter tensors that are not modified by the parameter approximation module 320, from the modified second parameter tensors.
The reconstruction module 330 reconstructs the original parameter tensors from the modified second parameter tensors that are obtained from the parameter approximation module 320 and/or the decoder 350, using the respective metadata that is obtained from the parameter approximation module 320 and/or the decoder 350. The reconstruction module 330 may reconstruct the output neural network, using the reconstructed original parameter tensors and the first parameter tensors.
The encoder 340 may perform entropy encoding on the first parameter tensors, the second parameter tensors and the respective metadata that are obtained from the parameter approximation module 320. This information may be encoded into a bitstream to the decoder 350.
The decoder 350 may decode the bitstream that is obtained from the encoder 340, to obtain the first parameter tensors, the second parameter tensors and the respective metadata.
The system 300 may be implemented in the platform 120, and one or more modules of
The parameter reduction module 310 or the parameter approximation module 320 may include a DNN that is trained by the following training apparatuses.
As shown in
As shown in
Let 𝒟={(x,y)} denote a data set in which a target y is assigned to an input x. Let Θ={w} denote a set of weight coefficients of a DNN (e.g., of the parameter reduction module 310 or the parameter approximation module 320). The target of network training is to learn an optimal set of weight coefficients Θ so that a target loss £(𝒟|Θ) can be minimized. For example, in previous network pruning approaches, the target loss £T(𝒟|Θ) has two parts, an empirical data loss £D(𝒟|Θ) and a sparsity-promoting regularization loss £R(Θ):
£T(𝒟|Θ)=£D(𝒟|Θ)+λR£R(Θ), (1)
where λR≥0 is a hyperparameter balancing the contributions of the data loss and the regularization loss. When λR=0, the target loss £T(𝒟|Θ) considers only the empirical data loss, and the pre-trained weight coefficients are dense.
The pre-trained weight coefficients Θ can further go through another network training process in which an optimal set of weight coefficients can be learned to achieve further model compression and inference acceleration. Embodiments include a micro-structured pruning method to achieve this goal.
Specifically, a micro-structured weight pruning loss £S(Θ) is defined, which is optimized together with the original target loss:
£(𝒟|Θ)=£T(𝒟|Θ)+λS£S(Θ), (2)
where λS≥0 is a hyperparameter to balance the contributions of the original training target and the weight pruning target. By optimizing £(𝒟|Θ) of Equation (2), the optimal set of weight coefficients that can largely help the effectiveness of further compression can be obtained. Also, the micro-structured weight pruning loss takes into consideration the underlying process of how the convolution operation is performed as a GEMM matrix multiplication process, resulting in optimized weight coefficients that can largely accelerate computation. It is worth noting that the weight pruning loss can be viewed as an additional regularization term to a target loss, with (when λR>0) or without (when λR=0) regularizations. Also, the method can be flexibly applied to any regularization loss £R(Θ).
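As a rough illustration of how Equations (1) and (2) combine, the sketch below assembles the losses in code. The L1 penalty standing in for £R(Θ) and the per-block pruning losses standing in for £S(Θ) are assumptions for illustration only, not the specific losses defined later in this disclosure.

    import torch

    def joint_loss(data_loss, weights, pruning_block_losses,
                   lambda_r=0.0, lambda_s=0.1):
        # Equation (1): empirical data loss plus sparsity-promoting regularization.
        reg_loss = sum(w.abs().sum() for w in weights)
        target_loss = data_loss + lambda_r * reg_loss
        # Equation (2): target loss plus the micro-structured weight pruning loss.
        pruning_loss = sum(pruning_block_losses)
        return target_loss + lambda_s * pruning_loss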
For both the learning effectiveness and the learning efficiency, an iterative optimization process is performed. In the first step, parts of the weight coefficients satisfying the desired micro structure are fixed, and then in the second step, the non-fixed parts of the weight coefficients are updated by back-propagating the training loss. By iteratively conducting these two steps, more and more weights can be fixed gradually, and the joint loss can be gradually optimized effectively.
Moreover, in embodiments, each layer is compressed individually, and so £S(Θ) can be further written as:
£S(Θ)=Σ_{j=1}^{N} LS(Wj), (3)
where LS(Wj) is a pruning loss defined over the j-th layer, N is the total number of layers that are involved in this training process, and Wj denotes the weight coefficients of the j-th layer. Again, since LS(Wj) is computed for each layer independently, the script j may be omitted without loss of generality.
For each network layer, its weight coefficients W form a 5-dimensional (5D) tensor with size (ci, k1, k2, k3, co). The input of the layer is a 4-dimensional (4D) tensor A of size (hi,wi,di,ci), and the output of the layer is a 4D tensor B of size (ho,wo,do,co). The sizes ci, k1, k2, k3, co, hi, wi, di, ho, wo, do are integers greater than or equal to 1. When any of the sizes ci, k1, k2, k3, co, hi, wi, di, ho, wo, do takes the value 1, the corresponding tensor reduces to a lower dimension. Each item in each tensor is a floating-point number. Let M denote a 5D binary mask of the same size as W, where each item in M is a binary number 0/1 indicating whether the corresponding weight coefficient is pruned/kept in a pre-pruning process. M is introduced to be associated with W to cope with the case in which W is from a DNN model pruned using previous structured or unstructured pruning methods, where some connections between neurons in the network are removed from computation. When W is from the original unpruned dense model, all items in M take the value 1. The output B is computed through the convolution operation ⊙ based on A, M and W:
The parameters hi, wi and di (ho, wo and do) are the height, width and depth of the input tensor A (output tensor B). The parameter ci (co) is the number of input (output) channels. The parameters k1, k2 and k3 are the sizes of the convolution kernel along the height, width and depth axes, respectively. That is, for each output channel v=1, . . . , co, the operation described in Equation (4) can be seen as a 4D weight tensor Wv of size (ci,k1,k2,k3) convolving with the input A.
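A minimal sketch of this masked convolution is shown below, assuming a 2D spatial kernel (k3 = 1, di = 1) so that PyTorch's conv2d applies; the tensor sizes are arbitrary examples.

    import torch
    import torch.nn.functional as F

    c_i, c_o, k1, k2 = 16, 32, 3, 3
    W = torch.randn(c_o, c_i, k1, k2)   # weight coefficients
    M = torch.ones_like(W)              # binary mask; all ones for a dense model
    A = torch.randn(1, c_i, 56, 56)     # input tensor (batch, c_i, h_i, w_i)

    # The mask is applied element-wise before the convolution, so pruned
    # coefficients do not contribute to the output B.
    B = F.conv2d(A, W * M, padding=1)   # output tensor (batch, c_o, h_o, w_o)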
The order of the summation operation in Equation (4) can be changed, resulting in different configurations of the shapes of the input A and the weight W (and mask M) that yield the same output B. In embodiments, two configurations are used. (1) The 5D weight tensor is reshaped into a 3D tensor of size (c′i, c′o, k), where c′i×c′o×k=ci×co×k1×k2×k3. For example, a configuration is c′i=ci, c′o=co, k=k1×k2×k3. (2) The 5D weight tensor is reshaped into a 2D matrix of size (c′i, c′o), where c′i×c′o=ci×co×k1×k2×k3. For example, some configurations are c′i=ci, c′o=co×k1×k2×k3, or c′o=co, c′i=ci×k1×k2×k3.
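The sketch below illustrates the two reshaping configurations with arbitrary example dimensions; the axis-transpose order is one possible choice and is not mandated by the text.

    import numpy as np

    c_i, k1, k2, k3, c_o = 16, 3, 3, 1, 32
    W5d = np.random.randn(c_i, k1, k2, k3, c_o)

    # Configuration (1): 3D tensor of size (c'_i, c'_o, k) with
    # c'_i = c_i, c'_o = c_o, k = k1*k2*k3.
    W3d = W5d.transpose(0, 4, 1, 2, 3).reshape(c_i, c_o, k1 * k2 * k3)

    # Configuration (2): 2D matrix of size (c'_i, c'_o) with
    # c'_i = c_i, c'_o = c_o*k1*k2*k3.
    W2d = W5d.transpose(0, 4, 1, 2, 3).reshape(c_i, c_o * k1 * k2 * k3)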
The desired micro-structure of the weight coefficients is aligned with the underlying GEMM matrix multiplication process of how the convolution operation is implemented so that the inference computation of using the learned weight coefficients is accelerated. In embodiments, block-wise micro-structures for the weight coefficients are used in each layer in the 3D reshaped weight tensor or the 2D reshaped weight matrix. Specifically, for the case of reshaped 3D weight tensor, it is partitioned into blocks of size (gi,go,gk), and for the case of reshaped 2D weight matrix, it is partitioned into blocks of size (gi,go). The pruning operation happens within the 2D or 3D blocks, i.e., pruned weights in a block are set to be all zeros. A pruning loss of the block can be computed measuring the error introduced by such a pruning operation. Given this micro-structure, during an iteration, the part of the weight coefficients to be pruned is determined based on the pruning loss. Then, in the second step, the pruned weights are fixed, and the normal neural network training process is performed and the remaining un-fixed weight coefficients are updated through the back-propagation mechanism.
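A sketch of the block partitioning and selection is given below, assuming a 2D reshaped weight matrix whose dimensions are divisible by the block sizes, and assuming the pruning loss of a block is its squared L2 norm (the error introduced by zeroing it out); that choice of loss is an assumption for illustration.

    import numpy as np

    def block_pruning_losses(W2d, g_i, g_o):
        # Partition the 2D weight matrix into (g_i, g_o) blocks and measure
        # the error of setting each block to zero.
        losses = {}
        rows, cols = W2d.shape
        for r in range(0, rows, g_i):
            for c in range(0, cols, g_o):
                block = W2d[r:r + g_i, c:c + g_o]
                losses[(r, c)] = float(np.sum(block ** 2))
        return losses

    def select_pruning_blocks(losses, p):
        # Rank blocks by pruning loss in ascending order and take the top p
        # percent (p given as a percentage).
        ranked = sorted(losses, key=losses.get)
        return ranked[: int(len(ranked) * p / 100)]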
After obtaining the set of pruning micro-structure, the target turns to finding a set of updated optimal weight coefficients W* and the corresponding weight mask M* by iteratively minimizing the joint loss described in Equation (2). In the first embodiment illustrated by
Then, in the second step, the weight update module 430 fixes the weight coefficients that are marked by P(t−1) as being micro-structurally pruned, and then updates the remaining unfixed weight coefficients of WP(t−1) through a neural network training process, resulting in updated W(t) and M(t). In embodiments, the pre-pruned weight coefficients masked by the pre-trained pruning mask M are forced to be fixed during this network training process (i.e., to stay zero). In another embodiment, no such restriction is placed on the pre-pruned weights, and a pre-pruned weight can be reset to a value other than zero during the training process, resulting in a less sparse model with better prediction performance, possibly even better than that of the original pretrained model.
Specifically, let 𝒟={(x,y)} denote a training dataset, where 𝒟 can be the same as the original dataset 𝒟0={(x0,y0)} based on which the pre-trained weight coefficients W are obtained. 𝒟 can also be a different dataset from 𝒟0, but with the same data distribution as the original dataset 𝒟0. In the second step, the network forward computation module 415 passes each input x through the current network via a Network Forward Computation process using the current weight coefficients WP(t−1) and mask MP(t−1), which generates an estimated output ŷ. Based on the ground-truth annotation y and the estimated output ŷ, the target loss computation module 420 computes the target training loss £T(𝒟|Θ), the gradient computation module 425 computes its gradient, and the weight update module 430 updates the non-fixed weight coefficients of WP(t−1) through back-propagation.
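The sketch below illustrates the masked update used in such a Back Propagation and Weight Update step, assuming P is a 0/1 tensor in which 1 marks micro-structurally pruned (fixed) positions and that plain SGD is used; the learning rate is arbitrary.

    import torch

    def masked_update(W, P, grad, lr=1e-3):
        # Zero out gradients at pruned positions so that only the non-fixed
        # coefficients change, then keep the pruned positions exactly zero.
        W = W - lr * grad * (1.0 - P)
        return W * (1.0 - P)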
In the second embodiment of the training process illustrated by
Then in the second step, the weight update module 430 fixes the weight coefficients that are marked by P(t−1) as being micro-structurally pruned, and then updates the remaining unfixed weight coefficients of W(t−1) through a neural network training process, resulting in updated W(t). Similar to the first embodiment of
£J(𝒟|Θ)=£T(𝒟|Θ)+λres£res(W(t−1)). (5)
£res(W(t−1)) measures the difference between the current weights W(t−1) and the target pruned weights WP(t−1). For example, the L1 norm can be used:
£res(W(t−1))=∥W(t−1)−WP(t−1)∥1. (6)
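A short sketch of Equations (5) and (6), assuming the L1 norm as stated; lambda_res corresponds to λres and its value is illustrative.

    import torch

    def residual_loss(W, W_pruned):
        # Equation (6): L1 distance between the current and target pruned weights.
        return (W - W_pruned).abs().sum()

    def joint_retraining_loss(target_loss, W, W_pruned, lambda_res=0.1):
        # Equation (5): original target loss plus the weighted residual term.
        return target_loss + lambda_res * residual_loss(W, W_pruned)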
Then, the gradient computation module 425 computes the gradient of the joint loss G(W(t−1)). The automatic gradient computing method used by deep learning frameworks such as tensorflow or pytorch can be used to compute G(W(t−1)). Based on the gradient G(W(t−1)) and the micro-structurally pruning mask P(t−1), the weight update module 430 updates the non-fixed weight coefficients of W(t−1) through back-propagation using a Back Propagation and Weight Update process. The retraining process is also an iterative process itself. Multiple iterations are taken to update the non-fixed parts of W(t−1), e.g., until the target loss converges. Then the system goes to the next iteration t, where given a pruning ratio p(t), a new set of pruning micro-structures (as well as the new micro-structurally pruning mask P(t)) are determined through the Pruning Micro-Structure Selection process. Similar to the previous embodiment of
During this whole iterative process, at a T-th iteration, pruned weight coefficients WP(T) can be computed through the Weight Pruning process, in which the selected pruning micro-structures masked by P(T) are pruned, resulting in an updated weight mask MP(T). This WP(T) and MP(T) can be used to generate the final updated model W* and M*. For example, W*=WP(T), and M*=M·MP(T).
In embodiments, the hyperparameter p(t) may increase its value during iterations as t increases, so that more and more weight coefficients will be pruned and fixed throughout the entire iterative learning process.
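One possible schedule for the pruning ratio p(t) is sketched below, assuming a simple linear ramp; the start and end percentages are arbitrary assumptions.

    def pruning_ratio(t, total_iterations, p_start=10.0, p_end=60.0):
        # Linearly increase the percentage of blocks pruned as iterations proceed.
        frac = min(t / float(total_iterations), 1.0)
        return p_start + (p_end - p_start) * frac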
The micro-structured pruning method targets reducing the model size, speeding up computation using the optimized weight coefficients, and preserving the prediction performance of the original DNN model. It can be applied to a pre-trained dense model, or to a pre-trained sparse model pruned by previous structured or unstructured pruning methods, to achieve additional compression effects.
Through the iterative retraining process, the method can effectively maintain the performance of the original prediction target while pursuing compression and computation efficiency. The iterative retraining process also gives the flexibility of introducing different losses at different times, letting the system focus on different targets during the optimization process.
The method can be applied to datasets with different data forms. The input/output data are 4D tensors, which can be real video segments, images, or extracted feature maps.
As shown in
The sparsity-promoting regularization loss places regularization over the entire set of weight coefficients, and the resulting sparse weights have a weak relationship with inference efficiency or computation acceleration. From another perspective, after pruning, the sparse weights can further go through another network training process in which an optimal set of weight coefficients can be learned that improves the efficiency of further model compression.
A weight unification loss £U(Θ) is optimized together with the original target loss:
£(𝒟|Θ)=£T(𝒟|Θ)+λU£U(Θ), (7)
where λU≥0 is a hyperparameter to balance the contributions of the original training target and the weight unification target. By jointly optimizing £(𝒟|Θ) of Equation (7), the optimal set of weight coefficients that can largely help the effectiveness of further compression is obtained. Also, the weight unification loss takes into consideration the underlying process of how the convolution operation is performed as a GEMM matrix multiplication process, resulting in optimized weight coefficients that can largely accelerate computation. It is worth noting that the weight unification loss can be viewed as an additional regularization term to a target loss, with (when λR>0) or without (when λR=0) regularizations. Also, the method can be flexibly applied to any regularization loss £R(Θ).
In embodiments, the weight unification loss £U(Θ) further includes the compression rate loss £C(Θ), the unification distortion loss £I(Θ), and the computation speed loss £S(Θ):
£U(Θ)=£I(Θ)+λC£C(Θ)+λS£S(Θ), (8)
Detailed descriptions of these loss terms are provided in later sections. For both the learning effectiveness and the learning efficiency, an iterative optimization process is performed. In the first step, parts of the weight coefficients satisfying the desired structure are fixed, and then in the second step, the non-fixed parts of the weight coefficients are updated by back-propagating the training loss. By iteratively conducting these two steps, more and more weights can be fixed gradually, and the joint loss can be gradually optimized effectively.
Moreover, in embodiments, each layer is compressed individually, and £U(Θ) can be further written as:
£U(Θ)=Σ_{j=1}^{N} LU(Wj), (9)
where LU (Wj) is a unification loss defined over the j-th layer; N is the total number of layers where the quantization loss is measured; and Wj denotes the weight coefficients of the j-th layer. Again, since LU(Wj) is computed for each layer independently, in the rest of the disclosure the script j may be omitted without loss of generality.
For each network layer, its weight coefficients W form a 5-dimensional (5D) tensor with size (ci, k1, k2, k3, co). The input of the layer is a 4-dimensional (4D) tensor A of size (hi,wi,di,ci), and the output of the layer is a 4D tensor B of size (ho,wo,do,co). The sizes ci, k1, k2, k3, co, hi, wi, di, ho, wo, do are integers greater than or equal to 1. When any of the sizes ci, k1, k2, k3, co, hi, wi, di, ho, wo, do takes the value 1, the corresponding tensor reduces to a lower dimension. Each item in each tensor is a floating-point number. Let M denote a 5D binary mask of the same size as W, where each item in M is a binary number 0/1 indicating whether the corresponding weight coefficient is pruned/kept. M is introduced to be associated with W to cope with the case in which W is from a pruned DNN model in which some connections between neurons in the network are removed from computation. When W is from the original unpruned pretrained model, all items in M take the value 1. The output B is computed through the convolution operation ⊙ based on A, M and W:
The parameters hi, wi and di (ho, wo and do) are the height, width and depth of the input tensor A (output tensor B). The parameter ci (co) is the number of input (output) channels. The parameters k1, k2 and k3 are the sizes of the convolution kernel along the height, width and depth axes, respectively. That is, for each output channel v=1, . . . , co, the operation described in Equation (10) can be seen as a 4D weight tensor Wv of size (ci,k1,k2,k3) convolving with the input A.
The order of the summation operation in Equation (10) can be changed, and in embodiments, the operation of Equation (10) is performed as follows. The 5D weight tensor is reshaped into a 2D matrix of size (c′i, c′o) where c′i×c′o=ci×co×k1×k2×k3. For example, some embodiments are c′i=ci, c′o=co×k1×k2×k3, or c′o=co, c′i=ci×k1×k2×k3.
The desired structure of the weight coefficients is designed by taking into consideration two aspects. First, the structure of the weight coefficients is aligned with the underlying GEMM matrix multiplication process of how the convolution operation is implemented so that the inference computation of using the learned weight coefficients is accelerated. Second, the structure of the weight coefficients can help to improve the quantization and entropy coding efficiency for further compression. In embodiments, a block-wise structure for the weight coefficients is used in each layer in the 2D reshaped weight matrix. Specifically, the 2D matrix is partitioned into blocks of size (gi,go), and all coefficients within the block are unified. Unified weights in a block are set to follow a pre-defined unification rule, e.g., all values are set to be the same so that one value can be used to represent the whole block in the quantization process that yields high efficiency. There can be multiple rules of unifying weights, each associated with a unification distortion loss measuring the error introduced by taking this rule. For example, instead of setting the weights to be the same, the weights are set to have the same absolute value while keeping their original signs. Given this designed structure, during an iteration, the part of the weight coefficients is determined to be fixed by taking into consideration the unification distortion loss, the estimated compression rate loss, and the estimated speed loss. Then, in the second step, the normal neural network training process is performed and the remaining un-fixed weight coefficients are updated through the back-propagation mechanism.
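The sketch below shows the two block unification rules mentioned above together with their distortion losses, assuming the unified value is the block mean (or mean absolute value) and that squared error is used as the distortion; both choices are assumptions for illustration.

    import numpy as np

    def unify_same_value(block):
        # Rule 1: set every coefficient in the block to the same value.
        unified = np.full_like(block, block.mean())
        distortion = float(np.sum((block - unified) ** 2))
        return unified, distortion

    def unify_same_abs(block):
        # Rule 2: same absolute value for every coefficient, original signs kept.
        unified = np.sign(block) * np.abs(block).mean()
        distortion = float(np.sum((block - unified) ** 2))
        return unified, distortion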
Similarly, the compression rate loss £C(u,b) of Equation (8) reflects the compression efficiency of unifying weights in b using method u. For example, when all weights are set to be the same, only one number is used to represent the whole block, and the compression rate is rcompression=gi·go. £C(u,b) can be defined as 1/rcompression.
The speed loss £S(u,b) in Equation (8) reflects the estimated computation speed of using the unified weight coefficients in b with method u, which is a function of the number of multiplication operation in computation using the unified weight coefficients.
At this point, for each possible method u of unifying weights in b by the weight unifier, the weight unification loss £U(u,b) of Equation (8) is computed based on £I(u,b), £C(u,b), and £S(u,b). The optimal weight unifying method u* is selected as the one with the smallest weight unification loss £U*(u,b).
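Building on the unification rules sketched above (e.g., unify_same_value and unify_same_abs), the following illustrates how £U(u,b) could be assembled and the best method u* selected; the compression-rate and speed estimates are illustrative assumptions, not the definitions used by the disclosure.

    import numpy as np

    def unification_loss(block, method, g_i, g_o, lambda_c=1.0, lambda_s=1.0):
        unified, distortion = method(block)              # unification distortion
        rate_loss = 1.0 / (g_i * g_o)                    # compression rate loss
        speed_loss = float(np.count_nonzero(unified))    # remaining multiplications
        return distortion + lambda_c * rate_loss + lambda_s * speed_loss

    def best_unification_method(block, methods, g_i, g_o):
        # Pick the method u* with the smallest weight unification loss.
        return min(methods, key=lambda m: unification_loss(block, m, g_i, g_o))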
Once the weight unifying method u* is determined for every block b, the target turns to finding a set of updated optimal weight coefficients W* and the corresponding weight mask M* by iteratively minimizing the joint loss described in Equation (7). Specifically, for the t-th iteration, there are the current weight coefficients W(t−1) and mask M(t−1). Also, a weight unifying mask Q(t−1) is maintained throughout the training process. The weight unifying mask Q(t−1) has the same shape as W(t−1) and records whether a corresponding weight coefficient is unified or not. Then, the weight unification module 445 computes unified weight coefficients WU(t−1) and a new unifying mask Q(t−1) through a Weight Unification process. In the Weight Unification process, the blocks are ranked based on their unification loss £U(u*,b) in ascending order. Given a hyperparameter q, the top q% blocks are selected to be unified, and the weight unifier unifies the selected blocks b using the corresponding determined method u*, resulting in unified weights WU(t−1) and a weight mask MU(t−1). The corresponding entries in the unifying mask Q(t−1) are marked as being unified. In embodiments, MU(t−1) is different from M(t−1), in which case, for a block having both pruned and unpruned weight coefficients, the originally pruned weight coefficients will be set to a non-zero value again by the weight unifier, and the corresponding items in MU(t−1) will be changed. In another embodiment, MU(t−1) is the same as M(t−1), in which case, for blocks having both pruned and unpruned weight coefficients, only the unpruned weights will be reset, while the pruned weights remain zero.
Then in the second step, the weight update module 450 fixes the weight coefficients that are marked in Q(t−1) as being unified, and then updates the remaining unfixed weight coefficients of W(t−1) through a neural network training process, resulting in updated W(t) and M(t).
Let 𝒟={(x,y)} denote a training dataset, where 𝒟 can be the same as the original dataset 𝒟0={(x0,y0)} based on which the pre-trained weight coefficients W are obtained. 𝒟 can also be a different dataset from 𝒟0, but with the same data distribution as the original dataset 𝒟0. In the second step, the network forward computation module 415 passes each input x through the current network via a Network Forward Computation process using the current weight coefficients WU(t−1) and mask MU(t−1), which generates an estimated output ŷ. Based on the ground-truth annotation y and the estimated output ŷ, the target training loss £T(𝒟|Θ) in Equation (7) is computed, its gradient is computed, and the weight update module 450 updates the non-fixed weight coefficients of WU(t−1) through back-propagation.
In embodiments, the hyperparameter q(t) increases its value during each iteration as t increases, so that more and more weight coefficients will be unified and fixed throughout the entire iterative learning process.
The unification regularization targets improving the efficiency of further compression of the learned weight coefficients and speeding up computation using the optimized weight coefficients. This can significantly reduce the DNN model size and speed up the inference computation.
Through the iterative retraining process, the method can effectively maintain the performance of the original training target while pursuing compression and computation efficiency. The iterative retraining process also gives the flexibility of introducing different losses at different times, letting the system focus on different targets during the optimization process.
The method can be applied to datasets with different data forms. The input/output data are 4D tensors, which can be real video segments, images, or extracted feature maps.
As shown in
As shown in
From another perspective, the pre-trained weight coefficients Θ can further go through another network training process in which an optimal set of weight coefficients can be learned to improve the efficiency of further model compression and inference acceleration. This disclosure describes a micro-structured pruning and unification method to achieve this goal.
Specifically, a micro-structured weight pruning loss £S(Θ) and a micro-structured weight unification loss £U(Θ) are defined, which are optimized together with the original target loss:
£(𝒟|Θ)=£T(𝒟|Θ)+λU£U(Θ)+λS£S(Θ), (11)
where λS≥0 and λU≥0 are hyperparameters to balance the contributions of the original training target, the weight unification target, and the weight pruning target. By jointly optimizing £(𝒟|Θ) of Equation (11), the optimal set of weight coefficients that can largely help the effectiveness of further compression is obtained. Also, the weight unification loss takes into consideration the underlying process of how the convolution operation is performed as a GEMM matrix multiplication process, resulting in optimized weight coefficients that can largely accelerate computation. It is worth noting that the weight pruning and weight unification losses can be viewed as additional regularization terms to a target loss, with (when λR>0) or without (when λR=0) regularizations. Also, the method can be flexibly applied to any regularization loss £R(Θ).
For both the learning effectiveness and the learning efficiency, an iterative optimization process is performed. In the first step, parts of the weight coefficients satisfying the desired structure are fixed, and then in the second step, the non-fixed parts of the weight coefficients are updated by back-propagating the training loss. By iteratively conducting these two steps, more and more weights can be fixed gradually, and the joint loss can be gradually optimized effectively.
Moreover, in embodiments, each layer is compressed individually, and £U(Θ) and £S(Θ) can be further written as:
£U(Θ)=Σ_{j=1}^{N} LU(Wj), £S(Θ)=Σ_{j=1}^{N} LS(Wj), (12)
where LU(Wj) is a unification loss defined over the j-th layer, LS(Wj) is a pruning loss defined over the j-th layer, N is the total number of layers that are involved in this training process, and Wj denotes the weight coefficients of the j-th layer. Again, since LU(Wj) and LS(Wj) are computed for each layer independently, in the rest of the disclosure the script j is omitted without loss of generality.
For each network layer, its weight coefficients W form a 5-dimensional (5D) tensor with size (ci, k1, k2, k3, co). The input of the layer is a 4-dimensional (4D) tensor A of size (hi,wi,di,ci), and the output of the layer is a 4D tensor B of size (ho,wo,do,co). The sizes ci, k1, k2, k3, co, hi, wi, di, ho, wo, do are integers greater than or equal to 1. When any of the sizes ci, k1, k2, k3, co, hi, wi, di, ho, wo, do takes the value 1, the corresponding tensor reduces to a lower dimension. Each item in each tensor is a floating-point number. Let M denote a 5D binary mask of the same size as W, where each item in M is a binary number 0/1 indicating whether the corresponding weight coefficient is pruned/kept in a pre-pruning process. M is introduced to be associated with W to cope with the case in which W is from a pruned DNN model in which some connections between neurons in the network are removed from computation. When W is from the original unpruned dense model, all items in M take the value 1. The output B is computed through the convolution operation ⊙ based on A, M and W:
The parameters hi, wi and di (ho, wo and do) are the height, width and depth of the input tensor A (output tensor B). The parameter ci (co) is the number of input (output) channels. The parameters k1, k2 and k3 are the sizes of the convolution kernel along the height, width and depth axes, respectively. That is, for each output channel v=1, . . . , co, the operation described in Equation (13) can be seen as a 4D weight tensor Wv of size (ci,k1,k2,k3) convolving with the input A.
The order of the summation operation in Equation (13) can be changed, resulting in different configurations of the shapes of input A, weight W (and mask M) to obtain the same output B. In embodiments, two configurations are taken. (1) The 5D weight tensor is reshaped into a 3D tensor of size (c′i, c′o, k), where c′i×c′o×k=ci×co×k1×k2×k3. For example, a configuration is c′i=ci, c′o=co, k=k1×k2×k3. (2) The 5D weight tensor is reshaped into a 2D matrix of size (c′i, c′o), where c′i×c′o=ci×co×k1×k2×k3. For example, some configurations are c′i=ci, c′o=co×k1×k2×k3, or c′o=co, c′i=ci×k1×k2×k3.
The desired micro-structure of the weight coefficients is designed by taking into consideration two aspects. First, the micro-structure of the weight coefficients is aligned with the underlying GEMM matrix multiplication process of how the convolution operation is implemented so that the inference computation of using the learned weight coefficients is accelerated. Second, the micro-structure of the weight coefficients can help to improve the quantization and entropy coding efficiency for further compression. In embodiments, block-wise micro-structures for the weight coefficients are used in each layer in the 3D reshaped weight tensor or the 2D reshaped weight matrix. Specifically, for the case of reshaped 3D weight tensor, it is partitioned into blocks of size (gi,go,gk), and all coefficients within the block are pruned or unified. For the case of reshaped 2D weight matrix, it is partitioned into blocks of size (gi,go), and all coefficients within the block are pruned or unified. Pruned weights in a block are set to be all zeros. A pruning loss of the block can be computed measuring the error introduced by such a pruning operation. Unified weights in a block are set to follow a pre-defined unification rule, e.g., all values are set to be the same so that one value can be used to represent the whole block in the quantization process which yields high efficiency. There can be multiple rules of unifying weights, each associated with a unification distortion loss measuring the error introduced by taking this rule. For example, instead of setting the weights to be the same, the weights are set to have the same absolute value while keeping their original signs. Given this micro-structure, during an iteration, the part of the weight coefficients to be pruned or unified is determined by taking into consideration the pruning loss and the unification loss. Then, in the second step, the pruned and unified weights are fixed, and the normal neural network training process is performed and the remaining un-fixed weight coefficients are updated through the back-propagation mechanism.
In method 2, for each layer with weight coefficients W and mask M, the unification losses LU(b) and LU(W) are computed in a similar way as in method 1. Then, given a unification ratio u, the top u′% of layers are selected, whose micro-structure blocks will be unified in a similar way as in method 1. Then, the pruning loss LS(b) of the remaining layers is computed in the same way as in method 1. Given a pruning ratio p, all the blocks of all the remaining layers are ranked according to LS(b) in ascending order, and the top p% blocks are selected to be pruned. For the remaining blocks of the remaining layers, an optional additional step can be taken, in which the remaining blocks of the remaining layers are ranked based on the unification loss LU(b) in ascending order, and the top (u−u′)% are selected as {bu} to be unified.
In method 3, for each layer with weight coefficients W and mask M, for each block b in W, the unification loss LU(b) and pruning loss LS(b) are computed in the same way as in method 1. Given the pruning ratio p and unification ratio u, the blocks of this layer are ranked according to LS(b) in ascending order, and the top p% blocks are selected as {bs} to be pruned. The remaining blocks of this layer are ranked based on the unification loss LU(b) in ascending order, and the top u% are selected as {bu} to be unified.
In method 4, for each layer with weight coefficients W and mask M, for each block b in W, the unification loss LU(b) and pruning loss LS(b) are computed in the same way as in method 1. Given the pruning ratio p and unification ratio u, all the blocks from all the layers of the DNN model are ranked according to LS(b) in ascending order, and the top p% blocks are selected to be pruned. The remaining blocks of the entire model are ranked based on the unification loss LU(b) in ascending order, and the top u% are selected to be unified.
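As a sketch of the per-layer selection in method 3 (methods 2 and 4 differ mainly in whether the ranking is done per layer or across the whole model), assume that the per-block losses LS(b) and LU(b) have already been computed and stored in dictionaries keyed by block position, and that p and u are given as percentages.

    def select_blocks_method3(prune_losses, unify_losses, p, u):
        # Rank all blocks of the layer by pruning loss in ascending order and
        # select the top p% to be pruned.
        ranked = sorted(prune_losses, key=prune_losses.get)
        n_prune = int(len(ranked) * p / 100)
        blocks_to_prune = ranked[:n_prune]

        # Rank the remaining blocks by unification loss in ascending order and
        # select the top u% to be unified.
        remaining = sorted(ranked[n_prune:], key=unify_losses.get)
        blocks_to_unify = remaining[: int(len(remaining) * u / 100)]
        return blocks_to_prune, blocks_to_unify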
After obtaining the set of pruning micro-structures and the set of unification micro-structures, the target turns to finding a set of updated optimal weight coefficients W* and the corresponding weight mask M* by iteratively minimizing the joint loss described in Equation (11). In the first embodiment illustrated by
Then in the second step, the weight update module 465 fixes the weight coefficients that are marked by U(t−1) and P(t−1) as being micro-structurally unified or micro-structurally pruned, and then updates the remaining unfixed weight coefficients of W(t−1) through a neural network training process, resulting in updated W(t) and M(t).
Specifically, let 𝒟={(x,y)} denote a training dataset, where 𝒟 can be the same as the original dataset 𝒟0={(x0,y0)} based on which the pre-trained weight coefficients W are obtained. 𝒟 can also be a different dataset from 𝒟0, but with the same data distribution as the original dataset 𝒟0. In the second step, the network forward computation module 415 passes each input x through the current network via a Network Forward Computation process using the current weight coefficients WU(t−1) and mask M, which generates an estimated output ŷ. Based on the ground-truth annotation y and the estimated output ŷ, the target loss computation module 420 computes the target training loss £T(𝒟|Θ) in Equation (11) through a Compute Target Loss process. Then, the gradient computation module 425 computes the gradient of the target loss G(WU(t−1)). The automatic gradient computing method used by deep learning frameworks such as tensorflow or pytorch can be used to compute G(WU(t−1)). Based on the gradient G(WU(t−1)), the micro-structurally unifying mask U(t−1), and the micro-structurally pruning mask P(t−1), the weight update module 465 updates the non-fixed weight coefficients of WU(t−1) through back-propagation using a Back Propagation and Weight Update process. The retraining process is also an iterative process itself. Multiple iterations are taken to update the non-fixed parts of WU(t−1), e.g., until the target loss converges. Then the system goes to the next iteration t, in which, given a new unification ratio u(t) and pruning ratio p(t), a new set of unifying micro-structures and pruning micro-structures (as well as the new micro-structurally unifying mask U(t) and micro-structurally pruning mask P(t)) are determined through the Pruning and Unification Micro-Structure Selection process.
In the second embodiment of the training process illustrated by
Then in the second step, the weight update module 465 fixes the weight coefficients which are marked by U(t−1) and P(t−1) as being micro-structurally unified or micro-structurally pruned, and then updates the remaining unfixed weight coefficients of W(t−1) through a neural network training process, resulting in updated W(t). Similar to the first embodiment of
£res(W(t−1)) measures the difference between the current weights W(t−1) and the target pruned and unified weights WPU(t−1). For example, the L1 norm can be used:
£res(W(t−1))=∥W(t−1)−WPU(t−1)∥1. (14)
Then, the gradient computation module 425 computes the gradient of the joint loss G(W(t−1)). The automatic gradient computing method used by deep learning frameworks such as tensorflow or pytorch can be used to compute G(W(t−1)). Based on the gradient G(W(t−1)) and the micro-structurally unifying mask U(t−1) and the micro-structurally pruning mask P(t−1), the weight update module 465 updates the non-fixed weight coefficients of W(t−1) through back-propagation using a Back Propagation and Weight Update process. The retraining process is also an iterative process itself. Multiple iterations are taken to update the non-fixed parts of W(t−1), e.g., until the target loss converges. Then the system goes to the next iteration t, in which given a unification ratio u(t) and pruning ratio p(t), a new set of unifying micro-structures and pruning micro-structures (as well as the new micro-structurally unifying mask U(t) and micro-structurally pruning mask P(t)) are determined through the Pruning and Unification Micro-Structure Selection process.
During this whole iterative process, at a T-th iteration, pruned and unified weight coefficients WPU(T) can be computed through the Weight Pruning and Unification process, in which the selected pruning micro-structures masked by P(T) are pruned and weights in the selected unification micro-structures masked by U(T) are unified, resulting in an updated weight mask MPU(T). Similar to the previous embodiment of
In embodiments, the hyperparameters u(t) and p(t) may increase in value as t increases, so that progressively more weight coefficients are pruned, unified, and fixed over the course of the iterative learning process.
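For example, a simple monotone schedule could be used to grow u(t) and p(t); the linear ramp and the end values below are illustrative assumptions rather than values prescribed by the embodiments.

```python
# A sketch of one possible increasing schedule for the unification ratio u(t)
# and pruning ratio p(t); the linear ramp and final ratios are assumptions.
def ratio_schedule(t, total_iters, u_final=0.3, p_final=0.5):
    frac = min(1.0, (t + 1) / total_iters)
    return u_final * frac, p_final * frac   # (u(t), p(t)) grow as t increases

# e.g., ratio_schedule(0, 10) -> (0.03, 0.05); ratio_schedule(9, 10) -> (0.3, 0.5)
```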
The unification regularization improves the efficiency of further compressing the learned weight coefficients and speeds up computation that uses the optimized weight coefficients. This can significantly reduce the DNN model size and speed up inference computation.
Through the iterative retraining process, the method can effectively maintain the performance of the original training target while pursuing compression and computation efficiency. The iterative retraining process also provides the flexibility of introducing different losses at different times, allowing the system to focus on different targets during the optimization process.
The method can be applied to datasets with different data forms. The input/output data are 4D tensors, which can be real video segments, images, or extracted feature maps.
In some implementations, one or more process blocks of
The method 500 is performed to train a deep neural network that is used to reduce parameters of an input neural network, to obtain an output neural network.
As shown in
In operation 520, the method 500 includes pruning the input weights, based on the selected pruning micro-structure blocks.
In operation 530, the method 500 includes updating the input mask and a pruning mask indicating whether each of the input weights is pruned, based on the selected pruning micro-structure blocks.
In operation 540, the method 500 includes updating the pruned input weights and the updated input mask, based on the updated pruning mask, to minimize a loss of the deep neural network.
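The pruning and mask-update operations 520 and 530 can be sketched as follows. The helper name, the representation of micro-structure blocks as index slices (block_slices), and the block selection passed in from operation 510 are assumptions for illustration; the loss-minimizing update of operation 540 follows the masked back-propagation sketch given earlier.

```python
# A NumPy sketch of operations 520-530: pruning the selected micro-structure
# blocks, updating the input mask, and producing the pruning mask.
import numpy as np

def prune_selected_blocks(weights, mask, block_slices, selected):
    pruning_mask = np.zeros_like(weights)
    for i in selected:                 # block indices chosen in operation 510
        sl = block_slices[i]
        weights[sl] = 0.0              # operation 520: prune the block
        mask[sl] = 0.0                 # operation 530: update the input mask
        pruning_mask[sl] = 1.0         # mark these coefficients as pruned
    return weights, mask, pruning_mask
```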
The updating of the pruned input weights and the updated input mask may include reducing parameters of a first training neural network, to estimate a second training neural network, using the deep neural network of which the input weights are pruned and masked by the updated input mask, determining the loss of the deep neural network, based on the estimated second training neural network and a ground-truth neural network, determining a gradient of the determined loss, based on the pruned input weights, and updating the pruned input weights and the updated input mask, based on the determined gradient and the updated pruning mask, to minimize the determined loss.
The deep neural network may be further trained by reshaping the input weights masked by the input mask, partitioning the reshaped input weights into the plurality of blocks of the input weights, unifying multiple weights in one or more of the plurality of blocks into which the reshaped input weights are partitioned, among the input weights, updating the input mask and a unifying mask indicating whether each of the input weights is unified, based on the unified multiple weights in the one or more of the plurality of blocks, and updating the updated input mask and the input weights among which the multiple weights in the one or more of the plurality of blocks are unified, based on the updated unifying mask, to minimize the loss of the deep neural network.
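As a concrete, hedged illustration of unifying the multiple weights inside one selected micro-structure block: the rule below replaces every weight in the block with a shared magnitude (the block's mean absolute value) while preserving each weight's sign. The description only requires that the weights in a block be unified; this particular rule and the helper name are assumptions for illustration.

```python
# A NumPy sketch of unifying all weights inside one micro-structure block.
# The shared-magnitude, sign-preserving rule is an illustrative assumption.
import numpy as np

def unify_block(block):
    shared_mag = np.abs(block).mean()     # one magnitude shared by the block
    return np.sign(block) * shared_mag    # keep signs, unify magnitudes

# e.g., unify_block(np.array([0.2, -0.4, 0.6])) -> array([ 0.4, -0.4,  0.4])
```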
The updating of the updated input mask and the input weights may include reducing parameters of a first training neural network, to estimate a second training neural network, using the deep neural network of which the input weights are unified and masked by the updated input mask, determining the loss of the deep neural network, based on the estimated second training neural network and a ground-truth neural network, determining a gradient of the determined loss, based on the input weights among which the multiple weights in the one or more of the plurality of blocks are unified, and updating the pruned input weights and the updated input mask, based on the determined gradient and the updated unifying mask, to minimize the determined loss.
The deep neural network may be further trained by selecting unification micro-structure blocks to be unified, from the plurality of blocks of the input weights masked by the input mask, unifying multiple weights in one or more of the plurality of blocks of the pruned input weights, based on the selected unification micro-structure blocks, to obtain pruned and unified input weights of the deep neural network, and updating a unifying mask indicating whether each of the input weights is unified, based on the unified multiple weights in the one or more of the plurality of blocks. The updating the input mask may include updating the input mask, based on the selected pruning micro-structure blocks and the selected unification micro-structure blocks, to obtain a pruning-unification mask. The updating the pruned input weights and the updated input mask may include updating the pruned and unified input weights and the pruning-unification mask, based on the updated pruning mask and the updated unifying mask, to minimize the loss of the deep neural network.
The updating of the pruned and unified input weights and the pruning-unification mask may include reducing parameters of a first training neural network, to estimate a second training neural network, using the deep neural network of which the pruned and unified input weights are masked by the pruning-unification mask, determining the loss of the deep neural network, based on the estimated second training neural network and a ground-truth neural network, determining a gradient of the determined loss, based on the input weights among which the multiple weights in the one or more of the plurality of blocks are unified, and updating the pruned and unified input weights and the pruning-unification mask, based on the determined gradient, the updated pruning mask and the updated unifying mask, to minimize the determined loss.
The pruning micro-structure blocks may be selected from the plurality of blocks of the input weights masked by the input mask, based on a predetermined pruning ratio of the input weights to be pruned for each iteration.
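One way to realize this selection is sketched below. Ranking blocks by their L1 norm and pruning the smallest fraction is an illustrative criterion; the embodiments fix only the predetermined pruning ratio, not the ranking rule, and the helper name is hypothetical.

```python
# A sketch of selecting pruning micro-structure blocks under a predetermined
# pruning ratio. The L1-norm ranking is an assumed criterion for illustration.
import numpy as np

def select_pruning_blocks(blocks, pruning_ratio):
    scores = [np.abs(b).sum() for b in blocks]      # L1 norm of each block
    num_to_prune = int(len(blocks) * pruning_ratio)
    order = np.argsort(scores)                      # smallest norms first
    return set(order[:num_to_prune].tolist())       # indices of blocks to prune
```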
As shown in
The apparatus 600 trains a deep neural network that is used to reduce parameters of an input neural network, to obtain an output neural network.
The selecting code 610 is configured to cause at least one processor to select pruning micro-structure blocks to be pruned, from a plurality of blocks of input weights of the deep neural network that are masked by an input mask.
The pruning code 620 is configured to cause at least one processor to prune the input weights, based on the selected pruning micro-structure blocks.
The first updating code 630 is configured to cause at least one processor to update the input mask and a pruning mask indicating whether each of the input weights is pruned, based on the selected pruning micro-structure blocks.
The second updating code 640 is configured to cause at least one processor to update the pruned input weights and the updated input mask, based on the updated pruning mask, to minimize a loss of the deep neural network.
The second updating code 640 may be further configured to cause the at least one processor to reduce parameters of a first training neural network, to estimate a second training neural network, using the deep neural network of which the input weights are pruned and masked by the updated input mask, determine the loss of the deep neural network, based on the estimated second training neural network and a ground-truth neural network, determine a gradient of the determined loss, based on the pruned input weights, and update the pruned input weights and the updated input mask, based on the determined gradient and the updated pruning mask, to minimize the determined loss.
The deep neural network may be further trained by reshaping the input weights masked by the input mask, partitioning the reshaped input weights into the plurality of blocks of the input weights, unifying multiple weights in one or more of the plurality of blocks into which the reshaped input weights are partitioned, among the input weights, updating the input mask and a unifying mask indicating whether each of the input weights is unified, based on the unified multiple weights in the one or more of the plurality of blocks, and updating the updated input mask and the input weights among which the multiple weights in the one or more of the plurality of blocks are unified, based on the updated unifying mask, to minimize the loss of the deep neural network.
The second updating code 640 may be further configured to cause the at least one processor to reduce parameters of a first training neural network, to estimate a second training neural network, using the deep neural network of which the input weights are unified and masked by the updated input mask, determine the loss of the deep neural network, based on the estimated second training neural network and a ground-truth neural network, determine a gradient of the determined loss, based on the input weights among which the multiple weights in the one or more of the plurality of blocks are unified, and update the pruned input weights and the updated input mask, based on the determined gradient and the updated unifying mask, to minimize the determined loss.
The deep neural network may be further trained by selecting unification micro-structure blocks to be unified, from the plurality of blocks of the input weights masked by the input mask, unifying multiple weights in one or more of the plurality of blocks of the pruned input weights, based on the selected unification micro-structure blocks, to obtain pruned and unified input weights of the deep neural network, and updating a unifying mask indicating whether each of the input weights is unified, based on the unified multiple weights in the one or more of the plurality of blocks. The updating the input mask may include updating the input mask, based on the selected pruning micro-structure blocks and the selected unification micro-structure blocks, to obtain a pruning-unification mask. The updating the pruned input weights and the updated input mask may include updating the pruned and unified input weights and the pruning-unification mask, based on the updated pruning mask and the updated unifying mask, to minimize the loss of the deep neural network.
The second updating code 640 may be further configured to cause the at least one processor to reduce parameters of a first training neural network, to estimate a second training neural network, using the deep neural network of which the pruned and unified input weights are masked by the pruning-unification mask, determine the loss of the deep neural network, based on the estimated second training neural network and a ground-truth neural network, determine a gradient of the determined loss, based on the input weights among which the multiple weights in the one or more of the plurality of blocks are unified, and update the pruned and unified input weights and the pruning-unification mask, based on the determined gradient, the updated pruning mask and the updated unifying mask, to minimize the determined loss.
The pruning micro-structure blocks may be selected from the plurality of blocks of the input weights masked by the input mask, based on a predetermined pruning ratio of the input weights to be pruned for each iteration.
The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the implementations to the precise form disclosed. Modifications and variations are possible in light of the above disclosure or may be acquired from practice of the implementations.
As used herein, the term component is intended to be broadly construed as hardware, firmware, or a combination of hardware and software.
It will be apparent that systems and/or methods, described herein, may be implemented in different forms of hardware, firmware, or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. Thus, the operation and behavior of the systems and/or methods were described herein without reference to specific software code—it being understood that software and hardware may be designed to implement the systems and/or methods based on the description herein.
Even though combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of possible implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of possible implementations includes each dependent claim in combination with every other claim in the claim set.
No element, act, or instruction used herein may be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, a combination of related and unrelated items, etc.), and may be used interchangeably with “one or more.” Where only one item is intended, the term “one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise.
This application claims priority from U.S. Provisional Patent Application No. 63/040,216, filed on Jun. 17, 2020, U.S. Provisional Patent Application No. 63/040,238, filed on Jun. 17, 2020, and U.S. Provisional Patent Application No. 63/043,082, filed on Jun. 23, 2020, in the U.S. Patent and Trademark Office, the disclosures of which are incorporated by reference herein in their entireties.
Number | Date | Country
---|---|---
63040216 | Jun 2020 | US
63040238 | Jun 2020 | US
63043082 | Jun 2020 | US