The invention relates to a data processing device and a data processing method that encode and compress information about a configuration of a neural network, and relates to compressed data.
As a method for solving classification problems and regression problems of input data, there is machine learning.
For the machine learning, there is a technique called a neural network that imitates a brain's neural circuit (neurons). In the neural network, classification (discrimination) or regression of input data is performed using a probabilistic model (a discriminative model or a generative model) represented by a network in which neurons are mutually connected to each other.
Furthermore, in a convolutional neural network which is a neural network having not only a fully-connected layer but also a convolution layer and a pooling layer, a network can be created that implements data processing other than classification and regression, such as a network that implements a data filtering process. For example, a convolutional neural network can implement an image or audio filtering process that achieves noise removal of an input signal, an improvement in quality, or the like, with an image or audio being an input, a high-frequency restoration process for audio with missing high frequencies such as compressed audio, inpainting for an image whose region is partially missing, a super-resolution process for an image, etc.
In addition to the above, in recent years, there has also been released a new neural network called a generative adversarial network, in which the network is constructed by combining a generative model with a discriminative model that determines whether data generated by the generative model is real data (that is, whether the data is not data generated by the generative model) by inputting the data to the discriminative model. The generative model is adversarially trained in such a manner that the discriminative model cannot distinguish generated data as generated data, and the discriminative model is adversarially trained so as to be able to distinguish generated data as generated data, by which creation of a generative model with high accuracy is implemented.
In these neural networks, by optimizing network parameters by training using a large amount of data, an improvement in performance can be achieved.
Note, however, that the data size of a neural network tends to be large, and the computational load on a computer using the neural network also increases.
Regarding this, Non-Patent Literature 1 describes a technique for scalar-quantizing and encoding edge weights which are parameters of a neural network. By scalar-quantizing and encoding the edge weights, the data size of data about edges is compressed.
Non-Patent Literature 1: Vincent Vanhoucke, Andrew Senior, Mark Z. Mao, “Improving the speed of neural networks on CPUs,” Proc. Deep Learning and Unsupervised Feature Learning NIPS Workshop, 2011.
However, the optimal values of weights assigned to a respective plurality of edges in a neural network vary depending on network training results and are not fixed.
Hence, variations occur in the compression size of edge weights, and the technique described in Non-Patent Literature 1 has a problem that high compression of parameter data about edges of a neural network cannot be achieved.
The invention is to solve the above-described problem, and an object of the invention is to obtain a data processing device and a data processing method that can highly compress parameter data of a neural network, and compressed data.
A data processing device according to the invention includes a data processing unit, a compression controlling unit, and an encoding unit. The data processing unit processes input data using a neural network. The compression controlling unit determines quantization steps and generates quantization information that defines the quantization steps, the quantization steps being used when parameter data of the neural network is quantized. The encoding unit encodes network configuration information and the quantization information to generate compressed data, the network configuration information including the parameter data quantized using the quantization steps determined by the compression controlling unit.
According to the invention, quantization information that defines quantization steps which are used when parameter data of a neural network is quantized, and network configuration information including the parameter data quantized using the quantization steps in the quantization information are encoded to generate compressed data. Accordingly, the parameter data of the neural network can be highly compressed.
By using quantization information and network configuration information which are decoded from the compressed data, a neural network optimized on the encoding side can be constructed on the decoding side.
To more specifically describe the invention, modes for carrying out the invention will be described below with reference to the accompanying drawings.
In addition, the data processing device 100 functions as an encoder that encodes quantization information and network configuration information, and includes a data processing unit 101, a compression controlling unit 102, and an encoding unit 103.
The data processing unit 101 processes input data using the above-described neural network.
In addition, the data processing unit 101 accepts, as input, quantization information generated by the compression controlling unit 102, and quantizes parameter data of the neural network using quantization steps defined in the quantization information. Then, the data processing unit 101 outputs network configuration information including the above-described quantized parameter data to the encoding unit 103.
For the above-described neural network used by the data processing unit 101, a predetermined neural network may be used, or the parameter data may be optimized by training.
When parameter data of a neural network is trained, neural network training is performed on the neural network in a predetermined initial state (initial values of the parameter data) using input data to be trained, and then quantization information generated by the compression controlling unit 102 is inputted, and the parameter data of the neural network is quantized using quantization steps defined in the quantization information.
Then, with the quantized neural network serving as the initial state for the next training, the above-described training and quantization are performed. A neural network obtained as a result of repeating the training and quantization processes L times (L is an integer greater than or equal to one) is outputted, as a part of network configuration information, to the encoding unit 103.
Note that when L=1, the quantized neural network is not retrained and thus it can be said that this is the same process as using a neural network that is trained outside without being trained by the data processing unit 101. That is, the difference is only whether training is performed by the data processing unit 101 or outside.
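As an illustration of the repeated training and quantization described above, the following is a minimal Python sketch; quantize_parameters and the caller-supplied train_one_round are hypothetical placeholders and only one of many possible realizations, with the quantization offsets omitted for brevity.

```python
import numpy as np

def quantize_parameters(weights, q_step):
    """Uniform quantization followed by de-quantization (offsets omitted for brevity)."""
    k = np.floor(weights / q_step)      # quantized values
    return k * q_step                   # de-quantized weights, used as the next initial state

def train_and_quantize(initial_weights, train_one_round, q_step, L=3):
    """Repeat training and quantization L times (L >= 1); train_one_round is supplied by the caller."""
    weights = initial_weights
    for _ in range(L):
        weights = train_one_round(weights)             # training starting from the current state
        weights = quantize_parameters(weights, q_step)
    return weights
```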
The network configuration information is information indicating a configuration of the neural network, and includes, for example, the number of network layers, the number of nodes for each of the layers, edges that link nodes, weight information assigned to each of the edges, activation functions representing outputs from the nodes, and type information for each of the layers (e.g., a convolutional layer, a pooling layer, or a fully-connected layer).
The parameter data of the neural network includes, for example, weight information assigned to edges that connect nodes of the neural network.
The compression controlling unit 102 determines quantization steps which are used when the parameter data of the neural network is quantized, and generates quantization information that defines the quantization steps.
For example, the compression controlling unit 102 determines quantization steps that change on an edge-by-edge, node-by-node, kernel-by-kernel, or layer-by-layer basis in the neural network.
The quantization information is information that defines quantization steps which are used when the parameter data of the neural network is quantized. The quantization step is a width (quantization width) used when parameter data is quantized, and the larger the quantization step is, the more coarsely the parameter data is broken down, and thus, the compression ratio increases. The smaller the quantization step is, the more finely the parameter data is broken down, and thus, the compression ratio decreases.
Specifically, a quantized value k is represented by the following equation (1):
k=floor((x/Q)+d0)+d1 (1)
In the above-described equation (1), x represents the value of a parameter to be quantized, Q represents a quantization step, d0 (0≤d0<1) represents an adjustment offset in a range of values to be quantized that correspond to respective quantized values, d1 (0≤d1<1) represents an offset that adjusts the quantized value, and floor( ) represents a round-down processing function. Furthermore, there is also a method for providing a dead zone in which quantized values k for values x to be quantized within a range set in the above-described equation (1) are 0.
In addition, the value y of a quantized parameter is as shown in the following equation (2):
y=kQ (2)
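A minimal Python sketch of equations (1) and (2), assuming NumPy; the dead-zone handling shown here is only one possible realization of the method mentioned above.

```python
import numpy as np

def quantize(x, Q, d0=0.5, d1=0.0, dead_zone=0.0):
    """Equation (1): k = floor(x/Q + d0) + d1, with an optional dead zone around zero."""
    k = np.floor(x / Q + d0) + d1
    if dead_zone > 0.0:
        k = np.where(np.abs(x) < dead_zone, 0.0, k)   # values inside the dead zone quantize to 0
    return k

def inverse_quantize(k, Q):
    """Equation (2): y = k * Q."""
    return k * Q

# A larger step Q represents the parameters more coarsely (higher compression),
# a smaller step more finely (lower compression).
w = np.array([0.013, -0.27, 0.81])
y = inverse_quantize(quantize(w, Q=0.1), Q=0.1)
```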
When the minimum change unit of the quantization steps described above is changed from units of edges to units of layers, the compression ratio of parameter data by quantization increases, and thus, pre-encoding parameter data can be reduced.
The encoding unit 103 encodes the network configuration information including the parameter data quantized by the data processing unit 101 and the quantization information generated by the compression controlling unit 102, to generate compressed data.
Note that the network configuration information inputted to the encoding unit 103 from the data processing unit 101 is network configuration information including the parameter data which is quantized by the data processing unit 101 using the quantization steps determined by the compression controlling unit 102.
The data processing device 200 functions as a decoder that decodes quantization information and network configuration information from compressed data, and includes a decoding unit 201 and a data processing unit 202.
The decoding unit 201 decodes quantization information and network configuration information from the compressed data encoded by the encoding unit 103 as described above.
The network configuration information decoded by the decoding unit 201 includes results (quantized values k) obtained by quantizing parameter data such as edge weight information, the edge weight information having been optimized as a result of performing training on a side of the data processing device 100 which is an encoder.
The quantization information decoded by the decoding unit 201 defines quantization steps Q which are used when the parameter data is quantized. From the above-described results k obtained by quantizing the parameter data and the above-described quantization steps Q, quantized parameters y are decoded in accordance with the above-described equation (2). These decoding results are outputted from the decoding unit 201 to the data processing unit 202.
The data processing unit 202 processes input data using a neural network.
In addition, the data processing unit 202 inversely quantizes the edge weight information which is parameter data, using the quantization information and network configuration information decoded from the compressed data by the decoding unit 201. Furthermore, the data processing unit 202 constructs a neural network using the network configuration information including the inversely quantized parameter data.
As such, the data processing unit 202 constructs a neural network including parameter data such as edge weight information which has been optimized as a result of performing training on the data processing device 100 side, using information decoded from compressed data, and processes input data using the neural network. Accordingly, the compression size of the optimized parameter data can be made uniform between the data processing device 100 and the data processing device 200, thereby being able to achieve high compression of the parameter data.
The functions of the data processing unit 101, the compression controlling unit 102, and the encoding unit 103 in the data processing device 100 are implemented by a processing circuit.
Namely, the data processing device 100 includes a processing circuit for performing processes at step ST1 to ST3 which will be described later with reference to
The processing circuit may be dedicated hardware, but may be a Central Processing Unit (CPU) that executes programs stored in a memory.
When the above-described processing circuit is the dedicated hardware shown in
Note that the functions of the data processing unit 101, the compression controlling unit 102, and the encoding unit 103 may be implemented by different respective processing circuits, or the functions may be collectively implemented by a single processing circuit.
When the above-described processing circuit is the processor shown in
The software or firmware is described as programs and stored in the memory 302.
The processor 301 implements the functions of the data processing unit 101, the compression controlling unit 102, and the encoding unit 103, by reading and executing the programs stored in the memory 302. Namely, the data processing device 100 includes the memory 302 for storing programs that when executed by the processor 301, cause the processes at step ST1 to ST3 shown in
The programs cause a computer to perform procedures or methods of the data processing unit 101, the compression controlling unit 102, and the encoding unit 103.
The memory 302 may be a computer readable storage medium having stored therein programs for causing a computer to function as the data processing unit 101, the compression controlling unit 102, and the encoding unit 103.
The memory 302 corresponds, for example, to a nonvolatile or volatile semiconductor memory such as a Random Access Memory (RAM), a Read Only Memory (ROM), a flash memory, an Erasable Programmable Read Only Memory (EPROM), or an Electrically Erasable Programmable Read Only Memory (EEPROM), or to a magnetic disk, a flexible disk, an optical disc, a compact disc, a MiniDisc, or a DVD.
Note that some of the functions of the data processing unit 101, the compression controlling unit 102, and the encoding unit 103 may be implemented by dedicated hardware, and some of the functions may be implemented by software or firmware.
For example, the function of the data processing unit 101 may be implemented by a processing circuit which is dedicated hardware, and the functions of the compression controlling unit 102 and the encoding unit 103 may be implemented by the processor 301 reading and executing programs stored in the memory 302.
As such, the processing circuit can implement the above-described functions by hardware, software, firmware, or a combination thereof.
Note that although the data processing device 100 is described, the same also applies to the data processing device 200. For example, the data processing device 200 includes a processing circuit for performing processes at step ST1a to ST4a which will be described later with reference to
When the above-described processing circuit is the dedicated hardware shown in
Note that the functions of the decoding unit 201 and the data processing unit 202 may be implemented by different respective processing circuits, or the functions may be collectively implemented by a single processing circuit.
When the above-described processing circuit is the processor shown in
The software or firmware is described as programs and stored in the memory 302.
The processor 301 implements the functions of the decoding unit 201 and the data processing unit 202, by reading and executing the programs stored in the memory 302.
Namely, the data processing device 200 includes the memory 302 for storing programs that when executed by the processor 301, cause the processes at step ST1a to ST4a shown in
The programs cause a computer to perform procedures or methods of the decoding unit 201 and the data processing unit 202.
The memory 302 may be a computer readable storage medium having stored therein programs for causing a computer to function as the decoding unit 201 and the data processing unit 202.
Note that some of the functions of the decoding unit 201 and the data processing unit 202 may be implemented by dedicated hardware, and some of the functions may be implemented by software or firmware.
For example, the function of the decoding unit 201 may be implemented by a processing circuit which is dedicated hardware, and the function of the data processing unit 202 may be implemented by the processor 301 reading and executing a program stored in the memory 302.
Next, the operation will be described.
The following describes a case in which parameter data of a neural network is edge weight information.
The compression controlling unit 102 determines a quantization step which is used when weight information of each of a plurality of edges included in a trained neural network is quantized, and generates quantization information that defines the quantization step (step ST1). The quantization information is outputted from the compression controlling unit 102 to the data processing unit 101 and the encoding unit 103.
When the data processing unit 101 accepts, as input, the quantization information from the compression controlling unit 102, the data processing unit 101 quantizes the above-described edge weight information of the neural network using the quantization step in the quantization information (step ST2). The data processing unit 101 generates network configuration information including the quantized edge weight information, and outputs the network configuration information to the encoding unit 103.
The encoding unit 103 encodes the above-described network configuration information inputted from the data processing unit 101 and the above-described quantization information inputted from the compression controlling unit 102 (step ST3).
Compressed data of the above-described network configuration information and quantization information encoded by the encoding unit 103 is outputted to the data processing device 200.
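The encoder-side flow of steps ST1 to ST3 might be sketched as follows; weights_per_layer, quant_steps, and entropy_encode are hypothetical names supplied by the caller, and the quantization is simplified to equation (1) with zero offsets.

```python
import numpy as np

def encode_network(weights_per_layer, quant_steps, entropy_encode):
    """weights_per_layer and quant_steps: hypothetical dicts keyed by layer id;
    entropy_encode: any entropy coder supplied by the caller."""
    # ST1: quantization information defining the quantization steps
    quant_info = dict(quant_steps)
    # ST2: quantize the edge weight information (equation (1) with d0 = d1 = 0 for brevity)
    quantized = {lid: np.floor(w / quant_info[lid]) for lid, w in weights_per_layer.items()}
    network_config_info = {"quantized_weights": quantized}   # plus layers, nodes, edges, etc.
    # ST3: encode the network configuration information and the quantization information
    return entropy_encode(network_config_info, quant_info)
```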
The decoding unit 201 decodes quantization information and network configuration information from the above-described compressed data encoded by the encoding unit 103 (step ST1a). The quantization information and the network configuration information are outputted from the decoding unit 201 to the data processing unit 202.
Then, the data processing unit 202 calculates edge weight information which is inversely quantized using the quantization information and network configuration information decoded from the compressed data by the decoding unit 201 (step ST2a).
Subsequently, the data processing unit 202 constructs a neural network using the network configuration information including the inversely quantized edge weight information (step ST3a).
Accordingly, the data processing device 200 can construct the neural network trained by the data processing device 100.
The data processing unit 202 processes input data using the neural network constructed at step ST3a (step ST4a).
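Correspondingly, a hedged sketch of the decoder-side flow of steps ST1a to ST4a; entropy_decode and build_network are hypothetical helpers supplied by the caller, and the inverse quantization follows equation (2).

```python
def decode_and_run(compressed_data, entropy_decode, build_network, input_data):
    """entropy_decode and build_network are hypothetical helpers supplied by the caller."""
    # ST1a: decode quantization information and network configuration information
    network_config_info, quant_info = entropy_decode(compressed_data)
    # ST2a: inversely quantize the edge weight information (equation (2): y = k * Q)
    weights = {lid: k * quant_info[lid]
               for lid, k in network_config_info["quantized_weights"].items()}
    # ST3a: construct the neural network from the decoded configuration and weights
    network = build_network(network_config_info, weights)
    # ST4a: process the input data using the constructed neural network
    return network(input_data)
```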
In the neural network shown in
In
As shown in
An output value of each of the plurality of nodes can be calculated, from an output value of a node in the immediately previous layer which is linked to the corresponding one of the plurality of nodes by an edge, weight information of the edge, and an activation function set on a layer-by-layer basis.
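For illustration, a minimal sketch of this computation for one fully-connected layer, assuming NumPy and a tanh activation (the actual activation function is set per layer as described above):

```python
import numpy as np

def layer_outputs(prev_outputs, weights, activation=np.tanh):
    """weights[i, j] is the weight of the edge linking previous-layer node j to node i;
    each node's output is the activation of its weighted sum of previous-layer outputs."""
    return activation(weights @ prev_outputs)

# Example: three nodes fed by two previous-layer nodes.
out = layer_outputs(np.array([0.5, -1.0]), np.random.randn(3, 2))
```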
An example of a neural network includes a Convolutional Neural Network (CNN). In hidden layers of the CNN, convolutional layers and pooling layers are alternately linked to each other, and one or more fully-connected neural network layers (fully-connected layers) are provided in accordance with final outputs. For activation functions for the convolutional layers, for example, ReLU functions are used.
Note that a network called a deep neural network (DNN) (which is also called deep learning, a deep CNN (DCNN), etc.) is a network including multiple CNN layers.
The convolutional layer shown in
Edges 12-1, 12-6, and 12-11 are each assigned an identical weight, edges 12-2, 12-7, and 12-12 are each assigned an identical weight, edges 12-3, 12-8, and 12-13 are each assigned an identical weight, edges 12-4, 12-9, and 12-14 are each assigned an identical weight, and edges 12-5, 12-10, and 12-15 are each assigned an identical weight. In addition, the weights of the edges 12-1 to 12-5 may all have different values, or some or all of the weights may have the same value.
Of the nine nodes 10-1 to 10-9 in the previous layer, five nodes are linked to one node in the subsequent layer with the above-described weights. The kernel size K is five, and the kernel is defined by a combination of these weights.
For example, as shown in
The node 10-3 is linked to a node 11-2 through the edge 12-6, the node 10-4 is linked to the node 11-2 through the edge 12-7, the node 10-5 is linked to the node 11-2 through the edge 12-8, the node 10-6 is linked to the node 11-2 through the edge 12-9, and the node 10-7 is linked to the node 11-2 through the edge 12-10. The kernel is defined by a combination of the weights of the edges 12-6 to 12-10.
The node 10-5 is linked to a node 11-3 through the edge 12-11, the node 10-6 is linked to the node 11-3 through the edge 12-12, the node 10-7 is linked to the node 11-3 through the edge 12-13, the node 10-8 is linked to the node 11-3 through the edge 12-14, and the node 10-9 is linked to the node 11-3 through the edge 12-15. The kernel is defined by a combination of the weights of the edges 12-11 to 12-15.
In a process for input data using a CNN, the data processing unit 101 or the data processing unit 202 performs, for each kernel, a convolution operation at an interval of the number of steps S (in
Note that in a CNN used for image recognition, the network is often constructed using a convolutional layer including a plurality of kernels.
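As a hedged illustration of the weight sharing and stride described above, the following sketch assumes nine previous-layer nodes, one shared kernel of size K = 5, and a step of 2, which yields three subsequent-layer nodes as in the example; a single kernel is used for simplicity.

```python
import numpy as np

def conv1d_shared(prev_nodes, kernel, step):
    """Slide one shared kernel over the previous layer at an interval of `step` nodes."""
    K = len(kernel)
    return np.array([np.dot(prev_nodes[s:s + K], kernel)
                     for s in range(0, len(prev_nodes) - K + 1, step)])

prev = np.arange(9, dtype=float)               # previous-layer nodes 10-1 to 10-9
kernel = np.array([0.1, 0.2, 0.3, 0.2, 0.1])   # shared weights of edges 12-1 to 12-5
out = conv1d_shared(prev, kernel, step=2)      # three outputs, corresponding to nodes 11-1 to 11-3
```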
In the two-dimensional data shown in
In the two-dimensional data, the data processing unit 101 or the data processing unit 202 performs a convolution operation on data for each kernel 20 at an interval of the number of steps Sx in the x-direction and an interval of the number of steps Sy in the y-direction. Here, the steps Sx and Sy are integers greater than or equal to one.
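A minimal two-dimensional counterpart, assuming a single kernel and steps Sx and Sy; padding and multiple kernels are omitted.

```python
import numpy as np

def conv2d_strided(data, kernel, sx, sy):
    """Convolve one kernel over 2-D data at intervals of sx (x-direction) and sy (y-direction)."""
    ky, kx = kernel.shape
    h, w = data.shape
    return np.array([[np.sum(data[y:y + ky, x:x + kx] * kernel)
                      for x in range(0, w - kx + 1, sx)]
                     for y in range(0, h - ky + 1, sy)])

out = conv2d_strided(np.random.randn(8, 8), np.random.randn(3, 3), sx=2, sy=2)
```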
In the neural network, a combination of weights wij for each layer shown in
Hence, in order to reduce the amount of data of edge weight information, the data processing device 100 according to the first embodiment quantizes the weight information. As shown in
Furthermore, a common quantization step may be used among a plurality of node indices or a plurality of edge indices, or among a plurality of node indices and edge indices. By doing so, quantization information to be encoded can be reduced.
Note that i′ is the kernel index and i′=1, 2, . . . , Ml (l=1, 2, . . . , L), and j′ is the edge index and j′=1, 2, . . . , Kl.
Furthermore, a common quantization step may be used among a plurality of kernel indices or a plurality of edge indices, or among a plurality of kernel indices and edge indices. By doing so, quantization information to be encoded can be reduced.
At step ST1 of
At step ST2 of
Likewise, at step ST2 of
Note that the network configuration information includes, in addition to the quantized weights, the number of network layers, the number of nodes for each of the layers, edges that link nodes, weight information assigned to each of the edges, activation functions representing outputs from the nodes, type information for each of the layers (e.g., a convolutional layer, a pooling layer, or a fully-connected layer), etc. Note, however, that information that is fixed (defined) in advance between the data processing device 100 and the data processing device 200 is not included in network configuration information to be encoded.
In
In addition, L is the number of layers.
In
In addition, C is the number of nodes Nlayer_id or the number of kernels Mlayer_id. Furthermore, E is the number of edges Nlayer_id-1 or the kernel size Klayer_id.
In
The information shown in
When the flag “quant_enable_flag” is 0 (false), none of the edge weight information in the network is quantized. Namely, no quantization steps are set in the quantization information.
On the other hand, when the flag “quant_enable_flag” is 1 (true), the compression controlling unit 102 refers to the flag “layer_adaptive_quant_flag”.
When the flag “layer_adaptive_quant_flag” is 0 (false), the compression controlling unit 102 sets, in the quantization information, the quantization step “fixed_quant_step” which is common to all edges in the network.
When the flag “layer_adaptive_quant_flag” is 1 (true), the compression controlling unit 102 refers to the flag “matrix_adaptive_quant_flag”.
When the flag “matrix_adaptive_quant_flag” is 0 (false), the compression controlling unit 102 determines a quantization step which is common on a per-layer basis, as a quantization step for weight information of each of a plurality of edges in the network.
Note, however, that an input layer (first layer) has no edges and thus no quantization steps are set for the input layer.
Note that
When the flag “matrix_adaptive_quant_flag” is 1 (true), the compression controlling unit 102 determines the quantization steps shown in
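The decision logic carried by these flags can be summarized in a small sketch; the dictionary of decoded flag values and the returned mode names are assumptions for illustration only, with layer_quant_coding and matrix_quant_coding standing for the per-layer and per-matrix syntaxes described below.

```python
def select_quantization_mode(flags):
    """Return which quantization-step syntax applies, given the decoded flag values (0 or 1)."""
    if not flags.get("quant_enable_flag", 0):
        return "none"                     # no quantization steps are set
    if not flags.get("layer_adaptive_quant_flag", 0):
        return "fixed_quant_step"         # one step common to all edges in the network
    if not flags.get("matrix_adaptive_quant_flag", 0):
        return "layer_quant_coding"       # one common step per layer
    return "matrix_quant_coding"          # node-by-node (or kernel-by-kernel) steps per edge
```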
The syntax shown in
As described above, the input layer (first layer) has no edges.
Hence, quantization steps are set for a layer_id+1th layer in which the ID information “layer_id” is 1 to L-1.
First, in the layer_id+1th layer, the compression controlling unit 102 sets base_quant_step[j] (j=0, 1, . . . , E-1) indicating a quantization step for the first node shown in
Note that E is the number of edges Nlayer_id or the kernel size Klayer_id+1.
Then, for the second and subsequent nodes (or kernels) (i≥1), the compression controlling unit 102 refers, on a node-by-node (or kernel-by-kernel) basis, to the flag “prev_quant_copy_flag[i-1]” indicating whether the quantization step is the same as that for a node (or a kernel) with an immediately previous index.
When the flag “prev_quant_copy_flag[i-1]” is 1 (true), an i+1th node (or kernel) has the same quantization step as that for an ith node (or kernel).
On the other hand, when the flag “prev_quant_copy_flag[i-1]” is 0 (false), the compression controlling unit 102 sets the difference value “diff_quant_value[i-1][j]” (i =1, 2, . . . , C-1, and j=0, 1, . . . , E-1) as information for generating a quantization step for the i+1th node (or kernel).
The quantization step can be generated by adding a quantization step set for an immediately previous node (kernel) to the difference value “diff_quant_value[i-1] [j]”.
Namely, in the second node (or kernel) (i=1), base_quant_step[j]+diff_quant_value[0][j] is the quantization step. In the third and subsequent nodes (or kernels) (i≥2), the quantization step for the ith node (or kernel) plus diff_quant_value[i-1][j] is the quantization step.
Note that although the difference value “diff_quant_value[i-1][j]” between the quantization steps of nodes (or kernels) is shown as an encoding parameter, an independent quantization step may be set on a per-node (or kernel) basis.
This configuration improves the encoding efficiency in the encoding unit 103 when the correlation between the quantization steps of nodes (or kernels) tends to be low.
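A hedged sketch of how a decoder might rebuild the per-node (or per-kernel) quantization steps from base_quant_step, prev_quant_copy_flag, and diff_quant_value; the list-based data structures are assumptions, and the difference values are added to the previous node's steps as described above.

```python
def rebuild_matrix_quant_steps(base_quant_step, prev_quant_copy_flag, diff_quant_value, C):
    """base_quant_step: E steps for the first node (or kernel) of the layer.
    prev_quant_copy_flag[i-1] == 1: node (or kernel) i+1 reuses the steps of node i.
    diff_quant_value[i-1]: per-edge differences added to the previous node's steps."""
    steps = [list(base_quant_step)]
    for i in range(1, C):
        if prev_quant_copy_flag[i - 1]:
            steps.append(list(steps[-1]))                 # copy the previous node's steps
        else:
            steps.append([p + d for p, d in zip(steps[-1], diff_quant_value[i - 1])])
    return steps
```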
The syntax shown in
As described above, the input layer (first layer) has no edges.
Hence, the compression controlling unit 102 sets base_layer_quant_step as a quantization step which is common to all edge weight information in the second layer.
Then, for the third and subsequent layers (i≥2), the compression controlling unit 102 refers to the flag “layer_quant_copy_flag[i-2]” (i=2, 3, . . . , L−1) indicating whether a quantization step which is common to all edge weight information in an i+1th layer is the same as a quantization step which is common to all edge weight information in an ith layer.
When the flag “layer_quant_copy_flag[i-2]” is 1 (true), the compression controlling unit 102 sets the same quantization step as the quantization step which is common to all edge weight information in the ith layer, as a quantization step which is common to all edge weight information in the i+1th layer. On the other hand, when the flag “layer_quant_copy_flag[i-2]” is 0 (false), the compression controlling unit 102 sets layer_quant_step[i-2] as a quantization step which is common to all edge weight information in the i+1th layer.
Note that although a case is shown in which the compression controlling unit 102 defines layer_quant_step[i-2] as a quantization step which is independent on a per-layer basis, layer_quant_step[i-2] may be defined as a difference value to a quantization step for an immediately previous layer (ith layer). By defining layer_quant_step[i-2] as a difference value, many difference values close to 0 occur, thereby being able to improve the encoding efficiency in the encoding unit 103.
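Similarly, a minimal sketch of rebuilding the per-layer common quantization steps from base_layer_quant_step, layer_quant_copy_flag, and layer_quant_step; treating layer_quant_step as an absolute value is one of the two options noted above, and the data structures are assumptions.

```python
def rebuild_layer_quant_steps(base_layer_quant_step, layer_quant_copy_flag, layer_quant_step, L):
    """Returns one common step per layer for layers 2 to L (the input layer has no edges);
    layer_quant_step[i-2] is treated here as an absolute value rather than a difference."""
    steps = [base_layer_quant_step]                  # second layer
    for i in range(2, L):                            # third and subsequent layers
        if layer_quant_copy_flag[i - 2]:
            steps.append(steps[-1])                  # same step as the previous layer
        else:
            steps.append(layer_quant_step[i - 2])    # independently signalled step
    return steps
```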
The encoding unit 103 encodes the encoding parameters in
Note that although the minimum change unit of quantization steps is units of edges, as shown in
In addition, the minimum change unit of quantization steps may be units of layers.
This has the same meaning as always executing only layer_quant_coding( ) without the flag “matrix_adaptive_quant_flag” when the flag “layer_adaptive_quant_flag” =1 (true) in
As described above, in the data processing device 100 according to the first embodiment, the data processing unit 101 processes input data using a neural network. The compression controlling unit 102 determines quantization steps, and generates quantization information that defines the quantization steps. The encoding unit 103 encodes network configuration information including parameter data which is quantized using the quantization steps determined by the compression controlling unit 102, and the quantization information, to generate compressed data.
Particularly, in the above description, an example is described in which weight information assigned to edges that connect nodes in a neural network is handled as parameter data of the neural network. With these configurations, quantization information that defines quantization steps, and network configuration information including parameter data which is quantized using the quantization steps in the quantization information, are encoded into compressed data. Accordingly, the parameter data of the neural network can be highly compressed.
In addition, by using quantization information and network configuration information which are decoded from the compressed data, a neural network optimized on the encoding side can be constructed on the decoding side.
In the data processing device 200 according to the first embodiment, the data processing unit 202 processes input data using a neural network. The decoding unit 201 decodes compressed data.
In this configuration, the data processing unit 202 inversely quantizes parameter data using quantization information and network configuration information which are decoded from compressed data by the decoding unit 201, and constructs a neural network using the network configuration information including the inversely quantized parameter data.
Accordingly, a neural network optimized on the encoding side can be constructed using the quantization information and network configuration information decoded from the compressed data.
In the data processing device 100 according to the first embodiment, the compression controlling unit 102 changes quantization steps on an edge-by-edge basis. The encoding unit 103 encodes quantization information that defines the edge-by-edge quantization steps. By such a configuration, parameter data can be quantized with high accuracy.
In the data processing device 100 according to the first embodiment, the compression controlling unit 102 changes quantization steps on a node-by-node or kernel-by-kernel basis. The encoding unit 103 encodes quantization information that defines the node-by-node or kernel-by-kernel quantization steps.
By such a configuration, too, parameter data can be quantized with high accuracy.
In the data processing device 100 according to the first embodiment, the compression controlling unit 102 changes quantization steps on a layer-by-layer basis of a neural network. The encoding unit 103 encodes quantization information that defines the layer-by-layer quantization steps for the neural network.
By such a configuration, the compression ratio of parameter data by quantization increases, and thus, the amount of data of pre-encoding weight information can be reduced.
The first embodiment describes an example in which output results of a neural network are directly used as data processing results. There is, however, an exemplary application in which outputs of an intermediate layer of a neural network are used as features for data processing on image data and audio data, e.g., image retrieval or matching as in the following reference, and the features are subjected to another data processing technique as shown in the following reference, by which final data processing results are obtained.
For example, when outputs of an intermediate layer of a neural network are used as image features for image processing such as image retrieval, matching, or object tracking, by substituting the outputs for or adding the outputs to image features such as a Histogram of Oriented Gradients (HOG), Scale Invariant Feature Transform (SIFT), and Speeded Up Robust Features (SURF) which are image features used in the above-described conventional image processing, image processing can be implemented using the same processing flow as that of image processing that uses the above-described conventional image features.
In this case, what is to be encoded as network configuration information and quantization information by the data processing device 100 is a portion of the neural network up to the intermediate layer in which outputs serving as features for data processing are obtained.
Furthermore, the data processing device 100 performs data processing such as image retrieval, using the above-described features for data processing. The data processing device 200 decodes the portion of the neural network up to the intermediate layer from compressed data, and performs data processing such as image retrieval, using outputs obtained by inputting input data, as features for the data processing.
(Reference) ISO/IEC JTC1/SC29/WG11/m39219, “Improved retrieval and matching with CNN feature for CDVA”, Chengdu, China, October 2016.
Therefore, in the data processing device 100 according to the second embodiment, the compression ratio of parameter data by quantization increases, thereby being able to reduce the amount of data of pre-encoding weight information. In the data processing device 200 according to the second embodiment, by creating a neural network by decoding compressed data which is outputted from the data processing device 100, data processing can be performed.
Note that the present invention is not limited to the above-described embodiments, and modifications to any component of the embodiments or omissions of any component of the embodiments are possible within the scope of the present invention.
Data processing devices according to the invention can highly compress parameter data of a neural network and thus can be used in, for example, image recognition techniques.
10-1 to 10-9, 11-1 to 11-3: node, 12-1 to 12-15: edge, 20: kernel, 100, 200: data processing device, 101, 202: data processing unit, 102: compression controlling unit, 103: encoding unit, 201: decoding unit, 300: processing circuit, 301: processor, 302: memory.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2017/024992 | 7/7/2017 | WO | 00