The present application claims priority to European Patent Application 18158171.1 filed with the European Patent Office on 22 Feb. 2018, the entire contents of which are incorporated herein by reference.
This disclosure relates to artificial neural networks.
So-called deep neural networks (DNN) have become standard machine learning tools to solve a variety of problems such as computer vision and automatic speech recognition processing.
Designing and training such a DNN is typically very time consuming. When a new DNN is developed for a given task, many so-called hyper-parameters (parameters related to the overall structure of the network) must be chosen empirically. For each possible combination of structural hyper-parameters, a new network is typically trained from scratch and evaluated. While progress has been made on hardware (such as Graphical Processing Units providing efficient single instruction multiple data (SIMD) execution) and software (such as a DNN library developed by NVIDIA called cuDNN) to speed up the training of a single DNN structure, the exploration of a large set of possible structures remains very slow.
In order to speed up the exploration of DNN structures, it has been proposed to transfer the knowledge of an already trained network (teacher or base network) to a new neural network structure. The DNN with the new structure can thereafter be trained (potentially more rapidly) taking advantage of the knowledge acquired from the teacher network. This process can be referred to as “morphing” or “morphism”.
The idea of knowledge transfer has also been proposed with the purpose of obtaining smaller networks from well-trained large networks. These approaches rely on the distillation idea that the “student” or derived network can be trained using the output of the teacher or base network. Therefore these approaches still require training from scratch and are not appropriate for fast DNN structure exploration.
More relevant approaches called Net2Net and network morphism have been proposed to address the problem of fast knowledge transfer to be used for DNN structure exploration. Both Net2Net and network morphism are based on the idea of initializing the student network to represent the same function as the teacher. Some of these proposals indicate that the student network must be initialized to preserve the function of the teacher network, but that the initialization should also facilitate convergence to a better network. Other methods introduce sparse layers (having many zero weights) when increasing the size of a layer, and layers with correlated weights when increasing the network width; such layers are difficult to train further after morphing.
The present disclosure provides a computer-implemented method of generating a modified artificial neural network (ANN) from a base ANN having an ordered series of two or more successive layers of neurons, each layer passing data signals to the next layer in the ordered series, the neurons of each layer processing the data signals received from the preceding layer according to an activation function and weights for that layer,
the method comprising:
detecting the data signals for a first position and a second position in the ordered series of layers of neurons;
generating the modified ANN from the base ANN by providing an introduced layer of neurons to provide processing between the first position and the second position with respect to the ordered series of layers of neurons of the base ANN;
deriving an initial approximation of at least a set of weights for the introduced layer using a least squares approximation from the data signals detected for the first position and the second position; and
processing training data using the modified ANN to train the modified ANN including training the weights of the introduced layer from their initial approximation.
The present disclosure also provides computer software which, when executed by a computer, causes the computer to implement the above method.
The present disclosure also provides a non-transitory machine-readable medium which stores such computer software.
The present disclosure also provides an artificial neural network (ANN) generated by the above method and data processing apparatus comprising one or more processing elements to implement such an ANN.
Embodiments of the present disclosure can provide a homogeneous and potentially more complete set of morphing operations based on least squares optimization.
Morphing operations can be implemented using these techniques to increase or decrease the parent network depth, to increase or decrease the network width, or to change the activation function. All of these morphing operations are based on a consistent process of deriving parameters using a least squares approximation. While the previous proposals have somewhat separate methods for each morphism operation (increasing width, increasing depth, and so on), the least squares morphism (LSM) proposed by the present disclosure allows the same approach to be applied to a larger variety of morphism operations. It is possible to use the same approach for fully connected layers as well as for convolutional layers. Since LSM naturally produces non-sparse layers, further training of the network after morphing is potentially easier than with methods that introduce sparse layers.
Further respective aspects and features of the present disclosure are defined in the appended claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary, but are not restrictive, of the present technology.
A more complete appreciation of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, in which:
Referring now to the drawings,
Here x and w represent the inputs and weights respectively, b is the bias term that the neuron optionally adds, and the variable i is an index covering the number of inputs (and therefore also the number of weights that affect this neuron).
The neurons in a layer have the same activation function φ, though the activation function can differ from layer to layer.
The input neurons I1 . . . I3 do not themselves normally have associated activation functions. Their role is to accept data from (for example) a supervisory program overseeing operation of the ANN. The output neuron(s) O1 provide processed data back to the supervisory program. The input and output data may be in the form of a vector of values such as:
Neurons in the layers 210, 220 are referred to as hidden neurons. They receive inputs only from other neurons and output only to other neurons.
The activation function is non-linear (such as a step function, a so-called sigmoid function, a hyperbolic tangent (tanh) function or a rectified linear unit (ReLU) function).
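By way of a purely illustrative, non-limiting example, the following Python sketch shows the neuron computation described above, assuming a NumPy environment and a ReLU activation; the function and variable names are illustrative only.

```python
import numpy as np

def neuron_output(x, w, b, phi=lambda a: np.maximum(a, 0.0)):
    # Weighted sum of the inputs plus the optional bias term, passed
    # through the activation function phi (a ReLU by default here).
    return phi(np.dot(w, x) + b)

# Example: one neuron with three inputs
x = np.array([0.5, -1.0, 2.0])   # inputs x_i
w = np.array([0.2, 0.4, 0.1])    # weights w_i
b = 0.05                         # bias term
print(neuron_output(x, w, b))
```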
Use of an ANN such as the ANN of
The so-called training process for an ANN can involve providing known training data as inputs to the ANN, generating an output from the ANN, comparing the output of the overall network to a known or expected output, and modifying one or more parameters of the ANN (such as one or more weights or biases) in order to aim towards bringing the output closer to the expected output. Therefore, training represents a process to search for a set of parameters which provide the lowest error during training, so that those parameters can then be used in an operational or inference stage of processing by the ANN, when individual data values are processed by the ANN.
An example training process includes so-called back propagation. A first stage involves initialising the parameters, for example randomly or using another initialisation technique. Then a so-called forward pass and a backward pass of the whole ANN are iteratively applied. A gradient or derivative of an error function is derived and used to modify the parameters.
At a basic level the error function can represent how far the ANN's output is from the expected output, though error functions can also be more complex, for example imposing constraints on the weights such as a maximum magnitude constraint. The gradient represents a partial derivative of the error function with respect to a parameter, at the parameter's current value. If the ANN were to output the expected output, the gradient would be zero, indicating that no change to the parameter is appropriate. Otherwise, the gradient provides an indication of how to modify the parameter to achieve the expected output. A negative gradient indicates that the parameter should be increased to bring the output closer to the expected output (or to reduce the error function). A positive gradient indicates that the parameter should be decreased to bring the output closer to the expected output (or to reduce the error function).
Gradient descent is therefore a training technique with the aim of arriving at an appropriate set of parameters without the processing requirements of exhaustively checking every permutation of possible values. The partial derivative of the error function is derived for each parameter, indicating that parameter's individual effect on the error function. In a backpropagation process, starting with the output neuron(s), errors are derived representing differences from the expected outputs and these are then propagated backwards through the network by applying the current parameters and the derivative of each activation function. A change in an individual parameter is then derived in proportion to the negated partial derivative of the error function with respect to that parameter and, in at least some examples, with a further component proportional to the change applied to that parameter in the previous iteration.
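As a purely illustrative sketch of the parameter update just described, the following Python fragment applies one gradient-descent step with an additional component proportional to the previous update; the learning-rate and momentum values are arbitrary illustrative assumptions.

```python
def gradient_step(param, grad, prev_update, lr=0.01, momentum=0.9):
    # The change is proportional to the negated partial derivative (grad),
    # plus a further component proportional to the previous update.
    update = -lr * grad + momentum * prev_update
    return param + update, update

# A positive gradient indicates the parameter should be decreased:
p, u = gradient_step(param=0.5, grad=2.0, prev_update=0.0)
print(p, u)   # 0.48 -0.02
```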
An example of this technique is discussed in detail in the following publication: http://page.mi.fu-berlin.de/rojas/neural/ (chapter 7), the contents of which are incorporated herein by reference.
Training from Scratch
For comparison with the present disclosure,
At a step 400, the parameters (such as W, b for each layer) of the ANN to be trained are initialised. The training process then involves the successive application of known training data, having known outcomes, to the ANN, by steps 410, 420 and 430.
At the step 410, an instance of the input training data is processed by the ANN to generate a training output. The training output is compared to the known output at the step 420 and deviations from the known output (representing the error function referred to above) are used at the step 430 to steer changes in the parameters by, for example, a gradient descent technique as discussed above.
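By way of a non-limiting illustration of the steps 400 to 430, the following Python sketch trains a single linear layer from scratch on a toy regression task; the data, layer size and learning rate are illustrative assumptions and do not correspond to any specific embodiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 400: initialise the parameters (here a single linear layer W, b)
W = rng.normal(scale=0.1, size=(1, 3))
b = np.zeros(1)
lr = 0.1

# Known training data with known outcomes: y = x0 + 2*x1 - x2
X = rng.normal(size=(64, 3))
Y = X @ np.array([[1.0], [2.0], [-1.0]])

for _ in range(200):
    out = X @ W.T + b                 # step 410: process the training inputs
    err = out - Y                     # step 420: compare with the known output
    W -= lr * (err.T @ X) / len(X)    # step 430: steer the parameters
    b -= lr * err.mean(axis=0)        #           by gradient descent

print(np.abs(X @ W.T + b - Y).mean())  # deviation shrinks towards zero
```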
The technique described above can be used to train a network from scratch, but in the discussion below, techniques will be described by which an ANN is established by adaptation or morphing of an existing ANN.
Embodiments of the present disclosure can provide techniques which use an approximation method to modify the structure of a previously trained neural network model (a base ANN) to a new structure (of a derived ANN), to avoid training from scratch every time. In the present examples, the previously trained network is a base ANN and the new structure is that of a derived ANN. The possible modifications (of the derived ANN over the base ANN) include, for example, increasing and decreasing layer size, increasing and decreasing network depth, and changing activation functions.
A previously proposed approach to this problem would have involved evaluating several network structures by training each structure from scratch and evaluating it on a validation set. This requires the training of many networks and can potentially be very slow. Also, in some cases only a limited number of different structures can be evaluated. In contrast, embodiments of the disclosure modify the structure and parameters of the base ANN to a new structure (the derived ANN) to avoid training from scratch every time.
In embodiments, the derived ANN has a different network structure to the base ANN. In examples, the base ANN has an ordered series of two or more successive layers of neurons, each layer passing data signals to the next layer in the ordered series, the neurons of each layer processing the data signals received from the preceding layer according to an activation function and weights for that layer,
the method comprising:
detecting the data signals for a first position and a second position in the ordered series of layers of neurons;
generating the derived ANN from the base ANN by providing an introduced layer of neurons to provide processing between the first position and the second position with respect to the ordered series of layers of neurons of the base ANN; and
initialising at least a set of weights for the introduced layer using a least squares approximation from the data signals detected for the first position and the second position.
In a left hand column of
In the present example, the two or more successive layers 1000, 1010, 1020 may be fully connected layers in which each neuron in a fully connected layer is connected to receive data signals from each neuron in a preceding layer and to pass data signals to each neuron in a following layer.
In the present technique, a so-called least squares morphism (LSM) is used to approximate the parameters of a single linear layer such that it preserves the function of a (replaced) sub-network of the parent network.
To do this, a first step is to forward training samples through the parent network up to the input of the sub-network to be replaced, and up to the output of the sub-network. In the example of
Given the data at the input of the parent sub-network x1, . . . , xN and the corresponding data at the output of the sub-network y1, . . . , yN, it is possible to approximate (or for example optimize) a replacement linear layer with weight parameters Winit and a bias term binit which approximate the sub-network. This then provides a starting point for subsequent training of the replacement network (derived ANN) as discussed above. The approximation/optimization problem can be written as:
(Winit, binit)=arg min(W,b) Σn∥yn−(Wxn+b)∥2
The expression in the vertical double bars is the square of the deviation of the desired output y of the replacement layer from its actual output (the expression with W and b). The sub-index n runs over the N data samples, and the squared norm is taken over the neurons (units) of the layer. So the sum is certainly non-negative (because of the square) and zero only if the linear replacement layer accurately reproduces y (for all samples and all neurons). An aim is therefore to minimize the sum, and the free parameters available to do this are W and b, which is reflected in the “arg min” (argument of the minimum) operation. In general, no solution providing zero error is possible except in certain circumstances; the expected error has a closed-form solution and is given below as Jmin.
The solution to this least squares problem can be expressed in closed form and is given by:
The residual error is given by:
So, for the replacement layer 1040 of the morphed network (derived ANN) 1050, the initial weights W′ are given by Winit and the initial bias b′ is given by binit, both of which are derived by a least squares approximation process from the input and output data (at the first and second positions).
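A minimal, non-limiting Python sketch of this initialisation is given below, assuming the data signals x1, . . . , xN and y1, . . . , yN have already been collected by forwarding training samples through the base ANN; the helper name least_squares_morph is illustrative only, and the bias term is absorbed by appending a column of ones.

```python
import numpy as np

def least_squares_morph(X, Y):
    # X: (N, d_in) data signals at the first position
    # Y: (N, d_out) data signals at the second position
    # Returns W_init (d_out, d_in) and b_init (d_out,) minimising
    # the sum over samples of ||y_n - (W x_n + b)||^2.
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])   # absorb the bias term
    sol, *_ = np.linalg.lstsq(X1, Y, rcond=None)
    return sol[:-1].T, sol[-1]

# Illustrative use with synthetic activations standing in for the
# input and output of the replaced sub-network:
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
Y = np.tanh(X @ rng.normal(size=(8, 4)))
W_init, b_init = least_squares_morph(X, Y)
residual = Y - (X @ W_init.T + b_init)     # corresponds to the error Jmin
print((residual ** 2).sum() / len(X))
```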
Therefore, in examples, the neurons of each layer of the base ANN process the data signals received from the preceding layer according to a bias function for that layer, the method comprising deriving an initial approximation of at least a bias function for the introduced layer using a least squares approximation from the data signals detected for the first position and the second position.
This process of parameter initialisation is summarised in
the method comprising:
detecting (at a step 1100) the data signals for a first position x1, . . . , xN (such as the input to the layer 1000) and a second position y1, . . . , yN (such as the output of the layer 1010) in the ordered series of layers of neurons;
generating (at a step 1110) the modified ANN from the base ANN by providing an introduced layer 1040 of neurons to provide processing between the first position and the second position with respect to the ordered series of layers of neurons of the base ANN (in the example above, the layer 1040 replaces the layers 1000, 1010 and so acts between the (previous) input to the layer 1000 and the (previous) output of the layer 1010);
deriving (at a step 1120) an initial approximation of at least a set of weights (such as Winit and/or binit) for the introduced layer 1040 using a least squares approximation from the data signals detected for the first position and the second position; and
processing (at a step 1140) training data using the modified ANN to train the modified ANN including training the weights W′ of the introduced layer from their initial approximation.
In this example, use is made of training data comprising a set of data having a set of known input data and corresponding output data, and in which the processing step 1140 comprises varying at least the weights of at least the introduced layer so that, for an instance of known input data, the output data of the modified ANN is closer to the corresponding known output data. For example, for each instance of input data in the set of known input data, the corresponding known output data may be output data of the base ANN for that instance of input data.
An optional further weighting step 1130 is also provided in
In particular,
The process discussed above can be used in the following example ways:
The ANNs of
The techniques may be implemented by computer software which, when executed by a computer, causes the computer to implement the method described above and/or to implement the resulting ANN. Such computer software may be stored by a non-transitory machine-readable medium such as a hard disk, optical disk, flash memory or the like, and implemented by data processing apparatus comprising one or more processing elements.
In further example embodiments, when increasing the network size (increasing layer size or adding more layers), it can be possible to make use of the increased size to make the subnet more robust to noise.
The scheme discussed above for increasing the size of a subnet aims to preserve a subnet's function t:
t=NET(X)=MORPHED_NET(X)
In other examples, similar techniques can be used in respect of a deliberately corrupted input, so as to provide a morphed subnet such that:
t=NET(X)≈MORPHED_NET({tilde over (X)})
with {tilde over (X)} being a corrupted version of X.
A way to obtain the corrupted version {tilde over (X)} is to use binary masking noise, sometimes known as so-called “Dropout”. Dropout is a technique in which neurons and their connections are randomly or pseudo-randomly dropped or omitted from the ANN during training. Each network from which neurons have been dropped in this way can be referred to as a thinned network. This arrangement can provide a precaution against so-called overfitting, in which a single network, trained using a limited set of training data including sampling noise, can fit too precisely to the noisy training data. It has been proposed that in training, any neuron is dropped with a probability p (0<p<=1), in other words retained with a probability (1−p). Then at inference time, the neuron is always present but the weight associated with the neuron is modified by multiplying it by the retention probability (1−p).
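As a non-limiting illustration of binary masking noise under the convention used here (p being the drop probability), the following Python sketch corrupts a signal during training and scales the weights by the retention probability at inference; it is a generic sketch rather than the specific implementation of any embodiment.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(x, p):
    # Binary masking noise: each component is dropped (set to zero)
    # with probability p, giving the corrupted version x_tilde.
    return x * (rng.random(x.shape) >= p)

def inference_weights(W, p):
    # At inference no units are dropped; the weights are scaled by the
    # retention probability (1 - p) so expected activations match training.
    return W * (1.0 - p)

x = np.array([1.0, 2.0, 3.0, 4.0])
print(corrupt(x, p=0.5))
```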
Applying this type of technique to the LSM process discussed above leads to a so-called denoising morphing process. As seen previously, the least squares solution was obtained for:
For the denoising morphing an aim is to optimize:
where {tilde over (x)}k is xk corrupted by dropout with probability p. The corruption {tilde over (x)}k depends on a random or pseudo-random process; therefore, in some examples the technique is used to produce R repetitions of the dataset with different corruptions {tilde over (x)}r,k so as to produce a large dataset representative of the corrupted data. The least squares (LS) problem then becomes:
Ideally, the optimization would be performed with a very large number of repetitions R→∞. Clearly, in a practical embodiment R will not be infinite, but for the purposes of the mathematical derivation the limit R→∞ is considered, in which case the solution of the LS problem is:
W=E[Ct{tilde over (x)}]E[C{tilde over (x)}{tilde over (x)}]−1
Construction of E[Ct{tilde over (x)}]:
The coefficients of (tk−μt)({tilde over (x)}k−μx)T keep their “non-corrupted” value with a probability of (1−p) or are set to zero.
Therefore, the expected corrupted correlation matrix can be expressed as:
E[Ct{tilde over (x)}]=(1−p)Ctx
Construction of E[C{tilde over (x)}{tilde over (x)}]:
The off-diagonal coefficients of ({tilde over (x)}k−μx)({tilde over (x)}k−μx)T keep their “non-corrupted” value with a probability of (1−p)2 (they are corrupted if either of the two dimensions is corrupted).
The diagonal coefficients of ({tilde over (x)}k−μx)({tilde over (x)}k−μx)T keep their “non-corrupted” value with a probability of (1−p).
Therefore, the expected corrupted correlation matrix can be expressed as:
As noted above, when the optimization is performed with a very large number of repetitions (R→∞), the solution of the LS problem is:
W=E[Ct{tilde over (x)}]E[C{tilde over (x)}{tilde over (x)}]−1
By taking (1−p) out, the solution can also be expressed with a simple weighting of Cxx:
W=Ctx(A∘Cxx)−1
with A being a weighting matrix with ones in the diagonal and the off-diagonal coefficients being (1−p).
Therefore, W and b can be computed in closed form directly from the original input data xk, without in fact having to construct any corrupted data {tilde over (x)}k. This requires only a relatively small modification to the implementation of the LS solution used for the network decreasing operation.
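A non-limiting Python sketch of this closed-form denoising solution is given below; it follows the formula W=Ctx(A∘Cxx)−1 above, while the choice of bias term (mapping the expected corrupted input mean onto the target mean) is an assumption added for illustration rather than something stated explicitly in the text.

```python
import numpy as np

def denoising_lsm(X, T, p):
    # X: (N, d_in) original, uncorrupted inputs x_k
    # T: (N, d_out) target outputs t_k of the sub-network
    # p: dropout (corruption) probability
    mu_x, mu_t = X.mean(axis=0), T.mean(axis=0)
    Xc, Tc = X - mu_x, T - mu_t
    C_xx = Xc.T @ Xc / len(X)
    C_tx = Tc.T @ Xc / len(X)
    A = np.full(C_xx.shape, 1.0 - p)     # weighting matrix: (1 - p) off-diagonal
    np.fill_diagonal(A, 1.0)             # and ones on the diagonal
    W = C_tx @ np.linalg.inv(A * C_xx)   # W = Ctx (A o Cxx)^-1, elementwise product
    b = mu_t - W @ ((1.0 - p) * mu_x)    # assumed bias choice (illustrative)
    return W, b
```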
This provides an example of the further weighting step 1130, or in other words an example of adding a further weighting to the least squares approximation of the weights to simulate the addition of dropout noise in the ANN.
The techniques discussed above relate to fully-connected or Affine layers. In the case of a convolutional layer a further technique can be applied to reformulate the convolutional layer as an Affine layer for the purposes of the above technique. In a convolutional layer a set of one or more learned filter functions is convolved with the input data. Referring to
So, in this example, at least one of the two or more successive layers is a convolutional layer, the method comprising deriving a fully connected layer from the convolutional layer.
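As a purely illustrative sketch of one way such a reformulation can be carried out (a generic “im2col”-style unrolling, assuming a single-channel input, stride 1 and no padding, rather than the specific arrangement shown in the drawings), each output position of the convolution becomes one row of flattened patch values, so the convolution reduces to a single matrix product that can be treated as an Affine layer:

```python
import numpy as np

def conv_as_affine(image, kernel):
    # Unroll the input into an "im2col" matrix of flattened patches, one
    # row per output position, so that the convolution becomes a single
    # matrix-vector product with the flattened kernel (an affine mapping).
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    patches = np.stack([
        image[i:i + kh, j:j + kw].ravel()
        for i in range(oh) for j in range(ow)
    ])                                    # shape (oh * ow, kh * kw)
    return (patches @ kernel.ravel()).reshape(oh, ow)

rng = np.random.default_rng(0)
print(conv_as_affine(rng.normal(size=(5, 5)), rng.normal(size=(3, 3))).shape)  # (3, 3)
```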
In so far as embodiments of the disclosure have been described as being implemented, at least in part, by software-controlled data processing apparatus, it will be appreciated that a non-transitory machine-readable medium carrying such software, such as an optical disk, a magnetic disk, semiconductor memory or the like, is also considered to represent an embodiment of the present disclosure. Similarly, a data signal comprising coded data generated according to the methods discussed above (whether or not embodied on a non-transitory machine-readable medium) is also considered to represent an embodiment of the present disclosure.
It will be apparent that numerous modifications and variations of the present disclosure are possible in light of the above teachings. It is therefore to be understood that within the scope of the appended clauses, the technology may be practised otherwise than as specifically described herein.
Various respective aspects and features will be defined by the following numbered clauses:
1. A computer-implemented method of generating a modified artificial neural network (ANN) from a base ANN having an ordered series of two or more successive layers of neurons, each layer passing data signals to the next layer in the ordered series, the neurons of each layer processing the data signals received from the preceding layer according to an activation function and weights for that layer,
the method comprising:
detecting the data signals for a first position and a second position in the ordered series of layers of neurons;
generating the modified ANN from the base ANN by providing an introduced layer of neurons to provide processing between the first position and the second position with respect to the ordered series of layers of neurons of the base ANN;
deriving an initial approximation of at least a set of weights for the introduced layer using a least squares approximation from the data signals detected for the first position and the second position; and
processing training data using the modified ANN to train the modified ANN including training the weights of the introduced layer from their initial approximation.
2. A method according to clause 1, in which the two or more successive layers are fully connected layers in which each neuron in a fully connected layer is connected to receive data signals from each neuron in a preceding layer and to pass data signals to each neuron in a following layer.
3. A method according to clause 1 or clause 2, in which at least one of the two or more successive layers is a convolutional layer, the method comprising deriving a fully connected layer from the convolutional layer.
4. A method according to any one of the preceding clauses, in which the training data comprises a set of data having a set of known input data and corresponding output data, and in which the processing step comprises varying at least the weights of at least the introduced layer so that, for an instance of known input data, the output data of the modified ANN is closer to the corresponding known output data.
5. A method according to clause 4, in which, for each instance of input data in the set of known input data, the corresponding known output data are output data of the base ANN for that instance of input data.
6. A method according to any one of the preceding clauses, in which the generating step comprises providing the introduced layer to replace one or more layers of the base ANN.
7. A method according to clause 6, in which the introduced layer has a different layer size to that of the one or more layers it replaces.
8. A method according to any one of the preceding clauses, in which the first position and the second position are the same and the generating step comprises providing the introduced layer in addition to the layers of the base ANN.
9. A method according to any one of the preceding clauses, comprising adding a further weighting to the least squares approximation of the weights to simulate the addition of dropout noise in the ANN.
10. A method according to any one of the preceding clauses, in which the neurons of each layer of the ANN process the data signals received from the preceding layer according to a bias function for that layer, the method comprising deriving an initial approximation of at least a bias function for the introduced layer using a least squares approximation from the data signals detected for the first position and the second position.
11. Computer software which, when executed by a computer, causes the computer to implement the method of any one of the preceding clauses.
12. A non-transitory machine-readable medium which stores computer software according to clause 11.
13. An artificial neural network (ANN) generated by the method of any one of the preceding clauses.
14. Data processing apparatus comprising one or more processing elements to implement the ANN of clause 13.