ARTIFICIAL NEURAL NETWORK

Information

  • Patent Application
  • Publication Number
    20190258931
  • Date Filed
    February 20, 2019
  • Date Published
    August 22, 2019
Abstract
A computer-implemented method of generating a modified artificial neural network (ANN) from a base ANN having an ordered series of two or more successive layers of neurons, each layer passing data signals to the next layer in the ordered series, the neurons of each layer processing the data signals received from the preceding layer according to an activation function and weights for that layer comprises: detecting the data signals for a first position and a second position in the ordered series of layers of neurons; generating the modified ANN from the base ANN by providing an introduced layer of neurons to provide processing between the first position and the second position with respect to the ordered series of layers of neurons of the base ANN; deriving an initial approximation of at least a set of weights for the introduced layer using a least squares approximation from the data signals detected for the first position and a second position; and processing training data using the modified ANN to train the modified ANN including training the weights of the introduced layer from their initial approximation.
Description
CROSS-REFERENCE TO RELATED APPLICATION

The present application claims priority to European Patent Application 18158171.1 filed in the European Patent Office on 22 Feb. 2018, the entire contents of which are incorporated herein by reference.


BACKGROUND

This disclosure relates to artificial neural networks.


So-called deep neural networks (DNN) have become standard machine learning tools to solve a variety of problems such as computer vision and automatic speech recognition processing.


Designing and training such a DNN is typically very time consuming. When a new DNN is developed for a given task, many so-called hyper-parameters (parameters related to the overall structure of the network) must be chosen empirically. For each possible combination of structural hyper-parameters, a new network is typically trained from scratch and evaluated. While progress has been made on hardware (such as Graphical Processing Units providing efficient single instruction multiple data (SIMD) execution) and software (such as a DNN library developed by NVIDIA called cuDNN) to speed up the training time of a single structure of a DNN, the exploration of a large set of possible structures remains very slow.


In order to speed up the exploration of DNN structures, it has been proposed to transfer the knowledge of an already trained network (teacher or base network) to a new neural network structure. The DNN with the new structure can thereafter be trained (potentially more rapidly) taking advantage of the knowledge acquired from the teacher network. This process can be referred to as “morphing” or “morphism”.


The idea of knowledge transfer has also been proposed with the purpose of obtaining smaller networks from well-trained large networks. These approaches rely on the distillation idea that the “student” or derived network can be trained using the output of the teacher or base network. Therefore these approaches still require training from scratch and are not appropriate for fast DNN structure exploration.


More relevant approaches called Net2Net and network morphism have been proposed to address the problem of fast knowledge transfer to be used for DNN structure exploration. Both Net2Net and network morphism are based on the idea of initializing the student network to represent the same function as the teacher. Some of these proposals indicate that the student network must be initialized to preserve the teacher network, but that the initialization should also facilitate convergence to a better network. Other methods introduce sparse layers (having many zero weights) when increasing the size of a layer, and layers with correlated weights when increasing the network width, which are difficult to further train after morphing.


SUMMARY

The present disclosure provides a computer-implemented method of generating a modified artificial neural network (ANN) from a base ANN having an ordered series of two or more successive layers of neurons, each layer passing data signals to the next layer in the ordered series, the neurons of each layer processing the data signals received from the preceding layer according to an activation function and weights for that layer,


the method comprising:


detecting the data signals for a first position and a second position in the ordered series of layers of neurons;


generating the modified ANN from the base ANN by providing an introduced layer of neurons to provide processing between the first position and the second position with respect to the ordered series of layers of neurons of the base ANN;


deriving an initial approximation of at least a set of weights for the introduced layer using a least squares approximation from the data signals detected for the first position and a second position; and


processing training data using the modified ANN to train the modified ANN including training the weights of the introduced layer from their initial approximation.


The present disclosure also provides computer software which, when executed by a computer, causes the computer to implement the above method.


The present disclosure also provides a non-transitory machine-readable medium which stores such computer software.


The present disclosure also provides an artificial neural network (ANN) generated by the above method and data processing apparatus comprising one or more processing elements to implement such an ANN.


Embodiments of the present disclosure can provide a homogeneous and potentially more complete set of morphing operations based on least squares optimization.


Morphing operations can be implemented using these techniques to either increase or decrease the parent network depth, to increase or decrease the network width, or to change the activation function. All these morphing operations are based on a consistent strategy: deriving parameters using a least squares approximation. While the previous proposals have somewhat separate methods for each morphism operation (increasing width, increasing depth, and so on), the least squares morphism (LSM) proposed by the present disclosure allows the same approach to be applied to a larger variety of morphism operations. It is possible to use the same approach for fully connected layers as well as for convolutional layers. Since LSM naturally produces non-sparse layers, further training the network after morphing is potentially easier than with the methods involving introducing sparse layers.


Further respective aspects and features of the present disclosure are defined in the appended claims.


It is to be understood that both the foregoing general description and the following detailed description are exemplary, but are not restrictive, of the present technology.





BRIEF DESCRIPTION OF THE DRAWINGS

A more complete appreciation of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, in which:



FIG. 1 schematically illustrates an example neuron of an artificial neural network (ANN);



FIG. 2 schematically illustrates an example ANN;



FIG. 3 is a schematic flowchart illustrating training and inference phases of operation;



FIG. 4 is a schematic flowchart illustrating a training process;



FIG. 5 schematically represents a morphing process;



FIG. 6 schematically represents a base ANN and a modified ANN;



FIG. 7 is a schematic flowchart illustrating a method;



FIGS. 8a to 8d schematically represent example ANNs;



FIG. 9 schematically represents a process to convert a convolutional layer to an Affine layer; and



FIG. 10 schematically represents a data processing apparatus.





DESCRIPTION OF THE PREFERRED EMBODIMENTS

Referring now to the drawings, FIG. 1 schematically illustrates an example neuron 100 of an artificial neural network (ANN). A neuron in this example is an individual interconnectable unit of computation which receives one or more inputs x1, x2 . . . , applies a respective weight w1, w2 . . . to the inputs x1, x2 . . . , for example by a multiplicative process shown schematically by multipliers 110, then adds the weighted inputs and optionally a so-called bias term b, and then applies a so-called activation function φ to generate an output O. So the overall functional effect of the neuron can be expressed as:






O = f(x_i, w_i) = \varphi\left( \sum_i \left( w_i \cdot x_i + b \right) \right)






Here x and w represent the inputs and weights respectively, b is the bias term that the neuron optionally adds, and the variable i is an index covering the number of inputs (and therefore also the number of weights that affect this neuron).
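
As a minimal illustration of the expression above, the following sketch (with arbitrarily chosen example values, and a sigmoid chosen here purely for illustration as the activation function φ) computes a single neuron output:

    import numpy as np

    def neuron_output(x, w, b, phi):
        # Weighted sum of the inputs plus the optional bias term, passed through phi
        return phi(np.dot(w, x) + b)

    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))   # example activation function

    x = np.array([0.5, -1.0, 2.0])   # inputs x1, x2, x3 (example values)
    w = np.array([0.1, 0.4, -0.2])   # weights w1, w2, w3 (example values)
    b = 0.05                         # optional bias term
    print(neuron_output(x, w, b, sigmoid))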



FIG. 2 schematically illustrates an example ANN 240 formed of an array of the neurons of FIG. 1. The example shown in FIG. 2 comprises an ordered series of so-called fully-connected or Affine layers 210, 220, preceded by an input layer 200 and followed by an output layer 230. The fully connected layers 210, 220 are referred to in this way because each neuron N1 . . . N3 and N4 . . . N6 in each of these layers is connected to each neuron in the next layer.


The neurons in a layer have the same activation function φ, though from layer to layer, the activation functions can be different.


The input neurons I1 . . . I3 do not themselves normally have associated activation functions. Their role is to accept data from (for example) a supervisory program overseeing operation of the ANN. The output neuron(s) O1 provide processed data back to the supervisory program. The input and output data may be in the form of a vector of values such as:

    • [x1, x2, x3]


Neurons in the layers 210, 220 are referred to as hidden neurons. They receive inputs only from other neurons and output only to other neurons.


The activation function is non-linear (such as a step function, a so-called sigmoid function, a hyperbolic tangent (tanh) function or a rectification function (ReLU)).
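
For illustration only, these activation functions could be written as follows (a sketch, not the specific functions used in any particular embodiment):

    import numpy as np

    step = lambda z: np.where(z >= 0, 1.0, 0.0)      # step function
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))     # sigmoid function
    tanh = np.tanh                                   # hyperbolic tangent (tanh)
    relu = lambda z: np.maximum(0.0, z)              # rectification function (ReLU)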


Training and Inference

Use of an ANN such as the ANN of FIG. 2 can be considered in two phases, training (320, FIG. 3) and inference (or running) 330.


The so-called training process for an ANN can involve providing known training data as inputs to the ANN, generating an output from the ANN, comparing the output of the overall network to a known or expected output, and modifying one or more parameters of the ANN (such as one or more weights or biases) in order to aim towards bringing the output closer to the expected output. Therefore, training represents a process to search for a set of parameters which provide the lowest error during training, so that those parameters can then be used in an operational or inference stage of processing by the ANN, when individual data values are processed by the ANN.


An example training process includes so-called back propagation. A first stage involves initialising the parameters, for example randomly or using another initialisation technique. Then a so-called forward pass and a backward pass of the whole ANN are iteratively applied. A gradient or derivative of an error function is derived and used to modify the parameters.


At a basic level the error function can represent how far the ANN's output is from the expected output, though error functions can also be more complex, for example imposing constraints on the weights such as a maximum magnitude constraint. The gradient represents a partial derivative of the error function with respect to a parameter, at the parameter's current value. If the ANN were to output the expected output, the gradient would be zero, indicating that no change to the parameter is appropriate. Otherwise, the gradient provides an indication of how to modify the parameter to achieve the expected output. A negative gradient indicates that the parameter should be increased to bring the output closer to the expected output (or to reduce the error function). A positive gradient indicates that the parameter should be decreased to bring the output closer to the expected output (or to reduce the error function).


Gradient descent is therefore a training technique with the aim of arriving at an appropriate set of parameters without the processing requirements of exhaustively checking every permutation of possible values. The partial derivative of the error function is derived for each parameter, indicating that parameter's individual effect on the error function. In a backpropagation process, starting with the output neuron(s), errors are derived representing differences from the expected outputs and these are then propagated backwards through the network by applying the current parameters and the derivative of each activation function. A change in an individual parameter is then derived in proportion to the negated partial derivative of the error function with respect to that parameter and, in at least some examples, having a further component proportional to the change to that parameter applied in the previous iteration.
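
As a worked statement of the update just described (a sketch; the learning rate η and momentum coefficient μ are symbols introduced here for illustration and are not defined elsewhere in this description):

\Delta w_{ij}^{(t)} = -\,\eta\, \frac{\partial E}{\partial w_{ij}} + \mu\, \Delta w_{ij}^{(t-1)}, \qquad w_{ij} \leftarrow w_{ij} + \Delta w_{ij}^{(t)}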


An example of this technique is discussed in detail in the following publication: http://page.mi.fu-berlin.de/rojas/neural/ (chapter 7), the contents of which are incorporated herein by reference.


Training from Scratch


For comparison with the present disclosure, FIG. 4 schematically illustrates an overview of a training process from “scratch”, which is to say where a previously trained ANN is not available.


At a step 400, the parameters (such as W, b for each layer) of the ANN to be trained are initialised. The training process then involves the successive application of known training data, having known outcomes, to the ANN, by steps 410, 420 and 430.


At the step 410, an instance of the input training data is processed by the ANN to generate a training output. The training output is compared to the known output at the step 420 and deviations from the known output (representing the error function referred to above) are used at the step 430 to steer changes in the parameters by, for example, a gradient descent technique as discussed above.
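
A minimal sketch of the loop formed by the steps 410, 420 and 430, for a small two-layer network with sigmoid hidden units (the network shape, learning rate and squared-error loss are arbitrary illustrative choices, not those of any particular embodiment):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_from_scratch(X, T, hidden=4, lr=0.1, epochs=1000, seed=0):
        rng = np.random.default_rng(seed)
        # Step 400: initialise the parameters (here, randomly)
        W1 = rng.normal(scale=0.5, size=(hidden, X.shape[1]))
        b1 = np.zeros(hidden)
        W2 = rng.normal(scale=0.5, size=(T.shape[1], hidden))
        b2 = np.zeros(T.shape[1])
        for _ in range(epochs):
            # Step 410: process the input training data to generate a training output
            h = sigmoid(X @ W1.T + b1)
            y = h @ W2.T + b2
            # Step 420: compare the training output with the known output
            e = y - T
            # Step 430: steer parameter changes by gradient descent
            dW2 = e.T @ h / len(X)
            db2 = e.mean(axis=0)
            dh = (e @ W2) * h * (1.0 - h)
            dW1 = dh.T @ X / len(X)
            db1 = dh.mean(axis=0)
            W1 -= lr * dW1
            b1 -= lr * db1
            W2 -= lr * dW2
            b2 -= lr * db2
        return W1, b1, W2, b2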


Training by Adaptation

The technique described above can be used to train a network from scratch, but in the discussion below, techniques will be described by which an ANN is established by adaptation or morphing of an existing ANN.


Least Squares Process

Embodiments of the present disclosure can provide techniques to use an approximation method to modify the structure of a previously trained neural network model (a base ANN) to a new structure (of a derived ANN), to avoid training from scratch every time. In the present examples, the previously trained network is the base ANN and the new structure is that of the derived ANN. The possible modifications (of the derived ANN over the base ANN) include, for example, increasing and decreasing layer size, increasing and decreasing depth, and changing activation functions.


A previously proposed approach to this problem would have involved evaluating several network structures by training each structure from scratch and evaluating it on a validation set. This requires the training of many networks and can potentially be very slow. Also, in some cases only a limited number of different structures can be evaluated. In contrast, embodiments of the disclosure modify the structure and parameters of the base ANN to a new structure (the derived ANN) to avoid training from scratch every time.


In embodiments, the derived ANN has a different network structure to the base ANN. In examples, the base ANN has an ordered series of two or more successive layers of neurons, each layer passing data signals to the next layer in the ordered series, the neurons of each layer processing the data signals received from the preceding layer according to an activation function and weights for that layer,


the method comprising:


detecting the data signals for a first position and a second position in the ordered series of layers of neurons;


generating the derived ANN from the base ANN by providing an introduced layer of neurons to provide processing between the first position and the second position with respect to the ordered series of layers of neurons of the base ANN; and


initialising at least a set of weights for the introduced layer using a least squares approximation from the data signals detected for the first position and a second position.



FIG. 5 provides a schematic representation of this process, as applied to a succession of generations of modification. Note that in some example arrangements, only a single generation of modification may be considered, but the techniques can be extended to multiple generations each imposing a respective structural adaptation.


In a left hand column of FIG. 5, a first ANN structure (Net 1, which could correspond to a base ANN) is prepared, trained and evaluated. A so-called morphing process 900 is used to develop a second ANN (Net 2, which could correspond to a derived ANN) as a variation of Net 1. By basing a starting state of Net 2 on the parameters derived for Net 1, potentially a lesser amount of subsequent training is required to arrive at appropriate weights for Net 2. The process can be continued by relatively minor variations and fine-tuning training up to (as illustrated in schematic form) Net N.



FIG. 6 provides an example arrangement of three layers 1000, 1010, 1020 of an ANN 1030 having associated activation functions f, g, h. In an example of a so-called morphing process to develop a new or derived ANN 1050 from this base ANN 1030, the layer 1010 is removed so that the output of the layer 1000 is passed to the layer 1020.


In the present example, the two or more successive layers 1000, 1010, 1020 may be fully connected layers in which each neuron in a fully connected layer is connected to receive data signals from each neuron in a preceding layer and to pass data signals to each neuron in a following layer.


In the present technique, a so-called least squares morphism (LSM) is used to approximate the parameters of a single linear layer such that it preserves the function of a (replaced) sub-network of the parent network.


To do this, a first step is to forward training samples through the parent network up to the input of the sub-network to be replaced, and up to the output of the sub-network. In the example of FIG. 6, the sub-network to be replaced (referred to below as the sub-network) is represented by the layers 1000, 1010, and a replacement layer to replace the function of both of these is represented by a replacement layer 1040 having the same activation function f but different initial weights and bias terms (which may then be subject to fine-tuning training in the normal way). Note that although the layer 1010 is being removed, this is considered to be equivalent to replacing the pair of layers or sub-network 1000, 1010 by the single replacement layer 1040.
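
A sketch of this first step, forwarding training samples through hypothetical, purely illustrative stand-ins for the layers 1000 and 1010, and recording the data signals at the first position (the sub-network input) and the second position (the sub-network output):

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative stand-ins for the layers 1000 (activation f) and 1010 (activation g):
    # small affine-plus-tanh layers with arbitrary parameters (for this sketch only).
    W_1000, b_1000 = rng.normal(size=(6, 4)), rng.normal(size=6)
    W_1010, b_1010 = rng.normal(size=(5, 6)), rng.normal(size=5)
    layer_1000 = lambda v: np.tanh(W_1000 @ v + b_1000)
    layer_1010 = lambda v: np.tanh(W_1010 @ v + b_1010)

    samples = [rng.normal(size=4) for _ in range(200)]
    X = np.stack(samples)                                        # x1, ..., xN at the first position
    Y = np.stack([layer_1010(layer_1000(v)) for v in samples])   # y1, ..., yN at the second position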


Given the data at the input of the parent sub-network x1, . . . , xN and the corresponding data at the output of the sub-network y1, . . . , yN, it is possible to approximate (or for example optimize) a replacement linear layer with weight parameters Winit and bias term binit which approximate the sub-network. This then provides a starting point for subsequent training of the replacement network (derived ANN) as discussed above. The approximation/optimization problem can be written as:







\{ W_{init}, b_{init} \} = \arg\min_{W_{init},\, b_{init}} \sum_{n=1}^{N} \left\| y_n - \left( W_{init}^{T} x_n + b_{init} \right) \right\|^2








The expression in the vertical double bars is the square of the deviation of the desired output y of the replacement layer from its actual output (the expression with W and b). The sub-index n runs over the N training samples. So, the sum is certainly non-negative (because of the square) and is zero only if the linear replacement layer accurately reproduces y for all samples. An aim is therefore to minimize the sum, and the free parameters available to do this are W and b, which is reflected in the “arg min” (argument of the minimum) operation. In general, no solution providing zero error is possible except in certain circumstances; the expected error has a closed-form solution and is given below as Jmin.


The solution to this least squares problem can be expressed in closed form and is given by:








W_{init} = C_{yx}\, C_{xx}^{-1}, \qquad b_{init} = \bar{y} - W_{init}\, \bar{x},

with

C_{yx} = \sum_{n=1}^{N} (y_n - \bar{y})(x_n - \bar{x})^{T}, \qquad C_{xx} = \sum_{n=1}^{N} (x_n - \bar{x})(x_n - \bar{x})^{T},

and

\bar{y} = \frac{1}{N} \sum_{n=1}^{N} y_n, \qquad \bar{x} = \frac{1}{N} \sum_{n=1}^{N} x_n .








The residual error is given by:







J_{min} = \operatorname{tr}\left\{ C_{tt} - C_{tx}\, C_{xx}^{-1}\, C_{tx}^{T} \right\}

with

C_{tt} = \frac{1}{K} \sum_{k=1}^{K} (t_k - \mu_t)(t_k - \mu_t)^{T}









So, for the replacement layer 1040 of the morphed network (derived ANN) 1050, the initial weights W′ are given by Winit and the initial bias b′ is given by binit, both of which are derived by a least squares approximation process from the input and output data (at the first and second positions).
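
A numpy sketch of this least squares initialisation, assuming X holds the N data signals detected at the first position (one row per sample) and Y those detected at the second position; the function name and the small regularisation term added for numerical stability are illustrative assumptions, not part of the description above:

    import numpy as np

    def ls_morph_init(X, Y, eps=1e-8):
        # X: (N, D_in) signals at the first position; Y: (N, D_out) signals at the second position
        x_bar, y_bar = X.mean(axis=0), Y.mean(axis=0)
        Xc, Yc = X - x_bar, Y - y_bar
        Cyx = Yc.T @ Xc                  # sum_n (y_n - y_bar)(x_n - x_bar)^T
        Cxx = Xc.T @ Xc                  # sum_n (x_n - x_bar)(x_n - x_bar)^T
        W_init = Cyx @ np.linalg.inv(Cxx + eps * np.eye(Cxx.shape[0]))
        b_init = y_bar - W_init @ x_bar
        return W_init, b_init

    # The introduced layer then initially computes approximately y = W_init x + b_init,
    # before fine-tuning by further training.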


Therefore, in examples, the neurons of each layer of the base ANN process the data signals received from the preceding layer according to a bias function for that layer, the method comprising deriving an initial approximation of at least a bias function for the introduced layer using a least squares approximation from the data signals detected for the first position and a second position.


This process of parameter initialisation is summarised in FIG. 7, which is a schematic flowchart illustrating a computer-implemented method of generating a modified or derived artificial neural network (ANN) (such as the modified network 1050) from a base ANN (such as the base ANN 1030) having an ordered series of two or more successive layers 1000, 1010, 1020 of neurons, each layer passing data signals to the next layer in the ordered series, the neurons of each layer processing the data signals received from the preceding layer according to an activation function f, g, h and weights W for that layer,


the method comprising:


detecting (at a step 1100) the data signals for a first position x1, . . . , xN (such as the input to the layer 1000) and a second position y1, . . . , yN (such as the output of the layer 1010) in the ordered series of layers of neurons;


generating (at a step 1110) the modified ANN from the base ANN by providing an introduced layer 1040 of neurons to provide processing between the first position and the second position with respect to the ordered series of layers of neurons of the base ANN (in the example above, the layer 1040 replaces the layers 1000, 1010 and so acts between the (previous) input to the layer 1000 and the (previous) output of the layer 1010);


deriving (at a step 1120) an initial approximation of at least a set of weights (such as Winit and/or binit) for the introduced layer 1040 using a least squares approximation from the data signals detected for the first position and a second position; and


processing (at a step 1140) training data using the modified ANN to train the modified ANN including training the weights W′ of the introduced layer from their initial approximation.


In this example, use is made of training data comprising a set of known input data and corresponding output data, and the processing step 1140 comprises varying at least the weights of at least the introduced layer so that, for an instance of known input data, the output data of the modified ANN is closer to the corresponding known output data. For example, for each instance of input data in the set of known input data, the corresponding known output data may be output data of the base ANN for that instance of input data.
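
As a small illustration of this choice of training targets (the base ANN is stood in for by an arbitrary affine map here, purely for the sketch; in practice it would be the trained base network):

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative stand-in for the base ANN (an arbitrary affine map, for this sketch only)
    W_base, b_base = rng.normal(size=(2, 3)), rng.normal(size=2)
    base_ann = lambda v: W_base @ v + b_base

    known_inputs = [rng.normal(size=3) for _ in range(100)]
    # For each instance of input data, the corresponding known output data are the
    # output data of the base ANN for that instance of input data.
    training_set = [(v, base_ann(v)) for v in known_inputs]
    # The modified ANN is then trained on training_set (step 1140), starting from the
    # least squares initialisation of the introduced layer.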


An optional further weighting step 1130 is also provided in FIG. 7 and will be discussed below.



FIGS. 8a-8d schematically illustrate some further example ways in which the present technique can be used to derive a so-called morphed or derived network (such as a next stage to the right in the schematic representation of FIG. 5) from a parent, teacher or base network.


In particular, FIG. 8a schematically represents a base ANN 1200 having an ordered series of successive layers 1210, 1220, 1230, 1240 of neurons, each layer passing data signals to the next layer in the ordered series, the neurons of each layer processing the data signals received from the preceding layer according to an activation function and weights for that layer. Note that an input and output layer and indeed further layers may additionally be provided. So, the arrangement of FIG. 8a does not necessarily represent the whole of the base ANN, but just a portion relevant to the present discussion.


The process discussed above can be used in the following example ways:



FIG. 8b: the layers 1220, 1230 are replaced by a replacement layer 1225. Here, the step 1100 involves detecting data signals (when training data is applied) for a first position x1, . . . , xN (such as the input to the layer 1220) and a second position y1, . . . , yN (such as the output of the layer 1230) in the ordered series of layers of neurons; the introduced layer is the layer 1225; and the step 1120 involves deriving an initial approximation of at least a set of weights (Winit and/or binit) for the introduced layer 1225 using a least squares approximation from the data signals detected for the first position and a second position. This provides an example of providing the introduced layer to replace one or more layers of the base ANN.



FIG. 8c: a further layer 1226 is inserted between the layers 1220, 1230. Here, the step 1100 involves detecting data signals (when training data is applied) for a first position x1, . . . , xN (such as the output of the layer 1220) and a second position y1, . . . , yN (such as the input to the layer 1230) in the ordered series of layers of neurons; the introduced layer is the layer 1226; and the step 1120 involves deriving an initial approximation of at least a set of weights (Winit and/or binit) for the introduced layer 1226 using a least squares approximation from the data signals detected for the first position and a second position. This provides an example in which the first position and the second position are the same (position) and the generating step comprises providing the introduced layer in addition to the layers of the base ANN.



FIG. 8d: the layer 1230 is replaced by a smaller (fewer neurons) replacement layer 1227. (In other examples the layer 1227 could be larger; the significant feature here is that it is a differently sized layer to the one it is replacing). Here, the step 1100 involves detecting data signals (when training data is applied) for a first position x1, . . . , xN (such as the input to the layer 1230) and a second position y1, . . . , yN (such as the output of the layer 1230) in the ordered series of layers of neurons; the introduced layer is the layer 1227; and the step 1120 involves deriving an initial approximation of at least a set of weights (Winit and/or binit) for the introduced layer 1227 using a least squares approximation from the data signals detected for the first position and a second position. This provides an example in which the introduced layer has a different layer size to that of the one or more layers it replaces.


The ANNs of FIGS. 8b-8d, once trained by the step 1140, provide respective examples of a derived artificial neural network (ANN) generated by the method of FIG. 7.


The techniques may be implemented by computer software which, when executed by a computer, causes the computer to implement the method described above and/or to implement the resulting ANN. Such computer software may be stored by a non-transitory machine-readable medium such as a hard disk, optical disk, flash memory or the like, and implemented by data processing apparatus comprising one or more processing elements.


In further example embodiments, when increasing net size (increasing layer size or adding more layers), it can be possible to make use of the increased size to make the subnet more robust to noise.


The scheme discussed above for increasing the size of a subnet aims to preserve a subnet's function t:






t = NET(X) = MORPHED_NET(X)


In other examples, similar techniques can be used in respect of a deliberately corrupted outcome, so as to provide a morphed subnet so that:






t = NET(X) ≈ MORPHED_NET(X̃)


with X̃ being a corrupted version of X.


A way to obtain the corrupted version X̃ is to use binary masking noise, sometimes known as so-called “Dropout”. Dropout is a technique in which neurons and their connections are randomly or pseudo-randomly dropped or omitted from the ANN during training. Each network from which neurons have been dropped in this way can be referred to as a thinned network. This arrangement can provide a precaution against so-called overfitting, in which a single network, trained using a limited set of training data including sampling noise, can fit too precisely to the noisy training data. It has been proposed that in training, any neuron is dropped with a probability p (0&lt;p&lt;=1). Then at inference time, the neuron is always present but the weight associated with the neuron is modified by multiplying it by p.
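
A minimal sketch of this binary masking corruption, using the convention of the passage above that p is the probability of a value being dropped (zeroed); the function name is an illustrative choice:

    import numpy as np

    rng = np.random.default_rng(0)

    def corrupt(X, p):
        # Binary masking ("dropout") noise: each element is set to zero with probability p
        # and keeps its value with probability (1 - p).
        mask = rng.random(X.shape) >= p
        return X * mask

    X = rng.normal(size=(5, 3))
    X_tilde = corrupt(X, p=0.5)   # a corrupted version of X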


Applying this type of technique to the LSM process discussed above leads to a so-called denoising morphing process. As seen previously, the least squares solution for:







\{ W, b \} = \arg\min_{W, b} \frac{1}{2K} \sum_{k=1}^{K} \left\| W x_k + b - t_k \right\|^2

is

W = C_{tx}\, C_{xx}^{-1} \quad \text{and} \quad b = \mu_t - W \mu_x








For the denoising morphing, an aim is to optimize:







\{ W, b \} = \arg\min_{W, b} \frac{1}{2K} \sum_{k=1}^{K} \left\| W \tilde{x}_k + b - t_k \right\|^2









where x̃k is xk corrupted by dropout with probability p. Since the corruption x̃k is random or pseudo-random, in some examples the technique is used to produce R repetitions of the dataset with different corruptions x̃r,k, so as to produce a large dataset representative of the corrupted dataset. The least squares (LS) problem then becomes:







\{ W, b \} = \arg\min_{W, b} \frac{1}{2KR} \sum_{r=1}^{R} \sum_{k=1}^{K} \left\| W \tilde{x}_{r,k} + b - t_k \right\|^2










Ideally, the optimization is performed with a very large number of repetitions R→∞. Clearly, in a practical embodiment R will not be infinite, but for the purposes of the mathematical derivation the limit R→∞ is considered, in which case the solution of the LS problem is:






W = E[C_{t\tilde{x}}]\, E[C_{\tilde{x}\tilde{x}}]^{-1}


Construction of E[Ctx̃]:

The coefficients of (tk − μt)(x̃k − μx)T keep their “non-corrupted” value with a probability of (1−p) or are set to zero.


Therefore, the expected corrupted correlation matrix can be expressed as:






E[C_{t\tilde{x}}] = (1-p)\, C_{tx}


Construction of E[Cx̃x̃]:


The off-diagonal coefficients of (x̃k − μx)(x̃k − μx)T keep their “non-corrupted” value with a probability of (1−p)^2 (they are corrupted if either of the two dimensions is corrupted).


The diagonal coefficients of (x̃k − μx)(x̃k − μx)T keep their “non-corrupted” value with a probability of (1−p).


Therefore, the expected corrupted correlation matrix can be expressed as:








E


[

C


x
~



x
~



]



α
,
β


=

{







(

1
-
p

)

2



[

C
xx

]



α
,
β






if





α


β








(

1
-
p

)



[

C
xx

]



α
,
β






if





α

=
β









The optimization is ideally performed with a very large number of repetitions R→∞.


When R→∞ the solution of the LS problem is:






W = E[C_{t\tilde{x}}]\, E[C_{\tilde{x}\tilde{x}}]^{-1}


By taking (1−p) out, the solution can also be expressed with a simple weighting of Cxx:






W = C_{tx} \left( A \circ C_{xx} \right)^{-1}


with A being a weighting matrix with ones on the diagonal and the off-diagonal coefficients being (1−p), and ∘ denoting the element-wise (Hadamard) product.






A = \begin{pmatrix}
1 & (1-p) & \cdots & (1-p) \\
(1-p) & 1 & \ddots & \vdots \\
\vdots & \ddots & \ddots & (1-p) \\
(1-p) & \cdots & (1-p) & 1
\end{pmatrix}





Therefore, W and b can be computed in closed form directly from the original input data xk without in fact having to construct any corrupted data x̃k. This requires only a relatively small modification to the LS solution implementation of the network decreasing operation.
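
A numpy sketch of that modification (the function name and the small regularisation term are illustrative assumptions; ∘ is implemented as the element-wise product):

    import numpy as np

    def denoising_ls_morph_init(X, T, p, eps=1e-8):
        # X: (N, D_in) original (uncorrupted) inputs; T: (N, D_out) targets; p: dropout probability
        x_bar, t_bar = X.mean(axis=0), T.mean(axis=0)
        Xc, Tc = X - x_bar, T - t_bar
        Ctx = Tc.T @ Xc
        Cxx = Xc.T @ Xc
        # Weighting matrix A: ones on the diagonal, (1 - p) off the diagonal
        D = Cxx.shape[0]
        A = np.full((D, D), 1.0 - p)
        np.fill_diagonal(A, 1.0)
        # W = Ctx (A o Cxx)^(-1), b = mu_t - W mu_x, computed directly from the
        # original data without constructing any corrupted samples
        W = Ctx @ np.linalg.inv(A * Cxx + eps * np.eye(D))
        b = t_bar - W @ x_bar
        return W, b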


This provides an example of the further weighting step 1130, or in other words an example of adding a further weighting to the least squares approximation of the weights to simulate the addition of dropout noise in the ANN.


The techniques discussed above relate to fully-connected or Affine layers. In the case of a convolutional layer, a further technique can be applied to reformulate the convolutional layer as an Affine layer for the purposes of the above technique. In a convolutional layer, a set of one or more learned filter functions is convolved with the input data. Referring to FIG. 9, the article “From Data to Decisions” (https://iksinc.wordpress.com/tag/transposed-convolution/), which is incorporated in the present description by reference, explains in its first paragraph how a convolution operation can be rewritten as a matrix product. The context here is different but the basic idea is the same: using the same techniques, convolutions can be written as matrix products, and matrix products are Affine layers. So, if an Affine layer can be morphed and a convolutional layer can be written as an Affine layer, it is also possible to morph convolutional layers. Accordingly, a convolutional layer 1300 defined by a set of individual layer inputs x, individual layer outputs y and activations t can be approximated as an Affine layer having a function y=Wx+b by considering the convolutional layer 1300 as a series of so-called “tubes” 1310 linking an input to an output. The resulting Affine layer can then be processed as discussed above.


So, in this example, at least one of the two or more successive layers is a convolutional layer, the method comprising deriving a fully connected layer from the convolutional layer.
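
A minimal numpy sketch of the idea that a convolution can be rewritten as a matrix product (a single channel, stride 1 and no padding are simplifying assumptions made only for this illustration):

    import numpy as np

    def im2col(x, kh, kw):
        # Gather each (kh x kw) patch of the input into one row; each row corresponds
        # to one of the "tubes" linking an input patch to an output value.
        H, W = x.shape
        rows = [x[i:i + kh, j:j + kw].ravel()
                for i in range(H - kh + 1)
                for j in range(W - kw + 1)]
        return np.stack(rows)

    rng = np.random.default_rng(0)
    x = rng.normal(size=(5, 5))       # input feature map
    k = rng.normal(size=(3, 3))       # learned filter

    X_aff = im2col(x, 3, 3)           # inputs of the equivalent Affine layer
    y = X_aff @ k.ravel()             # the convolution expressed as a matrix product
    # Treated this way, the convolutional layer behaves as an Affine layer y = Wx (+ b)
    # and can be morphed with the least squares technique described above.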





FIG. 10 provides a schematic example of such a data processing apparatus for performing either or both of: performing the LSM technique discussed above to derive an ANN from a base ANN; and executing the resulting ANN. The data processing apparatus comprises a bus structure 700 linking one or more processing elements 710, a random access memory (RAM) 720, a non-volatile memory 730 such as a hard disk, optical disk or flash memory to store (for example) program code and/or configuration data; and an interface 740, for example (in the case of the apparatus executing the ANN) to interface with a supervisory program.


In so far as embodiments of the disclosure have been described as being implemented, at least in part, by software-controlled data processing apparatus, it will be appreciated that a non-transitory machine-readable medium carrying such software, such as an optical disk, a magnetic disk, semiconductor memory or the like, is also considered to represent an embodiment of the present disclosure. Similarly, a data signal comprising coded data generated according to the methods discussed above (whether or not embodied on a non-transitory machine-readable medium) is also considered to represent an embodiment of the present disclosure.


It will be apparent that numerous modifications and variations of the present disclosure are possible in light of the above teachings. It is therefore to be understood that within the scope of the appended clauses, the technology may be practised otherwise than as specifically described herein.


Various respective aspects and features will be defined by the following numbered clauses:


1. A computer-implemented method of generating a modified artificial neural network (ANN) from a base ANN having an ordered series of two or more successive layers of neurons, each layer passing data signals to the next layer in the ordered series, the neurons of each layer processing the data signals received from the preceding layer according to an activation function and weights for that layer,


the method comprising:


detecting the data signals for a first position and a second position in the ordered series of layers of neurons;


generating the modified ANN from the base ANN by providing an introduced layer of neurons to provide processing between the first position and the second position with respect to the ordered series of layers of neurons of the base ANN;


deriving an initial approximation of at least a set of weights for the introduced layer using a least squares approximation from the data signals detected for the first position and a second position; and


processing training data using the modified ANN to train the modified ANN including training the weights of the introduced layer from their initial approximation.


2. A method according to clause 1, in which the two or more successive layers are fully connected layers in which each neuron in a fully connected layer is connected to receive data signals from each neuron in a preceding layer and to pass data signals to each neuron in a following layer.


3. A method according to clause 1 or clause 2, in which at least one of the two or more successive layers is a convolutional layer, the method comprising deriving a fully connected layer from the convolutional layer.


4. A method according to any one of the preceding clauses, in which the training data comprises a set of data having a set of known input data and corresponding output data, and in which the processing step comprises varying at least the weighting of at least the introduced layer so that, for an instance of known input data, the output data of the modified ANN is closer to the corresponding known output data.


5. A method according to clause 4, in which, for each instance of input data in the set of known input data, the corresponding known output data are output data of the base ANN for that instance of input data.


6. A method according to any one of the preceding clauses, in which the generating step comprises providing the introduced layer to replace one or more layers of the base ANN.


7. A method according to clause 6, in which the introduced layer has a different layer size to that of the one or more layers it replaces.


8. A method according to any one of the preceding clauses, in which the first position and the second position are the same and the generating step comprises providing the introduced layer in addition to the layers of the base ANN.


9. A method according to any one of the preceding clauses, comprising adding a further weighting to the least squares approximation of the weights to simulate the addition of dropout noise in the ANN.


10. A method according to any one of the preceding clauses, in which the neurons of each layer of the ANN process the data signals received from the preceding layer according to a bias function for that layer, the method comprising deriving an initial approximation of at least a bias function for the introduced layer using a least squares approximation from the data signals detected for the first position and a second position.


11. Computer software which, when executed by a computer, causes the computer to implement the method of any one of the preceding clauses.


12. A non-transitory machine-readable medium which stores computer software according to clause 11.


13. An artificial neural network (ANN) generated by the method of any one of the preceding clauses.


14. Data processing apparatus comprising one or more processing elements to implement the ANN of clause 13.

Claims
  • 1. A computer-implemented method of generating a modified artificial neural network (ANN) from a base ANN having an ordered series of two or more successive layers of neurons, each layer passing data signals to the next layer in the ordered series, the neurons of each layer processing the data signals received from the preceding layer according to an activation function and weights for that layer, the method comprising: detecting the data signals for a first position and a second position in the ordered series of layers of neurons; generating the modified ANN from the base ANN by providing an introduced layer of neurons to provide processing between the first position and the second position with respect to the ordered series of layers of neurons of the base ANN; deriving an initial approximation of at least a set of weights for the introduced layer using a least squares approximation from the data signals detected for the first position and a second position; and processing training data using the modified ANN to train the modified ANN including training the weights of the introduced layer from their initial approximation.
  • 2. A method according to claim 1, in which the two or more successive layers are fully connected layers in which each neuron in a fully connected layer is connected to receive data signals from each neuron in a preceding layer and to pass data signals to each neuron in a following layer.
  • 3. A method according to claim 1, in which at least one of the two or more successive layers is a convolutional layer, the method comprising deriving a fully connected layer from the convolutional layer.
  • 4. A method according to claim 1, in which the training data comprises a set of data having a set of known input data and corresponding output data, and in which the processing step comprises varying at least the weighting of at least the introduced layer so that, for an instance of known input data, the output data of the modified ANN is closer to the corresponding known output data.
  • 5. A method according to claim 4, in which, for each instance of input data in the set of known input data, the corresponding known output data are output data of the base ANN for that instance of input data.
  • 6. A method according to claim 1, in which the generating step comprises providing the introduced layer to replace one or more layers of the base ANN.
  • 7. A method according to claim 6, in which the introduced layer has a different layer size to that of the one or more layers it replaces.
  • 8. A method according to claim 1, in which the first position and the second position are the same, and the generating step comprises providing the introduced layer in addition to the layers of the base ANN.
  • 9. A method according to claim 1, comprising adding a further weighting to the least squares approximation of the weights to simulate the addition of dropout noise in the ANN.
  • 10. A method according to claim 1, in which the neurons of each layer of the ANN process the data signals received from the preceding layer according to a bias function for that layer, the method comprising deriving an initial approximation of at least a bias function for the introduced layer using a least squares approximation from the data signals detected for the first position and a second position.
  • 11. Computer software which, when executed by a computer, causes the computer to implement the method of claim 1.
  • 12. A non-transitory machine-readable medium which stores computer software according to claim 11.
  • 13. An artificial neural network (ANN) generated by the method of claim 1.
  • 14. Data processing apparatus comprising one or more processing elements to implement the ANN of claim 13.
Priority Claims (1)
Number       Date       Country   Kind
18158171.1   Feb 2018   EP        regional