TRAINING OF ARTIFICIAL NEURAL NETWORKS

Information

  • Patent Application
  • Publication Number
    20200293855
  • Date Filed
    March 14, 2019
  • Date Published
    September 17, 2020
Abstract
Methods and apparatus are provided for training an artificial neural network, having a succession of neuron layers with interposed synaptic layers each storing a respective set of weights {w} for weighting signals propagated between its adjacent neuron layers, via an iterative cycle of signal propagation and weight-update calculation operations. Such a method includes, for at least one of the synaptic layers, providing a plurality P1 of arrays of memristive devices, each array storing the set of weights of that synaptic layer S1 in respective memristive devices, and, in a signal propagation operation, supplying respective subsets of the signals to be weighted by the synaptic layer S1 in parallel to the P1 arrays. The method also includes, in a weight-update calculation operation, calculating updates to respective weights stored in each of the P1 arrays in dependence on signals propagated by the neuron layers.
Description
BACKGROUND

The present invention relates generally to training of artificial neural networks.


Artificial neural networks (ANNs) have been developed to perform computational tasks in a manner inspired by biological architectures of the nervous system. These networks are based on a fundamental principle of biological systems whereby neurons are interconnected via synapses which relay weighted signals between the neurons. ANNs are based on a logical structure comprising a succession of layers of neurons with interposed synaptic layers. The synaptic layers store the weights for weighting signals propagated between neurons in their adjacent neuron layers. A neuron ni in a given layer can be connected to one or more neurons nj in the next layer, and a different weight wij can be associated with each neuron-neuron connection ni-nj for weighting signals transmitted from ni to nj. A neuron nj generates activation signals dependent on its accumulated inputs, whereby weighted signals can be propagated over successive layers of the network from an input to an output neuron layer.


ANNs have been successfully applied in various complex analysis tasks such as speech and image recognition, e.g. classification of hand-written digits based on the MNIST (Modified National Institute of Standards and Technology) dataset. An ANN undergoes a training phase in which the sets of weights for respective synaptic layers are determined. The network is exposed to a set of training data, e.g. image data for hand-written digits, in an iterative training scheme in which the weights are repeatedly updated as the network “learns” from the training data. Training involves an iterative cycle of signal propagation and weight-update calculation operations. In supervised learning of hand-written digits, for example, training examples from the MNIST dataset (for which the labels, here digit class from 0 to 9, are known) are repeatedly input to the network. In response to each network input, neuron activation signals are forward-propagated from the first to the last neuron layer, and errors are computed by comparing the network output signals to the expected network output for the input training example. The resulting error signals are then backpropagated through the network from the last neuron layer to the second neuron layer. Updates to the weights stored in each synaptic layer are calculated based on the activation and error signals propagated by the neuron layers in this signal propagation operation. The weights are thus progressively updated until a convergence condition is achieved. The resulting trained network, with weights defined via the training operation, can then be applied to perform tasks based on new (unseen) data for the application in question.


Training of ANNs, which can have multiple neuron layers and millions of synaptic weights, is a compute- and time-intensive task. The parallel computational capabilities of analog resistive memory arrays, in which the synaptic weights are stored in arrays of memristive devices, can be exploited to expedite training.


SUMMARY

According to at least one embodiment of the present invention there is provided a method for training an artificial neural network, having a succession of neuron layers with interposed synaptic layers each storing a respective set of weights {w} for weighting signals propagated between its adjacent neuron layers, via an iterative cycle of signal propagation and weight-update calculation operations. The method includes, for at least one of the synaptic layers, providing a plurality P1 of arrays of memristive devices, each array storing the set of weights of that synaptic layer S1 in respective memristive devices, and, in a signal propagation operation, supplying respective subsets of the signals to be weighted by the synaptic layer S1 in parallel to the P1 arrays. The method also includes, in a weight-update calculation operation, calculating updates to respective weights stored in each of the P1 arrays in dependence on signals propagated by the neuron layers and, for each weight w in the set {w}, accumulating the updates calculated for that weight in the P1 arrays in an accumulation value χw for that weight. The method further comprises periodically programming the memristive devices storing each weight w in all of the P1 arrays to update the stored weight in dependence on the accumulation value χw for that weight.


At least one further embodiment of the invention provides apparatus for implementing an artificial neural network in an iterative training cycle of signal propagation and weight-update calculation operations. The apparatus comprises a processing unit and a memcomputing unit. The processing unit is adapted to generate signals propagated by the neuron layers in the propagation operations and to perform the weight-update calculation operations. The memcomputing unit stores the weights {w} in respective memristive devices for receiving and weighting signals propagated by the neuron layers to implement the synaptic layers of the network. The memcomputing unit includes a programming circuit for programming the memristive devices and, for at least one of the synaptic layers, a plurality P1 of arrays of memristive devices, each array storing the set of weights of that synaptic layer S1 in respective memristive devices. The memcomputing unit is adapted, in a signal propagation operation, to supply respective subsets of the signals to be weighted by the synaptic layer S1 in parallel to the P1 arrays. The processing unit is adapted, in a weight-update calculation operation, to calculate updates to respective weights stored in each of the P1 arrays in dependence on signals propagated by the neuron layers, and, for each weight w in the set {w}, to accumulate the updates calculated for that weight in the P1 arrays in an accumulation value χw for that weight. The processing unit is further adapted to control the programming circuit to periodically program the memristive devices storing each weight w in all of the P1 arrays to update the stored weight in dependence on the accumulation value χw for that weight.


Embodiments of the invention will be described in more detail below, by way of illustrative and non-limiting example, with reference to the accompanying drawings.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a schematic representation of a fully-connected ANN;



FIG. 2 is a schematic representation of a layer of a convolutional neural network;



FIG. 3 is a schematic block diagram of apparatus embodying the invention for implementing an ANN in a training operation;



FIG. 4 illustrates structure of a memristive crossbar array in the FIG. 3 apparatus;



FIG. 5 indicates steps of a training method performed by the FIG. 3 apparatus;



FIG. 6 illustrates operation of the training method for a layer of a convolutional neural network;



FIG. 7 illustrates operation of the training method for a layer of a fully-connected network;



FIG. 8 indicates more detailed steps of the training operation in an embodiment of the invention;



FIG. 9 indicates steps of a weight calibration process in an embodiment of the invention;



FIG. 10 illustrates pipelining of the training operation in an embodiment of the invention; and



FIG. 11 illustrates cyclic recalibration of memristive device arrays in a pipelined embodiment.





DETAILED DESCRIPTION


FIG. 1 shows the logical structure of an example of a fully-connected ANN. The ANN 1 comprises a succession of neuron layers with interposed synaptic layers. In the simple example shown, the network has three neuron layers: a first layer N1 of input neurons which receive the input data signals; a last layer N3 of output neurons which provide the output signals of the network; and an intermediate (“hidden”) layer N2 of neurons between the input and output layers. Neurons in layer N1 are denoted here by n1i (1≤i≤l1), neurons in layer N2 are denoted by n2j (1≤j≤l2), and neurons in layer N3 are denoted by n3k (1≤k≤l3), where lx is the number of neurons in layer Nx. All neurons in each layer are connected to all neurons in the next layer as indicated, whereby neuron activation signals from one layer are transmitted to neurons in the next layer. Synaptic layers S1 and S2, interposed with the neuron layers, store respective sets of weights {wij} and {wjk} for weighting the signals propagated between their adjacent neuron layers. A weight wij is defined for each connection between an N1 neuron n1i and an N2 neuron n2j, whereby a signal propagated from n1i to n2j is weighted according to the corresponding weight wij for that neuron pair. The set of weights {wij} for synaptic layer S1 can thus be represented by a matrix W having l2 rows and l1 columns of weights wij as indicated in the figure. A signal propagated by an N2 neuron n2j to an N3 neuron n3k is similarly weighted by a corresponding weight wjk of synaptic layer S2, and the set of weights {wjk} for synaptic layer S2 can be represented by a matrix having l3 rows and l2 columns of weights wjk.


Input layer neurons can simply transmit their received input data signals as the activation signals for layer N1. For subsequent layers N2 and N3, each neuron n2j, n3k generates an activation signal dependent on its accumulated inputs, i.e. the accumulated weighted activation signals from its connected neurons in the previous layer. Each neuron applies a non-linear activation function f to the result A of this accumulation operation to generate its neuron activation signal for onward transmission. For example, the accumulated input Aj to a neuron n2j is given by a dot product computation Aj=Σi=1l1(wijx1i), where x1i is the activation signal from neuron n1i. Computing the vector A of accumulated inputs (A1, A2, . . . , Al2) to neurons n2j can thus be represented by a matrix-vector multiplication Wx between the matrix W of weights wij and the vector x of activation signals (x11, x12, . . . , x1l1) from neurons n1i. Each N2 neuron n2j then generates its activation signal x2j as x2j=f(Aj) for propagation to layer N3.
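By way of illustration only, this forward computation can be sketched in a few lines of Python/numpy; the tanh activation function and the layer sizes below are arbitrary choices for the sketch, not taken from the figure.

```python
import numpy as np

def forward_layer(W, x, f=np.tanh):
    """Forward-propagate activations x through one synaptic layer.

    W: (l2, l1) weight matrix, one row per downstream neuron (as for S1 in FIG. 1).
    x: (l1,) vector of activation signals from the upstream neuron layer.
    Returns f(Wx), the downstream neurons' activation signals.
    """
    A = W @ x        # accumulated inputs A_j = sum_i w_ij * x_1i
    return f(A)      # activation signals x2_j = f(A_j)

# Small example with l1 = 4 input neurons and l2 = 3 hidden neurons:
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))     # weights {w_ij} of synaptic layer S1
x1 = rng.normal(size=4)          # activations from input layer N1
x2 = forward_layer(W1, x1)       # activations of hidden layer N2
```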


While a simple example of a fully-connected network is shown in FIG. 1, in general a neuron in any given layer can be connected to one or more neurons in the next layer, and networks can include one or more (commonly up to 30 or more) successive layers of hidden neurons. Neuron layers can include one or more bias neurons (not shown) which do not receive input signals but transmit bias signals to the next neuron layer. Other computations can also be associated with some ANN layers. In some ANNs, e.g. convolutional neural networks (CNNs), neuron layers can comprise three-dimensional volumes of neurons, with associated three-dimensional weight arrays in synaptic layers. This is explained briefly below with reference to FIG. 2.



FIG. 2 illustrates implementation of a synaptic layer S in an exemplary CNN 2. The input volume to the synaptic layer can be an input training example (e.g. an image of n pixels by n pixels and depth d corresponding to d different color components of the image), or the neuron activations from a preceding neuron layer of dimensions n×n×d neurons. The synaptic layer comprises M weight kernels, each comprising a volume of k×k×d weights as illustrated. Weighted signals are generated by sliding each kernel through positions of the input volume. At each position, a dot product is computed over the kernel weights and signal elements at corresponding locations in the k×k×d slice (shown dotted in the figure for a first kernel position) of the input volume. With a stride of 1, each kernel can slide through (n−k+1) positions both horizontally and vertically in the input volume to produce one plane of the output volume shown on the right of the figure. The complete output volume for all M kernels thus has dimensions (n−k+1)×(n−k+1) and depth M. The elements of this output volume are mapped to neurons in the next layer for further processing.


The set of weights {w} for the M kernels of the convolutional layer can be represented by a matrix W having M rows and k²d columns as indicated in the figure, where each row m=1 to M represents the weights (w1m, w2m, . . . , wk²dm) of one kernel. The weighting computation can be implemented as a matrix-vector multiplication Wx between the matrix W of weights and a vector x of the k²d signals in each slice of the input volume. The entire computation consists of (n−k+1)² of these matrix-vector multiplications for the (n−k+1)² slices of the input volume.
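The slice-by-slice computation can be sketched as follows; the Python loop stands in for what the hardware performs in parallel, and all names and dimensions are illustrative.

```python
import numpy as np

def conv_as_matvec(W, inp, k):
    """Convolutional layer as (n-k+1)^2 matrix-vector multiplications.

    W:   (M, k*k*d) weight matrix, one kernel's weights per row.
    inp: (n, n, d) input volume (or activations of the preceding layer).
    Returns the (n-k+1, n-k+1, M) output volume (stride 1, no padding).
    """
    n, _, d = inp.shape
    M = W.shape[0]
    out = np.empty((n - k + 1, n - k + 1, M))
    for r in range(n - k + 1):
        for c in range(n - k + 1):
            x = inp[r:r + k, c:c + k, :].reshape(-1)  # one k*k*d input slice
            out[r, c, :] = W @ x                      # dot product per kernel row
    return out

# Example: M = 8 kernels of size k = 3 on a 28x28 single-channel input (d = 1):
rng = np.random.default_rng(1)
W = rng.normal(size=(8, 3 * 3 * 1))
out = conv_as_matvec(W, rng.normal(size=(28, 28, 1)), k=3)   # shape (26, 26, 8)
```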


ANN training involves an iterative cycle of signal propagation and weight-update calculation operations in response to a set of training examples which are successively supplied as inputs to the network. For each training example, the signal propagation operation comprises a forward propagation operation and a backpropagation operation. In the forward propagation operation, activation signals are propagated from the first (input) to the last (output) neuron layer in response to the current network input. Hence, activation signals x are weighted and propagated, layer-by-layer through the network, as described above. For each neuron in the output layer, the output signal after forward propagation is compared with the expected output for the current training example to obtain an error signal δ for that neuron. The error signals for the output layer neurons are then backpropagated through all layers of the network except the input layer, i.e. up to the second neuron layer. Error signals backpropagated between adjacent neuron layers are weighted by the appropriate weights of the interposed synaptic layer. Backpropagation thus results in computation of error signals for each neuron layer except the input layer. Updates to the weights stored in each synaptic layer are then calculated based on the signals propagated by the neuron layers in the forward and backpropagation operations. For example, the update Δwij to a weight wij between a neuron i in one layer and a neuron j in the next layer can be calculated as





Δwij=ηxiδj


where xi is the forward-propagated activation signal from neuron i; δj is the back-propagated error signal for neuron j; and η is a predefined learning parameter for the network.
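Computed for all weights of a layer at once, this update rule is the outer product of the backpropagated error vector with the forward-propagated activation vector. A minimal sketch (the learning-rate value is illustrative):

```python
import numpy as np

def weight_updates(x, delta, eta=0.01):
    """All updates Delta_w_ij = eta * x_i * delta_j for one synaptic layer.

    x:     forward-propagated activation signals of the upstream neurons.
    delta: backpropagated error signals of the downstream neurons.
    Returns an update matrix laid out like the weight matrix W
    (one row per downstream neuron j, one column per upstream neuron i).
    """
    return eta * np.outer(delta, x)
```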



FIG. 3 shows apparatus embodying the invention for implementing an ANN 1, 2 in a training operation. The apparatus 10 comprises a processing unit 11 and a memcomputing unit 12. The processing unit 11 of this embodiment is a digital processing unit (DPU), comprising a central processing unit (CPU) 13 and memory 14, which is operatively coupled via a system bus 15 to memcomputing unit 12. Memory 14 stores one or more program modules 16 comprising program instructions executable by CPU 13 to implement functional steps of operations described below. Memory 14 also stores the operating data which is processed by apparatus 10 in the training operation. The operating data 17 comprises activation signals x and error signals δ, as well as accumulation values χw discussed further below. The memcomputing unit 12 comprises a set of memristive device arrays 18, and a controller 19 for controlling operation of the memcomputing unit. Controller 19 includes a programming circuit 20 for programming devices in memristive arrays 18 as described further below.


The DPU 11 is adapted to generate the activation and error signals propagated by the neuron layers in the forward and backpropagation operations, and to perform the weight-update calculations of the training operation. The propagated signals are supplied via bus 15 to memcomputing unit 12 where controller 19 supplies the signals to memristive device arrays 18. These arrays store the ANN weights in respective memristive devices for receiving and weighting the signals to implement the synaptic layers of the network. The weighted signals from a synaptic layer are returned via bus 15 to DPU 11 which then generates the signals for propagation to the next neuron layer.


In memcomputing unit 12, the set of weights {w} for each synaptic layer is stored in an array 18 of memristive devices in which each device stores a respective weight w of the set. Such an array 18 can be conveniently implemented by a crossbar array of devices, connected between row and column lines, as illustrated in FIG. 4. This example shows a crossbar array 25 storing the weights matrix W for weights {wij} of synaptic layer S1 in the FIG. 1 ANN. Each memristive device 26 in array 25 stores a respective weight wij of the matrix W as indicated. A weight wij corresponds to electrical conductance of the memristive device 26 storing that weight. The devices 26 are arranged in logical rows and columns with each device connected between a particular row line and column line for application of signals to that device. The row and column lines are connected to controller 19 of the memcomputing unit 12 via row and column digital-to-analog/analog-to-digital converters (not shown) which convert array input/output signals between the digital and analog domains.


In a forward propagation operation, controller 19 supplies the activation signals xi to the column lines ci of the array 25. The resulting output signals on row lines rj correspond to weighted sums of the activation signals Σi(wijxi) as indicated. The array 25 thus implements the matrix-vector multiplication Wx for the forward propagation operation. The backpropagation computation can be similarly performed by applying error signals to the row lines to obtain weighted sum signals on the column lines. In particular (although backpropagation is not performed for the first synaptic layer S1 in the specific example of FIG. 1), backpropagation over an array 25 can be performed by supplying error signals δj to respective row lines to obtain the weighted error sums Σj(wijδj) on the column lines as indicated.
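Functionally, the array thus reduces to a matrix-vector multiplication in each direction. The following behavioral sketch captures only this functionality; the class name is ours, and DAC/ADC stages and analog device non-idealities are deliberately ignored.

```python
import numpy as np

class CrossbarSim:
    """Idealized behavioral model of crossbar array 25 (FIG. 4).

    G holds the device conductances, one per stored weight w_ij,
    with one row line per index j and one column line per index i.
    """

    def __init__(self, G):
        self.G = np.array(G, dtype=float)

    def forward(self, x):
        """Activations x_i on column lines -> sums sum_i(w_ij x_i) on row lines."""
        return self.G @ x

    def backward(self, delta):
        """Errors delta_j on row lines -> sums sum_j(w_ij delta_j) on column lines."""
        return self.G.T @ delta
```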


A memristive device array 18 storing the weights matrix W for a synaptic layer of an ANN 1, 2 can be implemented in one or more such crossbar arrays 25 in memcomputing unit 12. Dense memristive cross-bar arrays can be fabricated as integrated nanoscale systems using well-known material processing techniques, and offer massively parallel and highly area- and energy-efficient implementation of synaptic layer computations. Controller 19 provides the control circuitry for controlling application of signals to memristive devices via the row and column lines of the appropriate arrays. Signals are input to and output from the arrays in the propagation operations, and DPU 11 computes the weight update Δwij=ηxiδj for each weight in each synaptic layer as described above. The memristive devices 26 are periodically programmed by programming circuit 20 to update the stored weights accordingly as explained in detail below. The control and programming circuitry of controller 19 can be implemented in generally known manner according to the particular memristive device implementation in arrays 18. A variety of memristive devices are known in the art, e.g. based on resistive memory cells such as phase-change memory (PCM) cells. These devices comprise one or more memory cells which can be arranged in various circuit configurations to store weights wij in the programmable conductance state of the cells. The conductance state, and hence stored weight wij, can be varied in operation by application of programming signals to a device.


CPU 13 controls performance by apparatus 10 of the iterative training process, described generally above and detailed further below, via execution of program instructions loaded from program modules 16 in memory 14. CPU 13 performs the calculations required in DPU 11 during the training process, and controls input and output of data to/from system bus 15 by DPU 11. CPU 13 also controls programming operations by programming circuit 20 of memcomputing unit 12 via control instructions sent via bus 15 to controller 19.


While an exemplary embodiment of apparatus 10 is described, DPU 11 can comprise one or more CPUs (including GPUs (graphics processing units)) which can be implemented by one or more microprocessors. Memory 14 can comprise one or more data storage entities, and can comprise main memory, e.g. DRAM (dynamic random access memory) and/or other storage which is physically separated from CPU 13, as well as cache and/or other memory local to CPU 13. DPU 11 can be implemented by one or more (general- or special-purpose) computers/programmable data processing apparatus. In general, functional steps performed by a processing unit of apparatus embodying the invention can be implemented by hardware or software or a combination thereof. Controller 19 can also comprise one or more processors which are configurable by software instructions to control memcomputing unit 12 to perform functions described below. In some embodiments, DPU 11 and/or controller 19 can include electronic circuitry such as programmable logic circuitry, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs) for implementing functions described. System bus 15 can comprise one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. Some examples include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.


In embodiments of this invention, for at least one of the synaptic layers of the ANN, the memcomputing unit 12 includes a plurality P1 of memristive device arrays 18, and each of these arrays stores the set of weights {w} of that synaptic layer. Respective pluralities P1 of device arrays can be provided for more than one synaptic layer. Embodiments of the invention can provide a plurality of device arrays for every synaptic layer in the network. A synaptic layer implemented by P1 device arrays is denoted by S1 in the following. The network training operation involves an iterative cycle of operations generally as described above. However, for each synaptic layer S1, the training operation includes particular steps as indicated in FIG. 5.


In step 30 of FIG. 5, the initial ANN weights {w} for the synaptic layer S1 are programmed in each of the P1 device arrays 18 provided for that layer in memcomputing unit 12. Step 31 represents input of a set of activation signals xi, corresponding to a number of activation vectors x, to synaptic layer S1 in a first cycle of the training operation. The input activation vectors x here can correspond to one or more training examples as explained below. Step 32 represents signal propagation via synaptic layer S1 for this set of signals. In the forward propagation operation, respective subsets of the set of signal vectors (corresponding to respective activation vectors x) are supplied in parallel to the P1 arrays for weighting by the synaptic layer S1 as described above. In the subsequent backpropagation operation, respective subsets (vectors δ) of the error signals δj to be weighted by synaptic layer S1 are similarly supplied in parallel to the P1 arrays. In step 33, DPU 11 performs the weight-update calculation for synaptic layer S1. Here, DPU 11 calculates updates to respective weights stored in each of the P1 arrays in dependence on the signals propagated to those arrays by the adjacent neuron layers in the forward and backpropagation operations. A weight-update Δwij=ηxiδj is thus calculated for a weight wij in each of the P1 arrays storing weight-set {w}. (In some embodiments, weight-updates Δwij can be computed for all weights wij in the weight-set {w}. In others, Δwij can be computed for only a subset of weights, e.g. a randomly-selected subset, or weights stored by devices for which xi and/or δj exceed a threshold level.) In step 34, for each weight wij in weight-set {w}, the DPU 11 accumulates the updates Δwij calculated for that weight in all of the P1 arrays in an accumulation value χw for that weight. The accumulation values χw are stored in data memory 17. After this step, DPU 11 can overwrite the activations x and errors δ used for calculating the weight updates in memory 17.
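Steps 33 and 34 can be sketched as below, assuming the P1 activation and error subsets have already been propagated (steps 31 and 32, as in the crossbar sketch above); the function name and the in-place accumulator are illustrative choices.

```python
import numpy as np

def accumulate_layer_updates(xs, deltas, chi, eta=0.01):
    """Steps 33-34 of FIG. 5 for one set of P1 input subsets.

    xs:     the P1 activation vectors supplied in parallel to the arrays.
    deltas: the P1 backpropagated error vectors for the same subsets.
    chi:    shared accumulation values, one chi_w per weight (same shape
            as the weight matrix W); updated in place.
    """
    for x, delta in zip(xs, deltas):
        chi += eta * np.outer(delta, x)   # add (Delta w_ij)_p = eta*(x_i*delta_j)_p
    return chi
```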


Step 35 represents a decision step in which DPU 11 determines if all signals for a current batch of network inputs have been processed in the network. If not (“N” at step 35), operation reverts to step 31 for the next set of activation signals. When processing of the current batch is complete (“Y”) at decision step 35, operation proceeds to step 36. Here, DPU 11 controls the programming circuit 20 of memcomputing controller 19 to program the memristive devices storing each weight wij in all of the P1 arrays to update the stored weight in dependence on the accumulation value χw for that weight. The programming operation is described further below. Step 37 represents a further decision step in which DPU 11 determines if a predetermined convergence condition for the training operation has been achieved. (Convergence can be defined in various known ways, and the particular convergence condition is orthogonal to the operation described herein). If not, operation reverts to step 31 and the training iterations continue. The training operation terminates when convergence is detected at step 37.


In the above operation, the processing of signals by a synaptic layer S1 is performed in parallel by P1 arrays each storing a copy of the weights {w} for the layer. Updates to each individual weight w are calculated and accumulated for all arrays in a single accumulation value χw, and all arrays are periodically programmed to update the stored weights based on the accumulation values. This offers significantly faster processing by the synaptic layer, while providing a simple weight-update operation based on the accumulation values. A single, “shared” accumulation value is provided for each weight w, and activations and errors can be overwritten after accumulation operations, whereby weight updates can be managed with minimal memory overhead. The weight updates and accumulation values can be calculated with high-precision in DPU 11, and weight-update programming can be performed after a desired number of training iterations, e.g. after processing a batch of training examples. This offers an exceptionally efficient training operation using memristive device arrays for synaptic layer implementation.



FIG. 6 illustrates operation of the above method for the convolutional layer of FIG. 2. The P1 device arrays 18 each store the M by k²d weights-matrix W for the layer. P1 activation vectors x, corresponding to respective slices of the input volume, are supplied in parallel to the P1 arrays. The activations x are stored in memory 17. Once the P1 vectors of error signals δj have been computed for the next neuron layer on backpropagation, these error vectors δ are again supplied in parallel to the P1 arrays. The error signals δi for the preceding neuron layer are then computed in known manner from the weighted error sums output from the arrays as described above. A weight-update (Δwij)p=η(xiδj)p is then calculated for each weight wij in weights matrix W in each of the P1 arrays, and the accumulation value χij for weight wij is updated as χij=χij+Σp=1P1(Δwij)p=χij+ηΣp=1P1(xiδj)p. This process continues until all (n−k+1)² input slices of B input volumes for a batch of B≥1 training examples have been processed. The devices storing weights wij in all P1 arrays are then programmed to update the weights as described below, and operation continues for the next batch.



FIG. 7 illustrates the corresponding operation for a fully-connected network layer of FIG. 1. Here the P1 device arrays 18 each store an lj by li weights-matrix W. P1 activation vectors x, corresponding to respective input examples, are supplied in parallel to the P1 arrays during forward propagation, and the resulting P1 error vectors δ are backpropagated in parallel by the arrays. The weight-update calculation then accumulates the updates over all P1 arrays in the accumulation values χij=χij+ηΣp=1P1(xiδj)p as before. The process continues for a batch of B training examples, and the devices storing weights wij in all P1 arrays are then programmed to update the weights as described below. Operation then continues for the next batch.



FIG. 8 indicates more detailed steps of a programming operation to update the weights wij in embodiments of the invention. In step 40, for each weight wij in weight-set {w} for a layer S1, DPU 11 calculates a number q of programming pulses to be applied to the memristive devices storing that weight. The number q is dependent on the accumulation value χij for the weight wij. In this embodiment, q is calculated as


q = |χw| / ε

rounded towards zero, where ε is a predetermined step-size indicative of weight-change of a memristive device in response to application of one programming pulse. In step 41, DPU 11 instructs controller 19 to apply, via programming circuit 20, q programming pulses to the devices storing wij in all of the P1 arrays in parallel. In general, χij can be positive or negative, requiring positive or negative updates to the weights. Hence the programming pulses applied here can be potentiation pulses (to increase wij for positive weight updates) or depression pulses (to decrease wij for negative weight updates). Programming pulses, and the step size ε, can differ for potentiation and depression depending on characteristics of the memristive devices. Different programming pulses and step sizes ε can also be used for devices implementing different synaptic layers in some embodiments. In step 42, DPU 11 then updates the magnitude of the accumulation value χij for weight wij to |χij|−qε. Hence, the accumulation value χij is updated to χij−sign(χij)qε.
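A sketch of steps 40 to 42, assuming a single step-size ε for both potentiation and depression (the text allows them to differ); positive entries of the returned matrix stand for potentiation pulses and negative entries for depression pulses.

```python
import numpy as np

def program_weight_set(chi, eps):
    """Steps 40-42 of FIG. 8 for one weight-set {w}.

    chi: accumulation values chi_ij; reduced in place to the residuals
         chi_ij - sign(chi_ij) * q * eps after programming.
    eps: step-size per programming pulse.
    Returns signed pulse counts, applied identically to the devices
    storing each weight in all of the P1 arrays.
    """
    q = np.trunc(np.abs(chi) / eps)       # |chi_w| / eps, rounded towards zero
    pulses = np.sign(chi) * q             # polarity from the sign of chi
    chi -= pulses * eps                   # chi -> chi - sign(chi) * q * eps
    return pulses
```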


With the above programming scheme, all P1 arrays storing a weight-set {w} undergo the same programming sequence. After programming updates, residual weight-update information is retained in the updated accumulation values χij which are stored in high precision in DPU 11. No weight-update information is therefore lost on programming.


While all P1 arrays undergo the same programming sequence, the actual state of corresponding devices in different arrays can vary due to variabilities inherent in memristive devices and/or programming stochasticity. Devices can also saturate due to limited conductance range. This is a particular problem for differential memristive devices which are implemented by a differential configuration of memristive elements. In these devices, the stored weight is represented by the difference in conductance of two elements, one of which receives potentiation pulses and the other of which receives depression pulses. These devices require a frequent refresh operation to reprogram the stored weights. To address these issues, embodiments of apparatus 10 can perform a periodic calibration process to recalibrate weights in the P1 arrays. An example of this process is indicated in FIG. 9. In this embodiment, DPU 11 stores a reference set of digital weight values {W}, corresponding to the set of weights {w} for a synaptic layer S1, in memory 17 as indicated at step 45. In step 46, each time the stored weights wij are updated on programming the P1 arrays, the DPU updates the reference digital weights {W} in accordance with the update of each weight wij. In this example with the programming scheme of FIG. 8, each digital weight Wij is updated to Wij=Wij±qε, where the sign depends on whether potentiation (+) or depression (−) pulses are applied on programming the devices. Step 47 of FIG. 9 represents a periodic recalibration of the stored weights in the P1 arrays in dependence on the digital weight values. In this step, DPU 11 reads the reference weights {W}, and the devices storing each weight in all P1 arrays are programmed to the corresponding weight value W.
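Continuing the sketches above, steps 46 and 47 might look as follows; `pulses` is the signed pulse-count matrix returned by the programming sketch, and the direct conductance write during recalibration is an idealization (real devices would be programmed iteratively towards the target value).

```python
import numpy as np

def track_reference_weights(W_ref, pulses, eps):
    """Step 46: mirror each programmed update in the digital reference
    weights, W_ij -> W_ij +/- q * eps (sign given by the pulse polarity)."""
    W_ref += pulses * eps
    return W_ref

def recalibrate_arrays(arrays, W_ref):
    """Step 47: reprogram the devices in all P1 arrays to the reference
    weight values (idealized write on the CrossbarSim objects above)."""
    for array in arrays:
        array.G = W_ref.copy()
```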


Periodic recalibration inhibits alteration of the behavior of a network training algorithm due to divergence of weights in the different arrays. In embodiments of the invention employing differential memristive devices, the periodic recalibration can be used to implement the refresh operation for the differential devices, whereby additional refresh operations are unnecessary. Alternatively, refresh can be performed as usual, with recalibration performed less frequently than refresh.


Other recalibration schemes can be envisaged. For example, one of the P1 arrays can be used as a reference for the other arrays. The weights stored in this reference array can be periodically read and used as reference values for reprogramming the corresponding weights in the remaining (P1−1) arrays. Here, the recalibration process can implement a refresh operation by refreshing only the reference array, and then programming the other arrays based on the refreshed weights in the reference array. As a further example, reference weights could be calculated as the mean weights read-back from a plurality, e.g. all, of the P1 arrays. Calibration can be performed after a predetermined number of training batches, or in a pipelined fashion described further below.


The iterative cycles of the training operation can be pipelined in some embodiments. FIG. 10 illustrates pipelining in an embodiment implementing an ANN with three synaptic layers S1, S2 and S3 having respective weight-sets {wij}, {wjk} and {wkl}. Columns in the figure correspond to successive time slots t of the pipelined operation. Lines indicate processing operations for different sets of P1 input vectors x, where P1 is the number of arrays storing weights {wij} of the first synaptic layer S1. For each synaptic layer S2 and S3 after the first, two sets of arrays are provided to store the weight-set of that layer. Hence, two sets of P2 arrays store weight-set {wjk}, and two sets of P3 arrays store weight-set {wkl}. Forward propagation across a layer S1 is denoted by F1 in the figure; backpropagation across S1 is denoted by B1; the output error computation is denoted by Err; and the accumulation-value update process for weight-sets {wij}, {wjk} and {wkl} is denoted by χij, χjk and χkl respectively. With the pipelined operation shown, forward and backpropagation operations across layers S2 and S3 occur concurrently for different sets of input vectors. Hence, the signals to be weighted by each of these layers in forward and backpropagation operations are supplied by controller 19 to respective sets of the (P2 or P3) arrays for that layer. The signal vectors for each line are stored in DPU 11 and only overwritten when that line is finished (after the χij computation).


Pipelined operation continues through the iterative cycle of operations for a batch B of input vectors. At the end of the batch, the pipeline is flushed and the weight arrays are programmed to update all weights based on the resulting accumulation values. For layers S2 and S3 here, devices storing each weight in all arrays of the two sets of P2 or P3 arrays are programmed to update the weights.


The pipelined implementation reduces latency and increases throughput for the training operation. Different numbers P1 of arrays for different synaptic layers S1 can be provided in such embodiments to balance (i.e. equalize as far as possible) propagation delays between successive ANN layers.
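One plausible reading of this balancing, offered here only as an illustration and not prescribed by the text, is to allot arrays to each layer in proportion to its matrix-vector-multiplication load per training example:

```python
def balanced_array_counts(mvms_per_layer, total_arrays):
    """Split an array budget across layers in proportion to each layer's
    matrix-vector multiplications (MVMs) per training example, so that
    per-layer propagation delays are roughly equalized (may slightly
    exceed the budget because every layer gets at least one array)."""
    total = sum(mvms_per_layer)
    return [max(1, round(total_arrays * m / total)) for m in mvms_per_layer]

# Example: a convolutional layer needing (28 - 5 + 1)**2 = 576 MVMs per
# example, followed by two fully-connected layers needing one MVM each:
counts = balanced_array_counts([576, 1, 1], total_arrays=64)   # -> [64, 1, 1]
```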


In pipeline embodiments of the invention, recalibration of weight arrays for a synaptic layer can be performed cyclically during the pipelined operation. This is illustrated schematically in FIG. 11 for a layer S1 having a total of P arrays, including an additional array (i.e. in addition to the P1 or 2P1 arrays required for the pipelined implementation above) storing the weight-set for that layer. The arrays are denoted by p1, p2, . . . , P in the figure. In the operation shown, each of the P arrays is selected in turn for recalibration. The training operations are pipelined as described above using the remaining (P−1) arrays, while the selected array is recalibrated. Recalibration is based on reference values (which can be obtained in a variety of ways as described above) for the weight-set for the synaptic layer. When recalibration of the selected array is complete, this array is returned to the pipeline and the next array is recalibrated. Arrays are thus cyclically selected for recalibration while the pipeline runs continuously using (P−1) arrays. In this way, the pipeline need not be interrupted for recalibration and runs continuously until all vectors have been processed for the current batch.
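The rotation can be sketched as a simple schedule over the P array replicas, again reusing the idealized crossbar objects and an assumed reference weight-set W_ref:

```python
def cyclic_recalibration(arrays, W_ref, n_cycles):
    """FIG. 11: rotate one of the P arrays at a time out of the pipeline.

    In each cycle, P - 1 arrays keep serving the pipelined training while
    the selected array is reprogrammed to reference values, then rejoins.
    """
    P = len(arrays)
    for cycle in range(n_cycles):
        selected = cycle % P                         # next array to recalibrate
        active = [a for i, a in enumerate(arrays) if i != selected]
        # ... pipelined propagation runs here on the P - 1 `active` arrays ...
        arrays[selected].G = W_ref.copy()            # recalibrate selected array
```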


It will be seen that the above embodiments offer highly-efficient implementation of mixed-precision ANN training. Operation is accelerated via analog processing for synaptic layers in parallel memristive arrays 18, and an efficient weight-update procedure is based on accumulation values calculated by processing unit 11. Significantly faster training can be achieved compared to high-precision digital computing implementations, even for convolutional network layers where the speed-up should be similar to that for fully-connected layers.


Numerous changes and modifications can of course be made to the exemplary embodiments described. For example, while the accumulation values are stored in memory 17 of DPU 11 above, these values can be generated and stored in separate accumulators of a processing unit in other embodiments. Such accumulators can be implemented by any convenient digital or analog (e.g. capacitor-based) circuitry. Different programming schemes can also be envisaged. For example, in some embodiments a single (potentiation or depression) programming pulse can be applied to devices for which the magnitude of the accumulation value χij exceeds some defined threshold.


In general, memristive devices 26 can be implemented in any desired manner, e.g. based on one or a plurality of PCM cells and/or other memristive memory elements in various circuit arrangements. Any type of memristive memory element can be used, e.g. PCM or other resistive memory cells such as resistive RAM (RRAM, or ReRAM) cells including conductive bridge RRAM cells, oxide or metal-oxide RRAM cells, carbon RRAM cells, and magneto-resistive random access memory (MRAM) elements, ferroelectric random access memory (FeRAM) elements, optical memory elements, and circuit devices, e.g. CMOS circuits, comprising transistors, resistors, capacitors, and/or inductors, emulating behavior of memristive elements.


Steps of flow diagrams can be implemented in a different order to that shown and some steps can be performed in parallel where appropriate. In general, where features are described herein with reference to a method embodying the invention, corresponding features can be provided in apparatus embodying the invention, and vice versa.




The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.


The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.


Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.


Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instruction by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.


Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.


These computer readable program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.


The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.


The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special-purpose hardware and computer instructions.


The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments described herein.

Claims
  • 1. A method for training an artificial neural network, having a succession of neuron layers with interposed synaptic layers each storing a respective set of weights {w} for weighting signals propagated between its adjacent neuron layers, via an iterative cycle of signal propagation and weight-update calculation operations, the method comprising for at least one of said synaptic layers: providing a plurality P1 of arrays of memristive devices, each array storing the set of weights of that synaptic layer S1 in respective memristive devices; in said signal propagation operation, supplying respective subsets of the signals to be weighted by the synaptic layer S1 in parallel to the P1 arrays; in said weight-update calculation operation, calculating updates to respective weights stored in each of the P1 arrays in dependence on signals propagated by the neuron layers and, for each weight w in said set, accumulating the updates calculated for that weight in the P1 arrays in an accumulation value χw for that weight; and periodically programming the memristive devices storing each weight w in all of the P1 arrays to update the stored weight in dependence on the accumulation value χw for that weight.
  • 2. A method as claimed in claim 1 including: programming the memristive devices storing said weight w by applying a number q, dependent on said accumulation value χw, of programming pulses to those devices; and updating the magnitude of the accumulation value χw for that weight w to |χw|−qε, where ε is a predetermined step-size indicative of weight-change of a memristive device in response to application of said programming pulse.
  • 3. A method as claimed in claim 2 wherein
  • 4. A method as claimed in claim 1 including periodically recalibrating the stored weights in the P1 arrays in dependence on reference values for the set of weights for the synaptic layer S1.
  • 5. A method as claimed in claim 4 including determining said reference values in dependence on read-back weights from at least one of the P1 arrays.
  • 6. A method as claimed in claim 4 including: storing a set of digital weight values corresponding to the set of weights for the synaptic layer S1; updating the digital weight values in accordance with said update of each stored weight w on programming the memristive devices in the P1 arrays; and periodically recalibrating the stored weights in the P1 arrays in dependence on the digital weight values.
  • 7. A method as claimed in claim 4 wherein each memristive device in the P1 arrays comprises a differential memristive device, the method including performing the periodic recalibration of the stored weights in the P1 arrays to implement a refresh operation for the differential devices.
  • 8. A method as claimed in claim 1 including providing said plurality P1 of arrays of memristive devices for every synaptic layer in the network.
  • 9. A method as claimed in claim 1 wherein each said signal propagation operation comprises a forward propagation operation, in which activation signals are propagated from a first to a last neuron layer of the network in response to a network input, and a backpropagation operation in which error signals are propagated from the last neuron layer to the second neuron layer, the method including: for each said synaptic layer S1 after the first synaptic layer of the network, providing two sets of P1 arrays of memristive devices storing the set of weights of that synaptic layer; pipelining said iterative cycle of operations for a batch of network inputs such that signals to be weighted by each synaptic layer S1 in said forward propagation and backpropagation operations are supplied to respective sets of P1 arrays for that synaptic layer; and at the end of said iterative cycle of operations for the batch, programming the memristive devices storing each weight w in all arrays of the two sets of P1 arrays for each synaptic layer S1 to update the stored weight in dependence on said accumulation value χw for that weight.
  • 10. A method as claimed in claim 9 including providing different numbers P1 of arrays for different synaptic layers S1 to balance propagation delays between successive neuron layers of the network.
  • 11. A method as claimed in claim 9 including, for each synaptic layer S1: providing an additional array of memristive devices storing the set of weights for that layer; during said iterative cycle of operations for the batch, cyclically selecting one of the arrays for that synaptic layer, pipelining said operations using the remaining arrays for that synaptic layer, and recalibrating the stored weights in the cyclically selected array in dependence on reference values for the set of weights for the synaptic layer.
  • 12. Apparatus for implementing an artificial neural network, having a succession of neuron layers with interposed synaptic layers each storing a respective set of weights {w} for weighting signals propagated between its adjacent neuron layers, in an iterative training cycle of signal propagation and weight-update calculation operations, the apparatus comprising: a processing unit adapted to generate signals propagated by the neuron layers in said propagation operations and to perform said weight-update calculation operations; and a memcomputing unit storing said weights in respective memristive devices for receiving and weighting signals propagated by the neuron layers to implement the synaptic layers of the network, the memcomputing unit including a programming circuit for programming the memristive devices and, for at least one of the synaptic layers, a plurality P1 of arrays of memristive devices, each array storing the set of weights of that synaptic layer S1 in respective memristive devices; wherein the memcomputing unit is adapted, in said signal propagation operation, to supply respective subsets of the signals to be weighted by the synaptic layer S1 in parallel to the P1 arrays; wherein the processing unit is adapted, in said weight-update calculation operation, to calculate updates to respective weights stored in each of the P1 arrays in dependence on signals propagated by the neuron layers, and, for each weight w in said set, to accumulate the updates calculated for that weight in the P1 arrays in an accumulation value χw for that weight; and wherein the processing unit is adapted to control said programming circuit to periodically program the memristive devices storing each weight w in all of the P1 arrays to update the stored weight in dependence on the accumulation value χw for that weight.
  • 13. Apparatus as claimed in claim 12, the apparatus being adapted: to program the memristive devices storing said weight w in the P1 arrays by applying a number q, dependent on said accumulation value χw, of programming pulses to those devices; and to update the magnitude of the accumulation value χw for that weight w to |χw|−qε, where ε is a predetermined step-size indicative of weight-change of a memristive device in response to application of said programming pulse.
  • 14. Apparatus as claimed in claim 13 wherein
  • 15. Apparatus as claimed in claim 12, the apparatus being further adapted to periodically recalibrate the stored weights in the P1 arrays in dependence on reference values for the set of weights for the synaptic layer S1.
  • 16. Apparatus as claimed in claim 15, the apparatus being adapted to determine said reference values in dependence on read-back weights from at least one of the P1 arrays.
  • 17. Apparatus as claimed in claim 15 wherein: the processing unit is adapted to store a set of digital weight values corresponding to the set of weights for the synaptic layer S1, and to update the digital weight values in accordance with said update of each stored weight w on programming the memristive devices in the P1 arrays; and the apparatus is adapted to periodically recalibrate the stored weights in the P1 arrays in dependence on the digital weight values.
  • 18. Apparatus as claimed in claim 12 wherein the memcomputing unit includes said plurality P1 of arrays of memristive devices for every synaptic layer in the network.
  • 19. Apparatus as claimed in claim 12 wherein each said signal propagation operation comprises a forward propagation operation, in which activation signals are propagated from a first to a last neuron layer of the network in response to a network input, and a backpropagation operation in which error signals are propagated from the last neuron layer to the second neuron layer, and wherein, for each said synaptic layer S1 after the first synaptic layer of the network, the memcomputing unit includes two sets of P1 arrays of memristive devices storing the set of weights of that synaptic layer, the apparatus being adapted: to pipeline said iterative cycle of operations for a batch of network inputs such that signals to be weighted by each synaptic layer S1 in said forward propagation and backpropagation operations are supplied to respective sets of P1 arrays for that synaptic layer; andat the end of said iterative cycle of operations for the batch, to program the memristive devices storing each weight w in all arrays of the two sets of P1 arrays for each synaptic layer S1 to update the stored weight in dependence on said accumulation value χw for that weight.
  • 20. Apparatus as claimed in claim 19 wherein, for each synaptic layer S1, the memcomputing unit includes an additional array of memristive devices storing the set of weights for that layer, the apparatus being adapted, during said iterative cycle of operations for the batch, to cyclically select one of the arrays for that synaptic layer S1, to pipeline said operations using the remaining arrays for that synaptic layer, and to recalibrate the stored weights in the cyclically selected array in dependence on reference values for the set of weights for the synaptic layer.