The present disclosure relates in general to the field of neuromorphic systems based on crossbar array structures, and to methods of operating such neuromorphic systems. In particular, the present disclosure is directed to techniques to compensate for temporal conductance variations (such as conductance drifts) in electronic devices (e.g., phase-change memory (PCM) devices) of crossbar array structures of such systems.
Machine learning often relies on artificial neural networks (ANNs), which are computational models inspired by biological neural networks in human or animal brains. Such systems progressively and autonomously learn tasks by means of examples, and they have successfully been applied to, for example, speech recognition, text processing and computer vision.
Many types of neural networks are known, starting with feedforward neural networks, such as multilayer perceptrons, deep neural networks, and convolutional neural networks. Neural networks are typically implemented in software. However, a neural network may also be implemented in hardware, for example, as a resistive processing unit using a crossbar array structure or as an optical neuromorphic system. Such hardware systems may notably be used as external memory in memory-augmented neural networks (MANNs), whose basic idea is to enhance a neural network with an external memory. MANNs benefit from a powerful architecture combining the advantages of neural network data processing and persistent storage.
Computational memories based on crossbar arrays using electronic devices such as PCM devices can be used for ANN computations, for example, for training a deep neural network (DNN) and/or as inference accelerators for inferences with such networks. However, certain electronic devices (e.g., PCM devices) may suffer from temporal variations (e.g., drifts) in their conductance values, which may lead to errors in the computations. Being able to correct such variations in the conductance values with reliable precision may be desirable, especially for DNN inference accelerators.
In certain embodiments, a method of operating a neuromorphic system includes applying voltage signals across input lines of a crossbar array structure, the crossbar array structure including rows and columns interconnected at junctions via programmable electronic devices, the rows including the input lines for applying voltage signals across the electronic devices and the columns including output lines for outputting currents. The method also includes correcting, via a correction unit connected to the output lines, each of the output currents obtained at the output lines according to an affine transformation to compensate for temporal conductance variations in the electronic devices.
In other embodiments, a neuromorphic system includes a crossbar array structure that includes rows and columns interconnected at first junctions via electronic devices, wherein the rows include input lines for applying voltage signals across the electronic devices and the columns include output lines for outputting currents. The neuromorphic system also includes a correction unit connected to the output lines and configured to enable an affine transformation of currents outputted from each of the output lines. The neuromorphic system also includes a control unit configured to apply voltage signals across the input lines, and configured to operate the correction unit to correct each of the output currents obtained at the output lines according to the affine transformation, to compensate for temporal conductance variations in the electronic devices.
Systems and methods embodying the present invention will now be described, by way of non-limiting examples, and in reference to the accompanying drawings.
The drawings included in the present application are incorporated into, and form part of, the specification. They illustrate embodiments of the present disclosure and, along with the description, explain the principles of the disclosure. The drawings are only illustrative of certain embodiments and do not limit the disclosure.
It should be appreciated that elements in the figures are illustrated for simplicity and clarity. Well-understood elements that may be useful or necessary in a commercially feasible embodiment may not be shown for the sake of simplicity and to aid in the understanding of the illustrated embodiments. Technical features depicted in the drawings are not necessarily to scale. Similar or functionally similar elements in the figures have been allocated the same numeral references, unless otherwise indicated.
As discussed above, certain electronic devices (e.g., PCM devices) may suffer from temporal variations (e.g., drifts) in their conductance values, which may lead to errors in the computations. The drift in the conductance value of a PCM device limits its applications in DNN inference hardware accelerators, and potentially in DNN training hardware accelerators as well. One known drift correction technique corrects conductance drifts in a crossbar implementation of PCM devices by computing a single scalar factor, which is used to correct the drifted conductance values underlying all currents outputted from the crossbar array. The method is based on the observation that the conductance of an electronic device such as a PCM device has an exponential relation to time. Such methods may achieve a limited amount of error reduction when computing a matrix multiplication (e.g., in performing multiply-accumulate operations). However, a more precise method for correcting the error caused by the conductance drift is needed to maintain high DNN inference accuracies.
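For purposes of illustration only, the following Python sketch contrasts ideal column currents with currents corrected by a single global scalar, under a drift model G(t) = G(t0)·(t/t0)^(−ν) with device-to-device variability in the drift exponent ν. All array sizes, conductance values, and the statistics of ν are hypothetical; the non-zero residuals motivate the finer-grained affine correction introduced below.

```python
import numpy as np

rng = np.random.default_rng(0)
g0 = rng.uniform(5e-6, 25e-6, size=(8, 4))    # programmed conductances (S), hypothetical
v = rng.uniform(0.0, 0.2, size=8)             # read voltages on the 8 input lines

# Drift model G(t) = G(t0) * (t/t0)**(-nu), with device-to-device
# variability in the drift exponent nu (hypothetical statistics).
nu = rng.normal(0.05, 0.01, size=g0.shape)
g_t = g0 * (1e4) ** (-nu)                     # conductances at t = 1e4 (t0 = 1)

alpha = g0.mean() / g_t.mean()                # single global correction factor
i_ideal = v @ g0                              # column currents without drift
i_scalar = alpha * (v @ g_t)                  # currents after the scalar correction

# A non-zero residual remains: one scalar cannot absorb the state-dependent,
# per-device spread of nu, hence the affine correction of the embodiments.
print(np.abs(i_scalar - i_ideal))
```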
The present embodiments describe improved methods for error reduction, where the output currents are corrected according to an affine transformation. In addition, the methods of the present embodiments may be used to separately correct output currents of distinct subsets of output lines (or distinct output lines), or as part of a batch normalization procedure, as discussed below in detail.
An embodiment relates to a hardware-implemented method of operating a neuromorphic system. The method relies on a neuromorphic system, which comprises a crossbar array structure and a correction unit. The crossbar array structure includes rows and columns interconnected at junctions via programmable electronic devices. The rows include input lines for applying voltage signals across the electronic devices and the columns include output lines for outputting currents. The correction unit is connected to said output lines. This unit is configured to enable an affine transformation of currents outputted from each of said output lines, in operation. The method involves, on the one hand, applying voltage signals across the input lines and, on the other hand, correcting (via the correction unit) each of the output currents obtained at said output lines according to said affine transformation to compensate for temporal conductance variations in said electronic devices.
In embodiments, the correction of the output currents is achieved by programming the correction unit according to programmable parameters of the affine transformation. These parameters include a multiplicative coefficient γ and an additive parameter β. Thus, each of the output currents is corrected according to programmable parameters. The correction unit is preferably integrated in the crossbar array structure. It may for example be connected to the output lines of the crossbar array structure via second electronic devices at second junctions. In this case, each output current can be corrected via the correction unit by programming the latter based on signals coupled into each of the second junctions. One or more sets of values may possibly be computed for the programmable parameters. For example, the correction unit may be programmed according to two or more sets of values, so as to separately correct two or more sets of output currents, respectively. Interestingly, the crossbar array structure may further include one or more additional columns for outputting one or more reference currents. Such reference currents may then be used to compute the sets of values for the affine parameters.
In other embodiments, the crossbar array structure is used to execute a layer of nodes of an artificial neural network. Batch normalization parameters can advantageously be computed in view of performing a batch normalization of this layer. This can be achieved by scaling the multiplicative coefficient γ and the additive parameter β of the affine transformation according to the computed batch normalization parameters.
Other embodiments relate to a neuromorphic system. The system comprises a crossbar array structure and a correction unit as described above. The system further includes a control unit configured to apply voltage signals across the input lines. The control unit is further configured to operate the correction unit, in order to correct each of the output currents obtained at the output lines according to an affine transformation, so as to compensate for temporal conductance variations in said electronic devices. The correction unit may possibly be integrated in the crossbar array structure. So may the control unit, although it is preferably implemented as a digital processing unit.
In reference to the accompanying drawings, an exemplary neuromorphic system 1 according to embodiments is now described.
The system 1 includes a crossbar array structure 10, and a correction unit 20. In addition, in certain embodiments, the system 1 includes a signal generator unit (not shown), coupled to the crossbar array structure 10 and the correction unit 20, as well as a control unit 30, to apply voltage signals to the crossbar array structure 10 and operate the correction unit 20. The same control unit 30 may further be used to program electronic devices of the crossbar array structure 10, as discussed below.
In the present embodiments, the correction unit 20 is configured to enable an affine transformation of currents outputted from each of the output lines 18, in operation. That is, the correction performed by the unit 20 is an affine function, which operates an affine transformation of the currents outputted from the crossbar array structure 10. An affine transformation includes a linear transformation and a translation. Thus, the correction may be regarded as involving a multiplicative coefficient γ and an additive parameter β (i.e., any output current I gives rise to a corrected current γI+β). The coefficient γ and the constant term β are together referred to as the “affine transformation parameters.” Such parameters are scalars in the present embodiments, and may be set to any desired value.
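As a minimal sketch of this correction (hypothetical current and parameter values; not a definitive implementation), a single function covers both shared and per-line parameters through broadcasting:

```python
import numpy as np

def affine_correct(i_out, gamma, beta):
    """Corrected current per output line: I_bar = gamma * I + beta."""
    return np.asarray(gamma) * np.asarray(i_out) + np.asarray(beta)

# Shared scalar parameters for all output lines ...
print(affine_correct([1.2e-6, 0.8e-6, 1.5e-6], gamma=1.1, beta=-2.0e-8))
# ... or per-line parameters, via broadcasting.
print(affine_correct([1.2e-6, 0.8e-6, 1.5e-6],
                     gamma=[1.1, 1.05, 1.2], beta=[-2e-8, 1e-8, 0.0]))
```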
The present embodiments revolve around applying voltage signals (step S15) across the input lines 11, to produce output currents. This is normally achieved via a multiply-accumulate operation, as per the design of the crossbar array structure. In addition, the correction unit 20 is operated to correct at step S40 each of the output currents obtained at the output lines 18 according to the affine transformation, in order to compensate for temporal conductance variations in the electronic devices 12.
In operation, the voltage signals applied at step S15 are read voltages, which are lower-intensity signals compared to the programming voltages initially used to program at step S10 the electronic devices 12 of the neuromorphic system 1, to set the electronic devices to predefined conductance values. In certain embodiments, the devices 12 are assumed to have already been programmed at step S10 to given conductance states at the time of applying the step S15 read voltage signals. That is, certain of the present methods comprise an initial step S10 of programming the electronic devices 12 of the neuromorphic system 1 by applying programming voltage signals across the input lines 11. Read voltage signals are normally applied at step S15 after having programmed the devices 12 at step S10.
As discussed above, the correction unit 20 is operated to compensate for temporal conductance variations in the electronic devices 12. Such variations include conductance drifts (as may occur in PCM devices) but may also include conductance variations due to changes in the temperature of the neuromorphic device or electronic noise, for example. However, it should be appreciated that other sources of errors in the output currents may also be corrected with the present methods.
The present embodiments make it possible to retain accuracy in the values stored in the electronic devices 12 over longer time periods, compared to known methods of drift correction in similar applications, in particular when applied to crossbar-based inference accelerators. This approach is applicable to any crossbar implementation of electronic devices 12 that exhibit temporal conductance variations. Such devices 12 are typically memristive devices, such as phase-change memory (PCM) devices, resistive random-access memory (RRAM) devices, or static random-access memory (SRAM) devices. In other examples, flash cells may be used. Flash cells can be used for multi-level cell (MLC) storage, and they are not subject to conductance drifts. However, the conductance of a flash cell is temperature dependent. Therefore, gradual changes in the ambient temperature will alter the stored conductance values, which can also be compensated for according to the present methods. Conductance variations due to temperature changes occur in most memristive devices, including PCM and RRAM devices.
The memristive crossbar structure may use low precision (i.e., a single memristive element 12 may be involved at any junction of the array 10). More generally, however, each junction may include one or more memristive devices. Also, dual output lines 16, 17 (in upper columns) may be involved, yielding double junctions (one to store positive values, and another to store negative values).
In certain embodiments, the electronic devices 12 may be programmed S10 so that the electronic devices 12 store synaptic weights of an artificial neural network (ANN). In other embodiments, the neuromorphic system 1 forms part of a memory-augmented neural network system, to enhance a neural network with an external memory. In each of these embodiments, the output currents at the output lines 18 result from a multiply-accumulate operation, based on read voltage signals coupled at step S15 into the input lines 11 and values stored on the electronic devices 12, as described in detail below.
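A minimal sketch of such a multiply-accumulate readout follows, assuming (as one option evoked above) a differential encoding of signed weights over dual output lines; all names and values are hypothetical:

```python
import numpy as np

def crossbar_mac(v_read, g_pos, g_neg=None):
    """Multiply-accumulate of the crossbar: I_j = sum_i V_i * G_ij.

    With dual output lines (differential encoding of signed weights),
    the effective conductance at each junction is G+ - G-.
    """
    g_eff = g_pos if g_neg is None else g_pos - g_neg
    return np.asarray(v_read) @ g_eff

v = np.array([0.1, 0.2, 0.05])                       # read voltages, hypothetical
g_plus = np.array([[1e-5, 2e-5], [3e-5, 1e-5], [2e-5, 2e-5]])
g_minus = np.zeros_like(g_plus)
print(crossbar_mac(v, g_plus, g_minus))              # currents of the two output lines
```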
Initially, the correction at step S40 performed in respect of each of the output currents may be achieved by programming at steps S22, S24 the correction unit 20, according to programmable parameters of the affine transformation. As discussed earlier, such parameters include a multiplicative coefficient γ and an additive parameter β. Thus, each output current may be corrected at step S40 according to programmable parameters. The parameters used may be the same for all the output currents, as in some of the examples below.
In certain illustrated embodiments, the correction unit 20 is integrated in the crossbar array structure 10; it is connected to the output lines 18 via second electronic devices 21, 22 arranged at second junctions.
The “programming” of the first electronic devices 12 and the second electronic device 21, 22 is achieved by coupling suitable signals into relevant input lines. For instance, the first electronic devices 12 of the crossbar array structure 10 can be programmed by applying at step S10 programming voltage biases to the input lines. Similarly, the second electronic devices 21, 22 of the correction unit 20 are programmed by coupling signals into each of the second electronic devices 21, 22 at the second junctions.
In certain embodiments, the correction unit 20 may be operated based on known good values or based on pre-computed values for the affine transformation. In other embodiments, however, the present methods further comprise computing at step S26 suitable values for the parameters of the affine transformation. In particular, one or more sets of values may be computed over time for the programmable parameters, in which case the correction unit 20 may be programmed at step S24 according to such sets of computed values. Methods for how these sets of values may be computed are described in detail below.
As discussed above, the same affine parameters γ and β may be used for all output currents. In variants, two distinct sets of parameters may be computed, so as to separately correct two subsets of the output currents. The same principle can be extended to three or more subsets of output currents, or even to each individual current.
In one illustrated example, the crossbar array structure 10 further includes an additional column, connected to the rows at third junctions via third electronic devices, so as to form an additional output line 195 for outputting a reference current; the third electronic devices are arranged as two subsets 191, 192. In particular, the third junctions may be programmed so that the two subsets 191, 192 of the third electronic devices exhibit distinct electrical conductances. These conductances determine the reference current outputted by the additional output line 195. That is, the electrical conductances of the third electronic devices vary across the dual lines of the additional column. Two or more additional columns may be used to output respective reference currents, if necessary. Detailed examples of how to compute affine parameters based on the reference currents are described below.
In certain embodiments, the programming of the correction unit 20 is performed as follows. The programmable parameters of the correction unit are first initialized at step S22 to initial values (e.g., corresponding to an initial time t0). The initialization step S22 may be regarded as a first programming step of the correction unit 20. The correction unit parameters are typically initialized after having programmed S10 the electronic devices 12 of the crossbar array structure 10, but prior to computing at step S26 subsequent values for the programmable parameters (corresponding to a subsequent time t). The correction unit 20 is then reprogrammed at step S24 according to the set of values computed at step S26.
In certain embodiments, the process repeats. That is, several time-dependent sets of affine parameter values may be computed at step S26 over time, whereby the correction unit 20 is repeatedly programmed at step S24 according to each time-dependent set of values computed at step S26.
Algorithms according to certain embodiments are now discussed, which address particular ways of computing the affine transformation parameters.
Assume first that the output currents are individually corrected. In this case, the current Ij outputted from the jth column is corrected according to dedicated parameters γj and βj, as per equation 7 below:
$\bar{I}_j = \gamma_j \times I_j + \beta_j$   (7)
Due to the exponential drift factor $(t/t_0)^{-\nu}$ (see, equation 1), the spread of the conductance distribution changes over time, and that change can be corrected by γj. That is, the multiplicative coefficient γj can be used to compensate for the change in the spread of the conductance distribution over time. In addition, due to the conductance state dependence of, and variability in, ν (see, equation 2), the output current changes over time. This phenomenon can be corrected by the additive parameter βj. As discussed above, the values of γj and βj can be periodically calibrated (i.e., adjusted) at step S26, during the operation of the system 1.
The affine transformation parameters may first be initialized, as per the initialization step S22 discussed above.
In certain embodiments, the scaling factor γj may be calibrated. To update γj, the quantity Γj|t0 is first computed (see, equation 7.1.a below) right after programming all the devices 12 in the array 10:
$\Gamma_j\big|_{t_0} = \hat{I}_j^{V}\big|_{t_0}$   (7.1.a)
This quantity is then stored for future computations.
Next, another quantity Γj|tc is computed (see, equation 7.1.b below) at a time tc after programming all the devices in the array 10, and γj is then updated based on the ratio of Γj|t0 to Γj|tc, as given by equation 7.1.c below:

$\Gamma_j\big|_{t_c} = \hat{I}_j^{V}\big|_{t_c}$   (7.1.b)

$\gamma_j = \Gamma_j\big|_{t_0} \,/\, \Gamma_j\big|_{t_c}$   (7.1.c)
These operations can be repeated multiple times throughout the operation of the crossbar array 10 (i.e., at distinct time instants tc).
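A numerical sketch of this γj calibration follows, consistent with the ratio update of equations 7.1.a-7.1.c as reconstructed above; all current values are hypothetical:

```python
import numpy as np

# Gamma_j at t0: per-column currents for a fixed read voltage, stored right
# after programming (role of equation 7.1.a); values hypothetical.
gamma_resp_t0 = np.array([1.00e-6, 2.00e-6, 1.50e-6])

# Gamma_j at tc: the same measurement repeated after drift (equation 7.1.b).
gamma_resp_tc = np.array([0.82e-6, 1.70e-6, 1.31e-6])

# Ratio form of equation 7.1.c: gamma_j = Gamma_j|t0 / Gamma_j|tc.
gamma_j = gamma_resp_t0 / gamma_resp_tc
print(gamma_j)          # per-column multiplicative corrections (> 1 after drift)
```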
In certain embodiments, the additive factor βj may be updated. As described above, the additive factor βj compensates for errors in the expected value of the output current of the jth column. To update βj, the quantity Bj|t0 is first computed (see, equation 7.1.d below) after programming all the devices 12 in the array 10. This quantity is then stored for future computations. Then, additional quantities Bj|tc and BM+1|tc are computed at desired time instants tc (see, equations 7.1.e and 7.1.f below). Next, another quantity ωj is computed (see, equation 7.1.g below) for the desired time tc and stored for future computations. The additive factor βj can accordingly be updated, at any time t, as a product of ωj (computed at tc) and the current outputted from the (M+1)th column for a given input voltage at time t. Note that M = 3 in the illustrated examples.
The procedure can be repeated throughout the operation of the system 1, at several distinct desired time instants tc. For simplicity, all devices in a same line (i.e., in one of the two subsets 191, 192) of the (M+1)th column can be set to the same conductance value. For example, devices in the first line 191 can be set to a conductance state of 20 μS, while the remaining devices in the second line 192 can be set to a conductance state of 0 μS. Such a scheme may improve the classification accuracy on the CIFAR-10 benchmark by up to 10%, according to tests performed using a 32-layer deep convolutional neural network of the residual network family (referred to as ResNet-32).
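The βj update may be sketched as follows. Since equations 7.1.d-7.1.g are not reproduced above, the form of ωj below is an assumption, chosen so that βj cancels the residual offset of column j at time tc and then tracks further drift via the reference current of the (M+1)th column; all values are hypothetical:

```python
def omega_j(b_j_t0, b_j_tc, b_ref_tc):
    """Assumed form of equation 7.1.g: weight mapping the reference-column
    current onto the additive correction for column j."""
    return (b_j_t0 - b_j_tc) / b_ref_tc

def beta_j(omega, i_ref_t):
    """beta_j(t) = omega_j * I_{M+1}(t): product of omega_j (computed at tc)
    and the reference-column current measured at the readout time t."""
    return omega * i_ref_t

# Hypothetical quantities: column offsets at t0 and tc, and reference-column
# currents at tc and at a later readout time t.
w = omega_j(b_j_t0=1.0e-6, b_j_tc=0.9e-6, b_ref_tc=2.0e-6)
print(beta_j(w, i_ref_t=1.9e-6))
```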
The above explanations pertain to cases where output currents are individually corrected. In other embodiments, all output currents may be subject to a same correction, as per equation 7.2 below:
$\bar{I}_j = \gamma \times I_j + \beta$   (7.2)
Only two scalar coefficients γ and β are thus needed to correct the drift for all columns of the crossbar array 10 in that case. Note that the term β is set to a nonzero value (β ≠ 0). The values of γ and β can be periodically adjusted during the operation of the crossbar.
In other variants, subsets of the output currents are corrected separately, as described above. That is, an output current Ij is corrected according to parameters γk and βk. In particular, all columns may be divided into K separate groups (which do not necessarily correspond to contiguous subsets of columns in the array), where all columns of the kth group share the same parameters γk and βk. Thus, for each column j belonging to the kth group:
$\bar{I}_j = \gamma_k \times I_j + \beta_k$   (7.3)
It should be appreciated that the groups may be dynamically defined, if needed.
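A minimal sketch of this group-wise correction (equation 7.3) follows; the group assignments and parameter values are hypothetical:

```python
import numpy as np

def groupwise_correct(i_out, groups, gamma, beta):
    """Equation 7.3: I_bar_j = gamma_k * I_j + beta_k, where k = groups[j].

    groups maps each column index j to its group k; the groups need not
    correspond to contiguous subsets of columns.
    """
    groups = np.asarray(groups)
    return np.asarray(gamma)[groups] * np.asarray(i_out) + np.asarray(beta)[groups]

# Hypothetical: four columns split into K = 2 (non-contiguous) groups.
print(groupwise_correct([1.0e-6, 2.0e-6, 3.0e-6, 4.0e-6],
                        groups=[0, 1, 0, 1],
                        gamma=[1.10, 0.95], beta=[0.0, 5.0e-8]))
```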
In certain embodiments, the system 1 can be used as an external memory for a neural network or for executing a neural network (be it for training the network or for inference purposes). In certain embodiments, the electronic devices 12 are programmed at step S10 for the electronic devices 12 to store synaptic weights pertaining to connections to nodes of a single layer of an ANN. The output currents obtained at the output lines 18 are obtained according to a multiply-accumulate operation. This operation is based on read voltage signals applied at step S15 across the input lines 11 and values stored on each of the electronic devices 12 as per the programming at step S10 of the electronic devices 12.
In that respect, consider an example in which the crossbar array structure 10 includes M output lines 18, such that M output currents are obtained upon applying read voltages across the input lines 11.
One or more readout circuits (not shown) are coupled to read out the M output signals (electrical currents) obtained from the M output lines. For example, a first readout circuit may be needed to read currents as directly outputted from the crossbar array 10, in order to feed corresponding values to a digital correction unit 20. A second readout circuit may then be needed to exploit the corrected values produced by this unit 20. In other cases, a single readout circuit may be needed to read currents as compensated by the unit 20 for conductance variations, should this unit 20 be directly connected to the array 10. The readout may be carried out according to a multiply-accumulate operation, which takes into account voltage signals coupled into each of the input lines 11, as well as signals coupled into the second junctions 21, 22. As per the multiply-accumulate operations performed, values stored on each of the electronic devices 12 impact the readout. The multiply-accumulate operation typically causes the signals coupled into the input lines to be respectively multiplied by values stored on the devices 12 at the junctions.
The architecture of the array 10 may be replicated, such that several crossbar array structures 10 can be used to map successive layers of an ANN.
The synaptic weights as stored on the devices 12 are constant for inference purposes, whereas they need to be iteratively reprogrammed for learning purposes. The computation of the weight updates is normally performed by the controller 30, whereas the crossbar array structures 10 are used to perform all the basic operations needed for the ANN (i.e., matrix-vector products for the forward evaluation, products of transposed matrices and error gradient vectors for the backward evaluation, and vector outer products for updating the weights, which involve large vector-matrix multiplications). For the learning phase, the controller 30 may be used to re-program the devices 12, to alter the synaptic weights stored thereon according to any suitable automatic learning process. Thus, a system 1 as described above can be used for both training and inference purposes.
In certain embodiments, the present methods may compute batch normalization parameters to perform a batch normalization of the ANN layer implemented by the array 10. Batch normalization is achieved by scaling the multiplicative coefficient γ and the additive term β according to the computed batch normalization parameters. These parameters normally include batch normalization statistic parameters σj and μj and batch normalization coefficients Aj and Bj. Preferably, several time-dependent batch normalization statistic parameters are computed, while freezing the batch normalization coefficients (i.e., maintaining them constant).
In other words, the above conductance variation correction methods, which involve two coefficients γj and βj per column, may be combined with a batch normalization layer that has coefficients Aj, Bj and statistic parameters σj and μj. Values of Aj, Bj, μj and σj are obtained from the training of the ANN.
In general, a batch normalization layer normalizes a current Ij from the jth column to have zero mean and a unit standard deviation, and then applies a different scale and shift (equation 7.4.a):

$\mathrm{BN}(I_j) = A_j \, (I_j - \mu_j)/\sigma_j + B_j$   (7.4.a)
Applying this principle to the corrected currents Īj = γjIj + βj results in a new formulation of the multiplicative coefficients and the additive terms, respectively γ̃j and β̃j (see equation 7.4.b):

$\tilde{\gamma}_j = A_j \gamma_j / \sigma_j, \qquad \tilde{\beta}_j = A_j (\beta_j - \mu_j)/\sigma_j + B_j$   (7.4.b)
As discussed above, γ̃j and β̃j can be recomputed by recomputing γj and βj (see, equations 7.4.c and 7.4.d).
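The folding of the batch normalization layer into the affine correction may be sketched as follows, consistent with equations 7.4.a and 7.4.b as given above; all numerical values are hypothetical:

```python
import numpy as np

def fold_batchnorm(gamma, beta, a, b, mu, sigma):
    """Fold y = A*(I - mu)/sigma + B applied to I_bar = gamma*I + beta into a
    single affine map y = gamma_tilde * I + beta_tilde (equation 7.4.b)."""
    gamma, beta = np.asarray(gamma), np.asarray(beta)
    a, b, mu, sigma = map(np.asarray, (a, b, mu, sigma))
    gamma_tilde = a * gamma / sigma
    beta_tilde = a * (beta - mu) / sigma + b
    return gamma_tilde, beta_tilde

# Recalibrating gamma_j / beta_j for drift leaves A_j, B_j, mu_j, sigma_j
# untouched; refolding then yields updated gamma_tilde_j and beta_tilde_j.
print(fold_batchnorm(gamma=1.1, beta=-2e-8, a=0.5, b=0.1, mu=1e-6, sigma=3e-7))
```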
The above method may be used for conductance variation correction without having to recompute the batch normalization parameters. However, in order to ensure that optimal values of the batch normalization parameters are used over time in spite of conductance variations, the batch normalization statistic parameters σj and μj may be recomputed over time to compensate for electrical conductance variations. Optimal values of the batch normalization statistics σj, μj and coefficients Aj, Bj are obtained after convergence of the DNN training. During inferences, the batch normalization operation in DNNs may be expressed in the form of an affine equation 7.5.b by adjusting the layer's coefficients, as given by equations 7.5.c and 7.5.d below:

$\mathrm{BN}(I_j) = \gamma_j \, I_j + \beta_j$   (7.5.b)

$\gamma_j = A_j / \sigma_j$   (7.5.c)

$\beta_j = B_j - A_j \, \mu_j / \sigma_j$   (7.5.d)
The representation of equation 7.5.b, which combines all the batch normalization parameters into a single affine transformation, may be adopted as a common choice to implement batch normalization in hardware. With such a representation, γj and βj can be calibrated by updating batch normalization layer statistics σj and μj, while freezing the values of the coefficients Aj and Bj. Such a method may be applied to batch-normalized DNNs implemented with a crossbar array 10.
In DNNs with a batch normalization layer following the crossbar outputs, the batch normalization statistics σj and μj may be updated to compensate for errors caused by conductance variations in the devices 12. G2W represents a single scalar factor derived from the device conductance-to-synaptic-weight mapping, computed as the ratio of the maximum absolute weight value to the maximum mean conductance value. Initially, the values of the statistics σj and μj (as available after programming all the devices 12 a first time) are obtained from the converged DNN training. At any desired time tc, the mean and variance (see, equations 7.6.a and 7.6.c) of the outputs of the crossbar at every jth column are computed for P desired inputs to the crossbar. The P distinct inputs should be drawn from a distribution similar to that of the training examples originally used to train the DNN in software.
Statistics computed at time tc may be used to update μj and σj at time tc with a desired accumulation coefficient Q, as given by equations 7.6.b and 7.6.d, respectively. Then, γj and βj are recomputed at time tc, as given by equations 7.6.e and 7.6.f. This procedure may be repeated for desired sets of P distinct inputs to obtain optimal values for γj and βj. In practice, the accumulation coefficient Q may be sensitive to the value of P. Experiments show that, for residual-family DNNs, the relationship between Q and P may be represented by equation 7.6.g. However, Q can also be derived from other heuristics or optimization algorithms.
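The statistics update may be sketched as follows. Since the exact forms of equations 7.6.a-7.6.d are not reproduced above, an exponential-accumulation form with coefficient Q is assumed; the crossbar outputs are hypothetical:

```python
import numpy as np

def update_bn_stats(mu, var, crossbar_outputs, q):
    """Blend stored per-column statistics with fresh ones computed from the
    crossbar outputs for P inputs (assumed forms of equations 7.6.b / 7.6.d)."""
    out = np.asarray(crossbar_outputs)          # shape (P, M): P inputs, M columns
    mu_new, var_new = out.mean(axis=0), out.var(axis=0)
    mu = (1.0 - q) * np.asarray(mu) + q * mu_new
    var = (1.0 - q) * np.asarray(var) + q * var_new
    return mu, np.sqrt(var)                     # updated mu_j and sigma_j

# Hypothetical: P = 4 inputs on an M = 3-column crossbar, with Q = 0.3.
outputs = np.array([[1.0, 2.0, 3.0],
                    [1.2, 1.8, 3.1],
                    [0.9, 2.1, 2.9],
                    [1.1, 2.0, 3.0]])
print(update_bn_stats(mu=[1.0, 2.0, 3.0], var=[0.02, 0.02, 0.02],
                      crossbar_outputs=outputs, q=0.3))
```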
Batch normalization statistics updates may require a global drift compensation to be applied at the output of the crossbar array. In this case, along with the batch normalization statistics update, a drift compensation may also be applied (e.g., using a method discussed above). To calibrate the drift correction coefficients γj and βj, one may update the statistics of the batch normalization layer, namely σj and μj. At any desired time tc, the mean and variance (see, equations 7.6.a and 7.6.c) of the outputs of the crossbar at every jth column are computed for P desired inputs to the crossbar. Along with the G2W scaling, the drift compensation scale α is used, where α is computed and stored at a desired time td (see, equations 7.7.a and 7.7.b). When recomputing the mean and variance at time tc, the α computed at tdc is used, where tdc is the last time before tc at which α was computed. Statistics computed at tc are used to update σj and μj at time tc with a desired accumulation coefficient Q, as given by equations 7.6.d and 7.6.f. Then, γj and βj are recomputed at time tc using the updated values of σj and μj, as given by equations 7.6.g and 7.6.h. This procedure may then be repeated for some batches of P distinct inputs to obtain optimal values for γj and βj. For example, one batch of 200 (=P) images or 10 batches of 50 (=P) images usually works well in practice. Other combinations of batch count and images per batch may work just as well.
Other procedures may be contemplated for DNN implementations that have a normalization layer of the group normalization type. During the inference phase, the group normalization layer in DNNs may be expressed in the form of the drift correction equation by adjusting the layer's coefficients. Such a representation can potentially be adopted as a common choice to implement a group normalization layer in hardware. The output columns of the crossbar are divided into K groups, and the outputs of all columns in a same group are used to compute normalization statistics. First, for the kth group of columns, the mean and variance are computed from the outputs of all the columns in that group, as given by equations 8.1 and 8.2. Additionally, the drift compensation scaling factor α (as described above) may be used at the output of the crossbar. Then, γj and βj are computed using the statistics σk, μk and the trained parameters Aj and Bj, as given by equations 8.3 and 8.4. Note that σk and μk are computed for every input to the crossbar (unlike batch normalization). Accordingly, γj and βj may correct the drift error in the output of the crossbar, owing to the nature of the normalization introduced by the group normalization. This method can be applied to group-normalized DNNs implemented with crossbar arrays. Finally, layer normalization is a special case of group normalization with K = 1, and instance normalization is a special case of group normalization with K = M.
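A sketch of this group normalization variant follows. The per-column folds γj = Aj/σk and βj = Bj − Aj·μk/σk below are assumed forms of equations 8.3 and 8.4 (standard normalization algebra); the group assignments and values are hypothetical:

```python
import numpy as np

def groupnorm_correction(i_out, groups, a, b, eps=1e-12):
    """Per-input group statistics (roles of equations 8.1 / 8.2), folded into
    per-column parameters gamma_j, beta_j (assumed forms of 8.3 / 8.4)."""
    i_out, groups = np.asarray(i_out), np.asarray(groups)
    a, b = np.asarray(a), np.asarray(b)
    n_groups = groups.max() + 1
    mu = np.array([i_out[groups == k].mean() for k in range(n_groups)])
    sigma = np.array([i_out[groups == k].std() for k in range(n_groups)]) + eps
    gamma = a / sigma[groups]
    beta = b - a * mu[groups] / sigma[groups]
    return gamma * i_out + beta               # normalized, drift-corrected outputs

# Hypothetical: M = 4 columns in K = 2 groups; layer norm corresponds to
# K = 1 and instance norm to K = M, as noted above.
print(groupnorm_correction([1.0, 2.0, 3.0, 4.0], groups=[0, 0, 1, 1],
                           a=[0.5, 0.5, 0.5, 0.5], b=[0.0, 0.0, 0.0, 0.0]))
```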
Referring now to the neuromorphic system 1 in more detail: as evoked above, the system comprises a crossbar array structure 10, a correction unit 20, and a control unit 30.
The crossbar array structure 10 includes rows and columns 16-18 interconnected at first junctions via programmable electronic devices 12. The rows include input lines 11 for applying voltage signals across the electronic devices 12 and the columns 16-18 form output lines for outputting currents, in operation. The correction unit 20 is connected to the output lines 18 and otherwise configured to enable an affine transformation of currents outputted from each output line 18. The control unit 30 is generally configured to apply read voltage signals across the input lines 11 and operate the correction unit 20 to correct each output current obtained from the output lines, as per application of the read voltage signals. The correction is operated according to said affine transformation, so as to compensate for temporal conductance variations in said electronic devices 12. And as explained above, the control unit 30 may further be configured to program the electronic devices 12, this time by applying programming voltage signals across the input lines 11.
In certain embodiments, the control unit 30 is further configured to program the correction unit 20 according to programmable parameters of the affine transformation (i.e., including a multiplicative coefficient γ and an additive term β), so that each output current may be corrected S40 according to said programmable parameters, in operation.
In addition, the crossbar array structure 10 may include one or more additional columns, which are connected to said rows at third junctions via third electronic devices. As explained above, these additional columns may be used to obtain reference currents, and the control unit 30 may thus compute said sets of values based on said reference currents. In particular, the control unit 30 may be used to program said third junctions so as for at least two subsets of the third electronic devices to exhibit at least two distinct electrical conductances, respectively.
In certain embodiments, the columns 16-18 include at least two sets of output lines for outputting at least two sets of output currents, respectively, and the control unit 30 is further configured to compute at least two sets of values for said programmable parameters and program the correction unit 20 according to said at least two sets of values, so as to separately correct said at least two sets of output currents according to respective ones of said at least two sets of values, in operation. Also, the control unit 30 may be configured to compute M sets of values for said programmable parameters, so that the M output currents may be corrected S40 according to respective ones of the M sets of values, in operation.
The control unit 30 may be configured to program the correction unit 20 by initializing the programmable parameters after having programmed S10 the electronic devices 12, but prior to computing a set of values for the programmable parameters. Next, at a subsequent time, the unit 30 may reprogram the correction unit 20 according to the set of computed values. The control unit 30 may compute several, time-dependent sets of values for the programmable parameters, and thus repeatedly program the correction unit 20 according to the time-dependent sets of values, in operation.
The descriptions of the various embodiments have been presented for purposes of illustration and are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
As used herein, a “module” or “unit” may include hardware (e.g., circuitry, such as an application specific integrated circuit), firmware and/or software executable by hardware (e.g., by a processor or microcontroller), and/or a combination thereof for carrying out the various operations disclosed herein. For example, a correction unit may include one or more integrated circuits configured to enable an affine transformation of currents outputted from each of the output lines, while a control unit may include circuitry configured to apply voltage signals across the input lines (e.g., a signal generator) and operate the correction unit to correct each of the output currents obtained at the output lines according to the affine transformation, to compensate for temporal conductance variations in the electronic devices.