The present invention generally relates to neuromorphic computing and more particularly to a mixed-signal neuron architecture that can mitigate an impairment.
A neuron can be implemented in a digital scheme or an analog scheme. Digital neurons can offer high accuracy but suffer high power consumption, since the large number of transistors required can lead to a very high static leakage current. Analog neurons, on the other hand, can be highly power efficient but are by nature limited in accuracy due to various impairments arising in the manufacturing process. For instance, in the absence of impairments, the output of a neuron should be zero when all of its inputs are zero. Due to device mismatch in the manufacturing process, however, in practice the output may not be zero. This leads to an impairment known as “offset.” Analog neurons can be a viable choice for applications that do not demand high accuracy, provided the impairments pertaining to their analog nature can be mitigated.
What is desired is a mixed-signal neuron architecture that can mitigate an impairment, such as offset, pertaining to its analog nature.
In an embodiment, an artificial neural network comprises: a set of gain cells configured to receive a set of input voltages and output a set of local currents in accordance with a set of control signals, respectively, wherein each respective control signal in said set of control signals comprises a respective set of binary signals; a global summing circuit configured to sum said set of local currents into a global current; and a load configured to convert the global current into an output voltage; wherein: each respective gain cell in said set of gain cells comprises a respective set of voltage-to-current converters configured to convert a respective input voltage associated with said respective gain cell into a respective set of interim currents, a respective set of multipliers configured to multiply said respective set of interim currents with the respective set of binary signals pertaining to the respective control signal associated with the respective input voltage to output a respective set of conditionally inverted currents, and a respective local summing circuit configured to sum said respective set of conditionally inverted currents into a respective local current in said set of local currents.
In an embodiment, a method for an artificial neural network comprises: receiving a set of input voltages; converting a respective input voltage in said set of input voltages into a respective set of interim currents using a voltage-to-current conversion; multiplying said respective set of interim currents by a respective set of binary signals to establish a respective set of conditionally inverted currents; summing said respective set of conditionally inverted currents into a respective local current; summing all respective local currents into a global current; and converting the global current into an output voltage using a load circuit.
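For illustration only, the method above can be modeled behaviorally in software; the Python sketch below uses hypothetical names and arbitrary values, with an identity function standing in for the load:

```python
# Behavioral sketch of the method (illustrative only; all names and
# values are hypothetical, not part of this disclosure).

def neuron_output(x, E, G, load=lambda s: s):
    """x: M input voltages; E: M lists of N binary signals (+1/-1);
    G: N voltage-to-current conversion gains; load: monotonic I-to-V function."""
    I_global = 0.0
    for x_i, E_i in zip(x, E):
        interim = [g * x_i for g in G]                    # voltage-to-current conversion
        inverted = [e * c for e, c in zip(E_i, interim)]  # conditional inversion
        I_global += sum(inverted)                         # local summing, then global summing
    return load(I_global)                                 # load converts current to voltage

y = neuron_output(x=[0.5, -0.25], E=[[+1, -1], [+1, +1]], G=[2.0, 1.0])
# y == -0.25
```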
The present invention relates to neuromorphic computing for deep learning based inference applications. While the specification describes several example embodiments of the invention considered favorable modes of practicing the invention, it should be understood that the invention can be implemented in many ways and is not limited to the particular examples described below or to the particular manner in which any features of such examples are implemented. In other instances, well-known details are not shown or described to avoid obscuring aspects of the invention.
Persons of ordinary skill in the art understand terms and basic concepts related to microelectronics that are used in this disclosure, such as “signal,” “voltage,” “current,” “CMOS (complementary metal oxide semiconductor),” “PMOS (P-channel metal oxide semiconductor) transistor,” “NMOS (N-channel metal oxide semiconductor) transistor,” “current source,” and “bias.” Terms and basic concepts like these are apparent to those of ordinary skill in the art and thus will not be explained in detail here. Those of ordinary skill in the art can also recognize the schematic symbols of PMOS and NMOS transistors, and identify the “source,” “gate,” and “drain” terminals thereof.
A logical signal is a signal of two states: a first logical state (or a “high” state), and a second logical state (or a “low” state). When a logical signal is said to be high (low), it means it is in the “high” (“low”) state, and it occurs when the logical signal is sufficiently above (below) a threshold level that is called a “trip point.” Every logical signal has a trip point, and two logical signals may not necessarily have the same trip point.
The present disclosure is presented from an engineering perspective (as opposed to a theoretical perspective). For instance, “X is equal to Y” means: “a difference between X and Y is smaller than a specified engineering tolerance.” “X is substantially smaller than Y” means: “a ratio between X and Y is smaller than a specified engineering tolerance.” “X is substantially zero” means “an absolute value of X is less than a specified engineering tolerance.”
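For illustration, these engineering-tolerance definitions can be captured as simple predicates; the tolerance value below is arbitrary:

```python
# Engineering-tolerance comparisons per the definitions above.
# The tolerance value is illustrative only.
TOL = 1e-3

def is_equal(x, y, tol=TOL):
    return abs(x - y) < tol           # "X is equal to Y"

def is_substantially_zero(x, tol=TOL):
    return abs(x) < tol               # "X is substantially zero"

def is_substantially_smaller(x, y, tol=TOL):
    return abs(x / y) < tol           # "X is substantially smaller than Y"
```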
A functional block diagram of an artificial neuron 200 in accordance with an embodiment of the present invention is shown in the accompanying drawings. Neuron 200 receives a set of M input voltages x1, x2, x3, . . . , xM and outputs an output voltage y. Each input voltage xi is converted by a respective gain cell (211, 212, 213, . . . , 219) into a respective local current Ii in accordance with a respective control signal Ei; a global summing circuit sums the local currents into a global current Is, and a load 250 converts Is into the output y. The local current Ii can be mathematically modeled by:
Ii=Wi(Ei)xi, (1)
for i=1, 2, 3, . . . , M. Here, Wi(Ei) denotes the weight determined by Ei.
The global current Is can be mathematically modeled by:
Is=I1+I2+I3+ . . . +IM. (2)
The output y can be modeled by
y=f(Is), (3)
where f (⋅) denotes the monotonic function provided by the load 250.
In a mixed-signal embodiment in accordance with the present disclosure, Ei is represented by a set of N binary signals denoted by Ei,1, Ei,2, Ei,3, . . . , Ei,N, such that the Wi(Ei) term can be represented by the following equation:
Wi(Ei)=Ei,1G1+Ei,2G2+Ei,3G3+ . . . +Ei,NGN. (4)
Here, Gj (for j=1, 2, 3, . . . , N) denotes a voltage-to-current conversion gain. Substituting equation (4) into equation (1), we obtain
Ii=(Ei,1G1+Ei,2G2+Ei,3G3+ . . . +Ei,NGN)xi, (5)
which can be rewritten as
Ii=Ii,1+Ii,2+Ii,3+ . . . +Ii,N, (6)
where
Ii,j=Ei,jGjxi. (7)
Let
Ci,j≡Gjxi, (8)
then equation (7) can be rewritten as
Ii,j=Ei,jCi,j (9)
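As a sanity check on equations (1) and (4) through (9), computing Ii directly from the weight Wi(Ei), or by summing the conditionally inverted interim currents, must give the same result. The Python sketch below uses arbitrary illustrative gains and signals, not values from this disclosure:

```python
# Check that W_i(E_i)*x_i equals the sum of conditionally inverted currents.
G = [8e-6, 4e-6, 2e-6, 1e-6]   # conversion gains G_j (arbitrary illustrative values)
E_i = [+1, -1, -1, +1]         # binary signals E_i,j
x_i = 0.3                      # input voltage (arbitrary)

W_i = sum(e * g for e, g in zip(E_i, G))       # equation (4): weight from binary signals
I_direct = W_i * x_i                           # equation (1): local current from the weight

C = [g * x_i for g in G]                       # equation (8): interim currents C_i,j
I_sum = sum(e * c for e, c in zip(E_i, C))     # equations (9) and (6): sum of inverted currents

assert abs(I_direct - I_sum) < 1e-18           # the two factorings agree
```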
A schematic diagram of a gain cell 300 that can be instantiated to embody gain cells 211, 212, 213, . . . , 219 is shown in the accompanying drawings. Gain cell 300 comprises a set of N voltage-to-current converters (V2I) 311, 312, 313, . . . , 319, a set of N multipliers 321, 322, 323, . . . , 329, and a local summing circuit, in accordance with equations (7) through (9).
In an embodiment, differential signaling is used, wherein: x1 comprises two voltages, x1+ and x1−, and a value of x1 is represented by a difference between x1+ and x1−; likewise, Ii comprises two currents, Ii+ and Ii−, and a value of Ii is represented by a difference between Ii+ and Ii−; and so on.
A schematic diagram of a V2I 400 that can be instantiated to embody V2Is 311, 312, 313, . . . , 319 of gain cell 300 is shown in the accompanying drawings.
A schematic diagram of a multiplier 500 that can be instantiated to embody multipliers 321, 322, 323, . . . , 329 of gain cell 300 is shown in the accompanying drawings.
NMOS transistors 531 and 532 form a first demultiplexer that directs Ci,j+ into either Ii,j+ (when Ei,j is 1) or Ii,j− (when Ei,j is −1), while NMOS transistors 533 and 534 form a second demultiplexer that directs Ci,j− into either Ii,j− (when Ei,j is 1) or Ii,j+ (when Ei,j is −1). As a result, Ii,j is either equal to Ci,j (when Ei,j is 1) or −Ci,j (when Ei,j is −1), thus fulfilling equation (9). In other words, Ii,j is conditionally inverted from Ci,j.
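For illustration only, the steering action of the two demultiplexers can be modeled in software; the function below is a hypothetical behavioral sketch, not the transistor-level circuit:

```python
# Behavioral model of the demultiplexers in multiplier 500: the binary
# signal either passes the differential current straight through or swaps
# its two legs, which inverts the value represented by the pair.
def conditionally_invert(c_plus, c_minus, e):
    """c_plus/c_minus: differential interim current pair; e: binary signal (+1/-1)."""
    if e == 1:
        return c_plus, c_minus      # pass through: I_i,j = +C_i,j
    return c_minus, c_plus          # legs swapped: I_i,j = -C_i,j

i_p, i_n = conditionally_invert(3e-6, 1e-6, -1)
# represented value flips sign: (3e-6 - 1e-6) becomes (i_p - i_n) = -2e-6
```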
An embodiment of a load 600 that can be used to embody load 250 of neuron 200 is shown in the accompanying drawings.
The present disclosure offers some advantages. First, a weight Wi(Ei) of a gain cell, as embodied by gain cell 300, is established by the binary signals Ei,1, Ei,2, Ei,3, . . . , Ei,N in accordance with equation (4), and thus can be programmed digitally with high accuracy while the signal path remains analog and power efficient.
In an embodiment, neuron 200 further comprises calibration circuitry for mitigating the offset impairment, including a finite state machine (FSM) 241 and a zero-forcing circuitry 243. During calibration, FSM 241 issues a zero-forcing command ZF, in response to which the zero-forcing circuitry 243 forces the inputs x1, x2, x3, . . . , xM to zero; since the output y should be zero when all inputs are zero, any nonzero output observed during calibration is attributable to offset and can be trimmed accordingly.
After the calibration is concluded, FSM 241 withdraws the zero-forcing command ZF, and the zero-forcing circuitry 243 stops its zero-forcing action on x1, x2, x3, . . . , xM. Moreover, any value of bias and D can be input to the XFER 260 to produce the output value D′. The transfer function used in XFER 260 can be any function of ZF, bias, and D. A signal x1, which comprises x1+ and x1− in a differential signaling embodiment, can be forced to zero by turning on a switch that connects x1+ to x1−, forcing them to be equal. The sign of y, which comprises y+ and y− in a differential signaling embodiment, can be detected by a comparator that compares y+ with y−. Digital-to-analog converters are well known in the prior art and thus are not described in detail here.
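For illustration, one plausible software model of such a calibration is sketched below. The successive-approximation search and the trim-DAC interface are assumptions made for this sketch; this disclosure does not specify the search algorithm:

```python
# Hypothetical offset-trimming loop: with the inputs zero-forced, binary-search
# a trim-DAC code until the comparator sees the output cross zero.
def calibrate(read_output_sign, set_trim_code, bits=8):
    """read_output_sign(): returns +1/-1 sign of y while inputs are zero-forced.
    set_trim_code(code): applies a DAC code that subtracts offset from y."""
    code = 0
    for b in reversed(range(bits)):          # successive approximation, MSB first
        trial = code | (1 << b)
        set_trim_code(trial)
        if read_output_sign() > 0:           # output still above zero: keep the bit
            code = trial
    set_trim_code(code)                      # leave the best code applied
    return code
```

A simulated neuron with an offset of 100 trim steps converges to code 99, leaving a residual offset within one DAC step.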
In an embodiment, values of Ei,j (for i=1, 2, 3, . . . , M and j=1, 2, 3, . . . , N) are stored in a memory device and can be retrieved when needed.
As illustrated by a flow diagram shown in the accompanying drawings, a method in accordance with an embodiment of the present disclosure comprises the steps recited above: receiving a set of input voltages; converting each input voltage into a set of interim currents; conditionally inverting the interim currents in accordance with a set of binary signals; summing the conditionally inverted currents into local currents; summing the local currents into a global current; and converting the global current into an output voltage.
Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.
U.S. Patent Documents:
Number | Name | Date | Kind |
---|---|---|---|
6411098 | Laletin | Jun 2002 | B1 |
9209809 | Geary | Dec 2015 | B1 |
9747546 | Ross et al. | Aug 2017 | B2 |
10389375 | Fick | Aug 2019 | B1 |
10528643 | Choi | Jan 2020 | B1 |
20050240647 | Banihashemi | Oct 2005 | A1 |
20060094361 | Darabi | May 2006 | A1 |
20130144821 | Heliot | Jun 2013 | A1 |
20180121796 | Deisher et al. | May 2018 | A1 |
20180253643 | Buchanan et al. | Sep 2018 | A1 |
20180285727 | Baum et al. | Oct 2018 | A1 |
20190035449 | Saida | Jan 2019 | A1 |
20190072633 | Steuer | Mar 2019 | A1 |
20190080230 | Hatcher | Mar 2019 | A1 |
20190213234 | Bayat | Jul 2019 | A1 |
20200134419 | Manipatruni | Apr 2020 | A1 |
20200167402 | Newns | May 2020 | A1 |
Foreign Patent Documents:
Number | Date | Country |
---|---|---|
201830296 | Aug 2018 | TW |
201833824 | Sep 2018 | TW |
Other Publications:
Entry |
---|
TW Office Action dated Jun. 3, 2021 in Taiwan application (No. 109109867). |
Chatterjee et al., “Exploiting Inherent Error-Resiliency of Neuromorphic Computing to Achieve Extreme Energy-Efficiency through Mixed-Signal Neurons,” arXiv preprint arXiv:1806.05141, Oct. 22, 2018. |
Prior Publication Data:
Number | Date | Country |
---|---|---|
20200320373 A1 | Oct 2020 | US |