The present disclosure relates to a method and apparatus for training memristive learning systems with stochastic learning algorithms.
Recently, significant advances have been made in machine learning algorithms that impact several application domains. It is imperative to map these learning algorithms to physical hardware in order to achieve orders-of-magnitude improvements in performance, energy efficiency, and form factor. Mapping to hardware requires redesign and optimization of the algorithms at different levels of abstraction.
Memristive learning systems (MLS) are designed using hybrid CMOS/memristor technologies, sometimes referred to in the literature as neuromemristive systems or neuromorphic systems. An MLS is an adaptable electronic system composed of at least one memristor. Memristors' behavioral similarity to biological synapses enables them to be incorporated into MLS as hardware synapse circuits. These systems are attractive for the design of power and area-constrained hardware neural network architectures. These architectures offer real-time parallel computation and high learning capacity for applications such as pattern classification, image and video processing, autonomous surveillance, and medical diagnostics, among others.
One of the most difficult challenges to overcome when designing an MLS is identifying a training algorithm which effectively minimizes a cost function and also has a low hardware overhead. Software implementations of artificial neural networks have many sophisticated algorithms available, such as backpropagation, resilient backpropagation, Levenberg-Marquardt, genetic algorithms, etc. At the core of many of these algorithms is the computation of error gradients and other complex operations in the space of the system's parameters. Current implementations of such operations in an MLS are deterministic and expensive due to the design complexity, area overhead, and power requirements of the associated analog and digital circuitry.
The art lacks a method and apparatus for training MLSs which leverages stochastic approximations to deterministic training algorithms in order to reduce the design complexity, area, and power overhead of these systems.
In accordance with an aspect of the present disclosure, there is provided a method for training a memristive learning system including identifying a task that the memristive learning system is to be trained to perform; identifying a cost function; choosing a deterministic update equation; deriving a stochastic update equation from the deterministic update equation; designing a training system hardware to implement the derived stochastic update equation; coupling the memristive learning system to the training system; and training the memristive learning system.
In accordance with an aspect of the present disclosure, there is provided a training system apparatus designed to implement stochastic learning algorithms.
The present disclosure encompasses a method and apparatus for training memristive learning systems (MLSs), sometimes referred to in the literature as neuromemristive systems or neuromorphic systems. An MLS is an adaptive electronic system composed of at least one memristor, which is an electronic circuit device with a top electrode layer, a bottom electrode layer, and a switching layer that follows a state-dependent Ohm's law (see
i_m(t) = G_m(γ)ν_m(t),  (1)

where i_m(t) is the current flowing through the memristor, G_m(γ) is the memristor's conductance, γ is a state variable, ν_m(t) is the voltage applied across the memristor, and t is time. Upon application of a voltage, a memristor will "switch" according to

dγ/dt = χ(γ, ν_m(t), t),  (2)
where χ is a switching function governing the switching dynamics. The precise form of χ and the physical meaning of γ depend on the specific materials used for the three memristor layers. A number of different materials and material combinations have been explored for this purpose; see Kuzum, D., Yu, S., and Wong, H.-S. P. (2013), "Synaptic Electronics: Materials, Devices and Applications," Nanotechnology 24(38), which is hereby incorporated by reference in its entirety. The top and bottom electrodes are typically implemented with metals such as copper, aluminum, silver, and gold. Implementation of the switching layer has been demonstrated with transition metal oxides such as titanium dioxide and tantalum oxide, other metal oxides, and chalcogenide materials such as germanium selenide. For a specific material choice, γ may be related to the tunneling barrier width, the cross-sectional area of metallic filaments, the doping front location, etc. In addition, the switching function χ may be linear, exponential, probabilistic, etc., depending on the material choice. Critically, however, every χ must satisfy χ(γ, ν_m=0, t)=0, meaning that the device is non-volatile (i.e., it will retain its state if no voltage is applied). It is this non-volatility, in conjunction with memristors' small footprint/high density, low power consumption, and ability to closely couple memory and processing, that makes memristors an attractive technology choice for hardware learning systems.
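As an illustration of the state-dependent Ohm's law and the switching rule, the following sketch simulates a memristor using a hypothetical threshold-type, linear-drift model. The conductance bounds, threshold voltage, and rate constant are assumed values for illustration only; they are not device parameters from the disclosure.

```python
# Hypothetical linear-drift memristor model illustrating
# i_m = G_m(gamma) * v_m  (Equation (1)) and
# d(gamma)/dt = chi(gamma, v_m, t)  (Equation (2)).
G_OFF, G_ON = 1e-6, 1e-3   # assumed min/max conductances (siemens)
V_TH = 0.5                 # assumed switching threshold magnitude (volts)
MU = 10.0                  # assumed switching-rate constant

def conductance(gamma):
    """Conductance as a function of the state variable gamma in [0, 1]."""
    return G_OFF + gamma * (G_ON - G_OFF)

def chi(gamma, v_m):
    """Threshold-type switching function: zero below threshold (non-volatile)."""
    if abs(v_m) <= V_TH:
        return 0.0                          # chi(gamma, v_m=0) = 0: state retained
    rate = MU * (abs(v_m) - V_TH) * (1 if v_m > 0 else -1)
    # Saturating drift: the state cannot leave [0, 1].
    return rate * (1 - gamma) if v_m > 0 else rate * gamma

def step(gamma, v_m, dt=1e-3):
    """Advance the state one Euler step; return (new_gamma, memristor current)."""
    gamma = min(1.0, max(0.0, gamma + chi(gamma, v_m) * dt))
    return gamma, conductance(gamma) * v_m  # Equation (1)

gamma = 0.0
for _ in range(1000):            # apply a train of +1 V write pulses
    gamma, i_m = step(gamma, 1.0)
# gamma drifts toward 1, so the conductance approaches G_ON
```

A sub-threshold read voltage (e.g., 0.2 V here) senses the current without disturbing the state, which is what makes the device usable as a non-volatile synapse.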
Let Gm be the set of all memristor conductance values in the MLS. Then, the MLS's overall functionality is defined by a hypothesis function h as
ŷ=h(u, x, t; θ(Gm)), (3)
where ŷ is a vector of system outputs, u is a vector of system inputs, x is a state vector that represents the current state of the system, t is time, and θ(Gm) is a parameter vector, each component of which depends on zero or more members of Gm (see
An embodiment of a method for training an MLS is illustrated in
The present method includes identifying a cost function J(θ), which quantifies the MLS's performance on the identified task(s). Various cost functions are defined in the literature for different tasks. For classification tasks, cross entropy cost functions are usually used, which quantify the error between the MLS's class prediction and the real class of an input (e.g., an image). Other tasks, such as regression, may use a mean square error cost function to quantify how well the MLS's hypothesis function fits a set of data. Other cost functions may take additional information into account, such as the complexity of the hypothesis function, by including regularization terms that limit the size of parameter values.
The present method includes identifying one or more deterministic update equations that set (either iteratively or in one step) θ(Gm) to a value that approximately minimizes, in either a global or local minimum sense, the chosen cost function. A particularly well-suited approach in the context of the current invention is gradient descent, where each parameter value is adjusted in the opposite direction of the cost gradient:

Δθ_i = −α ∂J/∂θ_i,  (4)

which can be written as

Δθ_i = −α x_1* x_2* ⋯ x_q*,  (5)

where α is called the learning rate and each x_i* is shorthand for one of the partial-derivative factors obtained from the chain-rule expansion of the gradient. The value of α is chosen heuristically through trial and error. Generally, large α values may prevent the training process from converging, while very small values of α will cause training to be very time consuming. Note that other approaches to minimization of the cost function are also applicable, including but not limited to simulated annealing, genetic algorithms, Moore-Penrose inverse methods, and the like. Equation (5) is referred to as the deterministic update equation, and during the learning process, the update equation may be applied one or more times until the cost function reaches an acceptable value for the identified task(s). The deterministic equation may be supervised or unsupervised. A supervised equation includes a target final or intermediate result y that the MLS should try to achieve. For example, if the MLS task is image classification, then Equation (5) will often include the class label of the current input image. If the MLS task is regression, then Equation (5) will include the values of dependent variables associated with each observed data point. Examples of supervised equations include least-mean-squares and backpropagation, among others. An unsupervised equation does not include target values. Examples of unsupervised equations include spike-timing-dependent plasticity, k-means clustering, and the like. One or more supervised equations can be combined with one or more unsupervised equations in a semi-supervised learning process. Each of the partial derivatives in Equation (5) is usually represented as an analog voltage or current in an MLS. As a result, computing the update result has previously required expensive (in terms of design complexity, area, and power consumption) arithmetic circuitry, such as Gilbert and other transconductance multipliers, which have been used in the implementation of such algorithms.
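To make the deterministic gradient-descent update concrete, the sketch below trains a single linear neuron whose cost gradient factors into the product of an input and an error term, as in the least-mean-squares rule named above. The target weights and learning rate are illustrative choices, not values from the disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Single linear neuron trained with the deterministic update
# delta_theta_i = -alpha * dJ/dtheta_i  (Equation (4)).
# For J = 0.5*(y - y_hat)^2 with y_hat = theta . u, the gradient
# factors as dJ/dtheta_i = -(y - y_hat) * u_i.
alpha = 0.05                           # learning rate, chosen by trial and error
theta = np.zeros(3)
target = np.array([0.5, -0.2, 0.8])    # hypothetical "true" weights

for _ in range(2000):
    u = rng.uniform(-1, 1, size=3)     # training input
    y = target @ u                     # supervised target value
    y_hat = theta @ u                  # hypothesis output
    theta += alpha * (y - y_hat) * u   # deterministic update equation

# theta converges toward the target weights
```

Each factor on the right-hand side of the update, here (y − y_hat) and u_i, corresponds to one of the x_i* terms; in an MLS each would be an analog voltage or current, which is what the stochastic transformation below replaces with single-bit values.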
The present method includes transforming the deterministic equation to a stochastic equation by changing each continuous analog value to a digital value X_i drawn from a probability distribution D:
Δθ_i = −α X_1 X_2 ⋯ X_q.  (6)
Since all of the values except for α are now digital, Equation (6) can be computed using mostly digital logic gates, significantly reducing the design complexity and overhead relative to state-of-the-art methods. This allows the parameter update result to be computed using simpler logic circuits. Conversion of the variables in the deterministic equation to the stochastic equation depends on the probability distribution D. While it is possible to use any valid probability distribution (e.g., Normal, Beta, and the like), Bernoulli distributions work well because they are easy to implement in hardware, their probability mass functions are one-to-one, and their domain is 1 bit. For a Bernoulli distribution, each value of X_i is chosen as X_i ~ Ber(x_i*′), where x_i*′ is a version of x_i* scaled between 0 and 1. As will be demonstrated in the example, this conversion can be accomplished in hardware using simple comparator circuits that compare x_i*′ and r_i, where r_i is an independent, uniform random number between 0 and 1. The hardware for random number generation may be designed following alternative methods, some of which are listed below.
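The comparator-based conversion can be modeled in a few lines. The following is an illustrative software analogue (the function name is ours) of a comparator that emits 1 whenever the scaled value exceeds a uniform random number:

```python
import random

def to_stochastic_bit(x_scaled, rng=random.random):
    """Comparator model: emit 1 when x' exceeds a uniform random r in [0, 1).

    The output is a Bernoulli sample with P(X = 1) = x_scaled, so the
    mean of many samples recovers the original scaled analog value.
    """
    return 1 if x_scaled > rng() else 0

random.seed(42)
n = 100_000
samples = [to_stochastic_bit(0.7) for _ in range(n)]
mean = sum(samples) / n    # approximately 0.7
```

The one-to-one probability mass function of the Bernoulli distribution is what makes this encoding invertible: averaging the bit stream recovers x_i*′ to within sampling noise.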
The present method includes designing a training system to implement, either exactly or approximately, the stochastic equation. The specific hardware design depends on the exact form of the stochastic equation. A number of circuits exist for generating the random numbers ri required to convert the deterministic variables in the deterministic update equation to the stochastic variables in the stochastic update equation. In one embodiment of the disclosure, linear feedback shift registers may be used to generate a sequence of pseudorandom binary numbers that can be converted to analog voltages or currents, as needed. Cellular automata methods, thermal noise, metastability, and the like may also be used to generate random numbers. The required arithmetic and other operations in the stochastic update equation may be implemented with simple logic gates, since all of the stochastic variables are digital, and, in the case where a Bernoulli distribution is used, each stochastic variable is just one bit. The general approach is to ensure that every partial computation in the deterministic update equation, when using the scaled variables xi*′, is well-approximated by the expected output of a circuit with the associated stochastic inputs. For example, it can be shown that the expected output of an AND gate with inputs X1 and X2 is x1*′x2*′. Therefore, an AND gate is used for multiplication operations. It can also be shown that XOR gates approximate absolute differences, multiplexers perform scaled addition and, in general, the stochastic function of any circuit can be determined by deriving its expected output when its inputs are stochastic. Other hardware components may also be used for computing, e.g., the sign of a value, which can be determined using a comparator circuit. It is understood this example is one of several possible realizations of the stochastic training system hardware in accordance with the present disclosure.
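The gate-level identities named above (AND for multiplication, XOR for absolute difference, multiplexer for scaled addition) can be checked empirically. The sketch below is an illustrative Monte-Carlo simulation, not the disclosure's circuit; for the XOR identity it uses correlated streams (one shared random number per sample), a standard stochastic-computing device that the text does not spell out.

```python
import random

# Monte-Carlo check of the stochastic-logic identities:
#   AND of independent Bernoulli streams multiplies probabilities,
#   XOR of correlated streams yields the absolute difference,
#   a multiplexer with a Bernoulli select line performs scaled addition.
random.seed(1)
N = 200_000
a, b, s = 0.3, 0.8, 0.25          # example stream probabilities

and_sum = xor_sum = mux_sum = 0
for _ in range(N):
    r1, r2 = random.random(), random.random()
    y1, y2 = r1 < a, r2 < b       # independent streams
    and_sum += y1 & y2            # E[AND] = a*b
    r = random.random()
    x1, x2 = r < a, r < b         # correlated streams (shared r)
    xor_sum += x1 ^ x2            # E[XOR] = |a - b|
    sel = random.random() < s
    mux_sum += y1 if sel else y2  # E[MUX] = s*a + (1 - s)*b
```

Each identity replaces a multi-transistor analog multiplier or adder with a single gate operating on one-bit streams, which is the source of the area and power savings claimed for the stochastic training system.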
The present method includes training the MLS by applying write voltages to one or more of the memristors that constitute the system's parameter vector θ(Gm). The interaction between the MLS and the training system hardware is illustrated in
a non-linear function in the applied flux ϕ≡∫ν_w dt. However, the training process is made simpler by assuming that these functions are approximately linear, allowing the memristor's conductance to be tuned by applying a constant write voltage for a short period of time. To increase the conductance, a positive voltage is applied; to decrease the conductance, a negative voltage is applied. The write voltage must be above the positive threshold voltage to increase the conductance, or below the negative threshold voltage to decrease it. The threshold voltages are functions of the material properties. The magnitude of the write voltage and the time for which it is applied define the learning rate α. Typical values for write voltages are on the order of 1 V, while typical write times vary widely for different devices, from sub-nanosecond to milliseconds. Values of α typically range between 0.001 and 1 and are determined through trial and error by one of ordinary skill in the art.
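Under the approximate-linearity assumption above, a single write pulse changes the conductance by an amount proportional to how far its amplitude exceeds the threshold and to its duration. The following sketch is our illustrative model with assumed device constants, not parameters from the disclosure:

```python
# Illustrative write model under the approximate-linearity assumption:
# a pulse above the positive threshold raises conductance, a pulse below
# the negative threshold lowers it, and sub-threshold pulses do nothing.
G_OFF, G_ON = 1e-6, 1e-3    # assumed conductance bounds (siemens)
V_TH = 0.5                  # assumed write-threshold magnitude (volts)
K = 2e-3                    # assumed tuning rate (siemens per volt-second)

def apply_write(g, v_w, t_w):
    """Return the conductance after applying write voltage v_w for t_w seconds."""
    if v_w > V_TH:
        g += K * (v_w - V_TH) * t_w      # increase conductance
    elif v_w < -V_TH:
        g += K * (v_w + V_TH) * t_w      # decrease conductance
    return min(G_ON, max(G_OFF, g))      # clip to physical bounds

g = 1e-4
g = apply_write(g, 1.0, 1e-4)    # ~1 V pulse, 100 us: conductance increases
g = apply_write(g, 0.2, 1e-4)    # sub-threshold pulse: no change
```

In this picture, the product of the above-threshold voltage margin and the pulse width plays the role of the learning rate α: larger or longer pulses take bigger steps in conductance, with the same convergence trade-offs described for α above.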
MLSs trained using the proposed method are applicable to a wide range of problem domains including but not limited to pattern classification (e.g., in images, videos, and sounds), anomaly detection (e.g., identification of suspicious activities, anomalies in internet traffic, and the like), data compression (e.g., using autoencoders), and clustering problems in domains including big data, cloud computing, data mining, health care, bioinformatics, visual processing, natural language processing, datacenters, and the like.
The present disclosure is an improvement over existing deterministic training algorithms used in hardware/embedded platforms. Stochastic update equations afford the ability to replace complex (in terms of design, area, and power consumption costs) training circuits with relatively simple digital logic circuits, leading to reduced training costs.
Merkel, C., & Kudithipudi, D. (2014). A current-mode CMOS/memristor hybrid implementation of an extreme learning machine. In ACM Great Lakes Symposium on VLSI Design (pp. 241-242); Merkel, C., & Kudithipudi, D. (2014). Neuromemristive Extreme Learning Machines for Pattern Classification. In International Symposium on VLSI (pp. 77-82); and Merkel, C., & Kudithipudi, D. (2014). A Stochastic Learning Algorithm for Neuromemristive Systems. In System on Chip Conference (pp. 359-364) are hereby incorporated by reference in their entirety.
The disclosure will be further illustrated with reference to the following specific examples disclosed therein. It is understood that these examples are given by way of illustration and are not meant to limit the disclosure or the claims to follow.
As a concrete example, consider the task of classifying images of handwritten digits from the MNIST dataset, LeCun, Y. (2016), http://yann.lecun.com/exdb/mnist/, which is hereby incorporated by reference in its entirety. A feasible MLS for this task is shown in
which will approximately range between −1 and 1 when the ratio of the maximum memristor conductance G_m,on to the minimum memristor conductance G_m,off is much greater than unity.
Depending on the memristor materials used, this ratio can be on the order of 103 or larger. The overall functionality of the ELM is described by its hypothesis function as
ŷ = h(u, x, t; θ(G_m)) = f_sig(Θ^(3)[f_sig(Θ^(2)[u|b]^T)|b]^T),  (8)

where f_sig is the logistic sigmoid activation function and | denotes augmentation. Here, Θ^(2) and Θ^(3) are N_x × (N+1) and M × (N_x+1) matrices containing the θ values corresponding to the second- and third-layer weights, respectively. As described below, the ELM is trained to perform a particular task via modification of its parameter vector θ(G_m). This corresponds to modification of the memristor conductances associated with each weight between the second and third neuron layers. This is accomplished by applying a write voltage ν_w to terminal '1' of the weight circuit while grounding terminal '2'. This process is discussed in more detail below.
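The two-layer computation of Equation (8) can be sketched numerically. The layer sizes below are illustrative MNIST-like choices, and the fixed random hidden weights reflect the defining feature of an ELM (only the output layer is trained):

```python
import numpy as np

rng = np.random.default_rng(0)

def f_sig(z):
    """Logistic sigmoid activation function."""
    return 1.0 / (1.0 + np.exp(-z))

N, N_x, M = 784, 100, 10    # input, hidden, and output sizes (illustrative)
b = 1.0                     # bias input used in the augmentation [u|b]
Theta2 = rng.uniform(-1, 1, (N_x, N + 1))   # fixed random hidden weights
Theta3 = rng.uniform(-1, 1, (M, N_x + 1))   # trainable output weights

def hypothesis(u):
    """Evaluate y_hat = f_sig(Theta3 [f_sig(Theta2 [u|b]^T) | b]^T)."""
    x = f_sig(Theta2 @ np.append(u, b))      # hidden-layer activations
    return f_sig(Theta3 @ np.append(x, b))   # output-layer activations

y_hat = hypothesis(rng.uniform(0, 1, N))     # one output per digit class
```

Because Theta2 stays fixed, only the Theta3 weights, i.e., the memristor conductances between the second and third layers, are modified during training, which is exactly where the write voltages described above are applied.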
The cost function for the handwritten digit classification task is the mean square error (MSE), defined as

J(θ) = (1/m) Σ_{p=1}^{m} ‖y^(p) − ŷ^(p)‖²,  (9)

where m is the number of training inputs and p is an index specifying the current training input. In the case of handwritten digit classification, each training input is an image of a handwritten digit that is used as an example to tune the MLS's parameters.
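A direct software rendering of this MSE cost (under the reconstruction of Equation (9) given above), with small hypothetical target and output matrices for illustration:

```python
import numpy as np

def mse_cost(Y, Y_hat):
    """Mean square error over m training inputs (Equation (9)):
    J = (1/m) * sum_p ||y^(p) - y_hat^(p)||^2, rows indexed by p."""
    m = Y.shape[0]
    return float(np.sum((Y - Y_hat) ** 2) / m)

Y = np.array([[1.0, 0.0], [0.0, 1.0]])        # one-hot targets, m = 2
Y_hat = np.array([[0.9, 0.1], [0.2, 0.8]])    # hypothetical MLS outputs
cost = mse_cost(Y, Y_hat)    # (0.01 + 0.01 + 0.04 + 0.04) / 2 = 0.05
```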
One of the deterministic update equations for the cost function in Equation (9) is the online least-mean-squares (LMS) algorithm:
Δθ_{i,j}^{(p,3)} = α x_j^{(p)} (y_i^{(p)} − ŷ_i^{(p)}),  (10)
where the index i refers to the output neurons (layer 3 in
Now, Equation (10) is transformed into a stochastic update equation using the method discussed above, where the distribution D is a Bernoulli distribution:
Δθ_{i,j}^{(p,3)} = α sgn(y_i^{(p)} − ŷ_i^{(p)}) X_j^{(p)} |Y_i^{(p)} − Ŷ_i^{(p)}|,  (11)
where sgn(·) is −1 when the argument is negative, 0 when the argument is 0, and +1 otherwise.
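The deterministic LMS update of Equation (10) and its stochastic counterpart in Equation (11) can be compared in simulation. In the illustrative sketch below (our construction, with all values assumed already scaled to [0, 1]), X_j is a Bernoulli sample of the input, and |Y_i − Ŷ_i| is computed as the XOR of correlated Bernoulli streams for the target and the output; averaged over many samples, the stochastic update matches the deterministic one in expectation:

```python
import random

random.seed(7)

def bern(p):
    """One Bernoulli sample, modeling a comparator against a uniform r."""
    return 1 if random.random() < p else 0

x_j, y_i, y_hat_i = 0.6, 0.9, 0.4     # scaled values, assumed in [0, 1]
alpha = 0.01

# Deterministic LMS update (Equation (10))
det = alpha * x_j * (y_i - y_hat_i)

# Stochastic update (Equation (11)): the sign is computed exactly (e.g., by
# a comparator); the magnitude comes from one-bit samples, with the XOR of
# correlated streams giving |Y_i - Y_hat_i|, whose mean is |y_i - y_hat_i|.
sign = 1 if y_i > y_hat_i else (-1 if y_i < y_hat_i else 0)

def stochastic_update():
    r = random.random()
    Y, Y_hat = r < y_i, r < y_hat_i    # correlated streams (shared r)
    return alpha * sign * bern(x_j) * (Y ^ Y_hat)

n = 200_000
avg = sum(stochastic_update() for _ in range(n)) / n   # approximately det
```

Each stochastic update is just a signed single-bit product scaled by α, which in hardware reduces to an AND gate, an XOR gate, and a sign comparator driving the write-pulse polarity, in place of an analog multiplier chain.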
One proposed hardware implementation of (11) is shown in
As an alternative example, consider
Although various embodiments have been depicted and described in detail herein, it will be apparent to those skilled in the relevant art that various modifications, additions, substitutions, and the like can be made without departing from the spirit of the disclosure and these are therefore considered to be within the scope of the disclosure as defined in the claims which follow.
This application claims the benefit of the filing date of U.S. Provisional Patent Application Ser. No. 62/164,776, filed May 21, 2015, which is hereby incorporated by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
7287014 | Chen et al. | Oct 2007 | B2 |
8433665 | Tang et al. | Apr 2013 | B2 |
8676734 | Aparin | Mar 2014 | B2 |
8725658 | Izhikevich et al. | May 2014 | B2 |
8750065 | Merkel et al. | Jun 2014 | B2 |
9015093 | Commons | Apr 2015 | B1 |
20130311413 | Rose et al. | Nov 2013 | A1 |
20130325774 | Sinyavskiy et al. | Dec 2013 | A1 |
20130325775 | Sinyavskiy et al. | Dec 2013 | A1 |
20140156576 | Nugent | Jun 2014 | A1 |
20150019468 | Nugent | Jan 2015 | A1 |
20150170025 | Wu et al. | Jun 2015 | A1 |
20150206050 | Talathi | Jul 2015 | A1 |
20150347899 | Nugent | Dec 2015 | A1 |
20150358151 | You | Dec 2015 | A1 |
20160004959 | Nugent | Jan 2016 | A1 |
20170109628 | Gokmen | Apr 2017 | A1 |
20180174035 | Nugent | Jun 2018 | A1 |
Number | Date | Country |
---|---|---|
WO2014151926 | Sep 2014 | WO |
Entry |
---|
International Search Report and Written Opinion in correspondence application (PCT/US2016/033428) dated Aug. 25, 2016. |
Number | Date | Country | |
---|---|---|---|
20160342904 A1 | Nov 2016 | US |
Number | Date | Country | |
---|---|---|---|
62164776 | May 2015 | US |