Embodiments are generally related to the field of AHaH (Anti-Hebbian and Hebbian) learning computing-based devices, methods, and systems. Embodiments are additionally related to the field of thermodynamic RAM (Random Access Memory). Embodiments also relate to the field of machine learning.
Machine learning (ML) systems are composed of (usually large) numbers of adaptive weights. The goal of ML is to adapt the values of these weights based on exposure to data to optimize a function, for example, temporal prediction, spatial classification, or reward. The foundational objective of ML creates friction with modern methods of computing, since every adaptation event necessarily reduces to a communication procedure between memory and processing resources separated by a distance. The power required to simulate the adaptive network grows impractically large, owing to the tremendous energy consumed shuttling information back and forth.
Nature, on the other hand, does not separate memory and processing. Rather, the act of memory access is the act of computing is the act of adaptation. The memory-processing distance goes to zero, and power efficiency improves by factors exceeding a billion.
Modern computing allows us to explore the universe of all possible ways to adapt. Creating intrinsically adaptive hardware implies that we give up this flexibility and rely on just one method. After all, neurobiological researchers have unearthed dozens of plasticity mechanisms in the brain, which would seem to imply that they are all important in some way or another. If we take a step back and look at all of Nature, however, we find that a viable solution is literally all around us in both biological and non-biological systems. The solution is remarkably simple, and it appears to be universal.
We find the solution around us in rivers, lightning, and trees, but also deep within us. The air that we breathe is coupled to our blood through thousands of bifurcating channels that form our lungs. Our brain is coupled to our blood through thousands of bifurcating channels that form our circulatory system, and our neurons are coupled to our brain through the thousands of bifurcating channels forming our axons and dendrites. At all scales we see flow systems built of a very simple fractal building block.
The following summary is provided to facilitate an understanding of some of the innovative features unique to the disclosed embodiments and is not intended to be a full description. A full appreciation of the various aspects of the embodiments disclosed herein can be gained by taking the entire specification, claims, drawings, and abstract as a whole.
It is, therefore, one aspect of the disclosed embodiments to provide for a thermodynamic circuit formed of differential pairs of memristors.
It is another aspect of the disclosed embodiments to provide for a thermodynamic RAM Core comprising collections of differential pairs of memristors.
It is another aspect of the disclosed embodiments to provide a kT-RAM processor composed of one or more kT-Cores.
It is another aspect of the disclosed embodiments to provide an instruction set for a kT-RAM processor.
It is yet another aspect of the disclosed embodiments to provide for an AHaH technology computing stack.
It is yet another aspect of the disclosed embodiments to provide a specification for a general-purpose adaptive computing resource.
The aforementioned aspects and other objectives and advantages can now be achieved as described herein. An AHaH (Anti-Hebbian and Hebbian) circuit is disclosed, which includes a collection of differential pairs of memristors. A kT-Core can be implemented, which includes an AHaH Circuit with a RAM interface, and is capable of partitioning via time multiplexing. A kT-RAM processor is composed of a collection of kT-Cores. AHaH Computing is the theoretical space encompassing the capabilities of AHaH nodes, and kT-RAM is a learning processor providing random access to AHaH learning. At this level of development, solutions have been found for problems as diverse as classification, prediction, anomaly detection, clustering, feature learning, actuation, combinatorial optimization, and universal logic.
The accompanying figures, in which like reference numerals refer to identical or functionally-similar elements throughout the separate views and which are incorporated in and form a part of the specification, further illustrate the present invention and, together with the detailed description of the invention, serve to explain the principles of the present invention.
The particular values and configurations discussed in these non-limiting examples can be varied and are cited merely to illustrate at least one embodiment and are not intended to limit the scope thereof.
The kT-RAM approach offers the unique possibility of providing a specification for a general-purpose adaptive computing resource, since the components that it is built from can be rigorously defined and their function abstracted or “black-boxed” at each level of the technology stack. This allows individuals to specialize at one or more levels of the stack. Improvements at various levels of the stack can propagate throughout the whole technology ecosystem, from materials to markets, without any single technology vendor having to bridge the whole stack—a herculean feat that would be close to impossible. The rest of this disclosure outlines the levels of the technology stack needed to support an AHaH Computing industry.
A Meta Stable Switch (MSS) is an idealized two-state element that switches probabilistically between its two states as a function of applied voltage bias and temperature. A memristor is modeled as a collection of MSSs evolving in time. The total current through the device comes from both a memory-dependent current component, Im, and a Schottky diode current, Is, in parallel:
I=ϕIm(V,t)+(1−ϕ)Is(V),
where ϕ∈[0,1]. A value of ϕ=1 represents a device that contains no diode effects. The MSS model can be made more complex to account for failure modes, for example, by making the MSS state potentials temporally variable. Multiple MSS models with different variable state potentials can be combined in parallel or in series to model increasingly complex state systems.
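The mixed memory/diode current above can be sketched numerically. The following is a minimal, illustrative mean-field model, not the disclosed implementation: the switching thresholds, characteristic time, conductances, diode parameters, and the logistic form of the switching probabilities are all assumptions chosen only to demonstrate the structure I = ϕ·Im + (1−ϕ)·Is.

```python
import math

def mss_memristor_current(V, n_on, n_total, dt=1e-4, T=300.0,
                          g_on=1e-4, g_off=1e-6, phi=0.8):
    """One time step of an illustrative MSS memristor model.

    Returns (total current I, updated expected ON-state MSS count).
    All parameter values are illustrative assumptions.
    """
    kT_over_q = 8.617e-5 * T          # thermal voltage (V)
    v_on, v_off = 0.27, 0.27          # assumed switching thresholds (V)
    tc = 1e-4                         # assumed characteristic switching time (s)

    # Mean-field switching probabilities per time step dt, logistic in
    # (V - threshold) / thermal voltage: bias and temperature dependent.
    p_off_to_on = min((dt / tc) / (1.0 + math.exp(-(V - v_on) / kT_over_q)), 1.0)
    p_on_to_off = min((dt / tc) / (1.0 + math.exp((V + v_off) / kT_over_q)), 1.0)

    n_off = n_total - n_on
    n_on = n_on + n_off * p_off_to_on - n_on * p_on_to_off

    # Memory-dependent current Im: ohmic conduction through the MSS ensemble.
    Im = V * (n_on * g_on + (n_total - n_on) * g_off)

    # Schottky diode current Is (ideal-diode form, illustrative parameters).
    Is = 1e-9 * (math.exp(V / (2.0 * kT_over_q)) - 1.0)

    # I = phi*Im + (1 - phi)*Is, with phi in [0, 1].
    return phi * Im + (1.0 - phi) * Is, n_on
```

With ϕ = 1 the diode term drops out and the device is purely memristive, matching the ϕ = 1 case in the text; combining several such models in parallel or series would simply sum or cascade the resulting currents.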
kT-RAM provides a generic substrate from which any topology can be constructed. AHaH nodes can have as few or as many synapses as the application requires and can be connected in whatever way desired. This universality is possible because of a RAM interface and temporal partitioning or multiplexing.
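The RAM interface plus time multiplexing described above can be sketched as follows. This is an illustrative emulation only; the class and method names, and the representation of synapses as a flat address space, are assumptions introduced for the sketch, not the disclosed hardware design.

```python
# Illustrative sketch: a kT-Core's synapses are exposed through a flat
# RAM-style address space, and AHaH nodes of arbitrary size are carved
# out as address sets that the core services one at a time (time
# multiplexing), so any network topology can be laid over one core.

class KTCoreEmulator:
    def __init__(self, n_synapses):
        self.weights = [0.0] * n_synapses   # differential-pair weights (w_a - w_b)
        self.partitions = {}                # node_id -> list of synapse addresses

    def allocate_node(self, node_id, addresses):
        """Partition: bind a set of core addresses to one AHaH node."""
        self.partitions[node_id] = list(addresses)

    def activate(self, node_id, spikes):
        """Time-multiplexed access: drive one node's synapses, sum its output."""
        addrs = self.partitions[node_id]
        return sum(self.weights[a] for a, s in zip(addrs, spikes) if s)

core = KTCoreEmulator(n_synapses=16)
core.allocate_node("A", range(0, 8))    # an 8-synapse AHaH node
core.allocate_node("B", range(8, 16))   # another node sharing the same core
```

Because nodes are just address sets, a node can span as few or as many synapses as the application requires, and readout electrodes of multiple physical cores could likewise be pooled into one larger logical core.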
The kT-Core exposes a simple instruction set describing the direction of the applied bias voltage, forward (F) or reverse (R), as well as the applied feedback: float (F), high (H), low (L), unsupervised (U), anti-unsupervised (A), and zero (Z). The kT-Core instruction set allows emulation with alternate or existing technologies, for example, with traditional digital processing techniques coupled to Flash memory, a program running on a CPU, or emerging platforms like Epiphany processors.
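Pairing the two bias directions with the six feedback modes yields a twelve-instruction set. A minimal sketch of that enumeration (the tuple names are ours, not the disclosure's):

```python
from itertools import product

BIAS = ("F", "R")                          # forward, reverse applied bias
FEEDBACK = ("F", "H", "L", "U", "A", "Z")  # float, high, low, unsupervised,
                                           # anti-unsupervised, zero

# Cross product of bias direction and feedback mode: 12 instructions,
# FF through RZ.
INSTRUCTIONS = tuple(b + f for b, f in product(BIAS, FEEDBACK))
```

An emulator on a CPU, Flash-backed digital logic, or an Epiphany-class processor would simply dispatch on these twelve opcodes.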
Emulators allow developers to commence application development while remaining competitive with existing machine learning approaches. In other words, we can build a market for kT-RAM across all existing computing platforms while we simultaneously build the next generations of kT-RAM hardware.
Thus, in a preferred embodiment a thermodynamic RAM circuit can be implemented, which includes a collection of kT-Core circuits. Each kT-Core among the collection of kT-Core circuits can include an AHaH circuit with a RAM interface. In another embodiment, an instruction set for a kT-Core learning circuit among the collection of kT-Core circuits can be implemented, which includes the following instructions: FF, FH, FL, FU, FA, FZ, RF, RH, RL, RU, RA, RZ. In yet another embodiment, at least one kT-RAM circuit can be implemented, which includes at least one kT-Core among the collection of kT-Core circuits partitioned into AHaH nodes of any size via time multiplexing. In another embodiment, at least one kT-Core circuit among the collection of kT-Core circuits couples readout electrodes together to form a larger combined kT-Core among the collection of kT-Core circuits.
It will be appreciated that variations of the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. It will also be appreciated that various presently unforeseen or unanticipated alternatives, modifications, variations or improvements therein may be subsequently made by those skilled in the art, which are also intended to be encompassed by the following claims.
This patent application claims priority under 35 U.S.C. 119(e) to U.S. Provisional Patent Application Ser. No. 61/975,028, entitled “AHaH Computing with Thermodynamic RAM,” which was filed on Apr. 4, 2014, the disclosure of which is incorporated herein by reference in its entirety.
The United States Government has certain rights in this invention pursuant to Contract No. FA8750-13-C-0031 awarded by the United States Air Force.
Number | Name | Date | Kind |
---|---|---|---|
6889216 | Nugent | May 2005 | B2 |
6995649 | Nugent | Feb 2006 | B2 |
7028017 | Nugent | Apr 2006 | B2 |
7039619 | Nugent | May 2006 | B2 |
7107252 | Nugent | Sep 2006 | B2 |
7392230 | Nugent | Jun 2008 | B2 |
7398259 | Nugent | Jul 2008 | B2 |
7409375 | Nugent | Aug 2008 | B2 |
7412428 | Nugent | Aug 2008 | B2 |
7420396 | Nugent | Sep 2008 | B2 |
7426501 | Nugent | Sep 2008 | B2 |
7502769 | Nugent | Mar 2009 | B2 |
7599895 | Nugent | Oct 2009 | B2 |
7752151 | Nugent | Jul 2010 | B2 |
7827130 | Nugent | Nov 2010 | B2 |
7827131 | Nugent | Nov 2010 | B2 |
7930257 | Nugent | Apr 2011 | B2 |
8022732 | Nugent | Sep 2011 | B2 |
8041653 | Nugent | Oct 2011 | B2 |
8156057 | Nugent | Apr 2012 | B2 |
8311958 | Nugent | Nov 2012 | B2 |
8332339 | Nugent | Dec 2012 | B2 |
8781983 | Nugent | Jul 2014 | B2 |
8909580 | Nugent | Dec 2014 | B2 |
8918353 | Nugent | Dec 2014 | B2 |
8972316 | Nugent | Mar 2015 | B2 |
8983886 | Nugent | Mar 2015 | B2 |
8990136 | Nugent | Mar 2015 | B2 |
9099179 | Nugent | Aug 2015 | B2 |
9104975 | Nugent | Aug 2015 | B2 |
20030177450 | Nugent | Sep 2003 | A1 |
20030236760 | Nugent | Dec 2003 | A1 |
20040039717 | Nugent | Feb 2004 | A1 |
20040153426 | Nugent | Aug 2004 | A1 |
20040162796 | Nugent | Aug 2004 | A1 |
20040193558 | Nugent | Sep 2004 | A1 |
20050015351 | Nugent | Jan 2005 | A1 |
20050149464 | Nugent | Jul 2005 | A1 |
20050149465 | Nugent | Jul 2005 | A1 |
20050151615 | Nugent | Jul 2005 | A1 |
20050256816 | Nugent | Nov 2005 | A1 |
20060036559 | Nugent | Feb 2006 | A1 |
20060184466 | Nugent | Aug 2006 | A1 |
20070005532 | Nugent | Jan 2007 | A1 |
20070022064 | Nugent | Jan 2007 | A1 |
20070176643 | Nugent | Aug 2007 | A1 |
20080258773 | Nugent | Oct 2008 | A1 |
20090043722 | Nugent | Feb 2009 | A1 |
20090138419 | Nugent | May 2009 | A1 |
20090228415 | Nugent | Sep 2009 | A1 |
20090228416 | Nugent | Sep 2009 | A1 |
20100280982 | Nugent | Nov 2010 | A1 |
20100287124 | Nugent | Nov 2010 | A1 |
20110031999 | Beat | Feb 2011 | A1 |
20110145177 | Nugent | Jun 2011 | A1 |
20110145179 | Nugent | Jun 2011 | A1 |
20110161268 | Nugent | Jun 2011 | A1 |
20110302119 | Nugent | Dec 2011 | A1 |
20120078827 | Nugent | Mar 2012 | A1 |
20120150780 | Nugent | Jun 2012 | A1 |
20120175583 | Nugent | Jul 2012 | A1 |
20120191438 | Nugent | Jul 2012 | A1 |
20130073497 | Akopyan | Mar 2013 | A1 |
20130218815 | Nugent | Aug 2013 | A1 |
20130258905 | Nugent | Oct 2013 | A1 |
20130275358 | Nugent | Oct 2013 | A1 |
20130289902 | Nugent | Oct 2013 | A1 |
20140006323 | Nugent | Jan 2014 | A1 |
20140156576 | Nugent | Jun 2014 | A1 |
20140192587 | Nugent | Jul 2014 | A1 |
20150019467 | Nugent | Jan 2015 | A1 |
20150019468 | Nugent | Jan 2015 | A1 |
20150074029 | Nugent et al. | Mar 2015 | A1 |
Number | Date | Country | |
---|---|---|---|
20150286926 A1 | Oct 2015 | US |
Number | Date | Country | |
---|---|---|---|
61975028 | Apr 2014 | US |