Embodiments are generally related to the field of AHaH (Anti-Hebbian and Hebbian) learning computing-based devices, methods and systems. Embodiments are additionally related to the field of thermodynamic RAM (Random Access Memory). Embodiments also relate to the field of machine learning.
Machine learning (ML) systems are composed of (usually large) numbers of adaptive weights. The goal of ML is to adapt the values of these weights based on exposure to data so as to optimize a function, for example temporal prediction, spatial classification, or reward maximization. This foundational objective of ML creates friction with modern methods of computing, since every adaptation event necessarily reduces to a communication procedure between memory and processing resources separated by a distance. The power required to simulate the adaptive network grows impractically large, owing to the tremendous energy consumed shuttling information back and forth.
Nature, on the other hand, does not separate memory and processing. Rather, the act of memory access is the act of computing is the act of adaptation. The memory-processing distance goes to zero, and power efficiency increases by factors exceeding a billion.
Modern computing allows us to explore the universe of all possible ways to adapt. Creating intrinsically adaptive hardware implies that we give up this flexibility and rely on just one method. After all, neurobiological researchers have unearthed dozens of plasticity mechanisms in the brain, which would seem to imply that they are all important in some way or another. If we take a step back and look at all of Nature, however, we find that a viable solution is literally all around us, in both biological and non-biological systems. The solution is remarkably simple, and it is obviously universal.
We find the solution around us in rivers, lightning and trees but also deep within us. The air that we breathe is coupled to our blood through thousands of bifurcating channels that form our lungs. Our brain is coupled to our blood through thousands of bifurcating channels that form our circulatory system, and our neurons are coupled to our brain through the thousands of bifurcating channels forming our axons and dendrites. At all scales we see flow systems built of a very simple fractal building block.
The following summary is provided to facilitate an understanding of some of the innovative features unique to the disclosed embodiments and is not intended to be a full description. A full appreciation of the various aspects of the embodiments disclosed herein can be gained by taking the entire specification, claims, drawings, and abstract as a whole.
It is, therefore, one aspect of the disclosed embodiments to provide for a thermodynamic circuit formed of differential pairs of memristors.
It is another aspect of the disclosed embodiments to provide for a thermodynamic RAM Core comprising collections of differential pairs of memristors.
It is another aspect of the disclosed embodiments to provide a kT-RAM processor composed of one or more kT-Cores.
It is another aspect of the disclosed embodiments to provide an instruction set for a kT-RAM processor.
It is yet another aspect of the disclosed embodiments to provide for an AHaH technology computing stack.
It is yet another aspect of the disclosed embodiments to provide a specification for a general-purpose adaptive computing resource.
The aforementioned aspects and other objectives and advantages can now be achieved as described herein. An AHaH (Anti-Hebbian and Hebbian) circuit is disclosed, which includes a collection of differential pairs of memristors. A kT-Core can be implemented, which includes an AHaH circuit with a RAM interface and is capable of partitioning via time multiplexing. A kT-RAM processor is composed of a collection of kT-Cores. AHaH Computing is the theoretical space encompassing the capabilities of AHaH nodes, and kT-RAM is a learning processor providing random access to AHaH learning. At this level of development, solutions have been found for problems as diverse as classification, prediction, anomaly detection, clustering, feature learning, actuation, combinatorial optimization, and universal logic.
The accompanying figures, in which like reference numerals refer to identical or functionally-similar elements throughout the separate views and which are incorporated in and form a part of the specification, further illustrate the present invention and, together with the detailed description of the invention, serve to explain the principles of the present invention.
The particular values and configurations discussed in these non-limiting examples can be varied and are cited merely to illustrate at least one embodiment and are not intended to limit the scope thereof.
The kT-RAM approach offers the unique possibility of providing a specification for a general-purpose adaptive computing resource, since the components from which it is built can be rigorously defined and their function abstracted or "black-boxed" at each level of the technology stack. This allows individuals to specialize at one or more levels of the stack. Improvements at various levels of the stack can propagate throughout the whole technology ecosystem, from materials to markets, without any single technology vendor having to bridge the whole stack, a herculean feat that would be close to impossible. The rest of this disclosure outlines the levels of the technology stack needed to support an AHaH Computing industry.
A Meta Stable Switch (MSS) is an idealized two-state element that switches probabilistically between its two states as a function of applied voltage bias and temperature. A memristor is modeled as a collection of MSSs evolving in time. The total current through the device comes from both a memory-dependent current component, Im, and a Schottky diode current, Is, in parallel:
I = ϕIm(V, t) + (1 − ϕ)Is(V),
where ϕ ∈ [0, 1]. A value of ϕ = 1 represents a device that contains no diode effects. The MSS model can be made more complex to account for failure modes, for example by making the MSS state potentials temporally variable. Multiple MSS models with different variable state potentials can be combined in parallel or in series to model increasingly complex state systems.
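A minimal numerical sketch of this model is shown below. The specific forms chosen for Im (metastable switches acting as parallel conductances in two states) and Is (forward and reverse exponential diode terms), as well as all parameter values, are illustrative assumptions and are not specified by the equation above; only the weighting I = ϕIm + (1 − ϕ)Is is taken from the disclosure.

```python
import math

def memristor_current(V, Nb, N, phi,
                      Ga=0.2e-6, Gb=5.0e-6,
                      alpha_f=1e-9, beta_f=10.0,
                      alpha_r=1e-10, beta_r=5.0):
    """Total current through a memristor modeled as N metastable switches,
    Nb of which occupy the high-conductance (B) state.

    phi in [0, 1] weights the memory-dependent current Im against the
    Schottky diode current Is; phi = 1 means no diode effects.
    Parameter values and the particular forms of Im and Is are
    illustrative placeholders only.
    """
    # Memory-dependent current: the switches act as parallel conductances.
    Im = V * ((N - Nb) * Ga + Nb * Gb)
    # Schottky diode current: assumed forward/reverse exponential form.
    Is = alpha_f * math.exp(beta_f * V) - alpha_r * math.exp(-beta_r * V)
    # Total current, per I = phi*Im(V, t) + (1 - phi)*Is(V).
    return phi * Im + (1.0 - phi) * Is

# Example: a 1000-switch device, half switched on, read at 0.5 V.
print(memristor_current(V=0.5, Nb=500, N=1000, phi=0.9))
```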
kT-RAM provides a generic substrate from which any topology can be constructed. AHaH nodes can have as few or as many synapses as the application requires and can be connected in whatever way is desired. This universality is possible because of a RAM interface and temporal partitioning, or multiplexing.
The kT-Core exposes a simple instruction set describing the direction of the applied bias voltage, forward (F) or reverse (R), as well as the applied feedback: float (F), high (H), low (L), unsupervised (U), anti-unsupervised (A), and zero (Z). The kT-Core instruction set allows emulation with alternate or existing technologies, for example traditional digital processing techniques coupled to Flash memory, a program running on a CPU, or emerging platforms such as Epiphany processors.
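As a concrete illustration, the sketch below enumerates the bias directions and feedback modes named above and forms the twelve resulting instructions. The Python names are hypothetical; the disclosure specifies only the instruction mnemonics.

```python
from enum import Enum
from itertools import product

class Bias(Enum):
    FORWARD = "F"
    REVERSE = "R"

class Feedback(Enum):
    FLOAT = "F"
    HIGH = "H"
    LOW = "L"
    UNSUPERVISED = "U"
    ANTI_UNSUPERVISED = "A"
    ZERO = "Z"

# The twelve kT-Core instructions are the cross product of bias direction
# and feedback mode: FF, FH, FL, FU, FA, FZ, RF, RH, RL, RU, RA, RZ.
INSTRUCTION_SET = {b.value + f.value: (b, f) for b, f in product(Bias, Feedback)}

print(sorted(INSTRUCTION_SET))  # ['FA', 'FF', 'FH', 'FL', 'FU', 'FZ', 'RA', ...]
```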
Emulators allow developers to commence application development while remaining competitive with existing machine learning approaches. In other words, we can build a market for kT-RAM across all existing computing platforms while we simultaneously build the next generations of kT-RAM hardware.
Thus, in a preferred embodiment, a thermodynamic RAM circuit can be implemented, which includes a collection of kT-Core circuits. Each kT-Core among the collection of kT-Core circuits can include an AHaH circuit with a RAM interface. In another embodiment, an instruction set for a kT-Core learning circuit among the collection of kT-Core circuits can be implemented, which includes the following instructions: FF, FH, FL, FU, FA, FZ, RF, RH, RL, RU, RA, and RZ. In yet another embodiment, at least one kT-RAM circuit can be implemented, which includes at least one kT-Core among the collection of kT-Core circuits partitioned into AHaH nodes of any size via time multiplexing. In another embodiment, at least one kT-Core circuit among the collection of kT-Core circuits couples readout electrodes together to form a larger combined kT-Core among the collection of kT-Core circuits.
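To make the time-multiplexed partitioning concrete, the following minimal emulator sketch carves AHaH nodes of arbitrary size out of a single flat kT-Core address space. The class and method names are hypothetical, and synaptic weights are reduced to signed scalars standing in for differential memristor pairs.

```python
class KTCore:
    """A flat, RAM-addressable pool of synaptic weights. Each weight is a
    signed scalar standing in for a differential pair of memristors."""

    def __init__(self, num_synapses):
        self.weights = [0.0] * num_synapses


class AHaHNode:
    """An AHaH node formed by claiming a contiguous block of core addresses.
    Many nodes share one core by taking turns (time multiplexing)."""

    def __init__(self, core, start, size):
        self.core = core
        self.start = start
        self.size = size

    def activate(self, active_offsets):
        # Readout: sum the weights of the active synapses within this
        # node's partition of the core.
        return sum(self.core.weights[self.start + i] for i in active_offsets)


core = KTCore(num_synapses=1024)
# The same core partitioned into AHaH nodes of different sizes.
logic_node = AHaHNode(core, start=0, size=8)
classifier_node = AHaHNode(core, start=8, size=512)
print(classifier_node.activate([0, 3, 7]))
```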
It will be appreciated that variations of the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. It will also be appreciated that various presently unforeseen or unanticipated alternatives, modifications, variations or improvements therein may be subsequently made by those skilled in the art, which are also intended to be encompassed by the following claims.
This patent application is a continuation of U.S. patent application Ser. No. 14/674,428 entitled “Anti-Hebbian and Hebbian Computing with Thermodynamic RAM,” which was filed on Mar. 31, 2015, the disclosure of which is incorporated herein by reference in its entirety. U.S. patent application Ser. No. 14/674,428 in turn claims priority under 35 U.S.C. 119(e) to U.S. Provisional Patent Application Ser. No. 61/975,028, entitled “AHaH Computing with Thermodynamic RAM,” which was filed on Apr. 4, 2014, the disclosure of which is also incorporated herein by reference in its entirety.
The United States Government has certain rights in this invention pursuant to Contract No. FA8750-13-C-0031 awarded by the United States Air Force.
Provisional Application Data:

Number | Date | Country
61/975,028 | Apr. 2014 | US

Related Application Data:

Relation | Number | Date | Country
Parent | 14/674,428 | Mar. 2015 | US
Child | 16/010,439 | | US