1. Field
Certain aspects of the present disclosure generally relate to neural networks and, more particularly, to a continuous-time, event-based model for neurons and synapses.
2. Background
An artificial neural network, which may comprise an interconnected group of artificial neurons (i.e., neuron models), is a computational device or represents a method to be performed by a computational device. Artificial neural networks may have corresponding structure and/or function in biological neural networks. However, artificial neural networks may provide innovative and useful computational techniques for certain applications in which traditional computational techniques are cumbersome, impractical, or inadequate. Because artificial neural networks can infer a function from observations, such networks are particularly useful in applications where the complexity of the task or data makes the design of the function by conventional techniques burdensome.
One type of artificial neural network is the spiking neural network, which incorporates the concept of time into its operating model, as well as neuronal and synaptic state, thereby providing a rich set of behaviors from which computational function can emerge in the neural network. Spiking neural networks are based on the concept that neurons fire or “spike” at a particular time or times based on the state of the neuron, and that the time is important to neuron function. When a neuron fires, it generates a spike that travels to other neurons, which, in turn, may adjust their states based on the time this spike is received. In other words, information may be encoded in the relative or absolute timing of spikes in the neural network.
Certain aspects of the present disclosure provide a method for updating a state of an artificial neuron. The method generally includes determining a first state of the artificial neuron, wherein a neuron model for the artificial neuron has a closed-form solution in continuous time and wherein state dynamics of the neuron model are divided into two or more regimes, determining an operating regime for the artificial neuron from the two or more regimes based, at least in part, on the first state, and updating the state of the artificial neuron based, at least in part, on the first state of the artificial neuron and the determined operating regime.
Certain aspects of the present disclosure provide an apparatus for updating a state of an artificial neuron. The apparatus generally includes a processing system configured to determine a first state of the artificial neuron, wherein a neuron model for the artificial neuron has a closed-form solution in continuous time and wherein state dynamics of the neuron model are divided into two or more regimes, determine an operating regime for the artificial neuron from the two or more regimes based, at least in part, on the first state, and update the state of the artificial neuron based, at least in part, on the first state of the artificial neuron and the determined operating regime.
Certain aspects of the present disclosure provide an apparatus for updating a state of an artificial neuron. The apparatus generally includes means for determining a first state of the artificial neuron, wherein a neuron model for the artificial neuron has a closed-form solution in continuous time and wherein state dynamics of the neuron model are divided into two or more regimes, means for determining an operating regime for the artificial neuron from the two or more regimes based, at least in part, on the first state, and means for updating the state of the artificial neuron based, at least in part, on the first state of the artificial neuron and the determined operating regime.
Certain aspects of the present disclosure provide a computer-program product for updating a state of an artificial neuron. The computer-program product generally includes a computer-readable medium comprising code for determining a first state of the artificial neuron, wherein a neuron model for the artificial neuron has a closed-form solution in continuous time and wherein state dynamics of the neuron model are divided into two or more regimes, determining an operating regime for the artificial neuron from the two or more regimes based, at least in part, on the first state, and updating the state of the artificial neuron based, at least in part, on the first state of the artificial neuron and the determined operating regime.
Certain aspects of the present disclosure provide a method for producing neural behaviors of an artificial neuron. The method generally includes determining a first state of the artificial neuron, wherein a neuron model for the artificial neuron has a closed-form solution in continuous time and wherein linear dynamics of the neuron model are divided into two or more regimes, determining an operating regime for the artificial neuron from the two or more regimes based, at least in part, on the first state, updating the state of the artificial neuron based, at least in part, on the first state of the artificial neuron and the determined operating regime, and generating a variety of neural behaviors of the artificial neuron by utilizing the linear dynamics of the neuron model.
Certain aspects of the present disclosure provide an apparatus for producing neural behaviors of an artificial neuron. The apparatus generally includes a processing system configured to determine a first state of the artificial neuron, wherein a neuron model for the artificial neuron has a closed-form solution in continuous time and wherein linear dynamics of the neuron model are divided into two or more regimes, determine an operating regime for the artificial neuron from the two or more regimes based, at least in part, on the first state, update the state of the artificial neuron based, at least in part, on the first state of the artificial neuron and the determined operating regime, and generate a variety of neural behaviors of the artificial neuron by utilizing the linear dynamics of the neuron model.
Certain aspects of the present disclosure provide an apparatus for producing neural behaviors of an artificial neuron. The apparatus generally includes means for determining a first state of the artificial neuron, wherein a neuron model for the artificial neuron has a closed-form solution in continuous time and wherein linear dynamics of the neuron model are divided into two or more regimes, means for determining an operating regime for the artificial neuron from the two or more regimes based, at least in part, on the first state, means for updating the state of the artificial neuron based, at least in part, on the first state of the artificial neuron and the determined operating regime, and means for generating a variety of neural behaviors of the artificial neuron by utilizing the linear dynamics of the neuron model.
Certain aspects of the present disclosure provide a computer-program product for producing neural behaviors of an artificial neuron. The computer-program product generally includes a computer-readable medium comprising code for determining a first state of the artificial neuron, wherein a neuron model for the artificial neuron has a closed-form solution in continuous time and wherein linear dynamics of the neuron model are divided into two or more regimes, determining an operating regime for the artificial neuron from the two or more regimes based, at least in part, on the first state, updating the state of the artificial neuron based, at least in part, on the first state of the artificial neuron and the determined operating regime, and generating a variety of neural behaviors of the artificial neuron by utilizing the linear dynamics of the neuron model.
Certain aspects of the present disclosure provide a method for updating a state of an artificial neuron. The method generally includes updating the state of the artificial neuron based, at least in part, on events, wherein a neuron model for the artificial neuron has a closed-form solution in continuous time and wherein state dynamics of the neuron model are divided into two or more regimes, updating the state of the artificial neuron at time intervals, and updating the state of the artificial neuron if an event occurs at or between time instants.
Certain aspects of the present disclosure provide an apparatus for updating a state of an artificial neuron. The apparatus generally includes a processing system configured to update the state of the artificial neuron based, at least in part, on events, wherein a neuron model for the artificial neuron has a closed-form solution in continuous time and wherein state dynamics of the neuron model are divided into two or more regimes, update the state of the artificial neuron at time intervals, and update the state of the artificial neuron if an event occurs at or between time instants.
Certain aspects of the present disclosure provide an apparatus for updating a state of an artificial neuron. The apparatus generally includes means for updating the state of the artificial neuron based, at least in part, on events, wherein a neuron model for the artificial neuron has a closed-form solution in continuous time and wherein state dynamics of the neuron model are divided into two or more regimes, means for updating the state of the artificial neuron at time intervals, and means for updating the state of the artificial neuron if an event occurs at or between time instants.
Certain aspects of the present disclosure provide a computer-program product for updating a state of an artificial neuron. The computer-program product generally includes a computer-readable medium comprising code for updating the state of the artificial neuron based, at least in part, on events, wherein a neuron model for the artificial neuron has a closed-form solution in continuous time and wherein state dynamics of the neuron model are divided into two or more regimes, updating the state of the artificial neuron at time intervals, and updating the state of the artificial neuron if an event occurs at or between time instants.
So that the manner in which the above-recited features of the present disclosure can be understood in detail, a more particular description, briefly summarized above, may be had by reference to aspects, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only certain typical aspects of this disclosure and are therefore not to be considered limiting of its scope, for the description may admit to other equally effective aspects.
Various aspects of the disclosure are described more fully hereinafter with reference to the accompanying drawings. This disclosure may, however, be embodied in many different forms and should not be construed as limited to any specific structure or function presented throughout this disclosure. Rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. Based on the teachings herein one skilled in the art should appreciate that the scope of the disclosure is intended to cover any aspect of the disclosure disclosed herein, whether implemented independently of or combined with any other aspect of the disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the disclosure set forth herein. It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim.
The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects.
Although particular aspects are described herein, many variations and permutations of these aspects fall within the scope of the disclosure. Although some benefits and advantages of the preferred aspects are mentioned, the scope of the disclosure is not intended to be limited to particular benefits, uses or objectives. Rather, aspects of the disclosure are intended to be broadly applicable to different technologies, system configurations, networks and protocols, some of which are illustrated by way of example in the figures and in the following description of the preferred aspects. The detailed description and drawings are merely illustrative of the disclosure rather than limiting, the scope of the disclosure being defined by the appended claims and equivalents thereof.
As illustrated in
In biological neurons, the output spike generated when a neuron fires is referred to as an action potential. This electrical signal is a relatively rapid, transient, all- or nothing nerve impulse, having an amplitude of roughly 100 mV and a duration of about 1 ms. In a particular embodiment of a neural system having a series of connected neurons (e.g., the transfer of spikes from one level of neurons to another in
The transfer of spikes from one level of neurons to another may be achieved through the network of synaptic connections (or simply “synapses”) 104, as illustrated in
Biological synapses may be classified as either electrical or chemical. While electrical synapses are used primarily to send excitatory signals, chemical synapses can mediate either excitatory or inhibitory (hyperpolarizing) actions in postsynaptic neurons and can also serve to amplify neuronal signals. Excitatory signals typically depolarize the membrane potential (i.e., increase the membrane potential with respect to the resting potential). If enough excitatory signals are received within a certain time period to depolarize the membrane potential above a threshold, an action potential occurs in the postsynaptic neuron. In contrast, inhibitory signals generally hyperpolarize (i.e., lower) the membrane potential. Inhibitory signals, if strong enough, can counteract the sum of excitatory signals and prevent the membrane potential from reaching threshold. In addition to counteracting synaptic excitation, synaptic inhibition can exert powerful control over spontaneously active neurons. A spontaneously active neuron refers to a neuron that spikes without further input, for example, due to its dynamics or feedback. By suppressing the spontaneous generation of action potentials in these neurons, synaptic inhibition can shape the pattern of firing in a neuron, which is generally referred to as sculpting. The various synapses 104 may act as any combination of excitatory or inhibitory synapses, depending on the behavior desired.
The neural system 100 may be emulated by a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, a software module executed by a processor, or any combination thereof. The neural system 100 may be utilized in a large range of applications, such as image and pattern recognition, machine learning, motor control, and the like. Each neuron in the neural system 100 may be implemented as a neuron circuit. The neuron membrane charged to the threshold value initiating the output spike may be implemented, for example, as a capacitor that integrates an electrical current flowing through it.
In an aspect, the capacitor may be eliminated as the electrical current integrating device of the neuron circuit, and a smaller memristor element may be used in its place. This approach may be applied in neuron circuits, as well as in various other applications where bulky capacitors are utilized as electrical current integrators. In addition, each of the synapses 104 may be implemented based on a memristor element, wherein synaptic weight changes may relate to changes of the memristor resistance. With nanometer feature-sized memristors, the area of neuron circuits and synapses may be substantially reduced, which may make hardware implementation of a very large-scale neural system practical.
Functionality of a neural processor that emulates the neural system 100 may depend on weights of synaptic connections, which may control strengths of connections between neurons. The synaptic weights may be stored in a non-volatile memory in order to preserve functionality of the processor after being powered down. In an aspect, the synaptic weight memory may be implemented on a separate external chip from the main neural processor chip. The synaptic weight memory may be packaged separately from the neural processor chip as a replaceable memory card. This may provide diverse functionalities to the neural processor, wherein a particular functionality may be based on synaptic weights stored in a memory card currently attached to the neural processor.
The neuron 202 may combine the scaled input signals and use the combined scaled inputs to generate an output signal 208 (i.e., a signal y). The output signal 208 may be a current or a voltage, real-valued or complex-valued. The output signal may comprise a numerical value with a fixed-point or a floating-point representation. The output signal 208 may then be transferred as an input signal to other neurons of the same neural system, or as an input signal to the same neuron 202, or as an output of the neural system.
The processing unit (neuron) 202 may be emulated by an electrical circuit, and its input and output connections may be emulated by wires with synaptic circuits. The processing unit 202 and its input and output connections may also be emulated by software code. The processing unit 202 may also be emulated by an electric circuit, whereas its input and output connections may be emulated by software code. In an aspect, the processing unit 202 in the computational network may comprise an analog electrical circuit. In another aspect, the processing unit 202 may comprise a digital electrical circuit. In yet another aspect, the processing unit 202 may comprise a mixed-signal electrical circuit with both analog and digital components. The computational network may comprise processing units in any of the aforementioned forms. The computational network (neural system or neural network) using such processing units may be utilized in a large range of applications, such as image and pattern recognition, machine learning, motor control, and the like.
During the course of training a neural network, synaptic weights (e.g., the weights w1(i,i+1), . . . , wP(i,i+1) from
In hardware and software models of neural networks, processing of synapse related functions can be based on synaptic type. Synapse types may comprise non-plastic synapses (no changes of weight and delay), plastic synapses (weight may change), structural delay plastic synapses (weight and delay may change), fully plastic synapses (weight, delay and connectivity may change), and variations thereupon (e.g., delay may change, but no change in weight or connectivity). The advantage of this is that processing can be subdivided. For example, non-plastic synapses may not require plasticity functions to be executed (or waiting for such functions to complete). Similarly, delay and weight plasticity may be subdivided into operations that may operate together or separately, in sequence or in parallel. Different types of synapses may have different lookup tables or formulas and parameters for each of the different plasticity types that apply. Thus, the methods would access the relevant tables for the synapse's type.
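The subdivision of plasticity processing by synapse type described above can be sketched as a simple dispatch table. The following Python illustration is an assumption for clarity (the type names and set-based dispatch are not structures from the disclosure); it shows how non-plastic synapses can skip plasticity functions entirely.

```python
from enum import Enum, auto

class SynapseType(Enum):
    """Illustrative synapse categories (names assumed, per the text above)."""
    NON_PLASTIC = auto()                # neither weight nor delay changes
    PLASTIC = auto()                    # weight may change
    STRUCTURAL_DELAY_PLASTIC = auto()   # weight and delay may change
    FULLY_PLASTIC = auto()              # weight, delay, and connectivity may change

# Which plasticity operations apply to each type; non-plastic synapses
# map to the empty set, so no plasticity function executes for them.
APPLIES = {
    SynapseType.NON_PLASTIC: frozenset(),
    SynapseType.PLASTIC: frozenset({"weight"}),
    SynapseType.STRUCTURAL_DELAY_PLASTIC: frozenset({"weight", "delay"}),
    SynapseType.FULLY_PLASTIC: frozenset({"weight", "delay", "connectivity"}),
}

def plasticity_updates(syn_type):
    """Return the set of plasticity operations to run for a synapse type."""
    return APPLIES[syn_type]
```

Each entry could equally point to a per-type lookup table or formula, so that only the relevant tables are accessed for a given synapse's type.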
There are further implications of the fact that spike-timing dependent structural plasticity may be executed independently of synaptic plasticity. Structural plasticity may be executed even if there is no change to weight magnitude (e.g., if the weight has reached a minimum or maximum value, or it is not changed due to some other reason), since structural plasticity (i.e., an amount of delay change) may be a direct function of the pre-post spike time difference. Alternatively, it may be set as a function of the weight change amount or based on conditions relating to bounds of the weights or weight changes. For example, a synapse delay may change only when a weight change occurs or if weights reach zero, but not if they are maxed out. However, it can be advantageous to have independent functions so that these processes can be parallelized, reducing the number and overlap of memory accesses.
Neuroplasticity (or simply “plasticity”) is the capacity of neurons and neural networks in the brain to change their synaptic connections and behavior in response to new information, sensory stimulation, development, damage, or dysfunction. Plasticity is important to learning and memory in biology, as well as for computational neuroscience and neural networks. Various forms of plasticity have been studied, such as synaptic plasticity (e.g., according to the Hebbian theory), spike-timing-dependent plasticity (STDP), non-synaptic plasticity, activity-dependent plasticity, structural plasticity and homeostatic plasticity.
STDP is a learning process that adjusts the strength of synaptic connections between neurons. The connection strengths are adjusted based on the relative timing of a particular neuron's output and received input spikes (i.e., action potentials). Under the STDP process, long-term potentiation (LTP) may occur if an input spike to a certain neuron tends, on average, to occur immediately before that neuron's output spike. Then, that particular input is made somewhat stronger. On the other hand, long-term depression (LTD) may occur if an input spike tends, on average, to occur immediately after an output spike. Then, that particular input is made somewhat weaker, and hence the name “spike-timing-dependent plasticity”. Consequently, inputs that might be the cause of the post-synaptic neuron's excitation are made even more likely to contribute in the future, whereas inputs that are not the cause of the post-synaptic spike are made less likely to contribute in the future. The process continues until a subset of the initial set of connections remains, while the influence of all others is reduced to zero or near zero.
Since a neuron generally produces an output spike when many of its inputs occur within a brief period, i.e., being cumulatively sufficient to cause the output, the subset of inputs that typically remains includes those that tended to be correlated in time. In addition, since the inputs that occur before the output spike are strengthened, the inputs that provide the earliest sufficiently cumulative indication of correlation will eventually become the final input to the neuron.
The STDP learning rule may effectively adapt a synaptic weight of a synapse connecting a pre-synaptic neuron to a post-synaptic neuron as a function of time difference between spike time tpre of the pre-synaptic neuron and spike time tpost of the post-synaptic neuron (i.e., t=tpost−tpre). A typical formulation of the STDP is to increase the synaptic weight (i.e., potentiate the synapse) if the time difference is positive (the pre-synaptic neuron fires before the post-synaptic neuron), and decrease the synaptic weight (i.e., depress the synapse) if the time difference is negative (the post-synaptic neuron fires before the pre-synaptic neuron).
In the STDP process, a change of the synaptic weight over time may be typically achieved using an exponential decay, as given by,

Δw(t) = a+ e^(−t/k+) + μ, t > 0; Δw(t) = a− e^(t/k−) + μ, t < 0 (1)

where k+ and k− are time constants for positive and negative time difference, respectively, a+ and a− are corresponding scaling magnitudes, and μ is an offset that may be applied to the positive time difference and/or the negative time difference.
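The exponential-decay STDP rule described above can be sketched as a piecewise function of the spike-time difference: potentiation for positive differences (pre before post), depression for negative ones. All parameter values below, and the sign convention that the negative-side magnitude is itself negative, are illustrative assumptions, not values from the disclosure.

```python
import math

def stdp_weight_change(t, a_plus=1.0, a_minus=-0.5,
                       k_plus=20.0, k_minus=20.0, mu=0.0):
    """Weight change for spike-time difference t = t_post - t_pre (ms).

    Positive t: the pre-synaptic spike preceded the post-synaptic spike,
    so the synapse is potentiated; negative t: depressed. Parameter
    values are illustrative only.
    """
    if t > 0:
        return a_plus * math.exp(-t / k_plus) + mu   # LTP branch
    else:
        return a_minus * math.exp(t / k_minus) + mu  # LTD branch
```

Note that the magnitude of the change decays exponentially as the spikes move further apart in time, in either direction.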
As illustrated in the graph 300 in
There are some general principles for designing a useful spiking neuron model. A good neuron model may have rich potential behavior in terms of two computational regimes: coincidence detection and functional computation. Moreover, a good neuron model should have two elements to allow temporal coding: arrival time of inputs affects output time and coincidence detection can have a narrow time window. Finally, to be computationally attractive, a good neuron model may have a closed-form solution in continuous time and have stable behavior including near attractors and saddle points. In other words, a useful neuron model is one that is practical and that can be used to model rich, realistic and biologically-consistent behaviors, as well as be used to both engineer and reverse engineer neural circuits.
A neuron model may depend on events, such as an input arrival, output spike or other event whether internal or external. To achieve a rich behavioral repertoire, a state machine that can exhibit complex behaviors may be desired. If the occurrence of an event itself, separate from the input contribution (if any) can influence the state machine and constrain dynamics subsequent to the event, then the future state of the system is not only a function of a state and input, but rather a function of a state, event, and input.
In an aspect, a neuron n may be modeled as a spiking leaky-integrate-and-fire neuron with a membrane voltage vn(t) governed by the following dynamics,

dvn(t)/dt = α vn(t) + β Σm wm,n ym(t − Δtm,n) (2)

where α and β are parameters, wm,n is a synaptic weight for the synapse connecting a pre-synaptic neuron m to a post-synaptic neuron n, and ym(t) is the spiking output of the neuron m that may be delayed by dendritic or axonal delay according to Δtm,n until arrival at the neuron n's soma.
It should be noted that there is a delay from the time when sufficient input to a post-synaptic neuron is established until the time when the post-synaptic neuron actually fires. In a dynamic spiking neuron model, such as Izhikevich's simple model, a time delay may be incurred if there is a difference between a depolarization threshold vt and a peak spike voltage vpeak. For example, in the simple model, neuron soma dynamics can be governed by the pair of differential equations for voltage and recovery, i.e.,

dv/dt = (k(v − vr)(v − vt) − u + I)/C (3)

du/dt = a(b(v − vr) − u) (4)

where v is a membrane potential, u is a membrane recovery variable, k is a parameter that describes the time scale of the membrane potential v, a is a parameter that describes the time scale of the recovery variable u, b is a parameter that describes the sensitivity of the recovery variable u to the sub-threshold fluctuations of the membrane potential v, vr is a membrane resting potential, I is a synaptic current, and C is the membrane's capacitance. In accordance with this model, the neuron is defined to spike when v>vpeak.
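As a sketch of how these simple-model dynamics evolve toward a spike, the following applies one Euler step of the voltage and recovery equations, resetting the state when the peak spike voltage is reached. The parameter values (a regular-spiking cell from the literature) and the reset constants c and d are illustrative assumptions, not values from the disclosure.

```python
def izhikevich_step(v, u, I, dt=0.1, k=0.7, a=0.03, b=-2.0,
                    C=100.0, v_r=-60.0, v_t=-40.0,
                    v_peak=35.0, c=-50.0, d=100.0):
    """One Euler step of the simple-model dynamics (illustrative parameters).

    Returns the new (v, u) state and whether a spike occurred this step.
    """
    v_new = v + dt * (k * (v - v_r) * (v - v_t) - u + I) / C
    u_new = u + dt * a * (b * (v - v_r) - u)
    spiked = v_new >= v_peak
    if spiked:
        # Reset after the spike peak: voltage to c, recovery bumped by d.
        v_new, u_new = c, u_new + d
    return v_new, u_new, spiked
```

Driving the model with a constant supra-threshold current illustrates the latency between crossing the depolarization threshold vt and actually reaching vpeak.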
The Hunzinger Cold neuron model is a minimal dual-regime spiking linear dynamical model that can reproduce a rich variety of neural behaviors. The model's one- or two-dimensional linear dynamics can have two regimes, wherein the time constant (and coupling) can depend on the regime. In the sub-threshold regime, the time constant, negative by convention, represents leaky channel dynamics generally acting to return a cell to rest in biologically-consistent linear fashion. The time constant in the supra-threshold regime, positive by convention, reflects anti-leaky channel dynamics generally driving a cell to spike while incurring latency in spike-generation.
As illustrated in
Linear dual-regime bi-dimensional dynamics (for states v and u) may be defined by convention as,

τρ dv/dt = v + qρ (5)

−τu du/dt = u + r (6)

where qρ and r are the linear transformation variables for coupling.
The symbol ρ is used herein to denote the dynamics regime with the convention to replace the symbol ρ with the sign “−” or “+” for the negative and positive regimes, respectively, when discussing or expressing a relation for a specific regime.
The model state is defined by a membrane potential (voltage) v and recovery current u. In basic form, the regime is essentially determined by the model state. There are subtle, but important aspects of the precise and general definition, but for the moment, consider the model to be in the positive regime 404 if the voltage v is above a threshold (v+) and otherwise in the negative regime 402.
The regime-dependent time constants include τ− which is the negative regime time constant, and τ+ which is the positive regime time constant. The recovery current time constant τu is typically independent of regime. For convenience, the negative regime time constant τ− is typically specified as a negative quantity to reflect decay so that the same expression for voltage evolution may be used as for the positive regime in which the exponent and τ+ will generally be positive, as will be τu.
The dynamics of the two state elements may be coupled at events by transformations offsetting the states from their null-clines, where the transformation variables are
qρ = −τρβu − vρ (7)

r = δ(v + ε) (8)
where δ is a coupling conductance-time parameter, ε is a coupling offset voltage, and β is a coupling resistance. The two values for vρ are the base for reference voltages for the two regimes. The parameter v− is the base voltage for the negative regime, and the membrane potential will generally decay toward v− in the negative regime. The parameter v+ is the base voltage for the positive regime, and the membrane potential will generally tend away from v+ in the positive regime.
The null-clines for v and u are given by the negative of the transformation variables qρ and r, respectively. The parameter δ is a scale factor controlling the slope of the u null-cline. The parameter ε is typically set equal to −v−. The parameter β is a resistance value controlling the slope of the v null-clines in both regimes. The τρ time-constant parameters control not only the exponential decays, but also the null-cline slopes in each regime separately.
The model is defined to spike when the voltage v reaches a value vS (spike voltage). Subsequently, the state is typically reset at a reset event (which technically may be one and the same as the spike event):
v = v̂− (9)

u = u + Δu (10)

where v̂− is a reset voltage and Δu is a recovery current reset offset. The reset voltage v̂− is typically set to v−.
By a principle of momentary coupling, a closed-form solution is possible not only for state (and with a single exponential term), but also for the time required to reach a particular state. The closed-form state solutions are

v(t + Δt) = (v(t) + qρ) e^(Δt/τρ) − qρ (11)

u(t + Δt) = (u(t) + r) e^(−Δt/τu) − r (12)
Therefore, the model state may be updated only upon events such as upon an input (pre-synaptic spike) or output (post-synaptic spike). Operations may also be performed at any particular time (whether or not there is input or output).
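An event-driven update built on these closed-form state solutions can be sketched as follows. The regime selection (comparing v directly to v+), the coupling variables held at their prior-event values, and all parameter values are simplified illustrative assumptions, not the disclosure's precise definitions.

```python
import math

def cold_update(v, u, dt,
                tau_minus=-20.0, tau_plus=5.0, tau_u=50.0,
                v_minus=-60.0, v_plus=-40.0,
                beta=1.0, delta=0.1, eps=60.0):
    """Advance a dual-regime (v, u) state by dt using closed-form solutions.

    The regime and the coupling (transformation) variables are computed
    once from the state at the prior event and held fixed over dt, per
    the momentary-coupling convention. Parameter values are illustrative.
    """
    # Regime from the state: positive if v is at or above the threshold v+.
    if v >= v_plus:
        tau_rho, v_rho = tau_plus, v_plus      # anti-leaky, runs away
    else:
        tau_rho, v_rho = tau_minus, v_minus    # leaky (tau_minus < 0), decays
    # Coupling transformation variables.
    q_rho = -tau_rho * beta * u - v_rho
    r = delta * (v + eps)
    # Closed-form propagation over dt (single exponential term each).
    v_new = (v + q_rho) * math.exp(dt / tau_rho) - q_rho
    u_new = (u + r) * math.exp(-dt / tau_u) - r
    return v_new, u_new
```

With u = 0, the voltage decays toward v− in the negative regime (since τ− is negative) and grows away from v+ in the positive regime, matching the leaky and anti-leaky channel behavior described earlier.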
Moreover, by the momentary coupling principle, the time of a post-synaptic spike may be anticipated, so the time to reach a particular state may be determined in advance without iterative techniques or numerical methods (e.g., the Euler numerical method). Given a prior voltage state v0, the time delay until voltage state vf is reached is given by

Δt = τρ log((vf + qρ)/(v0 + qρ)) (13)
If a spike is defined as occurring at the time the voltage state v reaches vS, then the closed-form solution for the amount of time, or relative delay, until a spike occurs as measured from the time that the voltage is at a given state v is

ΔtS = τ+ log((vS + q+)/(v + q+)) if v > v̂+; otherwise ΔtS = ∞ (14)

where v̂+ is a regime threshold and is typically set to parameter v+, although other variations may be possible.
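The anticipation of spike time without iteration, described above, can be sketched as follows. The parameter values, the default u = 0, and the function names are assumptions for illustration only.

```python
import math

def time_to_voltage(v0, vf, q_rho, tau_rho):
    """Delay for the voltage to go from v0 to vf within one regime,
    from the closed-form logarithmic solution above."""
    return tau_rho * math.log((vf + q_rho) / (v0 + q_rho))

def time_to_spike(v, u=0.0, v_s=30.0, v_hat_plus=-40.0,
                  tau_plus=5.0, beta=1.0):
    """Anticipated delay until the next spike, computed in closed form.

    Returns infinity when the state is not in the positive regime
    (v <= v_hat_plus), since the voltage then decays toward rest
    rather than running away toward the spike voltage.
    """
    if v <= v_hat_plus:
        return math.inf
    q_plus = -tau_plus * beta * u - v_hat_plus   # positive-regime coupling
    return time_to_voltage(v, v_s, q_plus, tau_plus)
```

Because the delay is available in advance, a simulator can schedule the anticipated output spike as a future event rather than stepping the state forward to discover it.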
The above definitions of the model dynamics depend on whether the model is in the positive or negative regime. As mentioned, the coupling and the regime ρ may be computed upon events. For purposes of state propagation, the regime and coupling (transformation) variables may be defined based on the state at the time of the last (prior) event. For purposes of subsequently anticipating spike output time, the regime and coupling variables may be defined based on the state at the time of the next (current) event.
There are several possible implementations of the Cold model for executing the simulation, emulation, or model in time. These include, for example, event-update, step-event-update, and step-update modes. An event update is an update in which states are updated based on events (at particular moments). A step update is an update in which the model is updated at intervals (e.g., every 1 ms). This does not necessarily require iterative or numerical methods. An event-based implementation is also possible at a limited time resolution in a step-based simulator by updating the model only if an event occurs at or between steps (a “step-event” update).
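The event-update and step-update modes can be illustrated with a toy closed-form propagator. Because the state solution is closed-form, the two modes agree exactly over an input-free interval; all names and the simple decay dynamics here are illustrative assumptions, not the Cold model itself:

```python
import math

def advance(state, dt, tau=20.0):
    """Toy closed-form propagator: exponential decay toward rest (0)."""
    return state * math.exp(-dt / tau)

def event_update(state, event_times):
    """Event-update mode: jump the closed-form state directly to each event."""
    t = 0.0
    for te in event_times:
        state = advance(state, te - t)
        t = te
    return state

def step_update(state, t_end, dt=1.0):
    """Step-update mode: advance at fixed intervals; no iterative solver is
    needed because each step reuses the same closed-form propagator."""
    t = 0.0
    while t < t_end:
        state = advance(state, dt)
        t += dt
    return state
```

Jumping directly to an event at t = 5 and taking five 1 ms steps produce the same state, up to floating-point rounding, which is the practical appeal of closed-form solutions.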
More details about the Hunzinger Cold model and possible implementations will follow in the present disclosure.
A useful neural network model, such as one comprised of the artificial neurons 102, 106 of
If a neuron model can perform temporal coding, then it can also perform rate coding (since rate is just a function of timing or inter-spike intervals). To provide for temporal coding, a good neuron model should have two elements: (1) arrival time of inputs affects output time; and (2) coincidence detection can have a narrow time window. Connection delays provide one means to expand coincidence detection to temporal pattern decoding because by appropriately delaying elements of a temporal pattern, the elements may be brought into timing coincidence.
In a good neuron model, the time of arrival of an input should have an effect on the time of output. A synaptic input—whether a Dirac delta function or a shaped post-synaptic potential (PSP), whether excitatory (EPSP) or inhibitory (IPSP)—has a time of arrival (e.g., the time of the delta function or the start or peak of a step or other input function), which may be referred to as the input time. A neuron output (i.e., a spike) has a time of occurrence (wherever it is measured, e.g., at the soma, at a point along the axon, or at an end of the axon), which may be referred to as the output time. That output time may be the time of the peak of the spike, the start of the spike, or any other time in relation to the output waveform. The overarching principle is that the output time depends on the input time.
One might at first glance think that all neuron models conform to this principle, but this is generally not true. For example, rate-based models do not have this feature. Many spiking models also do not generally conform. A leaky-integrate-and-fire (LIF) model does not fire any faster if there are extra inputs (beyond threshold). Moreover, models that might conform if modeled at very high timing resolution often will not conform when timing resolution is limited, such as to 1 ms steps.
An input to a neuron model may include Dirac delta functions, such as inputs as currents, or conductance-based inputs. In the latter case, the contribution to a neuron state may be continuous or state-dependent.
Certain aspects of the present disclosure provide general principles for designing a useful spiking neuron model. A useful neuron model is one that is practical and that can be used to model rich, realistic and biologically consistent behaviors. Further, a useful neuron model can be used to both engineer and reverse-engineer (interpret) neural circuits in terms of well-defined stable computational relations.
Behavior from Events
Natural cells appear to exhibit various behaviors including tonic and phasic spiking and bursting, integrating inputs, adapting to inputs, oscillating sub-threshold, resonating, rebounding, accommodating input features, and more. Often, different behaviors may be induced in the same cell by inputs with different characteristics. In the present disclosure, these behaviors may be considered from the point of view of the events that cause the behaviors.
Accordingly, cells may be viewed as abstract event state machines in which cell dynamics between events are determined by machine state, and transitions between machine states are determined by events. In this context, the machine state is a meta-state that governs characteristics of the dynamics rather than the model state variables themselves. The model state variables evolve according to the dynamics in the meta-state.
In this view, events set cell dynamics in motion until the next event. Dynamics between a prior event and a next event are defined depending on machine state, which is determined based on the prior event. However, it is important to distinguish that this does not prevent state from being determined at any time between events, but it only prevents the dynamics definition from changing. This abstract event state machine concept allows the definition of dynamics to differ across machine states. One may thus define or parameterize dynamics in one way for one subset of machine states or regimes, and in another way, independently, for another subset of machine states. This means that dynamics in different machine states or regimes can be defined independently. In an aspect, behavior does not need to differ for all machine states, nor is it necessary for every event to cause a significant change, or even any change, in dynamics.
In an aspect, three consecutive events can be considered, where the first and last events comprise instantaneous inputs that significantly alter a cell's state at their time of occurrence. For simplicity, these events can be Dirac delta functions added to a voltage state representative of some equivalent integrated current or conductance-based input affecting a cell's membrane potential. However, the interim event between the first and last input events can be considered as having no associated input or output. According to certain aspects of the present disclosure, a hypothetical neuron model can be considered in which the behavior of the model is defined to change depending on whether the interim event is included or omitted, even though the event has no associated input or output.
In order to achieve a rich behavioral repertoire representative of a biological cell, one would need an event state machine that can exhibit complex behaviors. Yet, a computationally well-defined well-behaved low-complexity model is also a desirable objective. If the occurrence of an event itself, separate from any contribution to state variables (such as input, if any), can influence the state machine and change constraints on dynamics subsequent to the event, then the future state of the system is not only a function of a prior state, time and input but rather a function of a state, time, input and the event. Given more dependencies, there is more freedom to design individual model dependency relations and dynamics in different meta-states or regimes. Hence, the model elements can potentially be of lower complexity and yet yield an overall model that may achieve a richer behavioral repertoire.
Moreover, the model can be defined theoretically in formal terms of events. Since machines are amenable to event-based processing as opposed to continuous-time processing, the model may then be implemented as theoretically defined rather than implementation approximating theory. The present disclosure provides a new harmonious approach, embracing events as a foundation of a model as opposed to dealing with events as inconvenient practical constraints.
The above abstract principles can be expressed mathematically by defining a univariate or multivariate state S that evolves, absent any events between time t0 and tf, according to some function of the prior state and elapsed time. The state at any time t between an event at time t0 and an event at time tf>t may be expressed as,
S(t)=f(t−t0,S(t0)). (15)
By definition, there is a closed-form expression for the state as it evolves between events. However, if there is an event at time t between the two events, then the state at time tf>t may be,
S(tf)=f(tf−t,f(t−t0,S(t0)))≠f(tf−t0,S(t0)). (16)
The significance of this is that the dynamics of the model may follow a different path merely because an event occurred, regardless of whether there is any input associated with that event.
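A minimal sketch of the point in equation (16): the propagator f below reads its meta-state (here, a decay rate) only from the state at the start of an interval, i.e., at the last event, so inserting an interim, input-free event still changes the trajectory. The threshold and rates are illustrative assumptions:

```python
import math

THRESHOLD = 1.0  # illustrative regime threshold

def f(dt, s):
    """Propagate state s over dt per Eq. (15); the decay rate (meta-state) is
    determined from s only at the start of the interval (the last event)."""
    tau = 1.0 if s >= THRESHOLD else 10.0  # regime picked at the event
    return s * math.exp(-dt / tau)

s0 = 2.0
no_event = f(2.0, s0)            # no interim event: fast regime throughout
with_event = f(1.0, f(1.0, s0))  # interim event at t=1 re-reads the regime;
                                 # s(1) = 2e^-1 < 1 selects the slow rate
```

Here no_event equals 2e^−2 while with_event equals 2e^−1·e^−0.1, so the mere existence of the interim event alters the path, as in inequality (16).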
A neuron model with a multi-variate state is not fundamentally incompatible with such a notion. Counter-intuitively, dynamics may be defined independently for the state variables. If dynamics are decoupled, the question is what the advantage of a multi-variate state is. In an aspect, a multi-variate state may provide the opportunity to transition to a new meta-state upon an event where the dynamics in the new meta-state are determined based on the state at the time of the event. In other words, a multi-variate state may provide another dimension for determining behavioral changes at the time of events.
This event-based view may provide a framework for designing a neuron model with rich behavioral capabilities with smaller computational complexity. This framework can be the basis for further principles that may shape design of a spiking neuron model.
The ability of a spiking neuron model to detect fine timing coincidences may depend on memory characteristics. Leaky-integrate-and-fire neuron models with exponential decay are able to detect temporal coincidences to a certain degree depending on the rate of leak (or decay), which determines how fast the neuron forgets about the prior input. Such models may be limited in terms of other computational properties or biological behaviors that may be reproduced. However, while considering alternate or more sophisticated models, it may be beneficial to retain advantages of this leaky-integrate-and-fire behavior, including desirable memory characteristics.
A basic exponential leaky-integrate-and-fire model may be defined as having a state which decays over time Δt toward rest (which can be defined without loss of generality as 0), absent any input, according to,
v(t+Δt)=v(t)e−Δt/τ, (17)
where v is the state (e.g., voltage) and τ is a time constant. The rate of decay is decelerating so the difference in the decay curves for two different starting states is greatest at the starting point and converges thereafter. Therefore, any change in input magnitude at a particular time, which would alter state at that time, has a monotonically decreasing effect in the cell's memory.
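The monotonically decreasing memory effect of equation (17) can be checked numerically; the time constant and starting states below are illustrative assumptions:

```python
import math

def decay(v, dt, tau=20.0):
    """Eq. (17): exponential decay toward rest (rest defined as 0)."""
    return v * math.exp(-dt / tau)

# Two trajectories differing only in input magnitude at t = 0:
diffs = [abs(decay(1.5, dt) - decay(1.0, dt)) for dt in range(0, 100, 10)]
# The gap is largest at the start and shrinks monotonically thereafter.
```

The list of gaps is strictly decreasing, which is the stability property relied upon in the plasticity discussion that follows.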
These memory behaviors are relevant in learning because input magnitudes are typically variable during learning due to adaptation of the weight (strength) applied to an input. Spike-timing-dependent plasticity typically involves adjusting a synaptic (connection) weight (strength) by an amount depending on the time difference between input event and spike output event. With a monotonically decreasing memory behavior, a weight adjustment has a stable impact in the sense of a decreasing impact over time. However, with a non-monotonic memory behavior, a weight adjustment would have an unstable effect in the sense that it may have either a larger or smaller impact on a future event depending on the latency between the input event and the future event.
Nevertheless, one might consider a broad coincidence detection capability desirable in a particular circumstance. Rather than modifying neuron model dynamics to suit such an objective, input or input filtering may be defined to suit the circumstance. There are many ways to shape input into broader forms. However, it would be difficult to get a cell to detect a narrower coincidence than permitted by dampened dynamics. Dulling a neuron model's temporal coincidence detection capability would limit resolution for all inputs.
An excitatory or inhibitory post-synaptic potential (input effect) may be modeled as a linear combination of exponentials,
i(t)=Σjaje−(t−t0)/τj, (18)
Typically, the difference between two exponentials may provide a sufficiently realistic waveform. In an aspect, there may be one or more events associated with the input. For example, the input event may be considered as coinciding with the peak or start of the waveform (or any other arbitrary reference point or point(s)). Since the input itself can be spread over time, the time window over which coincidence detection can be performed can be controlled by input definition or filtering.
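A two-term instance of the linear combination above, the common difference-of-exponentials PSP, might be sketched as follows; the function name and time constants are illustrative assumptions:

```python
import math

def psp(t, t0=0.0, a=1.0, tau_rise=2.0, tau_decay=10.0):
    """Difference-of-exponentials post-synaptic potential: zero before onset
    t0, rising on tau_rise and decaying on tau_decay thereafter."""
    if t < t0:
        return 0.0
    return a * (math.exp(-(t - t0) / tau_decay) - math.exp(-(t - t0) / tau_rise))
```

Widening tau_rise spreads the input over time, which broadens the window over which coincidence detection can occur, as described above.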
An important distinction is that the abstract event state machine concept does not prevent a model from receiving input broader than Dirac delta functions. Whether input is defined in discrete or continuous form, equivalent input can be applied to a model at the time of an input event or events.
Retaining fine timing coincidence detection capability in a neuron model, such as afforded by leaky-integrative behavior, may allow both narrow and broad coincidence detection. However, temporal coincidence detection capability gives a framework for decoding information coded in time and outputting information coded in rate (spike or no spike). A basic leaky-integrate-and-fire model fires when the final element of input arrives (or some fixed latency thereafter). Thus, the output time may be constrained by the final contributing input. This is the reason for the distinction that the output information is coded in rate, and not in time. All else being equal, more information can be coded in a spike timing pattern than in a spiking rate. Information coded in time is more general. While spiking rate is a characteristic of a spike time pattern, there is no unique spike time pattern for any given spiking rate. Thus, a spike timing pattern yields a rate, but the reverse does not hold.
Spiking neuron models are often defined in terms of a threshold. Typically, when the threshold is exceeded, the model spikes, possibly with some latency. The question that arises is whether the concept of latency-to-fire is computationally useful.
If a cell would fire immediately or even with a particular latency once reaching a particular condition, the usefulness might be limited since a fixed delay can be achieved by other means such as by axonal, synaptic, or dendritic processes. Nevertheless, there might be an advantage if the relative delay between an invoking event (a sufficient input to transition machine state) and the spike event (output) was variable. In particular, there may be a significant advantage if that relative delay depends on inputs or events subsequent to (or at or before) the invoking event. Such a general computational ability would indeed be useful, particularly if the output timing was a well-defined function, because it would not only provide a framework for engineering a system with spiking neurons but also a framework for understanding what a network of spiking neurons is computing.
In particular, if the delay of an output spike is a well-defined function of relative delays from a reference event and any interim events (including inputs), a function would be defined that gives a relative output time Δtk,r as a function of relative input times Δti,r,
Δtk,r=f({Δti,r|i∈Ik}), (19)
where Ik is the set of inputs to cell k. Essentially, post-synaptic spikes can be viewed as coding information in the form of the relative time Δt of the spike as compared to a reference time tr. The reference time may be arbitrary or some prior spike time such as the time of a first pre-synaptic spike or last post-synaptic spike.
Coding information in time presents certain challenges. Cells may have limited memory of inputs and timing may be consistent only to a particular finite resolution. This means that coding more information requires more time. However, the more important the information, the sooner that information should be conveyed. Accordingly, certain aspects of the present disclosure support viewing the information in relative time as x having a normalized range [0, 1], without loss of generality, where the time delay to convey the information may be expressed as,
Δt=−α log x, (20)
such that the larger the value x is (more significant information), the shorter the time delay (response) is. A value of one corresponds to instantaneous response (no delay) and a value of zero (no significance) corresponds to infinite delay (no input or never outputting a spike). Viewing both input and output timing in this context can motivate a dynamic in which the sooner an input occurs or the more significant an input, the sooner an output occurs.
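Equation (20) and its inverse can be sketched directly; ALPHA and the function names are illustrative assumptions:

```python
import math

ALPHA = 10.0  # illustrative time scale of the -alpha*log(x) code

def value_to_delay(x):
    """Eq. (20): encode x in [0, 1] as a spike delay; x = 1 means no delay,
    and x = 0 (no significance) means infinite delay (no spike)."""
    return math.inf if x == 0 else -ALPHA * math.log(x)

def delay_to_value(dt):
    """Inverse mapping: recover the encoded value from an observed delay."""
    return math.exp(-dt / ALPHA)
```

The round trip recovers the original value, and more significant values spike sooner, consistent with the coding principle above.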
A variable output latency relative to a last contributing input or invoking event may be achieved with an anti-leaky dynamic, i.e., a dynamic in which state moves toward a spiking condition or has positive feedback. According to the aforementioned information time coding principle, dynamics may be desirable where inputs occurring earlier have a larger impact on output time. This differs from a dynamics in which feedback is negative as in leaky-integrative behavior. However, rather than attempting to combine leaky-integrative behavior with anti-leaky-integrative behavior, the abstract event state machine concept presents a framework for defining multiple separate behavioral regimes or sets of meta-states so that dynamics in different machine states may be defined or parameterized independently.
In a simple example, leaky-integrative behavior may be defined in one machine state regime and anti-leaky-integrative behavior may be defined in another machine state regime. These can be referred to as sub-threshold and super-threshold behaviors. However, the term threshold may be misleading in the context of an abstract event state machine because the transition between machine states occurs on events, not on a state condition per se. A state condition may not be associated with a machine state transition, as discussed above, unless there is an event.
It can be observed that a leaky-integrate-and-fire model alone is insufficient; for example, it is not sufficiently flexible. In an aspect, inputs may be expressed as linear combinations of quanta of discrete input, such as Dirac delta functions. Two input events can be considered occurring at times t0 and t1, neither of which is sufficient to bring a leaky-integrate-and-fire model above a threshold alone. However, together, if the input magnitudes were summed, the resulting voltage change would bring the model above the leaky-integrate-and-fire model's threshold.
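The two-input scenario can be sketched with a simple leaky integrator of Dirac-delta inputs; the threshold, magnitudes, and time constant are illustrative assumptions:

```python
import math

THRESHOLD = 1.0  # illustrative firing threshold

def lif_voltage(inputs, t, tau=20.0):
    """Leaky integration of Dirac-delta inputs, each a (time, magnitude)
    pair: every delivered input decays exponentially toward rest (0)."""
    return sum(a * math.exp(-(t - ti) / tau) for ti, a in inputs if ti <= t)
```

A single input of magnitude 0.6 never reaches the threshold, but two such inputs arriving one time unit apart sum above it.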
In another aspect, inputs are not discrete Dirac delta functions. In an abstract event state machine, inputs may contribute to machine state transition only on events. Accumulated or equivalent integrated input may be applied to transition machine state only at events. Therefore, the same principles apply to input magnitude as well as input timing.
A basic exponential anti-leaky-integrate-and-fire model may be defined in a conventional modeling sense as having a state that, if above a threshold (which can be defined without loss of generality as 0), increases over time Δt toward a peak, absent any input, according to,
v(t+Δt)=v(t)eΔt/τ. (21)
The term conventional refers to the formulation not strictly according to the proposed abstract event state machine framework in order to demonstrate a basic principle. This model can be viewed as demonstrative to make a point regarding timing behavior in reference to a threshold. With such an anti-leaky-integrate-and-fire model, timing differences can be distinguished because the time to reach the peak may depend on the input timing.
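Solving equation (21) for the delay until the state reaches a peak value shows why timing differences are distinguishable: a later or weaker contribution leaves v lower, so the remaining latency to the peak is longer. The names v_peak and the parameter values are illustrative assumptions:

```python
import math

def time_to_peak(v, v_peak=30.0, tau=5.0):
    """Eq. (21) solved for the delay until the anti-leaky state grows from
    v to v_peak (assumes 0 < v <= v_peak)."""
    return tau * math.log(v_peak / v)
```

A state of 10 takes 5·log 3 time units to peak, while a state of 15 takes only 5·log 2, so input timing is reflected in output timing.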
On a related note, the range of magnitude of synaptic weights (connection strengths) is typically constrained. Therefore, without loss of generality, a weight may be defined to have a sign (excitatory or inhibitory) and normalized range (e.g., range [0,1]). Moreover, plasticity rules for learning may tend to result in clustering of weights bi-modally (i.e., near 0 and near 1).
Certain aspects of the present disclosure support a conceptual model in the spirit of an abstract event state machine based on combining leaky-integrate-and-fire behavior with anti-leaky-integrate-and-fire behavior by defining behavior in two regimes separately. The term threshold can be used loosely to reflect a condition applying at an event that transitions from one regime to the other, i.e., the transition event linking a state before and after the event.
A neuron model based on the above concepts is presented in this disclosure. The dynamics and mathematical expressions for computation functionalities described above will be discussed in detail in subsequent sections as pertaining to such a model defined according to the principles described in this section, including temporal computation.
One important question is whether anti-leaky-integrate-and-fire behavior might exist in biological cells. Biological cells have regenerative upstroke dynamics generating the voltage spike. The A-channel is a voltage-gated transient potassium ion channel that activates quickly, which can counteract rapid sodium influx and slow the regenerative upstroke. As a result, it is possible to achieve very long latencies (hundreds of milliseconds or longer) from the time the upstroke is invoked until the spike peak occurs. The voltage level to trigger the A-channel may be slightly below the “threshold” for the sodium channel chain reaction, so the actual timing of events and input may alter the latency properties as well.
It should be noted that there is no ideal theoretical model that is the baseline or metric by which all other models should be judged or strive to be like. Models are their designer's constructs, designed for their modeler's purposes. From a biological modeling perspective, a neuron model is desired for which it can be easy to determine parameters to match a biological cell's behavior over a range of conditions, i.e., richness in behavioral capability is an advantage.
A good model would allow designing an instance to represent particular behavior, such as a biologically observed behavior. Further, a good model would not change dynamics in one regime away from biologically desired behavior every time one tries to design the behavior in another regime. Behavior would not generally change dramatically with a small numerical parameter change unless near a point of inflection. In addition, a good model would be defined theoretically and possible to implement as theoretically defined (and convenient to implement efficiently as theoretically defined) without incurring numerical differences depending on the implementation. In other words, a good model would behave the same in different implementations. In addition, a good model may be able to reproduce designed biological behavior without leveraging numerical artifacts, relying instead on theoretically defined dynamics.
A well-defined practical model is also desired that provides a stable computational framework, so that cells and networks can be designed rather than requiring searches of parameter spaces, and so that existing networks can be understood in computational terms. A model is desired that, as implemented, behaves as theoretically defined.
The goal is a behaviorally rich biologically-consistent computationally-convenient model of a neuron which adheres to the principles described above. Certain aspects of the present disclosure support design of a neuron model that can achieve these principles. The general neuron model is unique in that it is defined by the state at the time of events and by operations governing the change in that state from one event to the next event.
Studies of large-scale biological spiking neural networks are motivated to rely on neuron models with rich representation of electrophysiological behaviors and mathematical tractability. Recently, a variety of non-linear models have been shown to reproduce a substantial set of characteristic neuro-computational properties. However, while it has been suggested that the quadratic model may be the simplest model able to reproduce such properties, adding sustained sub-threshold oscillation to the repertoire requires another model. Moreover, solving such models may require numerical methods due to coupled non-linear dynamics. Ensuring stability with plasticity can also constrain modeling parameters. In the present disclosure, a different approach is taken, proposing a new parsimonious linear spiking neuron model. It reproduces a rich variety of neural dynamics including sustained sub-threshold oscillation and the behavioral collection exhibited by non-linear models. The model has dual regimes governed by sub-threshold leaky-integrate-and-fire and supra-threshold anti-leaky-integrate-and-fire dynamics along with linear coupling. Time constants and coupling in each regime may be independent and thus relatable to physiological properties on a per-regime basis (e.g., behavior of voltage-gated K+ channels; transient A-currents). Due to linearity, spike-timing-dependent plasticity (STDP) alters post-synaptic potential monotonically over time, yielding stability advantages for temporal coding. Closed-form solutions provide future state given delay and delay to reach a given future state (spike). A principle of momentary coupling can also be applied. The model is uniquely amenable to theoretical analysis and event-based hardware simulation of large-scale biological spiking networks with arbitrarily fine timing. The linear supra-threshold dynamics are leveraged to show that a spiking network can compute a general linear system of equations by encoding information in spike timing.
The present disclosure formulates a minimal dual-regime linear spiking neuron Model based on neuro-computational dynamics principles and demonstrates that the Model can reproduce a rich variety of behaviors including such sub-threshold features as sustained oscillation. The Model's one- or two-dimensional linear dynamics may have two regimes. The time constant (and coupling) can depend on the regime. In the sub-threshold regime, the time constant, negative by convention, represents leaky channel dynamics generally acting to return a cell to rest in biologically-consistent linear fashion. The time constant in the supra-threshold regime, positive by convention, reflects anti-leaky channel dynamics generally driving a cell to spike while incurring latency in spike-generation due to the role of voltage-gated K+ channels and transient A-current. The linear sub- and supra-threshold dynamics are analyzed in relation to biological evidence and compared to prior non-linear models. It is also analyzed how long-term potentiation (LTP) and long-term depression (LTD) change the shape of post-synaptic potential (PSP) in a temporal coding context. In the new Model, the PSP difference for a given change in synaptic strength is monotonically decreasing over time, revealing a potential stability advantage during plasticity.
A unique method of engineering and reverse-engineering (interpreting) spiking neural networks using the new Model is discussed. Methods of engineering rate-based networks have been proposed. However, whether spiking or not, the information in such networks can be coded in firing rates. Moreover, a variety of tuning curves and filters are often required. A consequence of the Model is that there are closed-form solutions for both future state and spike latency. Remarkably, if considering relative spiking timing as encoding negative log information values, then, in the linear supra-threshold regime of the Model, information in post-synaptic spike latency may become a linear function of information in relative pre-synaptic spike timings. This means that a linear system of equations can be effectively computed in real-time by a spiking neural network. This is possible even without precise synaptic weights because the coefficients of the linear system can alternatively be represented by synaptic delays. Positive and negative coefficients may translate to excitation and inhibition, respectively. Conveniently, larger coefficients may translate to shorter delays so significant responses naturally occur (spike) sooner. Precision may depend on timing resolution, and the linear model is particularly amenable to large-scale event-based modeling of spiking neural networks at arbitrarily fine time resolution.
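The coefficients-as-delays idea can be sketched in the log-time domain: under the Δt=−α log x code, delaying a spike by d scales the value it encodes by e−d/α, so a positive coefficient c in (0, 1] maps to the delay −α log c, and larger coefficients naturally yield shorter delays. The names and α below are illustrative assumptions, not the disclosure's mapping:

```python
import math

ALPHA = 10.0  # illustrative time scale of the log-latency code

def coeff_to_delay(c):
    """Represent a positive coefficient 0 < c <= 1 as a synaptic delay:
    larger coefficients map to shorter delays, so significant terms arrive
    (and can drive a spike) sooner."""
    return -ALPHA * math.log(c)

def delayed_value(x, extra_delay):
    """Delaying a spike by extra_delay scales the value it encodes by
    exp(-extra_delay / ALPHA), i.e., it applies the coefficient."""
    return x * math.exp(-extra_delay / ALPHA)
```

For instance, a delay of −α log 0.5 applied to a spike encoding 0.8 yields an encoded value of 0.4, i.e., multiplication by the coefficient 0.5 without any precise synaptic weight.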
In accordance with certain aspects of the present disclosure, the designed Model is defined in terms of events, wherein the events are fundamental to the defined behavior. The behavior may depend on the events, and the inputs and outputs may occur upon events and the dynamics are coupled at events.
The dynamics of the neuron Model can be divided into two regimes called the negative regime (also referred to as the leaky-integrate-and-fire regime) and the positive regime (also referred to as the anti-leaky-integrate-and-fire regime). Formulation of dynamics in terms of events and separation of the dynamics into these two regimes are fundamental characteristics of the presented neuron Model. In the negative regime, the state generally tends toward a target condition at the time of a future event. In this negative regime, the neuron Model can exhibit temporal input detection properties and other so-called sub-threshold behavior as well as other more complex dynamics. In the positive regime, the state generally tends away from an anti-target condition. In this positive regime, the neuron Model can exhibit computational properties such as incurring latency to spike depending on subsequent input events as well as other more complex dynamics.
The symbol ρ will be used in the present disclosure to denote the dynamics regime with the convention to replace the symbol ρ with the sign ‘−’ or ‘+’ for the negative and positive regimes respectively when discussing or expressing a relation for a specific regime. The multivariate Model state is comprised of a membrane potential (voltage) v and an abstract recovery current u. In basic form, the regime ρ is essentially determined by this state at the time of an event. The neuron Model is defined to be in the positive regime upon a next event if the voltage v at the prior event is above a threshold v>{circumflex over (v)}+, and otherwise the neuron Model is defined to be in the negative regime. For now, the typical setting can be considered, where {circumflex over (v)}+ is set equal to a constant v+.
The dynamics of the Model state are conveniently described in terms of dynamics of a transformed state pair {v′,u′}. The state transformations at the time of an event are,
v′=v+qρ, (22)
u′=u+r, (23)
where qρ and r are the linear transformation variables. The voltage transformation may depend on the regime ρ. The Model dynamics, which are also dependent on the regime, are defined by differential equations in terms of the transformed state pair,
dv′/dt=v′/τρ, (24)
du′/dt=−u′/τu, (25)
where, as aforementioned, τ− is the negative regime voltage time constant, τ+ is the positive regime voltage time constant, and τu is the recovery current time constant. These are constant values, although variable time constants are also possible. For convenience, the negative regime time constant τ− can be specified as a negative quantity so that the voltage dynamics can be expressed in the same form for all regimes. The time constants τ+ and τu are generally positive.
The state dynamics of the neuron Model are defined in an event framework. Between events, the dynamics are defined by the ordinary decoupled differential equations above. The dynamics of the two state elements are coupled only at events by the transformations offsetting the states from their null-clines at the time of the event, where the transformation variables are,
qρ=−τρβu−vρ′, (26)
r=δ(v+ε), (27)
where, as aforementioned, δ is a coupling conductance-time, ε is a coupling offset voltage, and β is a coupling resistance. A more general formulation with βρ (where the transformations in the two regimes and the time constants are completely independent) can also be employed. Typically, β is a parameter common across regimes, but, as βρ, it can be configured separately for each regime ρ. Similarly, τu is typically common across regimes. The two values for vρ′ are typically the base or reference voltages for the two regimes, i.e., vρ′=vρ. As aforementioned, the parameter v− is the base voltage for the negative regime, and the membrane potential will generally tend toward v− in the negative regime. As aforementioned, the parameter v+ is the base voltage for the positive regime, and the membrane potential will generally tend away from v+ in the positive regime. The parameter ε can typically be set to −v−.
The Model has closed-form solutions for state evolution at time t+Δt given the state {v′,u′} at time t,
v′(t+Δt)=v′(t)eΔt/τρ, (28)
u′(t+Δt)=u′(t)e−Δt/τu. (29)
Therefore, the Model state need only be, and is defined to be, updated upon events, such as upon an input (pre-synaptic spike) or an output (post-synaptic spike). This can be generalized because operations may also be performed on artificial events (whether or not there is input or output). By definition, transformations are defined at events, not between events. This means that qρ, r, and even ρ should not be recomputed unless there is an event.
Hence, the Model is only coupled at events and the regime (whether positive or negative) is only determined at events. The Model state variables v and u are generally coupled as described above via variables qρ and r, which are only determined at events (or steps). The variables qρ and r are computed based on the prior state. The state elements are then evolved independently to the next event. In effect, this means that the state variables are ‘momentarily decoupled’ between events.
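For illustration, the momentary-coupling scheme above may be sketched in code: the transformation variables qρ and r are computed once from the state at the prior event, and each state element is then evolved independently in closed form until the next event. This is a minimal Python sketch; the parameter values and function names are illustrative assumptions, not taken from the disclosure.

```python
import math

# Illustrative parameter values (hypothetical, chosen only for demonstration).
TAU_NEG, TAU_POS, TAU_U = -20.0, 10.0, 30.0  # tau- is negative by convention
V_NEG, V_POS = -60.0, -40.0                  # base voltages of the two regimes
BETA, DELTA, EPS = 1.0, -0.1, 60.0           # coupling parameters; eps = -v-

def advance(v, u, dt):
    """Advance the state {v, u} by dt: the regime and transformation
    variables are fixed from the (prior) event state, then v and u
    evolve decoupled in closed form."""
    if v > V_POS:                       # determine operating regime
        tau_rho, v_rho = TAU_POS, V_POS
    else:
        tau_rho, v_rho = TAU_NEG, V_NEG
    q = -tau_rho * BETA * u - v_rho     # voltage transformation variable (26)
    r = DELTA * (v + EPS)               # current transformation variable (27)
    v_new = (v + q) * math.exp(dt / tau_rho) - q   # each state element is
    u_new = (u + r) * math.exp(-dt / TAU_U) - r    # evolved independently
    return v_new, u_new
```

In the negative regime the voltage relaxes toward v−, while in the positive regime it diverges away from v+, matching the two cases discussed in the text.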
In a first case 1502, the state at the time of the first event is v=v++ε+ and u=0, where ε+ is a small positive voltage offset. Assuming typical settings, the Model will be in the positive regime since v>v+. As a result of the transformation variable q+=−v+, the voltage will increase away from v+. In a second case 1504, the state at the time of the first event is v=v+−ε− and u=0, where ε− is a small positive voltage offset. As a result of the transformation variable q−=−v−, the voltage will decrease toward v−.
The Model is defined to spike (i.e., produce an output event) when the voltage v reaches a threshold value vS. Subsequently, the state is typically reset at a reset event (which technically may be one and the same as the spike output event),
v={circumflex over (v)}−, (30)
u=u+Δu, (31)
where Δu is a recovery current reset increment. The reset voltage {circumflex over (v)}− is typically set to v−.
Moreover, the time of a post-synaptic spike may be anticipated, so the time to reach a particular state can be determined in advance in closed form. Given a prior voltage state v0, the time delay until voltage state vf is reached (from voltage state v0) is given by,
Δt=τρ log [(vf+qρ)/(v0+qρ)].
If a spike is defined as occurring at the time the voltage state v reaches vS (spike voltage), then the closed-form solution for the amount of time, or relative delay, until a spike occurs as measured from the time that the voltage is at a given state v is,
ΔtS=τ+ log [(vS+q+)/(v+q+)] if v>{circumflex over (v)}+, and ΔtS=∞ otherwise, (32)
where {circumflex over (v)}+ is a regime threshold and is typically set to v+. It should be noted that the regime threshold is not the null-cline.
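As a sketch, the relative spike delay may be computed from the closed-form expression above, assuming anti-leaky growth toward vS in the positive regime and an infinite delay at or below the regime threshold. The parameter values below are hypothetical, not from the disclosure.

```python
import math

TAU_POS = 10.0     # positive regime time constant tau+ (illustrative)
V_SPIKE = -30.0    # spike threshold vS (illustrative)
V_HAT_POS = -40.0  # regime threshold, typically set to v+

def time_to_spike(v, q_pos):
    """Closed-form relative delay until v reaches vS in the positive
    regime; never spikes if v is at or below the regime threshold."""
    if v <= V_HAT_POS:
        return math.inf
    return TAU_POS * math.log((V_SPIKE + q_pos) / (v + q_pos))
```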
The above definitions of the Model dynamics depend on whether the Model is in the positive or negative regime. As aforementioned, the coupling and the regime ρ are computed upon events. For purposes of state propagation, the regime and coupling (transformation) variables are defined based on the state at the time of the last (prior) event. However, once the state is propagated to the current event, for anticipation of the spike output time (a next event) the regime and coupling variables are defined based on the state at the time of the current event after incorporating input.
For momentary coupling with event or step-event update, state evolution from event time t to event time t+Δt may be defined as above in equations (11), (12), (28) and (29). Step-based update may optionally be configured equivalent to event/step-event (by updating q and r only on events) or differently by defining events/moments (computing q and r) at every step. The difference is typically negligible. For computational efficiency, event update (or step-event update if limited to a step-based simulator) implementation may be preferred. In such a case, spike events may be anticipated using the above closed-form expressions (14) and (32). In step update, Δt is a constant (the simulation step) and, substituting equations (26)-(27) in equations (28)-(29) accordingly,
v(t+Δt)=(v(t)+qρ)eΔt/τρ−qρ,
u(t+Δt)=(u(t)+r)e−Δt/τu−r,
where the exponential factors are constants that may be pre-computed for the fixed Δt.
Thus, converting an event update or step-event update implementation to step update form may comprise: (i) removal of the condition of update on an event (update at every step) and removal of anticipation of spike and (ii) optional simplification of equations given fixed Δt as described above.
Current and conductance models may be similarly defined and computed from event to event and may be incorporated into the spike anticipation. Alternatively, input models may be computed in a different implementation than the neuron model (e.g., step-updated conductance model with step-event updated neuron model).
Another property of the linear Model is monotonic difference in memory. The difference in the course of membrane potential diminishes over time so, for example, in the negative (LIF) domain (ignoring or omitting a second state for clarity) the difference between two voltage trajectories is monotonically decreasing,
v1(t+Δt)−v2(t+Δt)=[v1(t)−v2(t)]eΔt/τ−,
which decays since τ− is negative.
Thus, there is a stable effect of spike-timing-dependent plasticity over time: a change in synaptic weight due to spike-timing-dependent plasticity has a decreasing effect on subsequent behavior of a cell over time.
Another property of the linear Model is that relative spike latency can be expressed as a function of relative input spike timing, which follows from equations (14) and (32). For simplicity of conceptual demonstration, simple Dirac current spike inputs (weight of 1 or 0) using the Heaviside function θ with one-dimensional state β=0 (or no u) and with the threshold at vρ′=0 (i.e., q=0) can be considered,
where τi is the time of input spike from pre-synaptic neuron i and Δτi is the connection delay from pre-synaptic neuron i. Consequently, the time from increment to voltage membrane state v0 to spike is,
τ=τ+ log vS−τ+ log [v0+Σie−(τi+Δτi)/τ+]. (38)
Defining b=e1/τ+, equation (38) may be rewritten as a linear expression in transformed values,
b−τ=Δw+Σiwib−τi, (39)
where Δw=v0/vS and wi=b−Δτi/vS. The input times thus correspond to values xi=b−τi, and the input values and connection delays may be expressed in the negative log domain as,
τi=−logbxi; Δτi=−logb(wivS). (40)
Therefore, neuron spike latencies as a function of input spike timing are equivalent to a linear system of equations in a negative log domain. The time at which the voltage state reaches v0 serves as a reference for input and output spike times above, but any time reference may be used for either. The precision of the computation in terms of values in the linear domain will depend on the timing resolution in the neural spike-timing domain.
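The equivalence between spike latencies and a linear system in the negative log domain can be checked numerically. The sketch below (Python; vS, τ+, and the input values are hypothetical) computes the output latency both directly from the closed-form latency expression and through the transformed linear form of equations (39)-(40):

```python
import math

TAU_POS = 10.0          # tau+ (illustrative)
V_S = 2.0               # spike threshold vS (illustrative)
b = math.exp(1.0 / TAU_POS)

def latency_direct(v0, inputs):
    """tau = tau+ log vS - tau+ log[v0 + sum_i e^-((t_i+dt_i)/tau+)]
    for unit-weight Dirac inputs (t_i, dt_i) = (spike time, delay)."""
    s = v0 + sum(math.exp(-(ti + dti) / TAU_POS) for ti, dti in inputs)
    return TAU_POS * (math.log(V_S) - math.log(s))

def latency_log_domain(v0, inputs):
    """Same latency via the linear system b^-tau = dw + sum_i w_i x_i,
    with x_i = b^-t_i, w_i = b^-dt_i / vS and dw = v0 / vS."""
    acc = v0 / V_S + sum((b ** -dti / V_S) * (b ** -ti) for ti, dti in inputs)
    return -math.log(acc, b)
```

The two routes are algebraically identical, which is the sense in which spike timing computes a linear system in the negative log domain.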
Input may only be applied to the Model at events. In a typical formulation, input may be applied to the Model state after the state has been advanced from a prior event to the time of an input event. Thus more generally,
v′(t+Δt)=hv(v′(t)eΔt/τρ,i), (41)
u′(t+Δt)=hu(u′(t)e−Δt/τu,i), (42)
where hv and hu are input channel functions. In a simple case, instantaneous current inputs may be modeled as discrete Dirac delta functions, i.e., i=θ(t), applied to voltage state. In this case, input to the Model state applies at the time of the event and,
hv(x,i)=x+iβ; hu(x,i)=x, (43)
where β is the membrane resistance.
However, inputs may alternatively be continuous such as a sum of weighted exponential decays describing an excitatory or inhibitory post-synaptic potential, whether current or conductance based. In this case, input to the Model state is applicable at the time of the event but also potentially before or after the event. Thus, the equivalent total integrated input contribution, from the input for the next event and inputs from past events, as accumulated between a prior event and a next event, may be applied at the time of the next event. Thus, closed-form solutions for continuous input contributions are also an advantage but not required.
A continuous exponentially decaying excitatory or inhibitory input may be defined by,
which has a closed-form solution and thus may be evolved from event to event decoupled from the {v,u} state. The total contribution over a time period from a prior event at time t to a next event at time t+Δt is given by,
where g<0 for an inhibitory contribution and g>0 for an excitatory contribution, and the same input channel functions may be used as for Dirac inputs. The input and the state evolution are decoupled, and this may, if desired, be compensated for at the time of the event, but is typically unnecessary.
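As an illustration of applying an accumulated input at the next event, the total contribution of an exponentially decaying input over an inter-event interval has a closed-form integral. The decay form g(t)=g0e−t/τg assumed below is a hypothetical example, not necessarily the disclosure's exact formulation:

```python
import math

def integrated_input(g_prior, tau_g, dt):
    """Integral of g(t) = g_prior * exp(-t / tau_g) over [0, dt]: the
    equivalent total contribution applied once at the next event."""
    return g_prior * tau_g * (1.0 - math.exp(-dt / tau_g))
```

For very short intervals the contribution approaches g·Δt, and for long intervals it saturates at g·τg, so a single application at the next event matches the accumulated effect between events.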
Conductance-based input may also be integrated,
where Eg is the reference voltage for the input and V is a voltage level computed to compensate for removing the (v−Eg) term from the integral. The approximation V=v(t) (the voltage at the prior event) holds well if v changes by a relatively small amount across events or events occur at small inter-event-intervals. If not, more sophisticated formulations of V or use of artificial events, discussed below, can be used to achieve desired effects while keeping with the definition of the Model.
By definition, the Model does not account for future input between events in spike time anticipation. The general reason for this is that anticipation has a closed-form solution. Even though closed-form solutions may be available for some continuous input formulations, equivalent effects can be achieved, if the rate of events is sufficiently high, by alternatively defining input or channel functions, and otherwise by using artificial events.
By definition, all forms of plasticity apply at events, just like everything else in the Model. Spike-timing-dependent plasticity is particularly suited to this because long-term potentiation and long-term depression can be viewed as triggered by a pre-synaptic (input) event preceding or following a post-synaptic (output) event, respectively. Any form of structural plasticity would also, per the definition, apply at events. Structural plasticity may be thought of as modifying, creating, or deleting synaptic connections, for example. Equivalently, the parameters of an abstract synapse, such as delay and weight, may be changed as if the abstract synapse used to model a deleted synapse were reused to model a new synapse with different parameters. Thus, multiple forms of plasticity may be generalized in terms of variable synaptic parameters. The variability may be at discrete moments or continuous but, regardless, the parameters are evolved in the Model from event to event.
The algorithmic solution to the Model may comprise: (i) advancing the state from a prior event to a next event; (ii) updating the state at the next event time given the input at the next event time (or applying, at the time of the next event, an input equivalent to that accumulated between the prior and next events); and (iii) anticipating when the next event will occur. Events may include input events, which are typically considered to occur at the time a synaptic input propagates to the neuron's soma. Events may also include output events, which are typically considered to occur at the time a spike is emitted by the neuron's soma and begins propagating along the axon. Since closed-form solutions are available, the Model state may also be determined between events if so desired, but regime and coupling are not to be updated unless there is an event.
The following algorithm describes the operations typically conducted upon an event. The state may advance depending on the event. Then, one may advance and apply input (if the event or inter-event interval has input) and anticipate the next event (output). Operations needed to advance the state may differ depending on whether the event is an output event or another event (e.g., an input or artificial event).
If the state advances based on non-output event, the time Δt since the prior event time t may be first determined. The regime may be determined based on the prior state {v,u} at the prior event. The transformation qρ and r may be determined given the regime and prior state. The transformed state {v′,u′} may be determined at the prior event. The advanced transformed state may be determined at time t+Δt. The advanced state {v,u} may be determined at time t+Δt. If the state advances based on output event, the state may be reset.
Advancing and applying input is only needed if there is input at the event or during the time between the prior event and the next event. For advancing and applying input, plastic input parameters may need first to be adapted. The advanced input(s) may be determined at time t+Δt. Any new input(s) may be incorporated at time t (if there is any new input event). Equivalent inter-event-interval integrated input contribution i may be determined. The new state {v,u} may be determined by applying equivalent input i.
If there is any chance that the next output event time has changed, the time to the next output event should be anticipated. For anticipating the next output event, the regime may be first determined based on the new state {v,u}. The updated transformation variable qρ may be determined, and the relative delay ΔtS until the next output spike event may be anticipated. Anticipation of the next output spike generally applies whether an event is an input event, an output event, or an artificial event because any state updates may change the anticipated time of the spike.
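The three-part event procedure above (advance, apply input, anticipate) can be sketched as a single handler. This is a hedged Python sketch under assumed default settings (Dirac current input channel, regime threshold at v+, and reconstructed closed-form expressions); the parameter values are illustrative, not from the disclosure.

```python
import math

P = dict(tau=dict(neg=-20.0, pos=10.0), tau_u=30.0,
         v_base=dict(neg=-60.0, pos=-40.0), v_spike=-30.0,
         beta=1.0, delta=-0.1, eps=60.0)

def regime(v):
    return "pos" if v > P["v_base"]["pos"] else "neg"

def on_event(v, u, dt, i=0.0):
    """Advance {v, u} from the prior event by dt using the PRIOR state's
    regime and coupling, apply input i, then anticipate the next spike
    from the NEW state. Returns (v, u, delay to anticipated spike)."""
    rho = regime(v)                              # regime at the prior event
    tau = P["tau"][rho]
    q = -tau * P["beta"] * u - P["v_base"][rho]  # transformation variables
    r = P["delta"] * (v + P["eps"])
    v = (v + q) * math.exp(dt / tau) - q         # decoupled evolution
    u = (u + r) * math.exp(-dt / P["tau_u"]) - r
    v = v + i * P["beta"]                        # Dirac input channel
    rho = regime(v)                              # re-determine the regime
    if rho == "pos":
        q = -P["tau"]["pos"] * P["beta"] * u - P["v_base"]["pos"]
        dt_spike = P["tau"]["pos"] * math.log(
            (P["v_spike"] + q) / (v + q))        # anticipated spike delay
    else:
        dt_spike = math.inf                      # no spike while negative
    return v, u, dt_spike
```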
By definition, the Model is updated only upon events. An artificial event represents an event defined for purposes of defining Model dynamics behavior rather than an input or output event per se. There are several reasons why a modeler may wish to define artificial events.
Modelers often wish to see traces of voltage and current state at a fine time resolution. Those states can be computed periodically between events without defining artificial events. This would require computation of the voltage and current using the transformation variables and regime as computed at the prior event (not the prior period). This means that at any inter-event time, the state can be determined, but there is no coupling so the state at the time of the next event does not change regardless of whether or not the state is computed between events.
By definition, the coupling transformations are defined at events, not between events. Transformation variables and regime are only computed at events. This means that qρ and r and even ρ should not be recomputed unless there is an event. Technically, one may define an artificial event to occur at the time the voltage crosses the regime threshold, which is a time computable in advance due to the closed-form expressions and decoupling between events. Hence, assuming no artificial events have been defined, the voltage and current state may be updated between events if so desired, but based on the parameters and offsets from the prior event. In other words, by definition, coupling should occur only at events.
Defining artificial events allows a modeler to alter the coupling. There may also be reasons for defining artificial events at convenient times. There is no wrong way to define or not define artificial events, but the modeler should understand that defining artificial events generally changes the behavior of the model.
One reason to define an artificial event is to achieve a behavior characteristic of a Model instance with a high rate of events with a Model instance with a lower rate of events. If there is a high rate of input events, the Model dynamics are advanced at small time intervals. However, if there is a low rate of input events, the Model dynamics, without artificial events, may be advanced by larger time intervals. Generally, the difference in behavior may not be significant unless the time intervals are much larger. Even then, adjustments to parameters such as the time constants can be made to compensate.
However, artificial events may be defined as an alternative. In particular, artificial events may be defined to occur between non-artificial events if the interval between the non-artificial events is large. Technically, this can be achieved in several ways. For example, an artificial event can be tentatively scheduled with some delay after each non-artificial event. If another non-artificial event occurs before the artificial event, then the artificial event is rescheduled with some delay after the latest non-artificial event. If the artificial event does occur first, then it is rescheduled again with the same delay. Alternatively, artificial events can be scheduled periodically. Artificial events can also be defined to occur conditionally, such as dependent on state or spiking rate or at a time computed in advance.
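The tentative-rescheduling scheme described above can be sketched as follows (Python; the function and its names are illustrative, not from the disclosure): an artificial event is provisionally scheduled a fixed delay after each event, and any non-artificial event occurring first pushes it back.

```python
def event_times(input_times, horizon, art_delay):
    """Return (time, kind) pairs: each event (input or artificial)
    tentatively schedules the next artificial event art_delay later."""
    out, inputs, i = [], sorted(input_times), 0
    t, t_art = 0.0, art_delay
    while t < horizon:
        if i < len(inputs) and inputs[i] <= t_art:
            t = inputs[i]                 # a non-artificial event comes first
            i += 1
            out.append((t, "input"))
        else:
            t = t_art                     # the artificial event fires
            if t >= horizon:
                break
            out.append((t, "artificial"))
        t_art = t + art_delay             # reschedule after the latest event
    return out
```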
Artificial events are often unnecessary to achieve desired behaviors in networks with substantial fan-in or fan-out connectivity because there is a substantial number of inputs to each cell each contributing input events to the receiving cell. Consequently, if a particular behavior requiring periodic events is desired, the input events themselves may be more than sufficient to satisfy that role as well.
Generally, the Model is suited to be solved in event-based simulation as described above. However, the Model may also be solved in conventional step-based simulations. Nevertheless, there are fundamentally two ways to perform this: (i) without artificial events; and (ii) with artificial events. These are, by definition, different instances of the Model. The Model operations are defined to occur at events (whether artificial or other). Thus, without artificial events, the code executing at each step should be conditioned on the occurrence of an event at that time slot (effectively no operation may occur at steps where there is no event). Alternatively, with artificial events, an artificial event can be defined to occur at every time slot. Since the closed-form solutions are available, there is no need for numerical methods regardless of whether the time between events is constant or variable. Another issue is whether the time of events is quantized. Typically, in step-based simulations, time is quantized to the step time. Thus, these instances will differ (whether significantly or not) from event-based formulations regardless of whether artificial events are used or not.
While periodic artificial events would generally require more computations and would therefore be generally less desirable, there are some potential simplifications. For example, the time Δt since the prior event may be a constant (the time interval). Because of that, the transformed state update can be simplified to a single multiplication for each state element. The infinite impulse response filters are then given by,
v′(t+Δt)=cρv′(t), (47)
u′(t+Δt)=cuu′(t), (48)
where the constants are defined as cρ=eΔt/τρ and cu=e−Δt/τu.
Another simplification is that spike anticipation may be replaced by checking if v≧{circumflex over (v)}S at each interval. The spiking condition was defined to be v≧vS. However, if the artificial events are defined to occur at a quantized time interval Δt, the spiking condition may actually be reached between intervals. As a result, it should be ensured that the spike occurs at the desired time, whether at the event before or at the event after.
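With a fixed step Δt, the per-step factors of equations (47)-(48) may be pre-computed once, so each transformed-state update reduces to a single multiplication. A minimal Python sketch with illustrative values:

```python
import math

def step_updater(tau_rho, tau_u, dt):
    """Pre-compute c_rho and c_u for a fixed step dt and return a
    one-multiplication-per-element update on the transformed state."""
    c_rho = math.exp(dt / tau_rho)   # c_rho = e^(dt/tau_rho)
    c_u = math.exp(-dt / tau_u)      # c_u = e^(-dt/tau_u)
    def step(v_t, u_t):
        return c_rho * v_t, c_u * u_t
    return step
```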
The fundamental Model behavior is controlled by the aforementioned parameters τ+, τ−, τu, v+, v−, vS, β, δ and Δu. The time-constants control the rate of decay of the voltage or current toward or away from the null-clines, which intersect the current axis at the base voltage of the regime. Coupling at events is determined by the slope of the transformation equations. For the voltage transformation, the slope is given by parameters τ+ and β, whereas for the current transformation the slope is given by the parameter δ. This means that the slope of the null-cline in the positive regime border is dependent on the positive regime time constant but this can be compensated for with β.
In the standard parameterization, only one coupling resistance parameter β may be utilized, which is common for both regimes. However, βρ may also be configured separately for each regime. Additional behavioral aspects may be achieved with separate control of the aforementioned derivative parameters {circumflex over (v)}+, {circumflex over (v)}− and ε. However, typically, the default values may be used based on the above basic parameters. For example, the parameter ε is typically set to −v−.
The typical setting for the regime threshold given above is that {circumflex over (v)}+=v+. However, the voltage null-cline is not a vertical line in the {v,u} state-space. This characteristic can be advantageous because it allows for richer behaviors. Also, alternatively, the regime threshold may be defined to be the null-cline bordering the positive regime.
Given the state transformations, the null-clines for v and u are given by the negative of the transformation variables qρ and r respectively: v=−qρ and u=−r.
v=τρβu+vρ or u=(v−vρ)/τρβ, (49)
u=−δ(v+ε) or v=−u/δ−ε. (50)
The parameter δ is a scale factor controlling the slope of the u null-cline, which may be positive (for δ<0) or negative (for δ>0). Assuming default settings, the u null-cline crosses the x-axis at v−. The parameter β is a resistance value controlling the slope of the v null-clines in both regimes. The τρ time-constant parameters control not only the exponential decays but also the null-cline slopes in each regime separately.
In an aspect of the present disclosure, the transformations control the temporal behavior of the Model. The voltage transformation offset variable qρ is defined by a linear equation depending on the recovery current. When the current is zero, the transformation is entirely due to offset vρ. Because of that, the voltage state may be shifted in the transformed state so that the base voltage state is zero. Revisiting the formula for anticipating the spike time it can be observed that the logarithm term at u=0 is,
Thus, the transformation to state x shifts and normalizes the state model to yield a time delay between zero and one in the positive regime. The information in a spike is in terms of its relative timing. From an information theoretical point of view, a normalized time can be viewed as coding an information value (or state) x having range [0,1] as,
Δt=−τ+ log x,
such that the larger the value, the shorter the time delay (response) and a value of 0 corresponds to infinite delay (never spike). Thus, the parameterization of the Model allows control of the information representation in spike timing. This aspect is particularly advantageous for computational design purposes.
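The spike-timing code can be illustrated directly: assuming the normalized relation Δt=−τ+ log x implied above, larger values map to shorter delays and a value of 0 maps to an infinite delay (Python sketch; the τ+ value is hypothetical).

```python
import math

TAU_POS = 10.0  # tau+ (illustrative)

def delay_for_value(x):
    """Latency coding a normalized information value x in [0, 1]."""
    if x <= 0.0:
        return math.inf   # a value of 0 corresponds to never spiking
    return -TAU_POS * math.log(x)
```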
Finally, it should be noted that coupling with the current state may be diminished or eliminated. The coupling is controlled by the parameter β, so if β=0, there is no need for the u variable. Similarly, there is no need for the u variable if it has no dynamics, i.e., Δu=0 and τu→∞. The coupling from voltage is governed by the parameter δ, wherein small or zero δ may also diminish or eliminate impact of the voltage state on current state. The u variable may be omitted if the coupled dynamics are insignificant because of a small β, Δu or large τu. However, the u variable provides the opportunity for richer model dynamics.
Fundamentally important biological neural behaviors may be impossible to emulate or predict with typical spiking neuron models because those models do not capture: (i) fine (continuous) timing or (ii) continuous time dynamics. Even models that are expressed in continuous time differential equation form may have no closed-form solution and thus are often approximated iteratively, for example, using the Euler method. The problem with iterative models is evident when observing how spike timing can change dramatically (e.g., by tens of milliseconds or more) merely by changing the time step resolution by a small amount (e.g., from 1 ms to 0.1 ms). While attempting to approximate such models with fine time steps may be computationally burdensome, more importantly it generally fails to account for continuous time dynamics, particularly if the model has multiple inter-dependent state variables (such as voltage and current) and multiple attractors. Thus, the goal is a continuous time dynamical neuron model that is capable of capturing biologically realistic temporal effects.
As aforementioned, the dynamical event neuron model has a state defined by variables {v,u} representing membrane potential (voltage) and recovery current respectively. A spike event may be anticipated in closed form (to any desired temporal precision) based on the current state given no further input using the conditional anticipation rule:
Δtspike=(1/α+)log [(vpeak+qC)/(v+qC)] if v>vt, and Δtspike=∞ otherwise, (53)
where
qC=1/αC(−βuC+γC), (54)
and β, vpeak, α+, γ+ are parameters introduced above. This rule is effectively an anti-leaky integrate-and-fire (ALIF) model. The value uC is a conditional parameter and may be either constant or set as a function of u. When a spike occurs, the state is updated as follows: v=vpost and u=u+upost. A more sophisticated version comprises replacing the condition v>vt with a condition that also depends on the recovery current u: (v−vt)(v−vr)>u/k, where vt and vr are depolarization and rest potentials respectively.
The future state of the model can be determined using the independent conditional update rules. These rules include propagation of state variables independently:
v(t+Δt)=(v(t)+qC)eαCΔt−qC, (55)
u(t+Δt)=(u(t)+r)e−aΔt−r, (56)
where
r=δ(v−+ε), (57)
and δ, ε, a are parameters. Again, these are continuous time closed-form equations computable to any desired temporal precision. The value v− is a conditional parameter and may be either constant or set as a function of v. The factor c=+ if v>vt and c=− otherwise. In the latter case, computation of q− requires parameters α−, γ−. When c=+, the neuron is in a depolarizing region, or ALIF mode. When c=−, the neuron is in a LIF mode, returning to rest state unless input is supplied.
When there is an input, the state should be updated to the time of the input event using the state propagations and then updated to account for the input v=v+i. The state variables should also generally be bounded.
An event-based algorithm to compute the model includes processing two events: a synaptic input event and an anticipated spike event.
Upon synaptic input event, the following steps may be executed. The time since the last state update Δt may be determined. If the voltage condition is met, let c=+ and otherwise c=−, qC may be computed using equation (54), and the voltage may be updated using equation (55). Then, r may be computed using equation (57) and the recovery current may be updated using equation (56). After that, the input may be added to the voltage. If the voltage condition is met, let c=+ and otherwise c=−, qC may be re-computed using equation (54), and the anticipated spike time may be re-computed using equation (53). Finally, the anticipated spike event may be rescheduled.
It should be noted that for re-computing qC, the quantity i to be added to the potential may differ from a discrete time simulation operation since there is no time step here. The i value that would be used for discrete iterative simulation may be scaled by the time step to obtain an equivalent value for event-based modeling. Optionally, these algorithmic steps can be executed at least every predefined period T, if there is no input.
Upon anticipated spike event, the following steps may be executed. The state may be reset or updated (optional/as applicable). If the voltage condition is met, let c=+ and otherwise c=−, qC may be re-computed using equation (54), and the anticipated spike time may be re-computed using equation (53). After that, the anticipated spike event may be rescheduled.
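The synaptic-input event steps above may be sketched as one routine (Python). The parameter values are hypothetical, the anticipation expression follows the reconstructed conditional rule together with equations (54)-(57), and constant conditional parameters uC and v− are assumed.

```python
import math

# Hypothetical parameter values; symbol names follow the text.
ALPHA = dict(pos=0.1, neg=-0.05)   # alpha_c per condition
GAMMA = dict(pos=4.0, neg=3.0)     # gamma_c per condition
BETA, DELTA, EPS, A = 1.0, 0.1, 60.0, 0.02
V_T, V_PEAK = -40.0, 30.0          # depolarization threshold, spike peak
U_COND, V_COND = 0.0, -60.0        # constant conditional parameters uC, v-

def on_input(v, u, dt, i):
    """Process a synaptic input event: propagate the state over dt,
    add the input, and re-anticipate the spike time."""
    c = "pos" if v > V_T else "neg"
    q = (-BETA * U_COND + GAMMA[c]) / ALPHA[c]   # eq. (54)
    v = (v + q) * math.exp(ALPHA[c] * dt) - q    # eq. (55)
    r = DELTA * (V_COND + EPS)                   # eq. (57)
    u = (u + r) * math.exp(-A * dt) - r          # eq. (56)
    v = v + i                                    # apply the input
    c = "pos" if v > V_T else "neg"
    q = (-BETA * U_COND + GAMMA[c]) / ALPHA[c]
    if c == "pos":                               # ALIF mode: spike anticipated
        dt_spike = math.log((V_PEAK + q) / (v + q)) / ALPHA["pos"]
    else:                                        # LIF mode: no spike coming
        dt_spike = math.inf
    return v, u, dt_spike
```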
In order to configure the model parameters, the dynamical neuron parameters described by Izhikevich can be considered: membrane capacitance C, threshold potential vt, rest potential vr, spike peak potential vpeak, factors k and b depending on the neuron's rheobase and input resistance, reset voltage after a spike c, recovery time constant a, and net current during a spike d.
For the dynamical event model, set:
αC=k(ΔvC)/C, (58)
β=1/C, (59)
γC=−vCαC. (60)
It should be noted that γC depends on αC and vC, wherein αC depends on ΔvC. The choice of ΔvC may depend on the condition, i.e., whether c=+ or c=−. Generally, if c=+, then,
Δv+=({circumflex over (v)}−vr);v+=vt, (61)
and otherwise,
Δv−=({circumflex over (v)}−vt);v−=vr. (62)
However, it is not necessary to use the current value {circumflex over (v)}=v to compute the above parameters. Instead, constant values for {circumflex over (v)} may be used for simplicity, namely for c=+, the mean of vpeak and vt may be used. For c=−, the mean of vr and vt may be used, for example.
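The parameter mapping of equations (58)-(62), with the constant {circumflex over (v)} choices just described, can be sketched as follows (Python; the numeric Izhikevich values are illustrative):

```python
# Illustrative Izhikevich simple-model parameters (regular-spiking-like).
C, K = 100.0, 0.7                      # membrane capacitance, factor k
V_R, V_T, V_PEAK = -60.0, -40.0, 35.0  # rest, threshold, spike peak

def event_model_params(c):
    """Return (alpha_c, beta, gamma_c) for condition c in {"pos", "neg"},
    using constant v-hat: mean(v_peak, v_t) for c=+ and mean(v_r, v_t)
    for c=-."""
    if c == "pos":
        v_hat, v_c = 0.5 * (V_PEAK + V_T), V_T
        dv = v_hat - V_R               # eq. (61)
    else:
        v_hat, v_c = 0.5 * (V_R + V_T), V_R
        dv = v_hat - V_T               # eq. (62)
    alpha = K * dv / C                 # eq. (58)
    beta = 1.0 / C                     # eq. (59)
    gamma = -v_c * alpha               # eq. (60)
    return alpha, beta, gamma
```

With these values, α+ comes out positive (anti-leaky growth above vt) and α− negative (leaky decay below it), as the two modes require.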
The dynamical event model is a true continuous time model because there is no dependence on time step or time resolution. On the other hand, computing the Izhikevich simple model accurately in continuous time is problematic because finding a closed-form solution for the two inter-dependent differential equations is not possible. A typical implementation of the Izhikevich simple model is based on discrete iterative simulation (e.g., the Euler numerical method). While a quantized lookup table may be used, by definition, this is not continuous time either.
In an aspect, the dynamical event model can be configured to produce the behavior of a theoretical continuous time simple model. The instantaneous dynamics can be examined in continuous time. Equation (53) can be rearranged in the following form:
(v+q+)=(vpeak+q+)e−α+Δtspike. (63)
If a transformation on the voltage, v′=v+q+ is used, then,
v′=v′peake−α+Δtspike. (64)
Taking the Laplace transform for time t=Δtspike, manipulating and taking the inverse Laplace transform yields,
Substituting for v′ and then q+ gives,
Further substituting the parameters gives the differential equation for potential according to the Izhikevich Simple Model (assuming no input),
Cdv/dt=k(v−vr)(v−vt)−u. (67)
Thus, instantaneously, the continuous time behavior can be achieved. It can be shown that the same is true for the potential when c=− and for the recovery current.
In the dynamical model, the voltage state is only updated when there is an event (input or spike), whereas the iterative simple model has updates every millisecond. Both are subject to exactly the same spike inputs. This example was performed without a minimum update time. The similarity can be increased even further by using a minimum update time; the shorter the minimum update time, the more similar the results.
While the same or similar behavior (result) can be achieved, it is important to understand that the models are completely different. In the dynamical event model, the state variables are independent and the behavior is conditional. Essentially, the simple model cannot be solved (executed) in continuous time. On the other hand, the dynamical event model is an entirely continuous time model.
The dynamics of the Model will be explained in the present disclosure in a manner in which the behavior can be well understood. The richness of the Model's behavioral capability derives from event-based formulation, coupling at events and decoupling between events, and division of dynamics into regimes. The general Model is unique in these respects and the resulting behavioral aspects reflect this.
The instantaneous state trajectory is given by the differential equations. Substituting for the transformation variables (with the typical setting ε=−v−), the state trajectory in the negative regime is,
{right arrow over (s)}=<(v−v−)/τ−−βu,−[u+δ(v−v−)]/τu>. (68)
The state position vector relative to the rest origin <v−,0> is,
{right arrow over (e)}=<v−v−,u>. (69)
The angle between the state trajectory and the position vector is given by,
cos θ=({right arrow over (s)}·{right arrow over (e)})/(∥{right arrow over (s)}∥ ∥{right arrow over (e)}∥). (70)
The projection of the state trajectory onto the position vector gives the component of the trajectory toward rest, or the decay component. The dot product yields the component scaled by the magnitude of the position vector,
{right arrow over (s)}·{right arrow over (e)}=(1/τ−)[(v−v−)2−(τ−/τu)u2]−(β+δ/τu)u(v−v−). (71)
The left-hand term in equation (71) is negative, indicating a direction toward the rest origin. A simple conceptual case to visualize is if τu=−τ−. Then the left-hand term of equation (71) in square brackets is equal to ∥{right arrow over (e)}∥2. However, since the left-hand term is scaled by the time constant 1/τ−, it can be observed that decay toward the origin is possible.
To avoid decay toward the origin, the second term in the right-hand side in equation (71) may need to be positive. To achieve the positive u null-cline slope, it is known that δ should be negative. However, as long as β>−δ/τu, the component β+δ/τu is positive. Nevertheless, u(v−v−) is only negative if one of u or (v−v−) is negative (second or fourth quadrant about the rest origin). Thus, to counter the decay toward the origin, there are several possibilities: (i) increase the magnitude of τ−; (ii) increase β+δ/τu; (iii) move the orbit inward toward the rest origin so that the term (v−v−)2+u2 may be less than u(v−v−); and (iv) alter the inter-event interval.
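The decay component can be evaluated numerically as the dot product of the instantaneous trajectory and the position vector in the negative regime. The sketch below (Python; illustrative parameter values, ε=−v− assumed) shows decay on the positive-u side above v− and outward motion in the second quadrant when β+δ/τu is positive:

```python
TAU_NEG, TAU_U = -20.0, 30.0   # tau- (negative), tau_u (illustrative)
BETA, DELTA = 1.0, -0.1        # chosen so beta + delta/tau_u > 0
V_NEG = -60.0                  # rest origin <v-, 0>

def decay_component(v, u):
    """Dot product s . e of the instantaneous trajectory and the position
    vector about rest; negative values indicate decay toward the origin."""
    dv = (v - V_NEG) / TAU_NEG - BETA * u          # dv/dt (negative regime)
    du = -(u + DELTA * (v - V_NEG)) / TAU_U        # du/dt with eps = -v-
    return dv * (v - V_NEG) + du * u
```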
Thus, it is possible to not only decrease the decay such that the orbit does not decay, but also to create an oscillation that spirals outward and spikes. For example, adjusting δ yields the state paths depicted in
It should be noted that solving for {right arrow over (s)}·{right arrow over (e)}=0 yields,
v=v−+u[(β+δ/τu)±√{square root over ((β+δ/τu)2+4/(τ−τu))}]τ−/2, (72)
which is a line in the state space. But, this does not prevent decaying, sustained or exploding orbits about rest because {right arrow over (s)}·{right arrow over (e)} may vary to positive and negative values.
To understand item (iv) above and why inter-event intervals matter, the present disclosure delves deeper into the matter of momentary coupling at events.
Events may influence the behavior of the Model whether they have input or not. The reason for this is that the Model dynamics variables are coupled at events and decoupled between events as defined above. Typical parameterizations and spaces were analyzed above in which the voltage and current evolution change almost imperceptibly over short periods of time regardless of when events occur absent significant input.
In the negative regime, with typical settings, voltage and current state variables may decay toward the respective null-clines. The individual states would reach those separate null-clines after an infinite inter-event interval. However, since both state variables change, the intercepts on the null-clines change. This means that while voltage decays toward some q− (t0) defined at a prior event, the null-cline has changed to q− (t1) at the time of a next event.
As a result, oscillation is possible with the Model entirely within the negative regime due only to sufficient inter-event intervals. Oscillations can occur in individual state variables or even in both, if the first and second states cross both null-clines. Such oscillation may occur even in the absence of parameters conducive to orbit with short inter-event intervals.
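A minimal event-based sketch can illustrate the moving null-cline intercepts. The update below assumes a closed-form step in which the transformation variables q− and r are fixed at each event from the then-current state (momentary coupling) and held constant until the next event; the exact update form is an assumption based on the equations in the text:

```python
import math

def step(v, u, dt, tau_neg=-20.0, tau_u=10.0, beta=1.0, delta=-0.25, v_neg=-70.0):
    """One inter-event interval: decay toward intercepts fixed at the event."""
    q = -tau_neg * beta * u - v_neg      # voltage intercept frozen at this event
    r = delta * (v - v_neg)              # current intercept frozen at this event
    v_new = (v + q) * math.exp(dt / tau_neg) - q
    u_new = (u + r) * math.exp(-dt / tau_u) - r
    return v_new, u_new, -q              # -q is the voltage null-cline intercept

v, u = -65.0, 1.0
targets = []
for _ in range(3):
    v, u, target = step(v, u, 5.0)
    targets.append(target)
# The intercept toward which the voltage decays shifts from event to event,
# because u (and hence q-) has changed by the time of the next event.
```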
However, inter-event intervals contribute to the shape of more conventional orbits discussed in the previous section as well. The underlying reason is the same as above except when combined with an inherent orbit effect, a much smaller inter-event interval effect can be enough to alter the Model's behavior to obtain desired properties such as sub-threshold oscillations.
If inter-event intervals are considered, then the trajectory may be defined based on the state change over an inter-event interval Δt,
{right arrow over (s)}=<(v+q−)l−,(u+r)lu>, (73)
which is subtly but importantly different from (68), and thus,
{right arrow over (s)}·{right arrow over (e)}=(v−v−)2l−+u2lu−u(v−v−)(βτ−l−−δlu), (74)
where l−=(eΔt/τ−−1)/Δt and lu=(e−Δt/τu−1)/Δt. (75)
Since τ−<0 typically and τu>0, then l−<0 and lu<0 for Δt>0 and the first term of the projection onto {right arrow over (e)} in equation (74) is thus negative, decaying an orbit toward the rest origin. A simple conceptual case to visualize is if τu=−τ−=τ. Then l−=lu=l, and
{right arrow over (s)}·{right arrow over (e)}=l[((v−v−)2+u2)+u(v−v−)(βτ+δ)]. (76)
It can be observed from equation (76) that the second term u(v−v−)(βτ+δ) can counteract the first term ((v−v−)2+u2) if βτ+δ is sufficiently large. Moreover, it can be observed that since l can be factored out, Δt controls the angle θ between the trajectory vector and the position vector. The larger Δt is, the more negative l becomes and the larger the trajectory component in the radial direction is. As a result, when the second term u(v−v−)(βτ+δ) has opposite sign to the first term ((v−v−)2+u2), then a large inter-event interval Δt can push a decaying state back out into an orbit or, if sufficient, to spiral outward.
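Equation (74) can be evaluated directly to see the effect of the inter-event interval. The factors below are taken as l−=(eΔt/τ−−1)/Δt and lu=(e−Δt/τu−1)/Δt, a form inferred from the small-Δt limit of equation (74) and the stated signs; this is an assumption, not the patent's verbatim definition:

```python
import math

def proj(v, u, dt, tau_neg=-20.0, tau_u=10.0, beta=1.0, delta=-0.25, v_neg=-70.0):
    """Projection s.e over an inter-event interval dt, per equation (74)."""
    l_neg = (math.exp(dt / tau_neg) - 1.0) / dt   # l- < 0 for tau- < 0
    l_u = (math.exp(-dt / tau_u) - 1.0) / dt      # lu < 0 for tau_u > 0
    x = v - v_neg
    return x * x * l_neg + u * u * l_u - u * x * (beta * tau_neg * l_neg - delta * l_u)
```

For very small Δt the projection approaches the instantaneous value, while larger Δt changes the angle between the trajectory and position vectors, as discussed above.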
The first case is illustrated in
It should be noted that solving for {right arrow over (s)}·{right arrow over (e)}=0 yields,
v=v−+u[(βτ−l−−δlu)±√{square root over ((βτ−l−−δlu)2−4l−lu)}]/2l−, (77)
which remains a line in the state space. However, this does not prevent decaying, sustained or exploding orbits about rest because {right arrow over (s)}·{right arrow over (e)} may vary to positive and negative values.
At first glance, one might assume that, absent further input, the Model will return toward rest when in the negative regime and eventually spike when in the positive regime. However, this is not generally true. In fact, the Model can spike from the negative regime and return to rest from the positive regime even if there is no further input.
The behavior of the Model in terms of recovery current dynamics is straightforward. The recovery current may decrease above the current null-cline −r and increase below it (assuming τu>0, or vice versa if τu<0). The voltage dynamics are more complex, however, because they depend on the regime and thus on the regime threshold, and the regime threshold is not generally the same as the voltage null-cline (although it can be defined so).
The typical definition of the regime threshold is,
{circumflex over (v)}+=v+, (78)
where v+ is a constant. It can be assumed that vρ′=vρ in the transformation variables (null-cline expressions). Accordingly, the voltage null-clines cross v+ at u=0 for the positive null-cline −q+ and at u=(v+−v−)/τ−β for the negative null-cline −q−. These two null-clines carve out two sub-regimes with particular properties. In these sub-regimes, one in the positive regime and one in the negative regime, voltage returns toward the regime threshold in finite, rather than infinite time. Moreover, the voltage can actually cross over v+ into the opposite regime absent any subsequent input, whether from the positive regime to the negative regime or vice versa. Bordering these finite regimes, at the threshold voltage, the voltage derivative is discontinuous (although the sign may remain unchanged). For these reasons, these two regimes can be denoted as finite return regimes.
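The crossing points described above can be written as small helper functions. The null-cline forms −qρ=τρβu+vρ follow from the transformation variables, and the parameter values are the nominal ones used later in this disclosure (τ−=−20 ms, τ+=2 ms, β=1, v+=−60 mV, v−=−70 mV):

```python
def nullcline_pos(u, tau_pos=2.0, beta=1.0, v_pos=-60.0):
    """Positive regime voltage null-cline -q+ as a function of u."""
    return tau_pos * beta * u + v_pos

def nullcline_neg(u, tau_neg=-20.0, beta=1.0, v_neg=-70.0):
    """Negative regime voltage null-cline -q- as a function of u."""
    return tau_neg * beta * u + v_neg

def crossing_u_neg(v_pos=-60.0, v_neg=-70.0, tau_neg=-20.0, beta=1.0):
    """Value of u at which the negative null-cline crosses v+."""
    return (v_pos - v_neg) / (tau_neg * beta)
```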
In the positive regime, the voltage tends away from the positive regime null-cline given τ+>0. However, voltage tending away from the positive regime null-cline is not the same as voltage heading away from the regime threshold v+. Rather, there is only a particular region of u in which they are equal under the typical regime threshold definition.
Under the typical regime threshold definition, there exists a positive finite return regime that is a subspace of the positive regime in which the state tends toward the negative regime.
In the positive regime v>v+ and for the state to return toward the negative regime it is required that dv/dt<0 or,
v<−q+. (79)
Thus,
v<τ+βu+v+. (80)
Together, these two conditions define a region within the positive regime in which the voltage state tends toward the negative regime rather than spiking.
Since τ+>0 and β>0, then
u>(v−v+)/τ+β, (81)
which is a non-null region delineated by the line with slope 1/τ+β and offset −v+/τ+β as long as |τ+β|>0.
For example, if τ+=1 ms, β=1 and v+=−40 mV, then the boundary is u=v+40. Thus, at v=−30 mV (i.e., above v+), any state with u>10 may yield a decay toward the negative regime. For example, at u=20, −q+=−20.
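The boundary of the positive finite return regime in this example can be checked directly; the helpers below encode conditions (80) and (81) with the example values τ+=1 ms, β=1, v+=−40 mV:

```python
def boundary_u(v, tau_pos=1.0, beta=1.0, v_pos=-40.0):
    """u on the boundary line u = (v - v+)/(tau+ * beta) of equation (81)."""
    return (v - v_pos) / (tau_pos * beta)

def tends_negative(v, u, tau_pos=1.0, beta=1.0, v_pos=-40.0):
    """True if dv/dt < 0, i.e. v < tau+ * beta * u + v+ (equation (80))."""
    return v < tau_pos * beta * u + v_pos
```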
In the positive finite return regime, the state tends into the negative regime in finite time.
In the positive finite return regime, v+<v<−q+. To arrive at a state vf in the negative regime, vf<v+<v. Thus,
vf+q+<v+q+<0. (82)
As a result, for a finite vf and finite non-zero τ+,
Δt=τ+ ln [(vf+q+)/(v+q+)]<∞. (83)
Following from the example above, q+=−u+40. At u=5, to reach vf=−45 mV in the negative regime from v=−35 mV in the positive regime, Δt≅1.1 ms.
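Under the assumed closed-form positive regime solution v(t)=(v+q+)eΔt/τ+−q+, the crossing time follows from inverting the exponential. The sketch below uses an assumed state u=10 with the example parameters (τ+=1 ms, β=1, v+=−40 mV), for which the return from v=−35 mV to vf=−45 mV takes about 1.1 ms; the solution form is an interpretation of the text, not its verbatim formula:

```python
import math

def time_to_reach(v0, vf, u, tau_pos=1.0, beta=1.0, v_pos=-40.0):
    """Time for the voltage to go from v0 to vf in the positive regime,
    assuming v(t) = (v0 + q+) * exp(t/tau+) - q+ with q+ = -tau+*beta*u - v+."""
    q = -tau_pos * beta * u - v_pos
    return tau_pos * math.log((vf + q) / (v0 + q))

dt = time_to_reach(-35.0, -45.0, 10.0)   # about 1.1 ms under these assumptions
```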
On the positive regime side of the threshold, v=v++ε+, where ε+ is a positive voltage offset and any state with sufficient u>0 decays into the negative regime over time Δt+,
Δv+=(ε+−τ+βu)(eΔt+/τ+−1). (84)
On the negative regime side of the threshold, v=v+−ε−, where ε− is a positive voltage offset and any state with sufficient u decays further into the negative regime over time Δt−, i.e.,
Δv−=(v+−v−−ε−−τ−βu)(eΔt−/τ−−1). (85)
Thus, the difference at the discontinuity, in the direction of the derivative, is,
Δv−−Δv+=−Δv±k−+βu(τ−k−+τ+k+)+ε−k−−ε+k+, (86)
where,
k−=1−eΔt−/τ−, k+=eΔt+/τ+−1, (87)
and Δv±=(v+−v−). It should be noted that Δv± and kρ are always positive with the considered parameters. As ερ→0,
Δv−−Δv+=−Δv±k−+βu(τ−k−+τ+k+). (88)
To remove the discontinuity would require,
However, a discontinuity is of interest. The discontinuity is a negative quantity unless the βu term is sufficiently large and (τ−k−+τ+k+) is a positive quantity. Typically, (τ−k−+τ+k+) is positive even though the magnitude of τ− is typically larger than that of τ+, because k− is typically much smaller than k+ for significant Δtρ. A negative step discontinuity means the voltage decrease accelerates across the regime threshold.
However, the discontinuity may be positive in the direction of the voltage derivative, due to sufficiently large βu, large inter-event interval Δtρ, higher u, or longer time constant τ+ (or shorter time constant τ−).
A constant excitatory input sufficient to overcome voltage leak in the negative regime may not be sufficient to overcome leak in the positive regime. Thus, there is a potential for a voltage cycle under constant excitatory input. If the u null-cline intersects the region of the cycle, there is even the potential for a limit cycle.
In the negative regime, the voltage tends toward the negative regime null-cline given τ−<0. However, voltage tending toward the negative regime null-cline is not the same as declining away from the regime threshold. Rather, there is only a particular region of u in which they are equal under the typical regime threshold definition.
There exists a negative finite return regime which is a subspace of the negative regime in which the state tends toward the positive regime if τ−β is negative.
In the negative regime, v<v+, and a return toward the positive regime requires dv/dt>0, i.e., v<−q−. For there to be a region in which −q−>v+,
τ−βu>(v+−v−). (90)
If τ−β<0, then,
u<(v+−v−)/τ−β, (91)
which is a non-null region as long as τ−β>−∞. The region is delineated by a zero-slope line that crosses the voltage threshold v+ at u=(v+−v−)/τ−β.
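The negative finite return region of equation (91) is a horizontal band in <v,u> space. A small check with the nominal parameters (τ−=−20 ms, β=1, v+=−60 mV, v−=−70 mV) confirms the boundary at u=(v+−v−)/τ−β=−0.5:

```python
def in_negative_finite_return(u, v_pos=-60.0, v_neg=-70.0, tau_neg=-20.0, beta=1.0):
    """True if u is below the boundary of equation (91); requires tau- * beta < 0."""
    assert tau_neg * beta < 0
    return u < (v_pos - v_neg) / (tau_neg * beta)
```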
The derivative of the voltage in the negative finite return regime will thus be positive. However, merely because the state tends toward the positive regime, does not mean it will reach it.
At the negative regime side of the threshold, v=v+−Δv−, any state with sufficient u tends into the positive regime with voltage derivative,
whereas at the positive regime side of the threshold v=v++Δv+,
The acceleration of the voltage across this barrier in the direction of the derivative is,
g(Δv)=τ+Δv+−τ−(v+−v−−Δv−). (94)
At the limit Δv−→0 and Δv+→0 for τ−<0, g(Δv)<0 so the state is generally accelerating toward vS.
In the negative finite return regime, the state tends into the positive regime in finite time for finite negative τ−.
In the negative finite return regime, v<v+<−q−. To arrive at a state vf in the positive regime, vf>v+>v. Thus,
v+q−<vf+q−. (95)
For the negative regime, v+q−<0, but vf<−q− may be chosen such that, as a result, for negative finite non-zero τ−, the following holds,
Following from the example above, at u=−10, for v=−45 mV and vf=−35 mV, Δt≅0.5 ms.
However, as described above, there are alternative definitions possible for the regime threshold. For example, the regime threshold may be defined to follow the voltage null-cline above u=0, or, {circumflex over (v)}+=max(v+, q+). This would eliminate the positive finite return regime but would not eliminate the negative finite return regime. An alternative definition would be to set {circumflex over (v)}+=max(q−, q+) so that the regimes are split at the right-most voltage null-cline. However, given the typical slopes, the positive and negative voltage null-clines cross below u=0, and thus the voltage derivative would still generally not be continuous. Rather, the situation would be reversed from above since the positive voltage null-cline would be further left of the regime threshold below u=0. There are various other alternatives that a modeler may define, and it should be noted that the general analysis methods discussed above may be applied to dynamics resulting from other or atypical settings as well.
The finite return regimes are results of a regime threshold that is different from the voltage null-clines. To remove the positive return regime, the regime threshold may be made dependent on the second state variable, namely equal to the positive regime null-cline when that null-cline applies and otherwise equal to the default,
However, this still leaves the negative finite return regime. While both finite return regimes can be useful, removing the negative finite return regime will prevent the cell from firing for arbitrarily large voltage states if the current state is sufficiently negative. The modification to the regime threshold is,
This is due to the negative voltage null-cline crossing the voltage null-cline discontinuity at v+′. However, this can also be controlled. To remove both finite return regimes,
Further freedom of design may be obtained by using the generalized version of the voltage null-cline definition. It should be recalled that the standard model is defined with the transformations,
qρ=−τρβu−vρ′, (100)
r=δ(v+ε). (101)
However, since the voltage dynamics are governed by,
the parameterization may be limiting in that the voltage null-cline slopes are related by the single slope factor β. To remove this limitation, a more general definition is,
qρ=−βρu−vρ′, (103)
where there is a subtle change from βτρ to βρ. The generalized definition is related to the standard definition by the setting,
βρ=τρβ. (104)
This generalization may be used with or without finite return regime controls discussed above.
The model inherits the computational benefits of the anti-leaky-integrate-and-fire (ALIF) neuron model. It has been shown in the present disclosure that the second state variable (u) can be omitted while retaining a near equivalent behavior for some particular parameterizations with low coupling (small β and δ) and high second-state time constant (large τu) by scaling the voltage time constant to compensate for the effects of the second-state null-cline attractor on the voltage decay and rise times.
The reason for this is that the solution to the model dynamics (if continuous coupling is assumed) is of the form,
where a=√{square root over ((τρ+τu)2+4βδτρ2τu)}, b=(a−τρ+τu)/2τuτρ, c=(−a−τρ+τu)/2τuτρ, v′=v−vρ′ and u′=u+δε. If the coupling is weak and τu is large, then after approximation, one might propose a≅τu and b≅1/τρ while c≅0. Accordingly,
v′(t)≅[v′(0)−u′(0)βτρ]et/τρ. (106)
The time to reach such a state (for example spiking or rest or any arbitrary v(t)) is given by,
t=τρ ln [v′(t)/(v′(0)−u′(0)βτρ)]. (107)
To achieve approximately the same effect with only one variable (i.e., only v instead of v and u), the time constant τρ can be adjusted to τρ′ to obtain equivalent time t or,
τρ′=τρ ln [v′(t)/(v′(0)−u′(0)βτρ)]/ln [v′(t)/v′(0)]. (108)
It can be recalled that by default ε=−v−, and thus for δ<0, the denominator in the logarithm on the right-hand-side of equation (108) is larger, thus making the resulting fraction smaller and the logarithm smaller. For this reason, τρ′ needs to be smaller (faster) than τρ to account for the removal of the second state that was slowing the effective behavioral time responses.
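The compensation can be sketched numerically. Under the weak-coupling approximation, the time for v′ to reach a target V from v′(0) is t=τρ ln[V/(v′(0)−u′(0)βτρ)], and a one-variable model v′(t)=v′(0)et/τρ′ reaches V at the same time if τρ′=t/ln[V/v′(0)]. The numbers below (v′(0)=5, u′(0)=−17.5 from u=0, δ=−0.25, ε=70, target V=90, τρ=2) are assumed for illustration only:

```python
import math

def adjusted_tau(v0p, u0p, V, tau, beta=1.0):
    """Single-variable time constant tau' matching the two-variable time to V,
    under the weak-coupling approximation v'(t) ~ [v'(0) - u'(0)*beta*tau]*e^(t/tau)."""
    t = tau * math.log(V / (v0p - u0p * beta * tau))   # two-variable time to V
    return t / math.log(V / v0p)                       # tau' for one variable

tau_prime = adjusted_tau(5.0, -17.5, 90.0, 2.0)
# tau_prime comes out smaller (faster) than tau, consistent with the argument above
```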
Certain aspects of the present disclosure support the model definition and basic model dynamics resulting from negative (leaky-integrate-and-fire) and positive (anti-leaky-integrate-and-fire) regimes as well as momentary coupling at events. The present disclosure also provides additional dynamics features including fundamental aspects driving oscillatory or resonating behavior resulting from the model definition including the momentary coupling. Following this, it will be described in the present disclosure how these dynamics can generate a variety of biologically-consistent cell behaviors.
In this section, the present disclosure explains how the dynamics aspects described above, and similar dynamics, allow the Model to be designed to exhibit a rich variety of behaviors, including responses to spike-timing patterns, sustained or step inputs, ramps and other inputs often used to characterize biological cell behavior types. Specific parameterization examples will also be provided. However, these are only examples, as there are generally various ways to design the Model to exhibit particular behavioral characteristics. Yet, particular examples can be illustrative in demonstrating how particular parameters govern or influence behavioral differences according to the dynamics principles described above. Given the flexibility of the Model, it would be impractical to consider all combinations and cases, and thus the present disclosure is focused on a demonstrative set of behaviors and a subset of illustrative means to achieve each of those behaviors. Similarly, the behaviors are not limited to the specific dynamics described in the previous chapter, but also include similar or related dynamics according to the general principles discussed there.
For convenience, a nominal (arbitrary) cell parameterization will be used as a reference or baseline. To demonstrate a variety of responses to inputs, examples will typically use a parameterization and configuration where {circumflex over (v)}+=v+, {circumflex over (v)}−=v−, ε=−v−, τ−<0, τ+>0 and τu>0 with artificial events at a nominal periodicity. Nominal settings without loss of generality can be: τ−=−20 ms, τ+=2 ms, τu=10 ms, v+=−60 mV, v−=−70 mV, vS=30 mV, β=1, δ=−0.25 and Δu=0, and event periodicity of 1 ms can be considered. However, there is no particular reason for selecting those nominal parameters or settings over others. Other parameterizations can be used to obtain similar or even the same behaviors. The purpose of choosing a specific nominal example is only to show how various behaviors can be achieved with (or without) changes to particular parameters. In other words, the purpose of choosing the specific nominal example is to address whether a cell with the same parameters can demonstrate multiple behaviors, and whether a behavior change can be achieved in multiple/different ways (e.g., by a change in one parameter or an alternative parameter or a combination of parameters or by a change in input).
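For reference, the nominal parameterization above can be collected in one structure (units are ms and mV as stated; the key names are, of course, illustrative):

```python
NOMINAL = {
    "tau_neg": -20.0,    # negative regime voltage time constant (ms)
    "tau_pos": 2.0,      # positive regime voltage time constant (ms)
    "tau_u": 10.0,       # current (second state) time constant (ms)
    "v_pos": -60.0,      # positive regime reference voltage v+ (mV)
    "v_neg": -70.0,      # negative regime reference voltage v- (mV)
    "v_spike": 30.0,     # spike peak voltage vS (mV)
    "beta": 1.0,         # coupling slope factor
    "delta": -0.25,      # current coupling parameter
    "delta_u": 0.0,      # reset current offset
    "event_period": 1.0, # artificial event periodicity (ms)
}
```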
Model behaviors can be produced at any desired time scale because time units in the definition of the Model are arbitrary. Therefore, without loss of generality, behaviors on abstract time scales can be discussed.
The present disclosure examines a set of demonstrative behaviors in response to a variety of input contexts and in terms of illustrative parameter design. The behaviors are organized according to aspects of synchrony or timing, aspects of accommodation or excitability, aspects of spiking patterns under sustained input or effectively tonic and phasic spiking and bursting, and dynamics of rest or reset orbits under sustained inputs.
Since inputs are applied at events, Dirac delta functions can be used as a fundamental basis for input. Together, multiple delta function inputs generally comprise spike timing patterns (whether the inputs to the post-synaptic cell are from the same synapse or multiple synapses). A set of representative behaviors is examined in response to such patterns. Although characterized here according to such inputs, the behaviors are relevant in other input contexts as well. Delta function inputs are just a convenient means for elemental analysis in discrete time.
The Model's positive regime dynamics incur spike latency. The positive regime time constant τ+ determines the nominal latency without coupling. With coupling, the current dynamics interact depending on whether the current is above or below the null-cline. The only requirement for incurring spike latency is to move the state into the positive regime at an event (i.e., to spike). From a rest state, with a Dirac delta function input, to enter the positive regime only requires the magnitude contribution of input in voltage terms Δv to satisfy Δv>v+−v−. In general, the smaller the initial amount by which the state moves into the positive regime is, the longer the spike latency is.
For purposes of comparison, the spike trace and state trajectory are depicted in
Thus, the degree of coupling influences spike latency. Decreasing the coupling decreases spike latency. However, decreasing the coupling periodicity would also have the effect of decreasing the latency. One could also achieve a decreased latency simply by shortening the positive regime time constant τ+ or by decreasing the amount of input so that the state enters the positive regime by a smaller amount initially. In summary, there are multiple ways to obtain a particular latency and there is a freedom to design a Model for desired behavior even if some elements are constrained by other design goals.
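The relationship between the entry margin, the time constant, and spike latency can be sketched with the assumed closed-form positive regime solution (the latency formula is an inference from the stated dynamics, not the patent's verbatim expression):

```python
import math

def spike_latency(v0, u=0.0, tau_pos=2.0, beta=1.0, v_pos=-60.0, v_spike=30.0):
    """Time from entering the positive regime at v0 to reaching vS,
    assuming v(t) = (v0 + q+) * exp(t/tau+) - q+ with q+ = -tau+*beta*u - v+."""
    q = -tau_pos * beta * u - v_pos
    return tau_pos * math.log((v_spike + q) / (v0 + q))
```

A smaller entry margin (v0 just above v+) gives a longer latency, and a shorter τ+ gives a shorter latency, consistent with the discussion above.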
This flexibility in the Model is an advantage because a cell behavior can be designed using various dimensions: input, cell parameters, and events (including artificial events). Moreover, this flexibility applies not only to spike latency but to model behaviors and dynamics in general. As aforementioned, these dimensions are all independently controllable elements. Not only are inputs and cell parameters distinct dimensions, but events themselves, regardless of whether there is associated input or output, control coupling and thus influence dynamics.
Sub-threshold oscillation may occur after a spike because of the return state. Post-spike oscillation may be set in motion as a result of reset conditions (where the voltage and current state are reset after spike) or due to the trajectory of the state to reach the spike condition (because the reset of the current state is an offset) or a combination of both. As discussed above, reset conditions can result in a rest orbit that converges to or diverges from a rest state (rest origin).
This dynamic is the underlying feature that enables resonator behavior as exhibited in an example 3600 in
A decaying sub-threshold oscillation driving resonator behavior can be seen in the state trajectory near rest.
In contrast, integrative behavior illustrated in an example 3800 in
The underlying coupling feature also enables a behavior that may be termed as an apparent variability in the depolarization threshold due to inhibitory input. Inhibitory input, as with excitatory input, can be used to disturb the Model state, moving it into a sub-threshold oscillation. If a subsequent excitatory input occurs synchronously (coincident) with the oscillation, the effect can be reinforced and give the impression that the “threshold” of the cell has changed.
The effect of input magnitude and input timing on the state and thus spiking behavior can be controlled by designing the orbits in terms of the shape in <v,u> space (e.g., circular or elliptical) and time constants controlling the trajectory velocity. The former influences invariance with respect to timing (bandwidth) and magnitude, and the latter influences the primary resonant center frequency.
Above, the oscillations corresponded to sub-threshold state trajectories. However, trajectories are not constrained to the negative regime. If the rebound from an inhibitory input is sufficient, it can induce a spike. This may require a large inhibitory input or multiple inhibitory inputs to move the state into a path of outward spiral depending on parameters. A burst can be designed by increasing current dynamics (the slope of the current null-cline).
Moreover, the very same underlying feature may enable bi-stability behaviors. Once a burst is underway, the state dynamics are in an orbit (which passes through the reset state(s)). In order to stop the burst, an input may be needed to knock the state out of orbit. In an aspect, either type of input (excitatory or inhibitory) can be suitable.
In an aspect of the present disclosure, the parameters can be also modified according to the aforementioned analysis in the previous chapter to counter the decay and maintain the orbit in the negative regime. In addition, two or more of the methods described in that chapter can be used in conjunction.
A variety of demonstrative dynamics of the Model, including decaying, sustained and exploding orbits, is presented above. A cell can transition between these types of cycles due to coupling or events (whether input, output or artificial). For example, above, a sub-threshold oscillation was demonstrated after a spike (output event), and a decaying orbit (to rest) was demonstrated after an excitatory input (input event). In general, the change in dynamics behavior may also depend on an oscillation if one is already present. The present disclosure first demonstrates an example application of one of the aforementioned principles to design oscillation by coupling (as opposed to events, input, or other parameters), and then demonstrates an example of how the oscillation can be designed to continue even after an event.
Designing a cell that sustains oscillation after a spike may be achieved simply by configuring the reset current offset Δu to bring the state into the orbit region. Generally, this may be accomplished by configuring Δu, {circumflex over (v)}− or both so that the state after reset is on an orbit path.
It should be noted that the behavior of the cells after spiking generally comprises a negative voltage swing (the oscillation starts off decreasing). However, a depolarization after spike is also possible to achieve.
Most of the above behaviors were achieved without changing Model parameters. All behaviors were achieved without substantially changing parameters (adjusting one or two at most). Moreover, adjusting particular parameters to alter aspects of behavior did not disturb other aspects of behavior to be retained. This advantage can be observed in behaviors characterized in other ways than spike timing patterns.
Cell behaviors are often characterized in terms of other input waveform shapes with the typical goal of determining the excitability of a cell (what aspects do or do not excite the cell). One of the more interesting input waveform shapes is the ramp because the amount of input evolves. Specifically, such characterization may involve supplying an increasing current to a cell. The ramp can be characterized by the slope (rate of increase) and the duration. Increasing (or otherwise changing) input is interesting when juxtaposed with how cell dynamics change during the input and whether they counter-act or emphasize the input changes.
Accommodation is a behavior in which a cell's dynamics are able to recover from an increasing input, without spiking, as long as the rate of increase of the input is relatively low. However, if the rate of increase is faster, the cell may be unable to recover and prevent a spike.
Varying spike rate behaviors may also be achieved for increasing excitatory inputs. A class 1 behavior, or increasing spiking rate, may be achieved by decreasing both the negative regime voltage time constant and the current time constant. This emphasizes the effect of the instantaneous magnitude of the input, which is increasing, rather than the integrated or accumulated amount of input. As long as the reset parameters bring the state back into a similar trajectory each time, the behavior change (inter-spike-interval) will be largely influenced by that instantaneous input magnitude.
A relatively more constant spiking rate, a class 2 behavior, may result if the reset parameters move the state such that the decay toward rest is faster after each spike so that as the excitatory input ramps up, the inter-spike-interval may remain relatively constant rather than decrease. A class 2 behavior, or relatively constant firing rate for a rising excitatory input, may thus result by balancing the reset condition for the current (and/or voltage).
Application of step input currents has typically been used as an experimental protocol for biological cell characterization including basic tonic and phasic spiking as well as bursting. These behaviors are quite straightforward as the focus is less on timing and input and more on ensuing output pattern, particularly in the case of tonic behavior.
Ignoring the step transition itself for the moment, the sustained input thereafter may also be thought of as a rough approximation of a sum effect of many discrete inputs that are occurring over a period of time, possibly with a substantial filtering effect. With sustained excitatory input, tonic spiking is a behavior resulting from the input driving the voltage repeatedly into depolarization. Depending on state variable coupling and activity, the rate of spiking may change as a direct result.
Phasic spiking behavior differs in that the spiking stops after the transition. However, the difference may be less than it appears at first glance. Phasic behavior will occur if the reset conditions put the trajectory on a non-spiking path after the first spike (despite sustained input) or if the trajectory during the first spike itself moves the state into a path that will decay after reset instead of spike again. The difference between the former and latter is what mechanism (or combination of mechanisms) drives the state to a non-spiking regime.
Tonic and phasic bursting are similar to tonic and phasic spiking. However, to generate bursts, the only fundamental change to parameterization (from nominal) that may be needed is increasing the reset voltage or adjusting the reset current offset so that the cell continues to spike. Also, the current time constant will influence burst characteristics, such as how long a burst lasts.
Phasic bursting behavior is similar and achievable using the same mechanisms. A difference (reduction) in input can turn tonic to phasic because the rebound may be reduced and insufficient to kick off the next burst. This can be demonstrated by adjusting the relative regime reference voltage difference to adjust sensitivity of the cell behavior to exact input level.
Mixed mode behavior is similar to, if not often indistinguishable from, tonic spiking. However, there are other ways of generating mixed mode behaviors, including parameterizations like those used for tonic or phasic bursting. This is because of two reasons. First, bursts are just a sequence of spikes close in time, where the requisite "closeness" to be considered a burst is rather a matter of time scale or definition (often arbitrary). Second, a single spike is basically a burst that has only one spike. Thus, mixed behaviors can be generated by elements of tonic or phasic bursts (one or more spikes).
The same general principles may apply for spike frequency adaptation behaviors, as illustrated in an example 5700 in
A variety of straightforward behaviors in response to step excitatory input can be explained by reset conditions and evolution of the current state as determined by the current state decay time constant. However, various behaviors may be generated by alternative parameterizations and mechanisms (or combinations), and the presented examples are demonstrative and not meant to be exhaustive or preferred. The examples are selected to demonstrate particular minimal parameter differences that change behavior or to explain a particular relationship without necessarily any intention to minimize parameter changes.
The current time constant τu is typically a positive value so that the state tends toward the null-cline. Moreover, the coupling parameter δ is typically negative so that the null-cline has a positive slope. However, if the sign of τu is reversed, the null-cline r (in the transformation variables) becomes a saddle point instead of an attractor. This would result in a spiking loop without input because: (i) above the null-cline the state is pushed up and drawn toward negative voltage/away from positive voltage; (ii) with a sufficiently aggressive (small) τ−, the state is drawn across the null-cline; and (iii) once under the null-cline, the state is pushed downward and drawn out toward spiking voltage. When the state is reset, the loop repeats and the cell continues spiking.
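The sign reversal's effect on the current dynamics alone can be seen in the assumed closed-form current update u(t+Δt)=(u+r)e−Δt/τu−r: for τu>0 the state is drawn toward the intercept −r, while for τu<0 the distance |u+r| grows between events (the full two-dimensional saddle behavior described above additionally involves the voltage dynamics):

```python
import math

def u_step(u, r, dt, tau_u):
    """Closed-form current update over an interval dt with intercept -r fixed."""
    return (u + r) * math.exp(-dt / tau_u) - r

# With tau_u > 0 the distance to the intercept shrinks (attractor);
# with tau_u < 0 it grows (the intercept repels the state).
```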
However, the cell can be made to stop spiking by adding excitatory input. This may require adding enough excitatory input to offset the voltage derivative at large negative voltages. As a result, the negative voltage derivative becomes positive. If it is positive enough, the state trajectory can cross back over the null-cline and loop back (a sub-threshold loop) and even decay to a stable point if desired. But, if the excitatory input is removed (or countered with inhibition), the cell can re-enter the spiking loop.
In demonstrating these principles below, some additional atypical parameterization aspects can be considered. For example, the regime threshold v̂+ can be set to a value other than the positive regime reference voltage v+. This means that the state trajectory can be designed using one of the voltage null-clines beyond its typical extent (the reference voltage). For example, by reducing the regime threshold to v̂+=v−, the positive regime null-cline effectively extends all the way back to v− and governs the dynamics between v− and v+ as well as above v+. This can be used to achieve various desired effects, including pushing the voltage state rather than pulling it.
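The separate regime threshold can be sketched as a simple selection rule (function and parameter names are hypothetical): lowering the threshold from v+ to v− makes the positive regime dynamics govern the whole range above v−.

```python
def select_regime(v, v_hat_plus):
    """Choose the operating regime by comparing the voltage state to a
    regime threshold that need not equal the positive reference voltage."""
    return 'positive' if v >= v_hat_plus else 'negative'

V_MINUS, V_PLUS = -60.0, -40.0   # illustrative reference voltages

# Typical parameterization: the regime threshold equals the positive reference,
# so a state between v- and v+ is governed by the negative regime.
assert select_regime(-50.0, V_PLUS) == 'negative'

# Atypical parameterization: lowering the threshold to v- extends the positive
# regime so it also governs dynamics between v- and v+.
assert select_regime(-50.0, V_MINUS) == 'positive'
```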
In an example 5800 in
Bursting can be achieved by applying one or more of the principles discussed above. For example, the reset state can be changed to place the state on a faster orbit path.
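For instance, one hypothetical reset rule (names and values are illustrative assumptions, not the disclosure's mechanism) places the voltage back on a fast orbit while accumulating a change to the current state across successive spikes, so that a burst is eventually followed by a pause:

```python
def reset_for_bursting(v, u, v_reset=-45.0, u_step=2.0):
    """Hypothetical post-spike reset rule for bursting: reset the voltage
    onto a fast orbit and nudge the current state u by a fixed amount.
    In a full model, the accumulated change in u across a burst would
    eventually move the state off the fast orbit and end the burst."""
    return v_reset, u + u_step

# Three successive spikes accumulate the current state:
v, u = -45.0, 0.0
for _ in range(3):
    v, u = reset_for_bursting(v, u)
# u has accumulated to 6.0 after the three resets.
```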
In summary, the model is flexible in that even atypical parameter ranges can be used to model rich, biologically realistic behaviors.
For practical purposes, only a limited set of examples was discussed in the present disclosure. Accordingly, the examples demonstrated a limited set of behaviors, each generated in one of multiple possible ways. A model designer is encouraged to apply these principles to achieve these and related or similar desired behaviors and interactions between dynamics, and to consider that multiple design combinations (parameterizations, events, and inputs) are generally possible. The aforementioned behaviors are reproduced in
Examining
It should be noted in
In another aspect of the present disclosure, the instructions loaded into the general-purpose processor 6602 may comprise code for updating the state of the artificial neuron based on events, wherein a neuron model for the artificial neuron has a closed-form solution in continuous time and wherein state dynamics of the neuron model are divided into two or more regimes, code for updating the state of the artificial neuron at time intervals, and code for updating the state of the artificial neuron if an event occurs at or between time instants.
In another aspect of the present disclosure, the processing unit 6706 may be configured to update the state of the artificial neuron based on events, wherein a neuron model for the artificial neuron has a closed-form solution in continuous time and wherein state dynamics of the neuron model are divided into two or more regimes, update the state of the artificial neuron at time intervals, and update the state of the artificial neuron if an event occurs at or between time instants.
In another aspect of the present disclosure, the processing unit 6804 may be configured to update the state of the artificial neuron based on events, wherein a neuron model for the artificial neuron has a closed-form solution in continuous time and wherein state dynamics of the neuron model are divided into two or more regimes, update the state of the artificial neuron at time intervals, and update the state of the artificial neuron if an event occurs at or between time instants.
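The interval-and-event update described in these aspects can be sketched as follows. The scheduling policy, names, and the toy one-state neuron are illustrative assumptions; the point is that a closed-form solution lets the state be jumped directly to the next interval boundary or event time, whichever comes first, rather than stepped by numerical integration.

```python
import math

TAU = 10.0          # assumed decay time constant for a toy one-state neuron

def advance(v, dt):
    """Closed-form jump of the state over any elapsed time dt (no stepping)."""
    return v * math.exp(-dt / TAU)

def simulate(events, t_end, interval=1.0, v0=5.0, weight=1.0):
    """Update the state at regular time intervals and at event times.

    `events` is a list of input-event times; each event adds `weight`
    to the state. The state is advanced in closed form to each interval
    boundary or event time, so events at or between time instants are
    handled exactly.
    """
    t, v = 0.0, v0
    boundaries = sorted(
        set(events) | {interval * k for k in range(1, int(t_end / interval) + 1)}
    )
    for t_next in boundaries:
        if t_next > t_end:
            break
        v = advance(v, t_next - t)   # jump the state forward in one step
        if t_next in events:
            v += weight              # apply the input event
        t = t_next
    return v
```

With no events, ten one-unit jumps compose exactly into a single decay over ten time units, as expected for a closed-form exponential.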
According to certain aspects of the present disclosure, the operations 6300, 6400 and 6500 illustrated in
The various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to, a circuit, an application specific integrated circuit (ASIC), or processor. Generally, where there are operations illustrated in Figures, those operations may have corresponding counterpart means-plus-function components with similar numbering. For example, operations 6300, 6400 and 6500 illustrated in
As used herein, the term “determining” encompasses a wide variety of actions. For example, “determining” may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” may include resolving, selecting, choosing, establishing and the like.
As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover: a, b, c, a-b, a-c, b-c, and a-b-c.
The various illustrative logical blocks, modules and circuits described in connection with the present disclosure may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array signal (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with the present disclosure may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in any form of storage medium that is known in the art. Some examples of storage media that may be used include random access memory (RAM), read only memory (ROM), flash memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM and so forth. A software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media. A storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.
The methods disclosed herein comprise one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.
The functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in hardware, an example hardware configuration may comprise a processing system in a device. The processing system may be implemented with a bus architecture. The bus may include any number of interconnecting buses and bridges depending on the specific application of the processing system and the overall design constraints. The bus may link together various circuits including a processor, machine-readable media, and a bus interface. The bus interface may be used to connect a network adapter, among other things, to the processing system via the bus. The network adapter may be used to implement signal processing functions. For certain aspects, a user interface (e.g., keypad, display, mouse, joystick, etc.) may also be connected to the bus. The bus may also link various other circuits such as timing sources, peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further.
The processor may be responsible for managing the bus and general processing, including the execution of software stored on the machine-readable media. The processor may be implemented with one or more general-purpose and/or special-purpose processors. Examples include microprocessors, microcontrollers, DSP processors, and other circuitry that can execute software. Software shall be construed broadly to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Machine-readable media may include, by way of example, RAM (Random Access Memory), flash memory, ROM (Read Only Memory), PROM (Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), registers, magnetic disks, optical disks, hard drives, or any other suitable storage medium, or any combination thereof. The machine-readable media may be embodied in a computer-program product. The computer-program product may comprise packaging materials.
In a hardware implementation, the machine-readable media may be part of the processing system separate from the processor. However, as those skilled in the art will readily appreciate, the machine-readable media, or any portion thereof, may be external to the processing system. By way of example, the machine-readable media may include a transmission line, a carrier wave modulated by data, and/or a computer product separate from the device, all which may be accessed by the processor through the bus interface. Alternatively, or in addition, the machine-readable media, or any portion thereof, may be integrated into the processor, such as the case may be with cache and/or general register files.
The processing system may be configured as a general-purpose processing system with one or more microprocessors providing the processor functionality and external memory providing at least a portion of the machine-readable media, all linked together with other supporting circuitry through an external bus architecture. Alternatively, the processing system may comprise one or more neuromorphic processors for implementing the neuron models and models of neural systems described herein. As another alternative, the processing system may be implemented with an ASIC (Application Specific Integrated Circuit) with the processor, the bus interface, the user interface, supporting circuitry, and at least a portion of the machine-readable media integrated into a single chip, or with one or more FPGAs (Field Programmable Gate Arrays), PLDs (Programmable Logic Devices), controllers, state machines, gated logic, discrete hardware components, or any other suitable circuitry, or any combination of circuits that can perform the various functionality described throughout this disclosure. Those skilled in the art will recognize how best to implement the described functionality for the processing system depending on the particular application and the overall design constraints imposed on the overall system.
The machine-readable media may comprise a number of software modules. The software modules include instructions that, when executed by the processor, cause the processing system to perform various functions. The software modules may include a transmission module and a receiving module. Each software module may reside in a single storage device or be distributed across multiple storage devices. By way of example, a software module may be loaded into RAM from a hard drive when a triggering event occurs. During execution of the software module, the processor may load some of the instructions into cache to increase access speed. One or more cache lines may then be loaded into a general register file for execution by the processor. When referring to the functionality of a software module below, it will be understood that such functionality is implemented by the processor when executing instructions from that software module.
If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared (IR), radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray® disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Thus, in some aspects computer-readable media may comprise non-transitory computer-readable media (e.g., tangible media). In addition, for other aspects computer-readable media may comprise transitory computer-readable media (e.g., a signal). Combinations of the above should also be included within the scope of computer-readable media.
Thus, certain aspects may comprise a computer program product for performing the operations presented herein. For example, such a computer program product may comprise a computer-readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein. For certain aspects, the computer program product may include packaging material.
Further, it should be appreciated that modules and/or other appropriate means for performing the methods and techniques described herein can be downloaded and/or otherwise obtained by a user terminal and/or base station as applicable. For example, such a device can be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, various methods described herein can be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a compact disc (CD) or floppy disk, etc.), such that a user terminal and/or base station can obtain the various methods upon coupling or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device can be utilized.
It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes and variations may be made in the arrangement, operation and details of the methods and apparatus described above without departing from the scope of the claims.
The present application for patent claims the benefit of Provisional Application Ser. No. 61/728,409, filed Nov. 20, 2012, and Provisional Application Ser. No. 61/759,181, filed Jan. 31, 2013, both assigned to the assignee hereof and hereby expressly incorporated by reference herein. The present application for patent is a continuation-in-part of patent application Ser. No. 13/483,811, filed May 30, 2012, pending, assigned to the assignee hereof and hereby expressly incorporated by reference herein.
Provisional Applications:

Number | Date | Country
---|---|---
61728409 | Nov 2012 | US
61759181 | Jan 2013 | US
Parent Case Data:

Relation | Number | Date | Country
---|---|---|---
Parent | 13483811 | May 2012 | US
Child | 14081777 | | US