1. Field
Certain aspects of the present disclosure generally relate to neural networks and, more particularly, to a continuous-time, event-based model for neurons and synapses.
2. Background
An artificial neural network is a mathematical or computational model composed of an interconnected group of artificial neurons (i.e., neuron models). Artificial neural networks may be derived from (or at least loosely based on) the structure and/or function of biological neural networks, such as those found in the human brain. Because artificial neural networks can infer a function from observations, such networks are particularly useful in applications where the complexity of the task or data makes designing this function by hand impractical.
One type of artificial neural network is the spiking neural network, which incorporates the concept of time into its operating model, as well as neuronal and synaptic state, thereby increasing the level of realism in this type of neural simulation. Spiking neural networks are based on the concept that neurons fire only when a membrane potential reaches a threshold. When a neuron fires, it generates a spike that travels to other neurons which, in turn, raise or lower their membrane potentials based on this received spike.
Certain aspects of the present disclosure generally relate to a continuous-time neural network event-based simulation. This model is flexible, has rich behavioral options, can be solved directly, and is low complexity.
Certain aspects of the present disclosure provide a method for neural networks. The method generally includes determining a first state of a neuron model, wherein the neuron model has a closed-form solution in continuous time and wherein state dynamics of the neuron model are divided into two or more regimes; and determining an operating regime for the neuron model from the two or more regimes, based on the first state.
Certain aspects of the present disclosure provide an apparatus for neural networks. The apparatus generally includes a processing system configured to determine a first state of a neuron model, wherein the neuron model has a closed-form solution in continuous time and wherein state dynamics of the neuron model are divided into two or more regimes; and to determine an operating regime for the neuron model from the two or more regimes, based on the first state.
Certain aspects of the present disclosure provide an apparatus for neural networks. The apparatus generally includes means for determining a first state of a neuron model, wherein the neuron model has a closed-form solution in continuous time and wherein state dynamics of the neuron model are divided into two or more regimes; and means for determining an operating regime for the neuron model from the two or more regimes, based on the first state.
Certain aspects of the present disclosure provide a computer-program product for neural networks. The computer-program product generally includes a computer-readable medium having instructions executable to determine a first state of a neuron model, wherein the neuron model has a closed-form solution in continuous time and wherein state dynamics of the neuron model are divided into two or more regimes; and to determine an operating regime for the neuron model from the two or more regimes, based on the first state.
Certain aspects of the present disclosure provide a method for neural networks. The method generally includes determining a first state of a neuron model at or shortly after a first event, wherein the neuron model has a closed-form solution in continuous time; and determining a second state of the neuron model at or shortly after a second event, based on the first state, wherein dynamics of the first and second states are coupled to the neuron model only at the first and second events, respectively, and are decoupled between the first and second events.
Certain aspects of the present disclosure provide an apparatus for neural networks. The apparatus generally includes a processing system configured to determine a first state of a neuron model at or shortly after a first event, wherein the neuron model has a closed-form solution in continuous time; and to determine a second state of the neuron model at or shortly after a second event, based on the first state, wherein dynamics of the first and second states are coupled to the neuron model only at the first and second events, respectively, and are decoupled between the first and second events.
Certain aspects of the present disclosure provide an apparatus for neural networks. The apparatus generally includes means for determining a first state of a neuron model at or shortly after a first event, wherein the neuron model has a closed-form solution in continuous time; and means for determining a second state of the neuron model at or shortly after a second event, based on the first state, wherein dynamics of the first and second states are coupled to the neuron model only at the first and second events, respectively, and are decoupled between the first and second events.
Certain aspects of the present disclosure provide a computer-program product for neural networks. The computer-program product generally includes a computer-readable medium having instructions executable to determine a first state of a neuron model at or shortly after a first event, wherein the neuron model has a closed-form solution in continuous time; and to determine a second state of the neuron model at or shortly after a second event, based on the first state, wherein dynamics of the first and second states are coupled to the neuron model only at the first and second events, respectively, and are decoupled between the first and second events.
Certain aspects of the present disclosure provide a method for neural networks. The method generally includes determining a first state of a neuron model at or shortly after a first event, wherein the neuron model has a closed-form solution in continuous time; and determining a second event when, if ever, a second state of the neuron model will occur, based on the first state, wherein dynamics of the first and second states are coupled to the neuron model only at the first and second events, respectively, and are decoupled between the first and second events.
Certain aspects of the present disclosure provide an apparatus for neural networks. The apparatus generally includes a processing system configured to determine a first state of a neuron model at or shortly after a first event, wherein the neuron model has a closed-form solution in continuous time; and to determine a second event when, if ever, a second state of the neuron model will occur, based on the first state, wherein dynamics of the first and second states are coupled to the neuron model only at the first and second events, respectively, and are decoupled between the first and second events.
Certain aspects of the present disclosure provide an apparatus for neural networks. The apparatus generally includes means for determining a first state of a neuron model at or shortly after a first event, wherein the neuron model has a closed-form solution in continuous time; and means for determining a second event when, if ever, a second state of the neuron model will occur, based on the first state, wherein dynamics of the first and second states are coupled to the neuron model only at the first and second events, respectively, and are decoupled between the first and second events.
Certain aspects of the present disclosure provide a computer-program product for neural networks. The computer-program product generally includes a computer-readable medium having instructions executable to determine a first state of a neuron model at or shortly after a first event, wherein the neuron model has a closed-form solution in continuous time; and to determine a second event when, if ever, a second state of the neuron model will occur, based on the first state, wherein dynamics of the first and second states are coupled to the neuron model only at the first and second events, respectively, and are decoupled between the first and second events.
So that the manner in which the above-recited features of the present disclosure can be understood in detail, a more particular description, briefly summarized above, may be had by reference to aspects, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only certain typical aspects of this disclosure and are therefore not to be considered limiting of its scope, for the description may admit to other equally effective aspects.
Various aspects of the disclosure are described more fully hereinafter with reference to the accompanying drawings. This disclosure may, however, be embodied in many different forms and should not be construed as limited to any specific structure or function presented throughout this disclosure. Rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. Based on the teachings herein one skilled in the art should appreciate that the scope of the disclosure is intended to cover any aspect of the disclosure disclosed herein, whether implemented independently of or combined with any other aspect of the disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the disclosure set forth herein. It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim.
The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects.
Although particular aspects are described herein, many variations and permutations of these aspects fall within the scope of the disclosure. Although some benefits and advantages of the preferred aspects are mentioned, the scope of the disclosure is not intended to be limited to particular benefits, uses or objectives. Rather, aspects of the disclosure are intended to be broadly applicable to different technologies, system configurations, networks and protocols, some of which are illustrated by way of example in the figures and in the following description of the preferred aspects. The detailed description and drawings are merely illustrative of the disclosure rather than limiting, the scope of the disclosure being defined by the appended claims and equivalents thereof.
The transfer of spikes from one level of neurons to another may be achieved through the network of synaptic connections (or simply “synapses”) 104.
The neural system 100 may be emulated in software or in hardware (e.g., by an electrical circuit) and utilized in a large range of applications, such as image and pattern recognition, machine learning, motor control, and the like. Each neuron (or neuron model) in the neural system 100 may be implemented as a neuron circuit. The neuron membrane charged to the threshold value initiating the output spike may be implemented, for example, as a capacitor that integrates an electrical current flowing through it.
Fundamentally important biological neural behaviors may be impossible to emulate or predict with typical spiking neuron models because those models do not capture: (i) fine (continuous) timing or (ii) continuous-time dynamics. Even models that are expressed in continuous-time differential-equation form often have no closed-form solution and thus are approximated by numerical methods, such as iterative integration using the Euler method. The problem with iterative models is evident in how dramatically spike timing can change (e.g., by tens of milliseconds or more) merely from a small change in the time-step resolution (e.g., from 1 ms to 0.1 ms). The behavior of such models also typically depends strongly on details of the implementation rather than on fundamental aspects of the theoretical model. Approximating such models with fine time steps is computationally burdensome and, more importantly, generally fails to account for continuous-time dynamics, particularly if the model has multiple inter-dependent state variables (such as voltage and current) and multiple attractors.
Accordingly, what is needed is a continuous time dynamical neuron model that is capable of capturing biologically realistic temporal effects.
What makes a good neuron model? The answer may depend on the perspective and purpose. Assuming both neuroscience and engineering objectives, a model that has biologically realistic (or at least biologically consistent) behavior and is computationally attractive may be desired.
The present disclosure sets forth general principles for designing a useful spiking neuron model. A good neuron model has rich potential behavior in terms of two computational regimes: coincidence detection and functional computation. Moreover, a good neuron model should have two elements to allow temporal coding: the arrival time of inputs affects the output time, and coincidence detection can have a narrow time window. Finally, to be computationally attractive, a good neuron model should have a closed-form solution in continuous time and stable behavior, including near attractors and saddle points. In other words, a useful neuron model is one that is practical and that can be used to model rich, realistic, and biologically consistent behaviors, as well as to both engineer and reverse-engineer neural circuits in well-defined, stable computational relations.
Behavior from Events
Natural nerve cells appear to exhibit a plethora of behaviors including tonic and phasic spiking and bursting, integrating input, adapting to input, oscillating sub-threshold, resonating, rebounding, accommodating input, and more. Often different behaviors may be induced by inputs with different characteristics.
For certain aspects of the present disclosure, these cells may be viewed as abstract event state machines in which cell dynamics are determined by events. Furthermore, in this view, events set cell dynamics in motion until the next event, where the dynamics between events are constrained depending on state at the time of the prior event.
To put this in concrete terms, consider three consecutive input events where the first and last events comprise instantaneous inputs that significantly alter the state at their time of occurrence. Assume the middle event has no associated input. Now, consider a hypothetical neuron model in which the behavior changes if the middle event is omitted. With this hypothetical neuron model, the event itself is significant, even if there is no associated input or output. Certain aspects of the present disclosure provide such a neuron model.
The purpose of a neuron model that depends on events themselves is to achieve rich behavioral characteristics with low complexity. To achieve a rich behavioral repertoire, a state machine that can exhibit complex behaviors may be desired. If the occurrence of an event itself, separate from the input contribution (if any) can influence the state machine and constrain dynamics subsequent to the event, then the future state of the system is not only a function of a state and input, but rather a function of a state, event, and input. Events are added dependencies. Given more dependencies, one may potentially simplify individual dependencies (e.g., how future state depends on past state) and yet achieve an equivalent or richer behavioral repertoire.
This principle may be expressed mathematically by defining a univariate or multivariate state S that evolves absent any events between time t0 and tƒ according to
S(tƒ)=ƒ(tƒ,S(t0)) (1.1)
However, upon an event at time t, the state evolves according to
S(t)=g(t,S(t)) (1.2)
Effectively, this means if there is one event at time t between times t0 and tƒ, the state evolves between time t0 and tƒ according to
S(tƒ)=ƒ(tƒ,g(t,S(t0))) (1.3)
and so on. The significance of this is that there may be dependence on t regardless of input or even whether there is input.
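By way of illustration only, the following Python sketch (not part of the model definition; the decay operator, event operator, and time constant here are assumed for the example) composes an evolution operator ƒ and an event operator g in the manner of equations (1.1) through (1.3), showing how the timing of an event can alter the final state even when the event carries no input:

    import math

    def f(t_0, t_f, s):
        # Illustrative evolution operator between events (the disclosure
        # writes this as f(t_f, S(t_0))): exponential decay toward zero.
        tau = 10.0
        return s * math.exp(-(t_f - t_0) / tau)

    def g(t, s):
        # Illustrative event operator applied at the event time.
        return s + 1.0

    def evolve(t_0, t_f, s_0, event_times):
        # Composes f and g as in equation (1.3): advance to each event,
        # apply the event transformation, then advance to the final time.
        t, s = t_0, s_0
        for t_e in sorted(event_times):
            s = g(t_e, f(t, t_e, s))
            t = t_e
        return f(t, t_f, s)

    # Moving the middle event changes the outcome even with no input:
    print(evolve(0.0, 30.0, 1.0, [10.0, 20.0]))  # ~0.553
    print(evolve(0.0, 30.0, 1.0, [10.0, 15.0]))  # ~0.408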
Certain aspects of the present disclosure provide a neuron model with such a property, and other desirable properties, which is described below. But first, the present disclosure provides a discussion of principles behind other desirable properties.
A useful neuron model can perform temporal coding. And if a neuron model can do temporal coding, then it can also perform rate coding (since rate is just a function of timing or inter-spike intervals).
Arrival Time
In a good neuron model, the time of arrival of an input should have an effect on the time of output. A synaptic input—whether a Dirac delta function or a shaped post-synaptic potential (PSP), whether excitatory (EPSP) or inhibitory (IPSP)—has a time of arrival (e.g., the time of the delta function or the start or peak of a step or other input function), which may be referred to as the input time. A neuron output (i.e., a spike) has a time of occurrence (wherever it is measured, e.g., at the soma, at a point along the axon, or at an end of the axon), which may be referred to as the output time. That output time may be the time of the peak of the spike, the start of the spike, or any other time in relation to the output waveform. The overarching principle is that the output time depends on the input time.
One might at first glance think that all neuron models conform to this principle, but this is generally not true. For example, rate-based models do not have this feature. Many spiking models also do not generally conform. A leaky-integrate-and-fire (LIF) model does not fire any faster if there are extra inputs (beyond threshold). Moreover, models that might conform if modeled at very high timing resolution often will not conform when timing resolution is limited, such as to 1 ms steps.
With an ideal neuron model, the output time will change if the time of any input changes. A good model will conform in a well-defined manner most of the time. For example, the Simple Model generally exhibits a spike timing behavior that is dependent on input times. However, the quadratic differential equation causes a fast voltage rise so that the magnitude of an input may have more relevance than the time of an input. An anti-leaky-integrate-and-fire (ALIF) model and certain aspects of the present disclosure have a more well-defined dependence of output time on input time, as described below.
Coincidence
A useful neuron model should be able to detect fine timing coincidences. This may demand that the impact of an input is not unnecessarily spread out in time. A simple integration model spreads out an input to infinite time. A LIF model limits the spread by leaking. The faster the model leaks, the finer the ability of the neuron to detect a timing coincidence. Thus, a model that leaks exponentially has, all other factors being equal, better timing coincidence detection capability than a model that leaks linearly.
An ideal model would have a variable coincidence detection window. However, the detection window can be no smaller than the neuron model's best coincidence detection time resolution. For example, in the Simple Model, the rate of leak depends quite dramatically on the membrane potential. Near threshold, the neuron model retains the input effect for a considerable time, while mid-way between rest and threshold, the neuron model loses the input effect more quickly. Unfortunately, this means the Simple Model's detection capability is limited by the slow (almost zero) leak rate near the threshold, which degrades coincidence detection capability almost to that of a purely integrating model. Moreover, the leak depends on the recovery current variable as well, meaning that the leak can be even worse than flat and can even reverse, resulting in the voltage increasing despite a lack of further input. To solve these problems, certain aspects of the present disclosure, despite having dependence on multiple state variables, also have fine timing coincidence resolution capability (as do LIF and ALIF models). To be able to vary the coincidence detection window, certain aspects of the present disclosure use input formulations that spread the input over a longer time instead of altering the fine resolution of the neuron model.
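As a rough numerical illustration (the time constant and unit input weights below are assumed, not taken from the disclosure), an exponential leak bounds how far apart two inputs can arrive and still sum effectively, which is what makes the leak rate act as a coincidence window:

    import math

    def residual(dt, tau=5.0):
        # Fraction of a unit input remaining dt ms after its arrival,
        # under a simple exponential leak with time constant tau.
        return math.exp(-dt / tau)

    for dt in (1.0, 5.0, 20.0):
        total = 1.0 + residual(dt)  # second input plus remnant of the first
        print(f"inputs {dt:4.1f} ms apart -> summed effect {total:.2f}")
    # With tau = 5 ms: inputs 1 ms apart sum to ~1.82, while inputs 20 ms
    # apart sum to ~1.02; the earlier input has effectively been forgotten.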
A good neuron model has rich behavior possibilities to model different biologically realistic effects. Two potential computational regimes may be important: one of integrative coincidence detection and one of spike-timing functional determination.
Temporal Detection
For coincidence detection, a useful neuron model should have a regime in which it is able to forget history. This allows a neuron to be used as a detector: a spike occurs upon detection (when detection occurs) and no spike occurs if there is no detection. A LIF neuron model or neuron model with LIF-type behavior may suffice. LIF models are able to detect temporal coincidences to a degree depending on the rate of leak (or decay), which determines how fast the neuron forgets about the prior input and how consistently. However, such models may be limited in terms of other computational properties or biological behaviors that may be reproduced.
What is desired is that the rate of leak be high enough to lose memory of prior inputs that are outside the window of computational relevance (relative frame). While considering more advanced models, one should strive to retain the advantages of this leaky-integrate-and-fire behavior and not obviate these advantages by designing models that prevent, complicate, or destabilize such temporal coincidence detection capabilities. In models such as the Simple Model, the leak is degraded; in the case of the Simple Model, there is almost no leak near the voltage threshold, as discussed above. In contrast, certain aspects of the present disclosure have these desirable LIF-type behaviors.
Temporal Computation
Spiking neuron models are often defined in terms of a threshold. When the threshold is exceeded, the model spikes. Rather than contemplating whether there is such a threshold in biological cells, certain aspects of the present disclosure consider whether there is an invoking event after which a cell will fire, such that it is only a matter of time before firing.
Certain aspects of the present disclosure consider a model with such well-defined properties where the latency is variable. If a cell would fire with a particular fixed latency, the usefulness of the property might be limited since a delay can be achieved by axonal, synapse, or dendritic processes. But if the relative delay between an invoking event and a spike was variable depending on inputs or events subsequent to the invoking event, then one would effectively have a computational engine that generates an output coded in the relative delay, which is potentially a function of relative delays from the invoking events and interim events (including inputs). This would be a function that gives a relative output time as a function of relative input times (whether sub-threshold or super-threshold). This would indeed be useful, particularly if the function was well defined, because it would not only provide a framework for engineering a system with spiking neurons, but also a framework for understanding what a network of spiking neurons is computing.
Returning to the question of whether such a behavior might exist in biological cells, such cells have regenerative upstroke dynamics generating what is referred to as the voltage spike. The often overlooked A-channel is a voltage-gated transient potassium ion channel that activates quickly, which can counteract rapid sodium influx and slow the regenerative upstroke. As a result, it is possible to achieve very long latencies (hundreds of milliseconds or longer) from the time the upstroke is invoked until the spike peak occurs. The voltage level to trigger the A-channel may be slightly below the “threshold” for the sodium channel chain reaction, so the actual timing of events and input may alter the latency properties, as well.
For functional computation, a good neuron model should have a regime in which it will fire regardless of further input; it is only a matter of when it will fire. This regime allows a neuron to be used as a computational device in which the interval between spikes can code information. This is different from detection because it is the time of the spike that codes information, not the existence of the spike itself. The ALIF model and certain aspects of the present disclosure exhibit this behavior. The Simple Model possesses a similar behavior, although the Simple Model cannot be represented in closed form, and the quadratic nature of the voltage rise can diminish the dependence of output time on input times because the model is less discriminating of input time differences than of input magnitude differences. The nature of the Simple Model's dynamical equations means the voltage can stay relatively steady near the threshold (on either side). This means that a small difference in input magnitudes can actually have a diverging effect on voltage traces: the difference between voltage traces with and without a magnitude difference increases before converging. This creates instability when learning because, in learning, weights are adjusted by small amounts, but the effect on the voltage trace is magnified at a time shortly following input effects. In contrast, with certain aspects of the present disclosure (as well as LIF and ALIF models), the effect of a magnitude change in input results in a converging voltage state trace.
A good model should be computationally convenient, being stable and directly computable.
Closed Form
Many biologically motivated neuron models are described by differential equations. For example, the Simple Model is described by two coupled differential equations. Unfortunately, these equations cannot be solved in closed form. With a good neuron model, one should be able to directly compute when the neuron will fire (if ever) based on the inputs and current state. Since this cannot be done with the Simple Model (without a lookup table, which has limited resolution), the Euler method of step-based integration is typically used. However, this is computationally burdensome at high timing resolution, as well as unstable at low timing resolution. Moreover, such models suffer in terms of behavior because the behavior may depend more on implementation details (e.g., order of state propagation, numerical method details, and time resolution) than on fundamental aspects of the theoretical dynamical equations.
The LIF, ALIF, and certain aspects of the present disclosure can be solved directly. However, only the ALIF and certain aspects of the present disclosure have non-zero delay until spiking once sufficient input has arrived.
Stability
From a biological modeling perspective, a neuron model is desired for which one can easily determine parameters to match a biological cell's behavior over a broad range of conditions. A problematic model would, every time one tries to configure the behavior to match a second regime, shift its behavior in the first regime away from the biologically desired behavior. Or its behavior would change dramatically with a small parameter change.
In contrast with these problematic models, an ideal model should be stable in implementation. Stability is most often a problem near saddle points or attractors. Also, when models are solved by integration (e.g., Euler), the assumption that first derivatives of state variables do not change over the time step is often a poor approximation near thresholds and large values. As a result, step-wise computations can overshoot or undershoot. Models that have a closed-form solution (such as ALIF and certain aspects of the present disclosure) thus have an advantage for stability in implementation.
From an engineering standpoint, a neuron model is desired that has stable computational properties including temporal detection and temporal computation.
Certain aspects of the present disclosure provide a behaviorally rich, biologically-consistent, computationally-convenient model of a neuron which adheres to the principles described above. In this section, a model designed to achieve this is presented. The general model is unique in that it is defined by the state at the time of events and by operations governing the change in that state from one event to the next event.
The model is defined in terms of events, and events are fundamental to the defined behavior. The behavior depends on events, the inputs and outputs occur upon events, and the dynamics are coupled at events.
As illustrated in the graph 200 of recovery current versus membrane potential (voltage), the dynamics of the model may be divided into two (or more) regimes: a negative regime 202 and a positive regime 204.
The symbol ρ is used herein to denote the dynamics regime with the convention to replace the symbol ρ with the sign “−” or “+” for the negative and positive regimes, respectively, when discussing or expressing a relation for a specific regime.
The model state is defined by a membrane potential (voltage) ν and recovery current u. In basic form, the regime is essentially determined by this state. There are subtle but important aspects of the precise and general definition; for the moment, consider the model to be in the positive regime 204 if the voltage ν is above a threshold (ν+) and otherwise in the negative regime 202. This will suffice for understanding the basic model dynamics in each regime as explained next, while a precise and complete definition of regime determination is provided further below.
The dynamics of the model state are conveniently described in terms of dynamics of a transformed state pair (ν′,u′). The state transformations at event time are
ν′=ν+qρ (2.1)
u′=u+r (2.2)
where qρ and r are the linear transformation variables. The voltage transformation depends on the regime ρ. The model dynamics, which are also dependent on the regime, are defined by differential equations in terms of the transformed state pair:

τρ dν′/dt=ν′ (2.3)

τu du′/dt=−u′ (2.4)
where τ− is the negative regime time constant, τ+ is the positive regime time constant, and τu is the recovery time constant. For the moment, let us assume these are constant values, although variable time constants are described below. For convenience, the negative regime time constant τ− will be specified as a negative quantity to reflect decay so that the same expression for voltage evolution may be used as for the positive regime in which the exponent and τ+ will generally be positive, as will be τu.
The state dynamics of the model may be defined in an event framework. Between events, the dynamics may be defined by ordinary differential equations (ODEs). The dynamics of the two state elements may generally be coupled at events by transformations offsetting the states from their null-clines at the time of the event, where the transformation variables are
qρ=−τρβu−νρ (2.5)
r=δ(ν+ε) (2.6)
where δ, ε, β, ν−, and ν+ are parameters. The two values of νρ are the base or reference voltages for the two regimes. The parameter ν− is the base voltage for the negative regime, and the membrane potential will generally decay toward ν− in the negative regime. The parameter ν+ is the base voltage for the positive regime, and the membrane potential will generally tend away from ν+ in the positive regime.
The null-clines for ν and u are given by the negative of the transformation variables qρ and r, respectively. The parameter δ is a scale factor controlling the slope of the u null-cline. The parameter ε is typically set equal to −ν−. The parameter β is a resistance value controlling the slope of the ν null-clines in both regimes. The τρ time-constant parameters control not only the exponential decays, but also the null-cline slopes in each regime separately.
The model is defined to spike when the voltage ν reaches a value νs. Subsequently, the state is typically reset at a reset event (which technically may be one and the same as the spike event):
ν=ν̂− (2.7)
u=u+Δu (2.8)
where ν̂− and Δu are parameters. The reset voltage ν̂− is typically set to ν−.
The model has closed-form solutions for state evolution at time t+Δt given the state {ν′,u′} at time t:

ν′(t+Δt)=ν′(t)eΔt/τρ (2.9)

u′(t+Δt)=u′(t)e−Δt/τu (2.10)
Therefore, the model state may be, and is defined to be, updated only upon events, such as an input (pre-synaptic spike) or an output (post-synaptic spike). This is generalizable because operations may also be performed on artificial events (whether or not there is input or output), which are described below. By definition, transformations are defined at events, not between events. This means that qρ and r, and even ρ, need not be recomputed unless there is an event.
This definition means that the model may only be coupled at events, and the regime (whether positive or negative) may only be determined at events. The model state variables ν and u are generally coupled as described above via variables qρ and r which are only determined at events (or steps). The variables qρ and r may be computed based on the prior state. The state elements may then be evolved independently to the next state. In effect, this means the state variables are “momentarily decoupled” between events.
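For illustration only, the following is a minimal Python sketch of this event-to-event update, assuming the closed-form solutions above; the parameter values are arbitrary placeholders, not values taken from the disclosure:

    import math

    TAU_NEG, TAU_POS, TAU_U = -20.0, 10.0, 30.0  # tau_-, tau_+, tau_u (ms)
    V_NEG, V_POS = -60.0, -40.0                  # base voltages v_- and v_+
    BETA, DELTA, EPSILON = 1.0, 0.1, 60.0        # beta, delta, epsilon = -v_-
    V_HAT_POS = V_POS                            # regime threshold

    def advance(v, u, dt):
        # Regime and coupling are determined once, at the event.
        positive = v > V_HAT_POS
        tau_rho = TAU_POS if positive else TAU_NEG
        v_rho = V_POS if positive else V_NEG
        q = -tau_rho * BETA * u - v_rho          # eq. (2.5)
        r = DELTA * (v + EPSILON)                # eq. (2.6)
        # Transform (eq. 2.1, 2.2), evolve each element independently in
        # closed form (eq. 2.9, 2.10), then transform back with the same
        # q and r, so the states are "momentarily decoupled" in between.
        v_prime = (v + q) * math.exp(dt / tau_rho)
        u_prime = (u + r) * math.exp(-dt / TAU_U)
        return v_prime - q, u_prime - r

    # Example: 5 ms in the negative regime decays v toward v_- = -60 mV.
    print(advance(-55.0, 0.0, 5.0))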
Moreover, the time of a post-synaptic spike may be anticipated, so the time to reach a particular state may be determined in advance without numerical methods. Given a prior voltage state ν0, the time delay until voltage state νƒ is reached is given by

Δt=τρ log((νƒ+qρ)/(ν0+qρ)) (2.11)
If a spike is defined as occurring at the time the voltage state ν reaches νs, then the closed-form solution for the amount of time, or relative delay, until a spike occurs as measured from the time that the voltage is at a given state ν is

Δts=τ+ log((νs+q+)/(ν+q+)) if ν>ν̂+, or Δts=∞ otherwise (2.12)
where ν̂+ is typically set to parameter ν+, although other variations and their motivations are described below.
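Continuing the Python sketch above, the anticipated spike time may be computed in closed form; V_S (the spike peak voltage) is an assumed placeholder value:

    V_S = 30.0  # illustrative spike peak voltage

    def time_to_spike(v, u):
        # Closed-form spike anticipation, per eq. (2.12) as filled in above;
        # valid in the positive regime, assuming v + q_pos > 0.
        if v <= V_HAT_POS:
            return math.inf  # negative regime: no spike absent further input
        q_pos = -TAU_POS * BETA * u - V_POS
        return TAU_POS * math.log((V_S + q_pos) / (v + q_pos))

    print(time_to_spike(-35.0, 0.0))  # ~26.4 ms with the values above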
The above definitions of the model dynamics depend on whether the model is in the positive or negative regime. As mentioned, the coupling and the regime ρ may be computed upon events. For purposes of state propagation, the regime and coupling (transformation) variables may be defined based on the state at the time of the last (prior) event. For purposes of subsequently anticipating spike output time, the regime and coupling variable may be defined based on the state at the time of the next (current) event.
For determining regime, the basic form is that if ν>ν̂+ then the model is in the positive regime and otherwise in the negative regime. Typically, ν̂+ is a constant set to equal ν+, but it may also be set separately or be variable.
According to certain aspects of the present disclosure, input may only be applied to the model at events. In a typical formulation, input may be applied to the model state after the state has been advanced from a prior event to the time of an input event. Thus, more generally,

ν=hν(ν,i) (2.13)

u=hu(u,i) (2.14)
where hν and hu are called input channel functions. In a simple case, instantaneous current inputs may be modeled as discrete Dirac delta functions, i.e., i=θ(t), applied to voltage state. In this case, input to the model state applies at the time of the event and
hν(x,i)=x+iβ; hu(x,i)=x (2.15)
where β is the membrane resistance.
However, inputs may alternatively be continuous, such as a sum of weighted exponential decays describing an excitatory or inhibitory post-synaptic potential, whether current- or conductance-based. In this case, input to the model state may be applied at the time of the event, but also potentially before or after the event. Thus, the equivalent total integrated input contribution, from the input for the next event and inputs from past events, as accumulated between a prior event and a next event may be applied at the time of the next event. Thus, closed-form solutions for continuous input contributions are also an advantage.
A continuous exponentially decaying excitatory or inhibitory input may be defined by

τg dg/dt=−g (2.16)
which has a closed-form solution and thus may be evolved from event to event decoupled from the {ν,u} state. The total contribution over a time period from a prior event at time t to a next event at time t+Δt is given by

g(t)τg(1−e−Δt/τg) (2.17)
where g<0 for an inhibitory contribution and g>0 for an excitatory contribution, and the same input channel functions may be used as for Dirac inputs.
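The following is a self-contained sketch of such an input, evolved from event to event decoupled from the {ν,u} state; the decay constant is an assumed placeholder, and the particular closed forms are the natural ones for an exponential decay, filled in here as assumptions:

    import math

    TAU_G = 5.0  # input decay time constant, illustrative

    def decay_input(g, dt):
        # Closed-form evolution of the input state across an interval dt.
        return g * math.exp(-dt / TAU_G)

    def integrated_contribution(g, dt):
        # Total contribution accumulated over [t, t + dt], to be applied at
        # the next event; g > 0 excitatory, g < 0 inhibitory.
        return g * TAU_G * (1.0 - math.exp(-dt / TAU_G))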
Conductance-based input may also be integrated as follows:

(V−Eg)g(t)τg(1−e−Δt/τg) (2.18)
where Eg is the reference voltage for the input and V is a voltage level computed to compensate for removing the (ν−Eg) term from the integral. The approximation V=ν(t) (the voltage at the prior event) holds well if ν changes by a relatively small amount across events or events occur at small inter-event-intervals. If not, more sophisticated formulations of V or use of artificial events, described below, can be used to achieve desired effects while keeping with the definition of the model.
According to certain aspects of the present disclosure, the anticipation of spike output does not account for future input between events. The general reason for this is computational simplicity and independence. And, even though closed-form solutions may be available for some continuous input formulations, the equivalent effects can be achieved, if the rate of events is sufficiently high, by alternatively defining input or channel functions and, otherwise, by using artificial events, as described below.
According to certain aspects of the present disclosure, plasticity applies at events. Spike-timing-dependent plasticity (STDP) is particularly suited to this because long-term potentiation (LTP) and long-term depression (LTD) can be viewed as triggered by a pre-synaptic (input) event preceding or following a post-synaptic (output) event, respectively. Structural or temporal plasticity would also apply at events. Structural plasticity may be thought of as modifying, creating, or deleting synaptic connections, for example. Equivalently, the parameters of an abstract synapse, such as delay and weight, may be changed as if the abstract synapse used to model a deleted synapse was reused to model a new synapse with different parameters. Thus, one may generalize multiple forms of plasticity in terms of variable synaptic parameters. The variability may be at discrete moments or continuous, but regardless are evolved in the model from event to event.
The algorithmic solution to the model comprises: (i) advancing the state from a prior event to a next event; (ii) updating the state at the next event time given the input at the next event time (or applying, at the time of the next event, an input equivalent to that accumulated between the prior and next events); and (iii) anticipating when the next event will occur. Events include input events, which are typically considered to occur at the time a synaptic input propagates to the neuron's soma. Events also include output events, which are typically considered to occur at the time a spike is emitted by the neuron's soma and begins propagating along the axon. Since closed-form solutions are available, the model state may also be determined between events if so desired, but regime and coupling need not be updated unless there is an event.
In a typical formulation, input is applied to the model state after the state has been advanced from the prior event to the time of the input event, whether modeled as a discrete Dirac delta function applied at the event time or as the equivalent total integrated contribution of continuous inputs accumulated between the prior and next events, as described above. The following algorithm describes the operations typically conducted upon an input event:
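In outline, a minimal Python sketch consistent with steps (i) through (iii) above, continuing the earlier fragments and assuming a Dirac current input handled by the channel functions of equation (2.15):

    def on_input_event(v, u, dt_since_prior_event, i):
        # (i) advance the state from the prior event to the input event time,
        v, u = advance(v, u, dt_since_prior_event)
        # (ii) apply the input through the channel functions of eq. (2.15),
        v = v + i * BETA
        # (iii) anticipate when the next output spike will occur.
        return v, u, time_to_spike(v, u)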
In a typical formulation, an output event comprises no input. The following algorithm describes the operations typically conducted upon an output event:
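In outline, again as a minimal sketch continuing the fragments above; the reset parameter values are assumed placeholders:

    V_RESET, DELTA_U = V_NEG, 5.0  # reset voltage and u increment

    def on_output_event(v, u, dt_since_prior_event):
        v, u = advance(v, u, dt_since_prior_event)  # advance to the spike time
        v, u = V_RESET, u + DELTA_U                 # reset, eq. (2.7) and (2.8)
        return v, u, time_to_spike(v, u)            # anticipate the next spike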
Both input and output event operations described above include anticipating when the next output spike will occur. This is because state updates may change the anticipated time of the spike.
By definition the model is updated only upon events. But an artificial event may be defined and utilized for certain aspects. An artificial event is an event defined for purposes of defining model dynamics behavior. There are several reasons why a modeler may wish to define artificial events.
However, let us first dispel a potential misconception. Modelers often wish to see traces of voltage and current state at a fine time resolution. Those states can be computed periodically between events without defining artificial events. This would entail computing the voltage and current using the transformation variables and regime as computed at the prior event (not the prior period). Transformation variables and regime may only be computed at events. This subtle point is important because the model's behavior depends on the timing of events.
By definition, the coupling transformations are defined at events, not between events. This means that qρ and r, and even ρ, generally are not recomputed unless there is an event. So, assuming no artificial events have been defined, the voltage and current state may be updated between events if so desired, but based on the parameters and offsets from the prior event. In other words, by definition, coupling generally occurs only at events.
Basically, defining artificial events allows a modeler to alter the coupling. There may also be reasons for defining artificial events at convenient times. There is no wrong way to define or not define artificial events, but the modeler should understand that defining artificial events generally changes the behavior of the model.
One reason to define an artificial event is to achieve a behavior characteristic of a model instance with a high rate of events using a model instance with a lower rate of events. If there is a high rate of input events, the model dynamics are advanced at small time intervals. However, if there is a low rate of input events, the model dynamics, without artificial events, may be advanced by larger time intervals. Generally, the difference in behavior may not be significant unless the time intervals are much larger. Even then, adjustments to parameters such as the time constants may be made to compensate.
However, alternatively or if desired, artificial events may be defined. In particular, artificial events may be defined to occur between non-artificial events if the interval between the non-artificial events is large. Technically, this may be achieved in several ways. For example, an artificial event may be tentatively scheduled with some delay after each non-artificial event. If another non-artificial event occurs before the artificial event, then the artificial event may be rescheduled to some delay after the latest non-artificial event. If the artificial event does occur first, then it may be rescheduled again with the same delay. Another way is to schedule artificial events periodically. Artificial events may also be defined to occur conditionally, such as dependent on state or spiking rate.
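For illustration, a minimal sketch of the reschedulable-delay policy just described (the fixed delay value is an assumed placeholder):

    class ArtificialEventTimer:
        # Tentatively schedules an artificial event a fixed delay after the
        # latest event, whether that event was artificial or not.
        def __init__(self, delay=50.0):
            self.delay = delay
            self.next_time = None

        def on_event(self, t):
            # Any event at time t pushes the tentative artificial event back
            # to t + delay (rescheduling it if it had not yet occurred).
            self.next_time = t + self.delay

        def due(self, t):
            return self.next_time is not None and t >= self.next_time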
Generally, the model is suited to solutions in event-based simulation as described above. However, the model may also be solved in conventional step-based simulations. There are fundamentally two ways to do this: (i) without artificial events; and (ii) with artificial events. These are, by definition, different instances of the model. The model operations are defined to occur at events (whether artificial or other). Thus, without artificial events, the code executing at each step may be conditioned on the occurrence of an event at that time slot (effectively, no operation occurs at steps where there is no event). Alternatively, with artificial events, an artificial event may be defined to occur at every time slot. Since the closed-form solutions are available, there is no need for numerical methods regardless of whether the time between events is constant or variable.
While periodic artificial events would generally entail more computations and would therefore be less desirable, there are some potential simplifications: (i) the time Δt since the prior event may be a constant (the time interval); and (ii) spike anticipation may be replaced by checking whether ν≧νs at each interval.
First, since the time Δt since the prior event is typically constant, the transformed state update may be simplified to a single multiplication for each state element. The infinite impulse response filters are given by
ν′(t+Δt)=cρν′(t) (2.19)
u′(t+Δt)=cuu′(t) (2.20)
where the constants cρ=eΔt/τρ and cu=e−Δt/τu may be precomputed.
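Continuing the earlier sketch, the step-based simplification reduces each transformed state update to one multiplication by a precomputed constant:

    DT = 1.0  # fixed time step, illustrative
    C_POS = math.exp(DT / TAU_POS)   # c_rho for the positive regime (growth)
    C_NEG = math.exp(DT / TAU_NEG)   # c_rho for the negative regime (decay)
    C_U = math.exp(-DT / TAU_U)      # c_u

    def step(v_prime, u_prime, positive):
        # One step of the infinite impulse response filters (2.19), (2.20).
        c_rho = C_POS if positive else C_NEG
        return v_prime * c_rho, u_prime * C_U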
Second, the spiking condition was defined to be ν≧νs. However, if the artificial events are defined to occur at a quantized time interval Δt, the spiking condition may actually be reached between intervals. As a result, care should be taken to ensure that the spike occurs at the desired time, whether at the event before or at the event after.
The fundamental model behavior is defined by the parameters illustrated in table 300.
Additional behavioral aspects may be achieved with separate control of the parameters illustrated in table 400.
The typical setting for the regime threshold given above was ν̂+=ν+. However, the voltage null-cline is not a vertical line in the {ν,u} state-space. This characteristic may be advantageous because it allows for richer behaviors. Also, alternatively, the regime threshold may be defined to be the null-cline bordering the positive regime. There are yet other alternatives that may be considered, which are beyond the scope of this disclosure.
Given the state transformations, the null-clines are defined by ν=−qρ and u=−r.
ν=τρβu+νρ or u=(ν−νρ)/τρβ (2.21)
u=−δ(ν+ε) or ν=−u/δ−ε (2.22)
These transformations control the temporal behavior of the model. The voltage transformation offset variable qρ is a linear equation dependent on the recovery current. When the current is zero, the transformation is entirely due to offset νρ. Effectively, this shifts the voltage state in the transformed state so that the base voltage state is 0. Revisiting the formula for anticipating the spike time, one can see that the logarithm term at u=0 is

log((νs−ν+)/(ν−ν+)) (2.23)
Thus, the transformation shifts and normalizes the state so that the time-coded state value x lies between 0 and 1 in the positive regime. The information in a spike is in terms of its relative timing. From an information theoretic point of view, one can think of time coding information value (or state) x having range [0,1] as
Δt=−α log x (2.24)
such that the larger the value, the shorter the time delay (response) and a value of 0 corresponds to infinite delay (never spike). Thus, the parameterization of the model allows control of the information representation in spike timing. This aspect is particularly advantageous for computational design purposes.
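A worked illustration of equation (2.24), with α chosen arbitrarily (by the derivation above, α plays the role of τ+ in the positive regime):

    import math

    ALPHA = 10.0  # illustrative scale
    for x in (1.0, 0.5, 0.1, 0.01):
        # Larger coded values fire sooner; x = 0 would never spike.
        print(f"x = {x:5.2f} -> delay {-ALPHA * math.log(x):6.2f} ms")
    # x = 1.00 -> 0.00 ms; x = 0.50 -> 6.93 ms; x = 0.01 -> 46.05 ms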
The operations 500 may begin, at 502, by determining a first state of a neuron model. The neuron model has a closed-form solution in continuous time. Furthermore, state dynamics of the neuron model are divided into two or more regimes. For certain aspects, the neuron model has a variable coincidence detection window. For certain aspects, the first state may be determined after a predetermined period if the neuron model does not receive any event during this period.
At 504, an operating regime for the neuron model may be determined (e.g., selected) from the two or more regimes. This determination of the operating regime may be based on the first state. For certain aspects, the operating regime may be determined at or shortly after a time of an event.
According to certain aspects, the two or more regimes comprise first and second regimes. The state dynamics of the neuron model tend toward rest in the first regime and tend toward spiking in the second regime. For other aspects, the state dynamics of the neuron model tend toward a first reference in a first regime and tend away from a second reference in a second regime. The first or the second reference may include at least one of a point, a line, or a plane. For other aspects, the state dynamics of the neuron model exhibit leaky-integrate-and-fire (LIF) behavior in the first regime and exhibit anti-leaky-integrate-and-fire (ALIF) behavior in the second regime. For other aspects, the neuron model begins losing memory of a prior input after receiving the prior input in the first regime. In the second regime, the neuron model will fire, even with no further excitatory input, such that further excitatory or inhibitory input affects only when the neuron model will fire.
According to certain aspects, the operations 500 may further include determining a second state of the neuron model at a different time than that of the first state. The second state may be determined based on at least one of the first state and the operating regime. For certain aspects, the first state of the neuron model corresponds to a first event, the second state may correspond to a second event, and the second event may be the next event after the first event. In this case, the second state may be determined based on a time between the first event and the second event and on the state dynamics in the operating regime. For certain aspects, the first state is defined by a membrane potential (ν) and a recovery current (u) of the neuron model. In this case, determining the second state includes using the following equations:

ν(t+Δt)=(ν(t)+qρ)eΔt/τρ−qρ

u(t+Δt)=(u(t)+r)e−Δt/τu−r
where Δt is an elapsed time between the first and second states, ρ=+ if ν>ν̂+, ρ=− if ν≦ν̂+, ν̂+ is a regime threshold, τρ is a voltage time constant, τu is a recovery current time constant, β is a resistance, νρ is a base voltage for the operating regime, qρ and r are state transformation variables, δ is a scale factor, and ε is an offset voltage. Algorithmically, the equations may be computed as follows for certain aspects (although variations are also possible within the scope of the present disclosure): (1) the transformation variables qρ and r are computed; (2) the transformed states ν′ and u′ are computed; (3) the state evolutions are computed; and (4) the states are transformed back to ν and u using the qρ and r variables determined in the first step. Variations may include updating the transformation variables before the fourth step, updating the transformation for one state after the evolution of the other state, etc.
For certain aspects, the first state is a current state of the neuron model, and the second state is a future state after the current state or a prior state before the current state. For other aspects, the first state is a prior state of the neuron model, and the second state is a current state or a future state, after the prior state. For certain aspects, the first or the second event is an input event, an output event, or an artificial event for the neuron model.
According to certain aspects, the operations 500 may further include determining when the neuron model will fire based on at least one of the first state and the operating regime. For certain aspects, the first state is defined by a membrane potential (ν) and a recovery current (u) of the neuron model. Therefore, determining when the neuron model will fire may include using the following rule:

Δts=τ+ log((νs+q+)/(ν+q+)) if ν>ν̂+, or Δts=∞ otherwise
where τ+ is a positive regime voltage time constant, νs is a defined voltage of an output spike of the neuron model, q+ is a positive regime state transformation variable, ν̂+ is a regime threshold, and Δts is an anticipated time until the neuron model will fire. For other aspects, determining when the neuron model will fire includes using the state dynamics of the neuron model, decoupled from any events. For certain aspects, the operations 500 may further include outputting a spike at an output time according to the determination of when the neuron model will fire.
According to certain aspects, the operations 500 may further include determining, at or shortly after a time of an event, state transformation variables. At least one of the state transformation variables may be dependent on the operating regime. For certain aspects, the operations 500 may further include determining a second state of the neuron model at a different time than that of the first state. This determination of the second state may be based on the first state and the state transformation variables. For certain aspects, the state dynamics of the neuron model may be expressed as ordinary differential equations (ODEs) based on the state transformation variables between events.
According to certain aspects, the operations 500 may further include outputting the first state of the neuron model to a display. For certain aspects, the state dynamics may be defined by a membrane potential and a recovery current of the neuron model.
The operations 600 may begin, at 602, by determining a first state of a neuron model at or shortly after a first event. The neuron model may have a closed-form solution in continuous time. For certain aspects, the neuron model has a variable coincidence detection window.
At 604, a second state of the neuron model, at or shortly after a second event, may be determined. This determination may be based on the first state. Furthermore, dynamics of the first and second states may be coupled to the neuron model only at the first and second events, respectively, and may be decoupled between the first and second events.
According to certain aspects, the first and second states are multivariate. The second event may be an input event, an output event, or an artificial event. For certain aspects, the first or the second state is defined by two or more state variables whose dynamics are decoupled between the events and coupled at the events through transformations. For certain aspects, the second event is the next event after the first event.
For certain aspects, the first state is a current state of the neuron model, and the second state is a future state after the current state or a prior state before the current state. For other aspects, the first state is a prior state of the neuron model, and the second state is a current state or a future state, after the prior state.
According to certain aspects, the first state is defined by a membrane potential (ν) and a recovery current (u) of the neuron model. In this case, the second state may be determined using the following equations:

ν(t+Δt)=(ν(t)+qρ)eΔt/τρ−qρ

u(t+Δt)=(u(t)+r)e−Δt/τu−r
where Δt is an elapsed time between the first and second states, ρ=+ if ν>ν̂+, ρ=− if ν≦ν̂+, ν̂+ is a regime threshold, τρ is a voltage time constant, τu is a recovery current time constant, β is a resistance, νρ is a base voltage for the operating regime, qρ and r are state transformation variables, δ is a scale factor, and ε is an offset voltage.
According to certain aspects, the operations 600 may further include determining when the neuron model will fire. This determination may be based on at least one of the first state or the second state. For certain aspects, at least one of the first state or the second state is defined by a membrane potential (ν) and a recovery current (u) of the neuron model. In this case, determining when the neuron model will fire may include using the following rule:

Δts=τ+ log((νs+q+)/(ν+q+)) if ν>ν̂+, or Δts=∞ otherwise
where τ+ is a positive regime voltage time constant, νs is a defined voltage of an output spike of the neuron model, q+ is a positive regime state transformation variable, ν̂+ is a regime threshold, and Δts is an anticipated time until the neuron model will fire. For other aspects, determining when the neuron model will fire may include using the dynamics of the first and second states. For certain aspects, the operations 600 may further include outputting a spike at an output time according to the determination of when the neuron model will fire.
According to certain aspects, at least one of the first state or the second state may be determined after a predetermined period if the neuron model does not receive any event during this period. For certain aspects, the dynamics of the first and second states may be expressed as ODEs based on state transformation variables between events. According to certain aspects, the operations 600 may further include outputting at least one of the first state, the second state, a first indication of the first event, or a second indication of the second event to a display.
The operations 700 may begin, at 702, by determining a first state of a neuron model at or shortly after a first event. The neuron model may have a closed-form solution in continuous time. For certain aspects, the neuron model has a variable coincidence detection window. For certain aspects, the first state may be determined after a predetermined period if the neuron model does not receive any event during the period.
At 704, a second event at which, if ever, a second state of the neuron model will occur may be determined. This determination may be based on the first state. Dynamics of the first and second states may be coupled to the neuron model only at the first and second events, respectively, and may be decoupled between the first and second events. For certain aspects, the dynamics of the first and second states are expressed as ODEs based on state transformation variables between the first and second events.
According to certain aspects, the operations 700 may further include outputting a spike at an output time according to the determination of the second event. For certain aspects, the second event is an input event, an output event, or an artificial event. The second event may be the next event after the first event. For certain aspects, the first state is a current state of the neuron model, and the second state is a future state after the current state or a prior state before the current state. For other aspects, the first state is a prior state of the neuron model, and the second state is a current state or a future state after the prior state.
According to certain aspects, the first and second states are multivariate. For example, the first state may be defined by a membrane potential (v) and a recovery current (u) of the neuron model. In this case, the second event may be determined using the following rule:

$$\Delta t_s = \begin{cases} \tau_+ \log \dfrac{v_s + q_+}{v + q_+} & \text{if } v > \hat{v}_+ \\ \infty & \text{otherwise} \end{cases}$$

where $\tau_+$ is a positive regime voltage time constant, $v_s$ is a defined voltage of an output spike of the neuron model, $q_+$ is a positive regime state transformation variable, $\hat{v}_+$ is a regime threshold, and $\Delta t_s$ is an anticipated time until the neuron model will fire.
According to certain aspects, the operations 700 may further include outputting at least one of the first state, the second state, a first indication of the first event, or a second indication of the second event to a display.
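Pulling these pieces together, the following Python sketch shows one way an event-driven loop for a single neuron model might look. The neuron interface (advance, receive, reset, time_to_spike) is hypothetical and merely mirrors the sketches above; a generation counter invalidates predicted events that are superseded by later inputs, and an artificial event is scheduled when the model would otherwise sit idle for a predetermined period:

    import heapq
    import math

    def simulate(neuron, input_events, t_end, max_idle):
        """Event-driven simulation sketch for a single neuron model.

        input_events is a list of (time, weight) pairs. Queue entries
        are (time, sequence, kind, payload): for "input" events the
        payload is the synaptic weight; for predicted "output" and
        "artificial" events it is the generation at scheduling time.
        """
        queue = [(t, i, "input", w) for i, (t, w) in enumerate(input_events)]
        heapq.heapify(queue)
        seq = len(queue)
        gen = 0          # bumps whenever an event actually touches the state
        t_last = 0.0
        spikes = []

        while queue:
            t, _, kind, payload = heapq.heappop(queue)
            if t > t_end:
                break
            if kind != "input" and payload != gen:
                continue  # stale prediction, superseded by a later event
            neuron.advance(t - t_last)   # closed-form jump between events
            t_last = t
            if kind == "input":
                neuron.receive(payload)  # apply synaptic input at the event
            elif kind == "output":
                spikes.append(t)
                neuron.reset()           # post-spike reset (assumed behavior)
            gen += 1
            # Anticipate the next output event, if ever, from the new
            # state; otherwise schedule an artificial event after the
            # predetermined idle period described above.
            dt_s = neuron.time_to_spike()
            if math.isfinite(dt_s):
                heapq.heappush(queue, (t + dt_s, seq, "output", gen))
            else:
                heapq.heappush(queue, (t + max_idle, seq, "artificial", gen))
            seq += 1
        return spikes

This structure realizes the coupling pattern described above: state is read and transformed only at events, and nothing at all is computed between them.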
The various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to, a circuit, an application specific integrated circuit (ASIC), or a processor. Generally, where there are operations illustrated in figures, those operations may have corresponding counterpart means-plus-function components with similar numbering. For example, the operations 600 illustrated above may correspond to similarly numbered counterpart means-plus-function components.
For example, means for displaying may comprise a display (e.g., a monitor, flat screen, touch screen, and the like), a printer, or any other suitable means for outputting data for visual depiction, such as a table, chart, or graph. The means for processing, means for outputting, or means for determining may comprise a processing system, which may include one or more processors or processing units. Means for storing may comprise a memory or any other suitable storage device (e.g., RAM), which may be accessed by the processing system.
As used herein, the term “determining” encompasses a wide variety of actions. For example, “determining” may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining, and the like. Also, “determining” may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory), and the like. Also, “determining” may include resolving, selecting, choosing, establishing, and the like.
As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c.
The various illustrative logical blocks, modules, and circuits described in connection with the present disclosure may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with the present disclosure may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in any form of storage medium that is known in the art. Some examples of storage media that may be used include random access memory (RAM), read only memory (ROM), flash memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM and so forth. A software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media. A storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.
The methods disclosed herein comprise one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.
The functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in hardware, an example hardware configuration may comprise a processing system in a device. The processing system may be implemented with a bus architecture. The bus may include any number of interconnecting buses and bridges depending on the specific application of the processing system and the overall design constraints. The bus may link together various circuits including a processor, machine-readable media, and a bus interface. The bus interface may be used to connect a network adapter, among other things, to the processing system via the bus. The network adapter may be used to implement signal processing functions. For certain aspects, a user interface (e.g., keypad, display, mouse, joystick, etc.) may also be connected to the bus. The bus may also link various other circuits such as timing sources, peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further.
The processor may be responsible for managing the bus and general processing, including the execution of software stored on the machine-readable media. The processor may be implemented with one or more general-purpose and/or special-purpose processors. Examples include microprocessors, microcontrollers, DSP processors, and other circuitry that can execute software. Software shall be construed broadly to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Machine-readable media may include, by way of example, RAM (Random Access Memory), flash memory, ROM (Read Only Memory), PROM (Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), registers, magnetic disks, optical disks, hard drives, or any other suitable storage medium, or any combination thereof. The machine-readable media may be embodied in a computer-program product. The computer-program product may comprise packaging materials.
In a hardware implementation, the machine-readable media may be part of the processing system separate from the processor. However, as those skilled in the art will readily appreciate, the machine-readable media, or any portion thereof, may be external to the processing system. By way of example, the machine-readable media may include a transmission line, a carrier wave modulated by data, and/or a computer product separate from the device, all of which may be accessed by the processor through the bus interface. Alternatively, or in addition, the machine-readable media, or any portion thereof, may be integrated into the processor, as may be the case with cache and/or general register files.
The processing system may be configured as a general-purpose processing system with one or more microprocessors providing the processor functionality and external memory providing at least a portion of the machine-readable media, all linked together with other supporting circuitry through an external bus architecture. Alternatively, the processing system may be implemented with an ASIC (Application Specific Integrated Circuit) with the processor, the bus interface, the user interface, supporting circuitry, and at least a portion of the machine-readable media integrated into a single chip, or with one or more FPGAs (Field Programmable Gate Arrays), PLDs (Programmable Logic Devices), controllers, state machines, gated logic, discrete hardware components, or any other suitable circuitry, or any combination of circuits that can perform the various functionality described throughout this disclosure. Those skilled in the art will recognize how best to implement the described functionality for the processing system depending on the particular application and the overall design constraints imposed on the overall system.
The machine-readable media may comprise a number of software modules. The software modules include instructions that, when executed by the processor, cause the processing system to perform various functions. The software modules may include a transmission module and a receiving module. Each software module may reside in a single storage device or be distributed across multiple storage devices. By way of example, a software module may be loaded into RAM from a hard drive when a triggering event occurs. During execution of the software module, the processor may load some of the instructions into cache to increase access speed. One or more cache lines may then be loaded into a general register file for execution by the processor. When referring to the functionality of a software module below, it will be understood that such functionality is implemented by the processor when executing instructions from that software module.
If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared (IR), radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray® disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Thus, in some aspects computer-readable media may comprise non-transitory computer-readable media (e.g., tangible media). In addition, for other aspects computer-readable media may comprise transitory computer-readable media (e.g., a signal). Combinations of the above should also be included within the scope of computer-readable media.
Thus, certain aspects may comprise a computer-program product for performing the operations presented herein. For example, such a computer-program product may comprise a computer-readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein. For certain aspects, the computer-program product may include packaging material.
Further, it should be appreciated that modules and/or other appropriate means for performing the methods and techniques described herein can be downloaded and/or otherwise obtained by a device as applicable. For example, such a device can be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, various methods described herein can be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a compact disc (CD) or floppy disk, etc.), such that a device can obtain the various methods upon coupling or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device can be utilized.
It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes and variations may be made in the arrangement, operation and details of the methods and apparatus described above without departing from the scope of the claims.