The present invention relates to data processing techniques. Embodiments implement a new way of carrying out calculations in machines, in particular in programmable machines.
For the most part, current computers are based on the Von Neumann architecture. The data and the program instructions are stored in a memory that is accessed sequentially by an arithmetic logic unit in order to execute the program on the data. This sequential architecture is relatively inefficient, notably because of the need for numerous memory accesses, both for reading and for writing.
The search for alternatives that are more energy-efficient led to the proposal of clockless processing architectures that attempt to imitate the operation of the brain. Recent projects, such as the DARPA SyNAPSE program, led to the development of silicon-based neuromorphic chip technologies, which allow a new type of computer inspired by the shape, the operation and the architecture of the brain to be built. The main advantages of these clockless systems are their energy efficiency and the fact that performance is proportional to the quantity of neurons and synapses used. Several platforms have been developed in this context, in particular:
These machines substantially aim to simulate biology. Their main uses are in the field of learning, notably in order to execute deep learning architectures such as neural networks or deep belief networks. They are effective in several fields, such as artificial vision, speech recognition and language processing.
There are other options such as the NEF (“Neural Engineering Framework”) capable of simulating certain functionalities of the brain and in particular carrying out visual, cognitive and motor tasks (Chris Eliasmith, et al.: “A Large-Scale Model of the Functioning Brain”, Science, Vol. 338, No. 6111, pages 1202-1205, November 2012).
These various approaches do not propose a general methodology for executing calculations in a programmable machine.
An object of the present disclosure is to propose a novel approach for the representation of the data and the execution of calculations. It is desirable for this approach to be suitable for an implementation having reduced energy consumption and massive parallelism.
A data processing device is proposed, comprising a set of processing nodes and connections between the nodes. Each connection has an emitter node and a receiver node out of the set of processing nodes and is configured to transmit, to the receiver node, events delivered by the emitter node. Each node is arranged to vary a respective potential value according to events that it receives and to deliver an event when the potential value reaches a predefined threshold. At least one input value of the data processing device is represented by a time interval between two events received by at least one node, and at least one output value of the data processing device is represented by a time interval between two events delivered by at least one node.
The processing nodes form neuron-type calculation units. However, it is not especially desired here to imitate the operation of the brain. The term “neuron” is used in the present disclosure for linguistic convenience but does not necessarily mean strong resemblance to the operating mode of the neurons of the cortex.
By using a specific temporal organisation of the events in the processing device, as well as various properties of the connections (synapses), an overall calculation framework, suitable for calculating the elementary mathematical functions, can be obtained. All the existing mathematical operators can then be implemented, whether linear or not, without necessarily having to use a Von Neumann architecture. From that point on, it is possible for the device to function like a conventional computer, but without requiring incessant back-and-forth trips to the memory and without being based on floating point precision. It is the temporal concurrence of synaptic events, or their temporal offsets, that form the basis for the representation of the data.
The proposed methodology is consistent with the neuromorphic architectures that do not make any distinction between memory and calculation. Each connection of each processing node stores information and simultaneously uses this information for the calculation. This is very different from the prevailing organisation in conventional computers that distinguishes between memory and processing and causes the Von Neumann bottleneck, in which the majority of the calculation time is dedicated to moving information between the memory and the central processing unit (John Backus: “Can Programming Be Liberated from the von Neumann Style?: A Functional Style and Its Algebra of Programs”, Communications of the ACM, Vol. 21, No. 8, pages 613-641, August 1978).
The operation is based on event-driven communication, as in biological neurons, thus allowing execution with massive parallelism.
In one embodiment of the device, each processing node is arranged to reset its potential value when it delivers an event. The reset can in particular be to a zero potential value.
Numerous embodiments of the device for processing data include, among the connections between the nodes, one or more potential variation connections, each having a respective weight. The receiver node of such a connection is arranged to respond to an event received on this connection by adding the weight of the connection to its potential value.
The potential variation connections can include excitation connections, which have a positive weight, and inhibiting connections, which have a negative weight.
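The threshold-and-reset behaviour of a node receiving events on potential variation connections can be sketched as follows (a minimal illustration only; the class name, threshold value and weights are assumed for the example and are not taken from the disclosure):

```python
# Minimal sketch of a processing node: its potential V accumulates the
# weights of incoming events and, when V reaches the predefined threshold
# Vt, the node delivers an event and resets its potential (here, to zero).
class Node:
    def __init__(self, threshold=1.0):
        self.threshold = threshold   # predefined threshold Vt (assumed value)
        self.potential = 0.0         # potential value V
        self.fired = []              # times at which the node delivered an event

    def receive(self, time, weight):
        """Handle one event arriving on a potential variation connection."""
        self.potential += weight     # excitation if weight > 0, inhibition if < 0
        if self.potential >= self.threshold:
            self.fired.append(time)  # deliver an event downstream
            self.potential = 0.0     # reset on delivery

# Two excitatory events of weight Vt/2 each: the second one triggers a spike.
n = Node(threshold=1.0)
n.receive(0.0, 0.5)
n.receive(2.0, 0.5)
```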
In order to manipulate a value in the device, the set of processing nodes can comprise at least one first node forming the receiver node of a first potential variation connection having a first positive weight at least equal to the predefined threshold for the potential value, and at least one second node forming the receiver node of a second potential variation connection having a weight at least equal to half the predefined threshold for the potential value and less than the predefined threshold for the potential value. The aforementioned first node further forms the emitter node and the receiver node of a third potential variation connection having a weight equal to the opposite of the first weight, as well as the emitter node of a fourth connection, while the second node further forms the emitter node of a fifth connection. The first and second potential variation connections are thus configured to each receive two events separated by a first time interval representing an input value whereby the fourth and fifth connections transport respective events having between them a second time interval related to the first time interval.
Various operations can be carried out using a device according to the invention.
In particular, an example of a device for processing data comprises at least one minimum calculation circuit, which itself comprises:
first and second input nodes;
an output node;
first and second selection nodes;
first, second, third, fourth, fifth and sixth potential variation connections, each having a first positive weight at least equal to half the predefined threshold for the potential value and less than the predefined threshold for the potential value;
seventh and eighth potential variation connections each having a second weight opposite to the first weight; and
ninth and tenth potential variation connections each having a third weight equal to double the second weight.
In this minimum calculation circuit, the first input node forms the emitter node of the first and third connections and the receiver node of the tenth connection, the second input node forms the emitter node of the second and fourth connections and the receiver node of the ninth connection, the first selection node forms the emitter node of the fifth, seventh and ninth connections and the receiver node of the first and eighth connections, the second selection node forms the emitter node of the sixth, eighth and tenth connections and the receiver node of the second and seventh connections, and the output node forms the receiver node of the third, fourth, fifth and sixth connections.
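The input/output timing contract of such a minimum calculation circuit can be illustrated as follows (a behavioural sketch only, not the internal wiring; the linear encoding and the Tmin and Tcod values are assumed for the example):

```python
# Behavioural sketch of the minimum circuit's timing contract: values are
# carried as inter-event intervals under an assumed linear encoding
# f(x) = Tmin + x * Tcod, and the output pair is separated by the shorter
# of the two input intervals, i.e. the encoding of min(a, b).
TMIN, TCOD = 10.0, 100.0  # illustrative time parameters (assumed)

def encode(x):
    return TMIN + x * TCOD

def decode(dt):
    return (dt - TMIN) / TCOD

def minimum_circuit(dt_a, dt_b):
    """Output interval of the minimum calculation circuit."""
    return min(dt_a, dt_b)

out = minimum_circuit(encode(0.3), encode(0.7))
```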
Another example of a device for processing data comprises at least one maximum calculation circuit, which itself comprises:
first and second input nodes;
an output node;
first and second selection nodes;
first, second, third and fourth potential variation connections, each having a first positive weight at least equal to half the predefined threshold for the potential value and less than the predefined threshold for the potential value; and
fifth and sixth potential variation connections each having a second weight equal to double the opposite of the first weight.
In this maximum calculation circuit, the first input node forms the emitter node of the first and third connections, the second input node forms the emitter node of the second and fourth connections, the first selection node forms the emitter node of the fifth connection and the receiver node of the first and sixth connections, the second selection node forms the emitter node of the sixth connection and the receiver node of the second and fifth connections, and the output node forms the receiver node of the third and fourth connections.
Another example of a device for processing data comprises at least one subtractor circuit, which itself comprises:
first and second synchronisation nodes;
first and second inhibition nodes;
first and second output nodes;
first, second, third, fourth, fifth and sixth potential variation connections each having a first positive weight at least equal to the predefined threshold for the potential value;
seventh and eighth potential variation connections each having a second weight equal to half the first weight;
ninth and tenth potential variation connections each having a third weight opposite to the first weight; and
eleventh and twelfth potential variation connections each having a fourth weight equal to double the third weight.
In this subtractor circuit, the first synchronisation node forms the emitter node of the first, second, third and ninth connections, the second synchronisation node forms the emitter node of the fourth, fifth, sixth and tenth connections, the first inhibition node forms the emitter node of the eleventh connection and the receiver node of the third, eighth and tenth connections, the second inhibition node forms the emitter node of the twelfth connection and the receiver node of the sixth, seventh and ninth connections, the first output node forms the emitter node of the seventh connection and the receiver node of the first, fifth and eleventh connections, and the second output node forms the emitter node of the eighth connection and the receiver node of the second, fourth and twelfth connections. The first synchronisation node is configured to receive, on at least one potential variation connection having the second weight, a first pair of events having between them a first interval of time representing a first operand. The second synchronisation node is configured to receive, on at least one potential variation connection having the second weight, a second pair of events having between them a second interval of time representing a second operand, whereby a third pair of events having between them a third time interval is delivered by the first output node if the first time interval is longer than the second time interval and by the second output node if the first time interval is shorter than the second time interval, the third time interval representing the absolute value of the difference between the first and second operand.
The subtractor circuit can further comprise zero detection logic including at least one detection node associated with detection and inhibition connections with the first and second synchronisation nodes, one of the first and second inhibition nodes and one of the first and second output nodes. The detection and inhibition connections are faster than the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh and twelfth connections, in order to inhibit the production of events by one of the first and second output nodes when the first and second time intervals are substantially equal.
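The routing behaviour of the subtractor can be illustrated as follows (a behavioural sketch of the timing contract only, not the wiring; the linear encoding and time parameters are assumed for the example):

```python
# Behavioural sketch of the subtractor's timing contract: the output pair
# appears on the first output if input interval A is longer, on the second
# output if it is shorter, and in both cases encodes |a - b| under an
# assumed linear encoding f(x) = Tmin + x * Tcod.
TMIN, TCOD = 10.0, 100.0  # illustrative time parameters (assumed)

def encode(x):
    return TMIN + x * TCOD

def subtractor_circuit(dt_a, dt_b):
    """Return (output index, output interval) for input intervals dt_a, dt_b."""
    a, b = (dt_a - TMIN) / TCOD, (dt_b - TMIN) / TCOD
    out = 1 if dt_a > dt_b else 2          # which output node delivers the pair
    return out, TMIN + abs(a - b) * TCOD   # interval encoding |a - b|

which, dt = subtractor_circuit(encode(0.8), encode(0.5))
```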
In various embodiments of the device, the set of processing nodes comprises at least one node arranged to vary a current value according to events received on at least one current adjustment connection, and to vary its potential value over time at a rate proportional to said current value. Such a processing node can in particular be arranged to reset its current value to zero when it delivers an event.
The current value in at least some of the nodes has a component that is constant between two events received on at least one constant current component adjustment connection having a respective weight. The receiver node of a constant current component adjustment connection is arranged to react to an event received on this connection by adding the weight of the connection to the constant component of its current value.
Another example of a device for processing data comprises at least one inverter memory circuit, which itself comprises:
an accumulator node;
first, second and third constant current component adjustment connections, the first and third connections having the same positive weight and the second connection having a weight opposite to the weight of the first and third connections; and
at least one fourth connection.
In this inverter memory circuit, the accumulator node forms the receiver node of the first, second and third connections and the emitter node of the fourth connection, and the first and second connections are configured to respectively address, to the accumulator node, first and second events having between them a first time interval related to a time interval representing a value to be memorised, whereby the accumulator node then responds to a third event received on the third connection by increasing its potential value until delivery of a fourth event on the fourth connection, the third and fourth events having between them a second time interval related to the first time interval.
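The charging arithmetic of the inverter memory can be made concrete under one assumed parameter choice (a sketch only: the threshold, the constant-current weight G and the relation Vt/G = Tmin + Tmax are illustrative assumptions, chosen so that the read-out interval encodes 1 − x when the stored interval encodes x):

```python
# Numeric sketch of the inverter memory. Write phase: the first event
# switches a constant current +G on, the second switches it off after
# dt_in, leaving V = G * dt_in. Read phase: the third event restarts +G
# and the accumulator climbs from V to the threshold VT, spiking after
# (VT - V) / G. With VT / G = Tmin + Tmax, this read-out interval is
# Tmin + (1 - x) * Tcod, i.e. the encoding of the inverted value.
TMIN, TCOD = 10.0, 100.0
TMAX = TMIN + TCOD
VT = 1.0                    # threshold (assumed value)
G = VT / (TMIN + TMAX)      # assumed constant-current weight

def store_then_read(dt_in):
    v_stored = G * dt_in          # potential accumulated during the write
    return (VT - v_stored) / G    # interval between third and fourth events

x = 0.25
dt_out = store_then_read(TMIN + x * TCOD)
```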
Another example of a device for processing data comprises at least one memory circuit, which itself comprises:
first and second accumulator nodes;
first, second, third and fourth constant current component adjustment connections, the first, second and fourth connections each having a first positive weight and the third connection having a second weight opposite to the first weight; and
at least one fifth connection.
In this memory circuit, the first accumulator node forms the receiver node of the first connection and the emitter node of the third connection, the second accumulator node forms the receiver node of the second, third and fourth connections and the emitter node of the fifth connection, the first and second connections are configured to respectively address, to the first and second accumulator nodes, first and second events having between them a first time interval related to a time interval representing a value to be memorised, whereby the second accumulator node then responds to a third event received on the fourth connection by increasing its potential value until delivery of a fourth event on the fifth connection, the third and fourth events having between them a second time interval related to the first time interval.
The memory circuit can further comprise a sixth connection having the first accumulator node as an emitter node, the sixth connection delivering an event to signal the availability of the memory circuit for reading.
Another example of a device for processing data comprises at least one synchronisation circuit, which includes a number N>1 of memory circuits, of the type mentioned just above, and a synchronisation node. The synchronisation node is sensitive to each event delivered on the sixth connection of one of the N memory circuits via a respective potential variation connection having a weight equal to the first weight divided by N. The synchronisation node is arranged to trigger simultaneous reception of the third events via the respective fourth connections of the N memory circuits.
Another example of a device for processing data comprises at least one accumulation circuit, which itself comprises:
N inputs each having a respective weighting coefficient, N being an integer greater than 1;
an accumulator node;
a synchronisation node;
for each of the N inputs of the accumulation circuit:
a first constant current component adjustment connection having a first positive weight proportional to the respective weighting coefficient of said input; and
a second constant current component adjustment connection having a second weight opposite to the first weight;
a third constant current component adjustment connection having a third positive weight.
In this accumulation circuit, the accumulator node forms the receiver node of the first, second and third connections, the synchronisation node forms the emitter node of the third connection. For each of the N inputs, the first and second connections are configured to address, to the accumulator node, respective first and second events having between them a first time interval representing a respective operand provided on said input. The synchronisation node is configured to deliver a third event once the first and second events have been addressed for each of the N inputs, whereby the accumulator node increases its potential value until delivery of a fourth event. The third and fourth events have between them a second time interval related to a time interval representing a weighted sum of the operands provided on the N inputs.
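The accumulation principle can be sketched numerically under assumed parameters (the base current weight G and the relation Vt/G = 500 time units are illustrative assumptions, not values from the disclosure):

```python
# Numeric sketch of the accumulation circuit: each input i switches a
# constant current c_i * G on for the duration dt_i of its event pair, so
# the accumulator stores V = G * sum(c_i * dt_i); the synchronisation
# event then drives V to the threshold with current G, giving a read-out
# interval related to the weighted sum of the operands.
VT = 1.0           # threshold (assumed value)
G = VT / 500.0     # assumed base current weight (VT / G = 500 time units)

def accumulate(coeffs, intervals):
    v = G * sum(c * dt for c, dt in zip(coeffs, intervals))
    assert v < VT, "stored value must stay below threshold"
    return (VT - v) / G   # time from the third event to the fourth event

dt_out = accumulate([0.5, 0.25], [100.0, 200.0])
```

Here the weighted sum 0.5·100 + 0.25·200 = 100 time units is read out as 500 − 100 = 400 time units, an interval related to the sum by a fixed affine relation.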
In an example of a device for processing data according to the invention, the accumulation circuit is part of a weighted addition circuit further comprising:
a second accumulator node;
a fourth constant current component adjustment connection having the third weight; and
a fifth and sixth connection.
In this weighted addition circuit, the synchronisation node of the accumulation circuit forms the emitter node of the fourth connection, the accumulator node of the accumulation circuit forms the emitter node of the fifth connection, and the second accumulator node forms the receiver node of the fourth connection and the emitter node of the sixth connection. In response to delivery of the third event by the synchronisation node, the accumulator node of the accumulation circuit increases its potential value until delivery of a fourth event on the fifth connection, and the second accumulator node increases its potential value until delivery of a fifth event on the sixth connection, the fourth and fifth events having between them a third time interval related to a time interval representing a weighted sum of the operands provided on the N inputs of the accumulation circuit.
Another example of a device for processing data comprises at least one linear combination circuit including two accumulation circuits, which share their synchronisation node, and a subtractor circuit configured to respond to the third event delivered by the shared synchronisation node and to the fourth events respectively delivered by the accumulator nodes of the two accumulation circuits by delivering a pair of events having between them a third time interval representing the difference between the weighted sum for one of the two accumulation circuits and the weighted sum for the other of the two accumulation circuits.
In some embodiments of the device, the set of processing nodes comprises at least one node, the current value of which has a component that decreases exponentially between two events received on at least one exponentially decreasing current component adjustment connection having a respective weight. The receiver node of an exponentially decreasing current component adjustment connection is arranged to react to an event received on this connection by adding the weight of the connection to the exponentially decreasing component of its current value.
Another example of a device for processing data comprises at least one logarithm calculation circuit, which itself comprises:
an accumulator node;
first and second constant current component adjustment connections, the first connection having a positive weight, and the second connection having a weight opposite to the weight of the first connection;
a third exponentially decreasing current component adjustment connection; and
at least one fourth connection.
In this logarithm calculation circuit, the accumulator node forms the receiver node of the first, second and third connections and the emitter node of the fourth connection. The first and second connections are configured to address, to the accumulator node, respective first and second events having between them a first time interval related to a time interval representing an input value of the logarithm calculation circuit. The third connection is configured to address, to the accumulator node, a third event simultaneous or posterior to the second event, whereby the accumulator node increases its potential value until delivery of a fourth event on the fourth connection, the third and fourth events having between them a second time interval related to a time interval representing a logarithm of the input value.
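The principle behind such a logarithmic read-out can be illustrated as follows (a sketch of the underlying mathematics, not the exact wiring of the circuit; the decay constant, threshold and initial current are assumed values):

```python
import math

# Principle behind the logarithm circuit: a potential fed by an
# exponentially decreasing current g(t) = g0 * exp(-t / tau) reaches the
# threshold after a time that is logarithmic in g0. Integrating
# dV/dt = g0 * exp(-t / tau) from V = 0 up to V = VT gives
# t = tau * ln(g0 * tau / (g0 * tau - VT)), so a value carried in g0 is
# read out as a logarithm. Euler integration below checks the closed form.
TAU = 50.0   # assumed decay time constant of the gf component
VT = 1.0     # threshold (assumed value)

def time_to_threshold(g0, dt=1e-4):
    """Euler integration of dV/dt = g0 * exp(-t / TAU) until V >= VT."""
    v, t = 0.0, 0.0
    while v < VT:
        v += g0 * math.exp(-t / TAU) * dt
        t += dt
    return t

g0 = 0.1
closed_form = TAU * math.log(g0 * TAU / (g0 * TAU - VT))
```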
The processing device can further comprise at least one deactivation connection, the receiver node of which is a node capable of cancelling out its exponentially decreasing component of current in response to an event received on the deactivation connection.
Another example of a device for processing data comprises at least one exponentiation circuit, which itself comprises:
an accumulator node;
a first exponentially decreasing current component adjustment connection;
a second deactivation connection;
a third constant current component adjustment connection; and
at least one fourth connection.
In this exponentiation circuit, the accumulator node forms the receiver node of the first, second and third connections and the emitter node of the fourth connection. The first and second connections are configured to address, to the accumulator node, respective first and second events having between them a first time interval related to a time interval representing an input value of the exponentiation circuit. The third connection is configured to address, to the accumulator node, a third event simultaneous or posterior to the second event, whereby the accumulator node increases its potential value until delivery of a fourth event on the fourth connection, the third and fourth events having between them a second time interval related to a time interval representing an exponentiation of the input value.
Another example of a device for processing data comprises at least one multiplier circuit, which itself comprises:
first, second and third accumulator nodes;
a synchronisation node;
first, second, third, fourth and fifth constant current component adjustment connections, the first, third and fifth connections having a positive weight, and the second and fourth connections having a weight opposite to the weight of the first, third and fifth connections;
sixth, seventh and eighth exponentially decreasing current component adjustment connections;
a ninth deactivation connection; and
at least one tenth connection.
In this multiplier circuit, the first accumulator node forms the receiver node of the first, second and sixth connections and the emitter node of the seventh connection, the second accumulator node forms the receiver node of the third, fourth and seventh connections and the emitter node of the fifth and ninth connections, the third accumulator node forms the receiver node of the fifth, eighth and ninth connections and the emitter node of the tenth connection, and the synchronisation node forms the emitter node of the sixth and eighth connections. The first and second connections are configured to address, to the first accumulator node, respective first and second events having between them a first time interval related to a time interval representing a first operand of the multiplier circuit. The third and fourth connections are configured to address, to the second accumulator node, respective third and fourth events having between them a second time interval related to a time interval representing a second operand of the multiplier circuit. The synchronisation node is configured to deliver a fifth event on the sixth and eighth connections once the first, second, third and fourth events have been received. Thus, the first accumulator node increases its potential value until delivery of a sixth event on the seventh connection and then, in response to the sixth event, the second accumulator node increases its potential value until delivery of a seventh event on the fifth and ninth connections. In response to this seventh event, the third accumulator node increases its potential value until delivery of an eighth event on the tenth connection, the seventh and eighth events having between them a third time interval related to a time interval representing the product of the first and second operands.
Sign detection logic can be associated with the multiplier circuit in order to detect the respective signs of the first and second operands and cause two events having between them a time interval representing the product of the first and second operands to be delivered on one or the other of two outputs of the multiplier circuit according to the signs detected.
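One way of reading the arithmetic idea behind a multiplier built from constant and exponentially decreasing current components (an assumed interpretation for illustration, not the literal wiring of the circuit) is a product computed through logarithms, each stage corresponding to a charge-to-threshold interval:

```python
import math

# Sketch of the arithmetic idea: with logarithmic and exponential
# read-outs available, a product of two operands in (0, 1] can be
# obtained as exp(log(a) + log(b)), the addition being a simple
# accumulation of time intervals.
def multiply_via_logs(a, b):
    """Product of two operands in (0, 1] computed through logarithms."""
    return math.exp(math.log(a) + math.log(b))

p = multiply_via_logs(0.5, 0.4)
```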
In a typical embodiment of the processing device, each connection is associated with a delay parameter, in order to signal the receiver node of this connection to carry out a change of state with a delay, with respect to the reception of an event on the connection, indicated by said parameter.
The time interval Δt between two events representing a value having an absolute value x can have, in particular, the form Δt=Tmin+x·Tcod, where Tmin and Tcod are predefined time parameters. The values represented by time intervals have, for example, absolute values x between 0 and 1.
A logarithmic scale rather than a linear one for Δt as a function of x can also be suitable for certain uses. Other scales can also be used.
The processing device can have special arrangements in order to handle signed values. It can thus comprise, for an input value:
a first input comprising one node or two nodes out of the set of processing nodes, the first input being arranged to receive two events having between them a time interval representing a positive value of the input value; and
a second input comprising one node or two nodes out of the set of processing nodes, the second input being arranged to receive two events having between them a time interval representing a negative value of the input value.
For an output value, the processing device can comprise:
a first output comprising one node or two nodes out of the set of processing nodes, the first output being arranged to deliver two events having between them a time interval representing a positive value of said output value; and
a second output comprising one node or two nodes out of the set of processing nodes, the second output being arranged to deliver two events having between them a time interval representing a negative value of said output value.
In an embodiment of the processing device, the set of processing nodes is in the form of at least one programmable array, the nodes of the array having a shared behaviour model according to the events received. This device further comprises programming logic for adjusting the weights and delay parameters of the connections between the nodes of the array according to a calculation program, and a control unit for providing input values to the array and recovering the output values calculated according to the program.
Other features and advantages of the present invention will appear in the following description, in reference to the appended drawings, in which:
A data processing device as proposed here works by representing the processed values not as amplitudes of electric signals or as binary-encoded numbers processed by logic circuits, but as time intervals between events occurring within a set of processing nodes having connections between them.
In the context of the present disclosure, an embodiment of the data processing device according to an architecture similar to those of artificial neural networks is presented. Although the data processing device does not necessarily have an architecture strictly corresponding to that which people agree to call “neural networks”, the following description uses the terms “node” and “neuron” interchangeably, just like it uses the term “synapse” to designate the connections between two nodes or neurons in the device.
The synapses are oriented, i.e. each connection has an emitter node and a receiver node and transmits, to the receiver node, events generated by the emitter node. An event typically manifests itself as a spike in a voltage signal or current signal delivered by the emitter node and influencing the receiver node.
As is usual in the context of artificial neural networks, each connection or synapse has a weight parameter w that measures the influence that the emitter node exerts on the receiver node during an event.
A description of the behaviour of each node can be given by referring to a potential value V corresponding to the membrane potential V in the paradigm of artificial neural networks. The potential value V of a node varies over time according to the events that the node receives on its incoming connections. When this potential value V reaches or exceeds a threshold Vt, the node emits an event (“spike”) that is transmitted to the node(s) located downstream.
In order to describe the behaviour of a node, or neuron, in an exemplary embodiment of the invention, reference can further be made to a current value g having a component ge and optionally a component gƒ.
The component ge is a component that remains constant, or substantially constant, between two events that the node receives on a particular synapse that is called here constant current component adjustment connection.
The component gƒ is an exponentially decreasing component, i.e. it decays exponentially between two events that the node receives on a particular synapse that is called here exponentially decreasing current component adjustment connection.
A node that takes into account an exponentially decreasing current component gƒ can further receive events for activation and deactivation of the component gƒ on a particular synapse that is called here activation connection.
In the example in question, the behaviour of a processing node can therefore be expressed in a generic manner by a set of differential equations:

τm·dV/dt = ge + gate·gƒ
dge/dt = 0
τƒ·dgƒ/dt = −gƒ (1)

where:
In the system (1), it is considered that there is no leak of the membrane potential V, or that the dynamics of this leak are on a much larger time scale than all the other dynamics operating in the device.
In this model, four types of synapses that influence the behaviour of a neuron can be distinguished, each synapse being associated with a weight parameter indicating a synaptic weight w, positive or negative:
Each synaptic connection is further associated with a delay parameter that gives the delay in propagation between the emitter neuron and the receiver neuron.
A neuron triggers an event when its potential value V reaches a threshold Vt, i.e.:

V ≥ Vt (2)
The triggering of the event gives rise to a spike delivered on each synapse of which the neuron forms the emitter node, and to the resetting of its state variables:

V ← Vreset (3)
ge ← 0 (4)
gƒ ← 0 (5)
gate ← 0 (6)
Without losing any generality, the case where Vreset=0 can be considered.
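The behaviour model described above (constant component ge, gated exponentially decreasing component gƒ, threshold Vt, reset rules (3) to (6)) can be sketched with a simple Euler integration; the class, time constants and weight values below are illustrative assumptions, not values specified by the disclosure:

```python
# Euler-integrated sketch of the node behaviour: dV/dt is driven by the
# constant component ge and, when the gate is open, by the exponentially
# decreasing component gf; on reaching the threshold the node emits an
# event and resets V, ge, gf and gate as in rules (3) to (6).
class Neuron:
    def __init__(self, vt=1.0, tau_m=1.0, tau_f=20.0):
        self.vt, self.tau_m, self.tau_f = vt, tau_m, tau_f
        self.v = 0.0      # membrane potential V
        self.ge = 0.0     # constant current component
        self.gf = 0.0     # exponentially decreasing current component
        self.gate = 0.0   # 0 or 1: gf contribution closed / open
        self.spikes = 0

    def step(self, dt):
        self.v += dt * (self.ge + self.gate * self.gf) / self.tau_m
        self.gf -= dt * self.gf / self.tau_f          # exponential decay
        if self.v >= self.vt:                         # threshold reached
            self.spikes += 1                          # emit an event
            self.v, self.ge, self.gf, self.gate = 0.0, 0.0, 0.0, 0.0

n = Neuron()
n.ge = 0.01               # an event on a ge-type synapse added weight 0.01
for _ in range(200):      # V climbs linearly and crosses the threshold once
    n.step(1.0)
```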
Hereinafter, the notation Tsyn designates the delay in propagation along a standard synapse, and the notation Tneu designates the time that a neuron takes to transmit the event when producing its spike after having been triggered by an input synaptic event. Tneu can for example represent the time step of a neural simulator.
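As a behavioural sketch only (not part of the circuits described hereinafter), the node model and reset rules above can be simulated in an event-driven manner. The class below assumes the dynamics τm·dV/dt = ge + gate·gƒ with gƒ decaying with time constant τƒ, which matches the description of the constant and exponentially decreasing components; the class name and all parameter values are illustrative.

```python
import math

class Node:
    """Event-driven sketch of a processing node (assumed dynamics:
    tau_m * dV/dt = ge + gate * gf, with gf decaying with time constant tau_f)."""

    def __init__(self, Vt=1.0, tau_m=1.0, tau_f=0.5):
        self.Vt, self.tau_m, self.tau_f = Vt, tau_m, tau_f
        self.V = self.ge = self.gf = self.gate = 0.0

    def advance(self, dt):
        """Closed-form state update over an event-free interval dt."""
        if self.gate and self.gf:
            decay = math.exp(-dt / self.tau_f)
            # integral of gf * exp(-s/tau_f) over [0, dt] is tau_f * gf * (1 - decay)
            self.V += (self.ge * dt + self.tau_f * self.gf * (1.0 - decay)) / self.tau_m
            self.gf *= decay
        else:
            self.V += self.ge * dt / self.tau_m

    def receive(self, kind, w):
        """Apply an event arriving on a V-, ge-, gf- or gate-synapse of weight w."""
        if kind == 'V':
            self.V += w
        elif kind == 'ge':
            self.ge += w
        elif kind == 'gf':
            self.gf += w
        elif kind == 'gate':
            self.gate = w

    def fires(self):
        """Threshold test (2); on a spike, reset the state per (3)-(6), Vreset = 0."""
        if self.V >= self.Vt:
            self.V = self.ge = self.gf = self.gate = 0.0
            return True
        return False
```

For example, loading ge = Vt·τm/T and advancing by T brings the node exactly to threshold, which is the mechanism exploited by the accumulator neurons described below.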
A standard weight we is defined as the minimum excitation weight that must be applied on a V-synapse in order to trigger a neuron from the reset state, and another standard weight wi is defined as the inhibition weight having the contrary effect:

we = Vt (7)
wi = −we (8)
The values processed by the device are represented by time intervals between events. Two events of a pair of events are separated by a time interval Δt that is a function of the value x encoded by this pair:
Δt=ƒ(x) (9)
where ƒ is an encoding function chosen for the representation of the data in the device.
The two events of the pair encoding this value x can be delivered by the same neuron n or by two distinct neurons.
In the case of the same neuron n, delivering events at successive times en(i), i=0, 1, 2, etc., it can be considered that this neuron n encodes a time-varying signal u(t), the discrete values of which are given by:
u(en(i)) = ƒ−1(en(i+1)−en(i)), ∀i=2·p, p∈ℕ (10)
where ƒ−1 is the inverse of the chosen encoding function and i is an even number.
The encoding function ƒ can be chosen while taking into account the signals processed in a particular system, and adapted to the required precision. The function ƒ calculates the interval between spikes associated with a particular value. In the rest of the present description, embodiments of the processing device using a linear encoding function are illustrated:
Δt=ƒ(x)=Tmin+x·Tcod (11)
with x∈[0, 1].
This representation of the function ƒ: [0, 1]→[Tmin, Tmax] allows any value x between 0 and 1 to be encoded linearly by a time interval between Tmin and Tmax=Tmin+Tcod. The value of Tmin can be zero. However, it is advantageous for it to be non-zero. Indeed, if two events representing a value come from the same neuron or are received by the same neuron, the minimum interval Tmin>0 gives this neuron time to reset. Moreover, a choice of Tmin>0 allows certain arrangements of neurons to respond to the first input event and propagate a change of state before receiving a second event.
The form (11) for the encoding function ƒ is not the only one possible. Another suitable choice is to take a logarithmic function, which allows a wide range of values to be encoded with dynamics that are suitable for certain uses, in this case with less precision for large values.
To represent signed values, two different paths, one for each sign, can be used. Positive values are thus encoded using a particular neuron, and negative values using another neuron. Arbitrarily, zero can be represented as a positive value or a negative value. Hereinafter, it is represented as a positive value.
Thus, to continue the example of form (11), if a value x has a value in the range [−1, +1], it is represented by a time interval Δt=Tmin+|x|·Tcod between two events propagated on the path associated with the positive values if x≥0 and on the path associated with the negative values if x<0.
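The encoding just described can be sketched as follows, assuming the linear form (11) with illustrative constants Tmin=10 and Tcod=100 (arbitrary time units); the function names are not part of the specification.

```python
# Illustrative time constants for the linear encoding (11)
T_MIN, T_COD = 10.0, 100.0

def encode(x):
    """Encode a signed value x in [-1, 1] per form (11): the interval between
    the two spikes is Tmin + |x|*Tcod, and the sign selects the path."""
    assert -1.0 <= x <= 1.0
    path = 'plus' if x >= 0 else 'minus'  # zero travels on the positive path
    return T_MIN + abs(x) * T_COD, path

def decode(dt, path):
    """Inverse of the encoding: x = (dt - Tmin) / Tcod, signed by the path."""
    x = (dt - T_MIN) / T_COD
    return x if path == 'plus' else -x
```

Decoding the encoded interval recovers the original value, including its sign.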
The choice of (11) for the encoding function leads to the definition of two standard weights for the ge-synapses. The weight wacc is defined as being the value of ge necessary to trigger a neuron, from its reset state, after the time Tmax=Tmin+Tcod, i.e., considering (1):

wacc = Vt·τm/Tmax (12)

Moreover, the weight w̃acc is defined as the value of ge necessary to trigger a neuron, from its reset state, after the time Tcod:

w̃acc = Vt·τm/Tcod (13)

For the gƒ-synapses, another standard weight gmult can be given as:

gmult = Vt·τm/τƒ (14)
The connections between nodes of the device can further be each associated with a respective delay parameter. This parameter indicates a delay with which the receiver node of the connection carries out a change of state, with respect to the emission of an event on the connection. The indication of delay values by these delay parameters associated with the synapses allows suitable sequencing of the operations in the processing device to be ensured.
Various technologies can be used to implement the processing nodes and their interconnections in order for them to behave in the way described by the equations (1)-(6), namely the technologies routinely used in the well-known field of artificial neural networks. Each node can, for example, be created using analogue technology, with resistive and capacitive elements in order to preserve and vary a voltage level and transistor elements in order to deliver events when the voltage level exceeds the threshold Vt.
Another possibility is to use digital technologies, for example based on field-programmable gate arrays (FPGAs), which provide a convenient means for implementing artificial neurons.
Below, a certain number of devices or circuits for processing data that are made using interconnected processing nodes are presented. In
Some of the nodes or neurons shown in these drawings are named in such a way as to evoke the functions resulting from their arrangement in the circuit: ‘input’ for an input neuron, ‘input+’ for the input of a positive value, ‘input−’ for the input of a negative value, ‘output’ for an output neuron, ‘output+’ for the output of a positive value, ‘output−’ for the output of a negative value, ‘recall’ for a neuron used to recover a value, ‘acc’ for an accumulator neuron, ‘ready’ for a neuron indicating the availability of a result or of a value, etc.
The activation of the recall neuron 15 triggers the output neuron 16 at times Tsyn and Tsyn+ƒ(x), and thus the circuit 10 delivers two events separated in time by the value ƒ(x) representing the constant x.
A.1. Inverting Memory
This device 18 stores an analogue value x encoded by a pair of input spikes provided at an input neuron 21 with an interval Δtin=ƒ(x), using an integration of current over the dynamic range ge in an acc neuron 30. The value x is stored in the membrane potential of the acc neuron 30 and read during the activation of a recall neuron 31, which leads to delivering a pair of events separated by a time interval Δtout corresponding to the value 1−x at the output neuron 33, i.e. Δtout=ƒ(1−x).
The input neuron 21 belongs to a group of nodes 20 used to produce two events separated by ƒ(x)−Tmin=x·Tcod on ge-synapses 26, 27 directed towards the acc neuron 30. This group comprises a ‘first’ neuron 23 and a ‘last’ neuron 25. Two excitation V-synapses 22, 24 having a delay Tsyn go from the input neuron 21 to the first neuron 23 and to the last neuron 25, respectively. The V-synapse 22 has a weight we, while the V-synapse 24 has a weight equal to we/2. The first neuron 23 inhibits itself via a V-synapse 28 having a weight wi and a delay Tsyn.
The excitation ge-synapse 26 goes from the first neuron 23 to the acc neuron 30, and has the weight wacc and a delay of Tsyn+Tmin. The inhibiting ge-synapse 27 goes from the last neuron 25 to the acc neuron 30, and has the weight −wacc and a delay Tsyn. An excitation V-synapse 32 goes from the recall neuron 31 to the output neuron 33, and has the weight we and a delay of 2Tsyn+Tneu. An excitation V-synapse 34 goes from the recall neuron 31 to the acc neuron 30, and has the weight wacc and a delay Tsyn. Finally, an excitation V-synapse 35 goes from the acc neuron 30 to the output neuron 33, and has the weight we and a delay Tsyn.
The operation of the inverting-memory device 18 is illustrated by
Emission of a first event (spike) at time tin1 at the input neuron 21 triggers an event at the output of the first neuron 23 after the time Tsyn+Tneu, i.e. at time tfirst1 in
The emission of the second spike at time tin2=tin1+Tmin+x·Tcod at the input neuron 21 brings the last neuron 25 to the threshold potential Vt. An event is then produced at time tlast1=tin2+Tsyn+Tneu on the inhibiting ge-synapse 27. The second spike also triggers the resetting of the potential of the first neuron 23 to zero via the synapse 22. The event transported by the ge-synapse 27 in response to the second spike stops the accumulation carried out by the acc neuron 30 at time tend1=tst1+x·Tcod.
At this stage, the potential value

V = Vt·x·Tcod/Tmax

is stored in the acc neuron 30 in order to memorise the value x. Its complement 1−x can then be read by activating the recall neuron 31, which takes place at time trecall1 in
Finally, the two events delivered by the output neuron 33 are separated by a time interval ΔTout=tout2−tout1=Tmin+(1−x)·Tcod=ƒ(1−x).
It is noted that the value x is stored in the acc neuron 30 upon reception of the two input spikes and immediately available to be read by activating the recall neuron 31.
Since the standard weight we was defined as the minimum excitation weight that must be applied to a V-synapse in order to trigger a neuron from the reset state, it is noted that the processing circuit 18 of
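The timing of the inverting memory can be checked numerically as a rough model, assuming the standard weight wacc = Vt·τm/Tmax (a reconstructed value) and the illustrative constants Tmin=10, Tcod=100; the function below models only the store/read arithmetic, not the individual synapses.

```python
T_MIN, T_COD = 10.0, 100.0
T_MAX = T_MIN + T_COD
V_T, TAU_M = 1.0, 1.0
W_ACC = V_T * TAU_M / T_MAX  # assumed: ge value reaching Vt after Tmax

def inverting_memory(x):
    """Timing model of circuit 18: store x, then read out the interval f(1 - x)."""
    store_duration = x * T_COD                 # f(x) - Tmin, set by first/last neurons
    v_stored = W_ACC * store_duration / TAU_M  # potential held by the acc neuron
    # recall re-applies wacc; the remaining time to threshold encodes 1 - x
    return (V_T - v_stored) * TAU_M / W_ACC    # = Tmin + (1 - x) * Tcod = f(1 - x)
```

For x = 0.3 the read-out interval equals Tmin + 0.7·Tcod, i.e. ƒ(1−x), as derived above.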
A.2. Memory
This device 40 memorises an analogue value x encoded by a pair of input spikes provided at an input neuron 21 with an interval Δtin=ƒ(x), using an integration of current on the dynamic range ge in two cascaded acc neurons 42, 44 in order to form a non-inverting output with a pair of events separated by a time interval Δtout=ƒ(x).
The memory circuit 40 has an input neuron 21 in order to receive the value to be stored, a read-command input formed by a recall neuron 48, a ready neuron 47 indicating the time from which a reading command can be presented to the recall neuron 48, and an output neuron 50 in order to return the stored value. All the synapses of this memory circuit have the delay Tsyn.
The input neuron 21 belongs to a group of nodes 20 similar to that described in reference to
A ge-synapse 41 goes from the first neuron 23 to the first acc neuron 42, and has the weight wacc. The acc neuron 42 thus starts accumulation at time tst1=tin1+2·Tsyn+Tneu (
The accumulation in the acc neuron 42 continues until the time tend1=tst1+Tmax at which the potential of the acc neuron 42 reaches the threshold Vt, which triggers the emission of a spike at time tacc1=tend1+Tneu on the ge-synapse 45 (
At this stage, the potential value stored in the acc neuron 44 is
which allows the value x to be memorized. The reading can then take place by activating the recall neuron 48, which takes place at time trecall1 in
The activation of the recall neuron 48 triggers an event at time tout1=trecall1+Tsyn+Tneu on the output neuron 50 via the V-synapse 49, and restarts the process of accumulation in the acc neuron 44 via the ge-synapse 51 at time tst22=trecall1+Tsyn. The accumulation continues in the acc neuron 44 until the time tend22 at which its potential value reaches the threshold Vt, i.e. tend22=tst22+ƒ(x)−Tsyn−Tneu. An event is emitted on the V-synapse 52 at time tacc21=tend22+Tneu and triggers another event on the output neuron 50 at time tout2=tacc21+Tsyn+Tneu=trecall1+Tsyn+Tneu+ƒ(x).
Finally, the two events delivered by the output neuron 50 are separated by a time interval ΔTout=tout2−tout1=ƒ(x).
It is noted that the acc neuron 42 in
It is also noted that the memory circuit 40 functions for any encoding of the value x by a time interval between Tmin and Tmax, without being limited to the form (11) above.
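Schematically, the non-inverting memory can be viewed as two chained inversions: the first acc stage stores the complement of the input value, and the second acc stage inverts it back. A minimal sketch of this interval arithmetic, under the linear encoding (11) with illustrative constants:

```python
T_MIN, T_COD = 10.0, 100.0  # illustrative time constants

def f(x):
    """Linear encoding (11)."""
    return T_MIN + x * T_COD

def invert_stage(x):
    """One acc stage maps an interval f(x) to an interval f(1 - x)."""
    return 1.0 - x

def memory(x):
    """Circuit 40 viewed as two cascaded inverting stages (acc 42, then acc 44):
    the stored value is returned un-inverted, so the output interval is f(x)."""
    return f(invert_stage(invert_stage(x)))
```

The output interval equals ƒ(x) for any x, matching ΔTout = tout2 − tout1 = ƒ(x) above.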
A.3. Signed Memory
The signed-memory circuit 60 is based on a memory circuit 40 of the type shown in
Moreover, the neurons 61, 62 are connected, respectively, to ready+ and ready− neurons 65, 66 by excitation V-synapses 67, 68 having a weight of we/4. The signed memory circuit has a recall neuron 70 connected to the ready+ and ready− neurons 65, 66 by respective excitation V-synapses 71, 72 having the weight we/2. Each of the ready+ and ready− neurons 65, 66 is connected to the recall neuron 48 of the circuit 40 by respective excitation V-synapses 73, 74 having the weight we. An inhibiting V-synapse 75 having a weight of wi/2 goes from the ready+ neuron 65 to the ready− neuron 66, and reciprocally, an inhibiting V-synapse 76 having a weight of wi/2 goes from the ready− neuron 66 to the ready+ neuron 65. The ready+ neuron 65 is connected to the output− neuron 82 of the signed memory circuit by an inhibiting V-synapse 77 having a weight of 2wi. The ready− neuron 66 is connected to the output+ neuron 81 of the signed memory circuit by an inhibiting V-synapse 78 having a weight of 2wi.
The output neuron 50 of the circuit 40 is connected to the output+ and output− neurons 81, 82 by respective excitation V-synapses 79, 80 having the weight we.
The output of the signed memory circuit 60 comprises a ready neuron 84 that is the receiver node of an excitation V-synapse 85 having the weight we coming from the ready neuron 47 of the memory circuit 40.
and its ready neuron 47 produces an event at time tready1, as described above.
Once the ready neuron 47 has produced its event, the recall neuron 70 can be activated in order to read the signed piece of data, which takes place at time trecall1 in
Activation of the recall neuron 70 triggers the ready+ or ready− neuron 65, 66 via the V-synapse 71 or 72, and this triggering resets the other ready− or ready+ neuron 65, 66 to zero via the V-synapse 75 or 76. The event delivered by the ready+ or ready− neuron 65, 66 inhibits the output− or output+ neuron 82, 81 via the V-synapse 77 or 78 by bringing its potential to −2Vt.
The event delivered by the ready+ or ready− neuron 65, 66 at time tsign1 is provided via the V-synapse 73 or 74. This triggers the emission of a pair of spikes separated by a time interval equal to ƒ(|x|) by the output neuron 50 of the circuit 40. This pair of spikes communicated to the output+ and output− neurons 81, 82 via the V-synapses 79, 80 twice triggers, at times tout1 and tout2=tout1+ƒ(|x|), the one of the output+ and output− neurons 81, 82 that corresponds to the sign of the input piece of data x, and resets the potential value of the other neuron 81, 82 to zero.
It is noted that the signed-memory circuit 60 shown in
A.4. Synchroniser
Each signal encodes a value xk for k=0, 1, . . . , N−1 and is in the form of a pair of spikes occurring at times tink1 and tink2=tink1+Δtk with Δtk=ƒ(xk)∈[Tmin, Tmax]. These signals are returned at the output of the circuit 90 in a synchronised manner, i.e. each signal encoding a value xk is found at the output in the form of a pair of spikes occurring at times toutk1 and toutk2=toutk1+Δtk with tout01=tout11= . . . =toutN-11, as shown in
The circuit 90 shown in
The synchronisation circuit 90 comprises a sync neuron 95 that is the receiver node of N excitation V-synapses 960, . . . , 96N−1 having a weight of we/N, the emitter nodes of which are, respectively, the ready neurons 470, . . . , 47N−1 of the memory circuits 400, . . . , 40N−1. The circuit 90 also comprises excitation V-synapses 970, . . . , 97N−1 having the weight we, the sync neuron 95 as an emitter node, and, respectively, the recall neurons 480, . . . , 48N−1 of the memory circuits 400, . . . , 40N−1 as receiver nodes.
The sync neuron 95 receives the events produced by the ready neurons 470, . . . , 47N−1 as the N input signals are loaded into the memory circuits 400, . . . , 40N−1, i.e. at times trdy01, trdy11 in
The presentation of the synchronisation circuit in reference to
It is also possible to put out only a single event, at time tout1=tout01=tout11= . . . =toutN-11, as the first event of all the pairs forming the synchronised output signals. The sync neuron 95 thus directly controls the emission of the first spike on a particular output of the circuit (which can be one of the output neurons 920, . . . , 92N−1 or a specific neuron), and then the second spike of each pair by reactivating the acc neurons 44 of the memory circuits 400, . . . , 40N−1 via a ge-synapse. In other words, the sync neuron 95 acts as the recall neurons 48 of the various memory circuits.
Such a synchroniser circuit 98 is illustrated for the case where N=2 by
It should be noted that in the example of
More generally, in the context of the present invention, it is not necessary for the two events of a pair representing a value to come from a single node (in the case of an output value) or to be received by a single node (in the case of an input value).
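The synchroniser's behaviour reduces to a simple rule: wait until every memory circuit has signalled ready, then recall all of them at once. A behavioural sketch of this rule (the function name and time values are illustrative, not part of the circuit description):

```python
def synchronise(intervals, ready_times):
    """Behavioural model of circuit 90: the sync neuron reaches threshold once
    every memory circuit has signalled 'ready', then recalls them all, so each
    output pair keeps its own interval while all first spikes coincide."""
    t_start = max(ready_times)  # the sync neuron fires on the last ready event
    return [(t_start, t_start + dt) for dt in intervals]
```

All output pairs start at the time of the last ready event, and each pair still encodes its own value.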
B.1. Minimum
Besides the input neurons 101, 102 and the output neuron 103, this circuit 100 comprises two ‘smaller’ neurons 104, 105. An excitation V-synapse 106, having a weight of we/2, goes from the input neuron 101 to the smaller neuron 104. An excitation V-synapse 107, having a weight of we/2, goes from the input neuron 102 to the smaller neuron 105. An excitation V-synapse 108, having a weight of we/2, goes from the input neuron 101 to the output neuron 103. An excitation V-synapse 109, having a weight of we/2, goes from the input neuron 102 to the output neuron 103. An excitation V-synapse 110, having a weight of we/2, goes from the smaller neuron 104 to the output neuron 103. An excitation V-synapse 111, having a weight of we/2, goes from the smaller neuron 105 to the output neuron 103. An inhibiting V-synapse 112, having a weight of wi/2, goes from the smaller neuron 104 to the smaller neuron 105. An inhibiting V-synapse 113, having a weight of wi/2, goes from the smaller neuron 105 to the smaller neuron 104. An inhibiting V-synapse 114, having the weight wi, goes from the smaller neuron 104 to the input neuron 102. An inhibiting V-synapse 115, having the weight wi, goes from the smaller neuron 105 to the input neuron 101. All the synapses 106-115 shown in
The emission of the first spike on each input neuron 101, 102 at time tin11=tin21 (
Finally, the output neuron 103 does reproduce, between the events that it delivers, the minimum time interval tout2−tout1=tin12−tin11=Δt1 between the events of the two pairs produced by the input neurons 101, 102. This minimum is available at the output of the circuit 100 upon reception of the second event of the pair that represents it at the input.
The circuit 100 for calculating a minimum of
B.2. Maximum
Besides the input neurons 121, 122 and the output neuron 123, this circuit 120 comprises two ‘larger’ neurons 124, 125. An excitation V-synapse 126, having a weight of we/2, goes from the input neuron 121 to the larger neuron 124. An excitation V-synapse 127, having a weight of we/2, goes from the input neuron 122 to the larger neuron 125. An excitation V-synapse 128, having a weight of we/2, goes from the input neuron 121 to the output neuron 123. An excitation V-synapse 129, having a weight of we/2, goes from the input neuron 122 to the output neuron 123. An inhibiting V-synapse 132, having a weight of wi/2, goes from the larger neuron 124 to the larger neuron 125. An inhibiting V-synapse 133, having a weight of wi/2, goes from the larger neuron 125 to the larger neuron 124. All the synapses shown in
The first spikes emitted in a synchronised manner (tin11=tin21) by the input neurons 121, 122 set the larger neurons 124, 125 to a potential value Vt/2 at time tin11+Tsyn, and trigger a first event on the output neuron 123 at time tout1=tin11+Tsyn+Tneu, (
Finally, the output neuron 123 does reproduce, between the events that it delivers, the maximum time interval tout2−tout1=tin22−tin21=Δt2 between the events of the two pairs produced by the input neurons 121, 122. This maximum is available at the output of the circuit 120 upon reception of the second event of the pair that represents it at the input.
The circuit 120 for calculating a maximum of
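In interval terms, the minimum and maximum circuits both rely on the same observation: with synchronised first spikes, the earlier second spike carries the smaller value and the later one the larger value. A sketch under the linear encoding (11) with illustrative constants (function names are not from the specification):

```python
T_MIN, T_COD = 10.0, 100.0  # illustrative time constants

def f(x):
    """Linear encoding (11)."""
    return T_MIN + x * T_COD

def spike_min(x1, x2):
    """Circuit 100: with synchronised first spikes, the earlier second spike
    closes the output pair, so the output interval is the smaller one."""
    return (min(f(x1), f(x2)) - T_MIN) / T_COD

def spike_max(x1, x2):
    """Circuit 120: the later second spike closes the output pair."""
    return (max(f(x1), f(x2)) - T_MIN) / T_COD
```

Decoding the output interval returns min(x1, x2) or max(x1, x2) respectively.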
C.1. Subtraction
Besides the input neurons 141, 142 and the neurons output+ 143 and output− 144, the subtraction circuit 140 comprises two sync neurons 145, 146 and two ‘inb’ neurons 147, 148. An excitation V-synapse 150, having a weight of we/2, goes from the input neuron 141 to the sync neuron 145. An excitation V-synapse 151, having a weight of we/2, goes from the input neuron 142 to the sync neuron 146. Three excitation V-synapses 152, 153, 154, each having the weight we, go from the sync neuron 145 to the output+ neuron 143, to the output− neuron 144 and to the inb neuron 147, respectively. Three excitation V-synapses 155, 156, 157, each having the weight we, go from the sync neuron 146 to the output− neuron 144, to the output+ neuron 143 and to the inb neuron 148, respectively. An inhibiting V-synapse 158, having the weight wi, goes from the sync neuron 145 to the inb neuron 148. An inhibiting V-synapse 159, having the weight wi, goes from the sync neuron 146 to the inb neuron 147. An excitation V-synapse 160, having a weight of we/2, goes from the output+ neuron 143 to the inb neuron 148. An excitation V-synapse 161, having a weight of we/2, goes from the output− neuron 144 to the inb neuron 147. An inhibiting V-synapse 162, having a weight of 2wi, goes from the inb neuron 147 to the output+ neuron 143. An inhibiting V-synapse 163, having a weight of 2wi, goes from the inb neuron 148 to the output− neuron 144. The synapses 150, 151, 154 and 157-163 are associated with a delay of Tsyn. The synapses 152 and 155 are associated with a delay of Tmin+3·Tsyn+2·Tneu. The synapses 153 and 156 are associated with a delay of 3·Tsyn+2·Tneu.
The operation of the subtraction circuit 140 according to
The first spikes emitted in a synchronised manner (tin11=tin21) by the input neurons 141, 142 set the sync neurons 145, 146 to the potential value Vt/2 at time tin11+Tsyn. The emission of the second spike on the input neuron providing the smallest value, namely the neuron 142 at time tin22=tin21+Δt2 in the example of
Then, emission of a second spike on the other input neuron 141 at time tin12=tin11+Δt1 sets the other sync neuron 145 to the threshold voltage Vt, which triggers an event at time tsync11=tin12+Tsyn+Tneu at the output of this neuron 145. Thus:

tsync11 − tsync21 = Δt1 − Δt2
The two excitation events received by the output− neuron 144, at times tin22+Tmin+4·Tsyn+3·Tneu and tin12+4·Tsyn+3·Tneu are after the inhibiting event received at time tin22+3·Tsyn+2·Tneu. As a result, this neuron 144 does not emit any event when Δt2<Δt1, and thus the sign of the result is suitably signalled.
Finally, the output+ neuron 143 delivers two events having between them a time interval Δtout, with:

Δtout = Tmin + (Δt1 − Δt2) = Tmin + (x1 − x2)·Tcod = ƒ(x1 − x2)
On the output neuron having the correct sign at the output of the subtractor circuit 140, two events are properly obtained having between them the time interval Δtout=ƒ(x1−x2). This result is available at the output of the circuit upon reception of the second event of the input pair having the greatest absolute value.
When two equal values are presented to it at the input, the subtractor circuit 140 shown in
In
The zero neuron 171 acts as a detector of coincidence between the events delivered by the sync neurons 145, 146. Given that these two neurons only deliver events at the time of the second encoding spike of their associated input, detecting this temporal coincidence is equivalent to detecting the equality of the two input values, if the latter are correctly synchronised. The zero neuron 171 only produces an event if it receives two events separated by a time interval less than Tneu from the sync neurons 145, 146. In this case, it directly inhibits the output− neuron 144 via the synapse 178, and deactivates the inb neuron 148 via the synapse 177.
Consequently, two equal input values provided to the subtractor circuit of
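The subtraction can be checked as interval arithmetic: the sync neurons fire at the second input spikes, and the extra Tmin delay on synapses 152 and 155 re-adds the minimum interval, so the winning output neuron carries ƒ(|x1−x2|). A sketch under the linear encoding (11), with illustrative constants:

```python
T_MIN, T_COD = 10.0, 100.0  # illustrative time constants

def f(x):
    """Linear encoding (11)."""
    return T_MIN + x * T_COD

def subtract(x1, x2):
    """Timing model of circuit 140: the output interval is Tmin plus the gap
    between the two sync events, and the triggered output neuron (output+ or
    output-) carries the sign of x1 - x2."""
    dt_out = T_MIN + abs(f(x1) - f(x2))  # = Tmin + |x1 - x2| * Tcod = f(|x1 - x2|)
    sign = 'plus' if x1 >= x2 else 'minus'
    return dt_out, sign
```

Equal inputs give Δtout = Tmin, i.e. ƒ(0), on the positive path, consistent with the zero-detection variant described above.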
C.2. Accumulation
s = Σk=0N−1 αk·xk (16)
where α0, α1, . . . , αN−1 are positive or zero weighting coefficients and the input values x0, x1, . . . , xN−1 are positive or zero.
For each input value xk (0≤k<N), the circuit 180 comprises an input neuron 181k that is part of a respective group 20 of neurons arranged in the same way as in the group 20 described above in reference to
The outgoing connections of the first and last neurons of these N groups of neurons 20 are configured as a function of the coefficients αk of the weighted sum to be calculated.
The first neuron connected to the input neuron 181k (0≤k<N) is the emitter node of an excitation ge-synapse 182k having a weight of αk·wacc and a delay of Tmin+Tsyn. The last neuron connected to the input neuron 181k is the emitter node of an inhibiting ge-synapse 183k having a weight of −αk·wacc and the delay Tsyn.
The acc neuron 184 accumulates the terms αk·xk. Thus, for each input k, the acc neuron 184 is the receiver node of the excitation ge-synapse 182k and of the inhibiting ge-synapse 183k.
The circuit 180 further comprises a sync neuron 185 that is the receiver node of N V-synapses, each having a weight of we/N and the delay Tsyn, respectively coming from the last neurons connected to the N input neurons 181k (0≤k<N). The sync neuron 185 is the emitter node of an excitation ge-synapse 186 having the weight wacc and the delay Tsyn, the receiver node of which is the acc neuron 184.
For each input having two spikes separated by Δtk=Tmin+xk·Tcod on the input neuron 181k, the acc neuron 184 integrates the quantity αk·Vt/Tmax over a duration Δtk−Tmin=xk·Tcod.
Once all the second spikes of the N input signals have been received, the sync neuron 185 is triggered and excites the acc neuron 184 via the ge-synapse 186. The potential of the acc neuron 184 continues to grow for a residual time equal to Tmax−Σk=0N−1αk·xk·Tcod. At the end of this residual time, the threshold Vt is reached and the acc neuron 184 triggers an event.
The delay of this event with respect to that delivered by the sync neuron 185 is Tmax−Σk=0N−1αk·xk·Tcod=ƒ(1−Σk=0N−1αk·xk)=ƒ(1−s). The weighted sum s is thus only made accessible by the circuit 180 in its inverted form (1−s).
The circuit 180 functions in the way that was just described under the condition that Tcod·Σk=0N−1αk·xk<Tmax. The coefficients αk can be normalised in order for this condition to be met for all the possible values of the xk, i.e. such that Σk=0N−1αk≤1.
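The accumulation timing can be sketched numerically: each input k raises the acc potential by a fraction αk·xk·Tcod/Tmax of the threshold, and the residual time to threshold after the sync event encodes the inverted sum. Illustrative constants; the function name is not from the specification.

```python
T_MIN, T_COD = 10.0, 100.0  # illustrative time constants
T_MAX = T_MIN + T_COD

def accumulate(xs, alphas):
    """Timing model of circuit 180: returns the delay between the sync event
    and the spike of the acc neuron, which equals f(1 - s)."""
    s = sum(a * x for a, x in zip(alphas, xs))
    assert T_COD * s < T_MAX, "coefficients should be normalised"
    return T_MAX - s * T_COD  # = Tmin + (1 - s) * Tcod = f(1 - s)
```

For x = (0.4, 0.2) and α = (0.5, 0.5), s = 0.3 and the delay equals Tmin + 0.7·Tcod, i.e. ƒ(1−s).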
C.3. Weighted Sum
A weighted addition circuit 190 can have the structure shown in
In order to obtain the representation of the weighted sum s according to (16), a circuit 180 for weighted accumulation of the type of that described in reference to
The acc neuron 188 is the receiver node of an excitation ge-synapse 191 having the weight wacc and the delay Tsyn, and the emitter node of an excitation V-synapse 192 having the weight we and a delay of Tmin+Tsyn. The output neuron 189 is also the receiver node of an excitation V-synapse 193 having the weight we and the delay Tsyn.
The linearly changing accumulation starts in the acc neuron 188 at the same time as it restarts in the acc neuron 184 of the circuit 180, the two acc neurons 184, 188 being excited on the ge-synapses 186, 191 by the same event coming from the sync neuron 185. Their residual accumulation times, until the threshold Vt is reached, are, respectively, Tmax−Σk=0N-1αk·xk·Tcod and Tmax. Because the synapse 192 has a relative delay of Tmin, the two events triggered on the output neuron 189 have between them the time interval Tmin+Σk=0N-1αk·xk·Tcod=ƒ(s).
The expected weighted sum is represented at the output of the circuit 190. When N=2 and α0=α1=½, this circuit 190 becomes a simple adder circuit, with a scale factor of ½ in order to avoid overflows in the acc neuron 184.
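The role of the second acc neuron 188 can be checked with the same interval arithmetic: acc 184 fires Tmax − s·Tcod after the sync event, while acc 188 fires after the full Tmax plus the relative Tmin delay of synapse 192, so the two output events are separated by ƒ(s). A sketch with illustrative constants:

```python
T_MIN, T_COD = 10.0, 100.0  # illustrative time constants
T_MAX = T_MIN + T_COD

def weighted_sum(xs, alphas):
    """Timing model of circuit 190: the interval between the two output
    events directly encodes the weighted sum, f(s)."""
    s = sum(a * x for a, x in zip(alphas, xs))
    t_first = T_MAX - s * T_COD   # event driven by acc 184 (via synapse 193)
    t_second = T_MAX + T_MIN      # event driven by acc 188, delayed by Tmin (synapse 192)
    return t_second - t_first     # = Tmin + s * Tcod = f(s)
```

With N=2 and α0=α1=1/2, this reproduces the simple adder behaviour with the 1/2 scale factor mentioned above.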
C.4. Linear Combination
The more general case of linear combination is also expressed by equation (16) above, but the coefficients αk can be positive or negative, just like the input values xk. Without losing generality, the coefficients and input values are ordered in such a way that the coefficients α0, α1, . . . , αM−1 are positive or zero and the coefficients αM, αM+1, . . . , αN−1 are negative (N≥2, M≥0, N−M≥0).
In order to take into account the positive or negative values, the circuit 200 for calculating a linear combination shown in
The input neurons 181k of the accumulation circuit 180A are respectively associated with the coefficients αk for 0≤k<M and with the inverted coefficients −αk for M≤k<N. These input neurons 181k for 0≤k<M receive a pair of spikes representing xk when xk≥0 and thus form neurons of the input+ type for these values x0, . . . , xM−1. The input neurons 181k of the circuit 180A for M≤k<N receive a pair of spikes representing xk when xk<0 and thus form neurons of the input− type for these values xM, . . . , xN−1.
The input neurons 181k of the circuit 180B for weighted accumulation are respectively associated with the inverted coefficients −αk for 0≤k<M and with the coefficients αk for M≤k<N. These input neurons 181k for 0≤k<M receive a pair of spikes representing xk when xk<0 and thus form neurons of the input− type for these values x0, . . . , xM−1. The neurons input 181k of the circuit 180B for M≤k<N receive a pair of spikes representing xk when xk≥0 and thus form neurons of the input+ type for these values xM, . . . , xN−1.
The two accumulation circuits 180A, 180B share their sync neuron 185 that is thus the receiver node of 2N V-synapses, each having a weight of we/N and the delay Tsyn, coming from last neurons coupled with the 2N input neurons 181k. The sync neuron 185 of the linear combination calculation circuit 200 is therefore triggered once the N input values x0, . . . , xN−1, positive or negative, have been received on the neurons 181k.
A time ΔTA=Tmax−ΣA·Tcod, where ΣA designates the sum of the terms αk·xk accumulated by the circuit 180A, thus separates the event of the sync neuron 185 from the event delivered by the acc neuron 184 of the circuit 180A.
A time ΔTB=Tmax−ΣB·Tcod, where ΣB designates the sum of the terms αk·xk accumulated by the circuit 180B, likewise separates the event of the sync neuron 185 from the event delivered by the acc neuron 184 of the circuit 180B.
A subtractor circuit 170 that can be of the type of that shown in
The output− neuron 144 and the output+ neuron 143 of the subtractor circuit 170 are respectively connected, via excitation V-synapses 205, 206 having the weight we and the delay Tsyn, to two other output+ and output− neurons 203, 204 that form the outputs of the circuit 200 for calculating a linear combination.
The one of these two neurons that is triggered indicates the sign of the result s of the linear combination. It delivers a pair of events separated by the time interval Δtout=Tmin+|ΔTA−ΔTB|=ƒ(|Σkαk·xk|)=ƒ(|s|).
The availability of this result is indicated on the outside by a ‘start’ neuron 207 receiving two excitation V-synapses 208, 209, having the weight we and the delay Tsyn, coming from the output+ neuron 143 and the output− neuron 144 of the subtractor circuit 170. The start neuron 207 inhibits itself via a V-synapse 210, having the weight wi and the delay Tsyn. The start neuron 207 delivers a spike simultaneously with the first spike of whichever of the output+ and output− neurons 203, 204 is activated.
The coefficients αk can be normalised in order for the condition Tcod·Σk|αk|·|xk|<Tmax to be met in each of the accumulation circuits 180A, 180B for all the possible values of the xk, i.e. such that Σk|αk|≤1, in order for the circuit 200 for calculating a linear combination to function as described above. The normalisation factor must therefore be taken into account in the result.
D.1. Logarithm
The input neuron 211 belongs to a group of nodes 20 similar to that described in reference to
The last neuron 215 is further the emitter node of a gƒ-synapse 217 having the weight gmult and of a gate-synapse 218 having a weight of 1 and the delay Tsyn, the receiver node of these synapses being the acc neuron 216.
The circuit 210 further comprises an output neuron 220 that is the receiver node of an excitation V-synapse 221 having the weight we and a delay of 2·Tsyn coming from the last neuron 215, and of an excitation V-synapse 222 having the weight we and a delay of Tmin+Tsyn coming from the acc neuron 216.
The operation of the logarithm calculation circuit 210 according to
The emission of the first spike at time tin1 at the input neuron 211 triggers an event at the output of the first neuron 213 at time tfirst1=tin1+Tsyn+Tneu. The first neuron 213 starts the accumulation by the acc neuron 216 at time tst1=tin1+Tmin+2·Tsyn+Tneu via the ge-synapse 212.
The emission of the second spike at time tin2=tin1+Tmin+x·Tcod at the input neuron 211 causes the last neuron 215 to deliver an event at time tlast1=tin2+Tsyn+Tneu. This event transported by the ge-synapse 214 stops the accumulation carried out by the acc neuron 216 at time tend1=tlast1+Tsyn=tst1+x·Tcod. At this time, the potential value Vt·x is stored in the acc neuron 216.
Via the synapses 217 and 218, the last neuron 215 further activates the exponential change on the acc neuron 216 at the same time tend1 via the gƒ-synapse 217 and the gate-synapse 218. It should be noted that alternatively, the event transported by the gƒ-synapse 217 could also arrive later at the acc neuron 216 if it is desired to store, in the latter, the potential value Vt·x while other operations are carried out in the device.
After activation by the synapses 217 and 218, the component gƒ of the acc neuron 216 changes according to:
and its membrane potential according to:
This potential V(t) reaches the threshold Vt and triggers an event on the V-synapse 222 at time tacc1=tend1−τƒ·log(x).
A first event is triggered on the output neuron 220 because of the V-synapse 221 at time tout1=tlast1+2Tsyn+Tneu=tend1+Tsyn+Tneu. The second event triggered by the synapse 222 occurs at time tout2=tacc1+Tmin+Tsyn+Tneu=tout1+Tmin−τƒ·log(x).
Finally, the two events delivered by the output neuron 220 are separated by a time interval
The representation of a number proportional to the natural logarithm log(x) of the input value x is indeed obtained at the output. Since 0<x≤1, the logarithm log(x) is a negative value.
If we call A the value exp(−Tcod/τƒ), the circuit 210 of
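The timing relations of the logarithm circuit can be checked with a short numerical sketch. The parameter values are illustrative assumptions, and the base A=exp(−Tcod/τƒ) is inferred from the exponential dynamics described above:

```python
import math

Tmin, Tcod = 10.0, 100.0          # illustrative coding parameters (assumed)
tau_f = Tcod / math.log(4.0)      # gf time constant chosen so that A = 0.25

def input_interval(x):
    # two input spikes separated by Tmin + x*Tcod encode x in (0, 1]
    return Tmin + x * Tcod

def output_interval(x):
    # per the derivation above: dTout = Tmin - tau_f*log(x), with log(x) <= 0
    return Tmin - tau_f * math.log(x)

def decode(dt):
    return (dt - Tmin) / Tcod

A = math.exp(-Tcod / tau_f)       # = 0.25 with the choice of tau_f above
y = decode(output_interval(0.25)) # represents log base A of 0.25, i.e. 1
```

Decoding the output interval with the ordinary coding rule thus yields a number proportional to log(x), scaled by −τƒ/Tcod.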
D.2. Exponentiation
The input neuron 231 belongs to a group of nodes 20 similar to that described in reference to
The circuit 230 further comprises an output neuron 240 that is the receiver node of an excitation V-synapse 241 having the weight we and a delay of 2·Tsyn coming from the last neuron 235, and of an excitation V-synapse 242 having the weight we and a delay of Tmin+Tsyn coming from the acc neuron 238.
The operation of the exponentiation circuit 230 according to
The emission of the first spike at time tin1 at the input neuron 231 triggers an event at the output of the first neuron 233 at time tfirst1=tin1+Tsyn+Tneu. The first neuron 233 starts an exponentially-growing accumulation in the acc neuron 238 at time tst1=tin1+Tmin+2·Tsyn+Tneu via the gƒ-synapse 232 and the gate-synapse 234.
The component gƒ of the acc neuron 238 changes according to:
and its membrane potential according to:
The emission of the second spike at time tin2=tin1+Tmin+x·Tcod at the input neuron 231 causes the last neuron 235 to deliver an event at time tlast1=tin2+Tsyn+Tneu. This event transported by the gate-synapse 236 stops the exponentially-changing accumulation carried out by the acc neuron 238 at time tend1=tlast1+Tsyn=tst1+x·Tcod. At this time, the potential value Vt·(1−A^x) is stored in the acc neuron 238, where, as above,
Via the ge-synapse 237, the last neuron 235 further activates the linear dynamics having the weight
The membrane potential of the neuron 238 thus changes according to:
This potential V(t) reaches the threshold Vt and triggers an event on the V-synapse 242 at time tacc1=tend1+A^x·Tcod.
A first event is triggered on the output neuron 240 because of the V-synapse 241 at time tout1=tlast1+2Tsyn+Tneu=tend1+Tsyn+Tneu. The second event triggered by the synapse 242 occurs at time tout2=tacc1+Tmin+Tsyn+Tneu=tout1+Tmin+A^x·Tcod.
Finally, the two events delivered by the output neuron 240 are separated by a time interval ΔTout=tout2−tout1=Tmin+A^x·Tcod=ƒ(A^x).
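The corresponding numerical check for the exponentiation circuit, with the same caveats (illustrative parameters; base A inferred from the stored potential Vt·(1−A^x)):

```python
import math

Tmin, Tcod = 10.0, 100.0
tau_f = 50.0                      # assumed gf time constant
A = math.exp(-Tcod / tau_f)       # base implied by the exponential accumulation

def output_interval(x):
    # dTout = Tmin + (A**x)*Tcod = f(A**x), per the derivation above
    return Tmin + (A ** x) * Tcod

def decode(dt):
    return (dt - Tmin) / Tcod

# decoding the output interval recovers A**x
x = 0.5
ax = decode(output_interval(x))
```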
The circuit 230 of
The circuit 230 of
This can be used to implement various non-linear calculations using simple operations between logarithm calculation and exponentiation circuits. For example, the sum of two logarithms allows multiplication to be carried out, the subtraction thereof allows division to be carried out, the sum of n times the logarithm allows a number x to be raised to a whole power n, etc.
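These compositions can be illustrated with plain arithmetic; any base 0<A<1 works, and this sketch checks only the identities, ignoring the spike timing:

```python
import math

A = 0.2  # illustrative base, 0 < A < 1

def log_A(x):
    return math.log(x) / math.log(A)

def exp_A(y):
    return A ** y

def multiply(x1, x2):
    return exp_A(log_A(x1) + log_A(x2))   # sum of two logarithms

def divide(x1, x2):
    return exp_A(log_A(x1) - log_A(x2))   # difference of two logarithms

def power(x, n):
    return exp_A(n * log_A(x))            # n times the logarithm
```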
D.3. Multiplication
Each input neuron 251k (k=1 or 2) belongs to a group of nodes 20k similar to that described in reference to
The circuit 250 further comprises a sync neuron 260 that is the receiver node of two excitation V-synapses 2611, 2612 having a weight of we/2 and the delay Tsyn coming, respectively, from the last neurons 2551, 2552. A gƒ-synapse 262 having the weight gmult and the delay Tsyn and an excitation gate-synapse 264 having a weight of 1 and the delay Tsyn go from the sync neuron 260 to the acc neuron 2561.
A gƒ-synapse 265 having the weight gmult and the delay Tsyn and an excitation gate-synapse 266 having a weight of 1 and the delay Tsyn go from the acc neuron 2561 to the acc neuron 2562.
The circuit 250 comprises another acc neuron 268 that plays a role similar to the acc neuron 238 in
Finally, the circuit 250 has an output neuron 274 that is the receiver node of an excitation V-synapse 275, having the weight we and a delay of 2Tsyn, coming from the acc neuron 2562 and of an excitation V-synapse 276, having the weight we and a delay of Tmin+Tsyn, coming from the acc neuron 268.
The operation of the multiplier circuit 250 according to
Each of the two acc neurons 2561, 2562 initially behaves like the acc neuron 216 of
Emission of the second spike at time tin22=tin21+Tmin+x2·Tcod at the input neuron having the smallest value (the input neuron 2512 in the example shown in
Emission of the second spike at time tin12=tin11+Tmin+x1·Tcod at the input neuron having the largest value (the input neuron 2511 in the case of
The potential of the acc neuron 2561 reaches the value Vt and triggers an event on the synapses 265, 266 at time tlog11=tst11−τƒ·log(x1).
The exponential change 2801 is thus activated in the acc neuron 2562 at time tst21=tlog11+Tsyn. The potential of this acc neuron 2562 reaches the threshold Vt and triggers an event on the synapses 271, 272, 275 at time tlog21=tst21−τƒ·log(x2)=tsync1−τƒ·log(x1·x2)+2Tsyn. The gate-synapse 271 deactivates the exponential change 281 in the acc neuron 268 at time tend31=tlog21+Tsyn, and simultaneously, the linear change 282 in the acc neuron 268 is activated via the ge-synapse 272, starting from the value:
The V-synapse 275 triggers the emission of a first spike on the output neuron 274 at time tout1=tlog21+2Tsyn+Tneu.
The acc neuron 268 reaches the threshold Vt and triggers an event on the V-synapse 276 at time texp1=tend31+x1·x2·Tcod. This results in emission of a second spike at the output neuron 274 at time tout2=texp1+Tmin+Tsyn+Tneu.
Finally, the two events delivered by the output neuron 274 are separated by a time interval ΔTout=tout2−tout1=Tmin+x1·x2·Tcod=ƒ(x1·x2).
The circuit 250 of
Note that the pairs of events do not have to be received in a synchronised manner on the input neurons 2511, 2512, since the sync neuron 260 handles the synchronisation.
D.4. Signed Multiplication
For each input value xk (1≤k≤2), the multiplier circuit 290 comprises an input+ neuron 291k and an input− neuron 292k that are the emitter nodes of two respective V-synapses 293k and 294k having the weight we. The V-synapses 2931 and 2941 are directed towards an input neuron 2511 of a multiplier circuit 250 of the type shown in
The multiplier circuit 290 has an output+ neuron 295 and an output− neuron 296 that are the receiver nodes of two respective excitation V-synapses 297 and 298 having the weight we coming from the output neuron 274 of the circuit 250.
The multiplier circuit 290 also comprises four sign neurons 300-303 connected to form logic for selecting the sign of the result of the multiplication. Each sign neuron 300-303 is the receiver node of two respective excitation V-synapses having a weight of we/4 coming from two of the four input neurons 291k, 292k. The sign neuron 300 connected to the input+ neurons 2911, 2912 detects the reception of two positive inputs x1, x2. It forms the emitter node of an inhibiting V-synapse 305 having a weight of 2wi going to the output− neuron 296. The sign neuron 303 connected to the input− neurons 2921, 2922 detects the reception of two negative inputs x1, x2. It forms the emitter node of an inhibiting V-synapse 308 having a weight of 2wi going to the output− neuron 296. The sign neuron 301 connected to the input− neuron 2921 and the input+ neuron 2912 detects the reception of a negative input x1 and of a positive input x2. It forms the emitter node of an inhibiting V-synapse 306 having a weight of 2wi going to the output+ neuron 295. The sign neuron 302 connected to the input+ neuron 2911 and the input− neuron 2922 detects the reception of a positive input x1 and of a negative input x2. It forms the emitter node of an inhibiting V-synapse 307 having a weight of 2wi going to the output+ neuron 295.
Inhibiting V-synapses are arranged between the sign neurons 300-303 in order to ensure that only one of them acts in order to inhibit one of the output+ neuron 295 and the output− neuron 296. Each sign neuron 300-303 corresponding to a sign (+ or −) of the product is thus the emitter node of two inhibiting V-synapses having a weight of we/2 going, respectively, to the two sign neurons corresponding to the opposite sign.
Thus arranged, the circuit 290 of
Logic for detecting a zero on one of the inputs can be added thereto, like in the case of
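Behaviourally, the sign-selection logic amounts to an exclusive-or of the input signs; a plain sketch of the input/output behaviour (not the spiking wiring itself):

```python
def signed_multiply(x1, x2):
    # circuit 250 handles the magnitudes; the sign neurons 300-303 then
    # inhibit the output channel carrying the wrong sign
    magnitude = abs(x1) * abs(x2)
    negative = (x1 < 0) != (x2 < 0)   # exactly one sign neuron wins
    channel = "output-" if negative else "output+"
    return channel, magnitude
```

The mutual inhibition between the sign neurons plays the role of making this choice exclusive, so that exactly one output channel is suppressed.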
E.1. Integration
In order to carry out the integration, the circuit 310 uses a linear combination circuit 200 of the type shown in
The input+ neuron 311 and the input− neuron 312 are connected, respectively, to the input+ and input− neurons 1811 of the circuit 200 that are associated with the coefficient α1=dt, by two V-synapses 321, 322.
The other input+ and input− neurons 1810 of the circuit 200, which are associated with the coefficient α0=1, are connected, respectively, by two V-synapses 323, 324 to two output+ and output− neurons 315, 316 of a circuit 317, the role of which is to provide an initialisation value x0 for the integration process. The circuit 317 substantially consists of a pair of output+ and output− neurons 315, 316 connected to the same recall neuron 15 in the manner shown in
Another init neuron 318 of the integration circuit 310 is the emitter node of a synapse 325, the receiver node of which is the recall neuron 15 of the circuit 317. The init neuron 318 loads the integrator with its initial value x0 stored in the circuit 317.
Synapses 326, 327 are arranged to provide feedback from the output+ neuron 143 of the linear combination circuit 200 to its input+ neuron 1810 and from the output− neuron 144 of the same circuit 200 to its input− neuron 1810.
A start neuron 319 is the emitter node of two synapses 328, 329 that feed a zero value, in the form of two events separated by the time interval Tmin, on the input+ neuron 1811 of the linear combination circuit 200.
The output+ neuron 143 and the output− neuron 144 of the linear combination circuit 200 are the respective emitter nodes of two synapses 330, 331, the receiver nodes of which are, respectively, the output+ neuron 313 and the output− neuron 314 of the integration circuit 310.
Finally, the integration circuit 310 has a new input neuron 320 that is the receiver node of a synapse 332 coming from the start neuron 207 of the linear combination circuit 200.
The initial value x0 is, according to its sign, delivered on the output+ neuron 313 or the output− neuron 314 once the init neuron 318 and then the start neuron 319 have been activated. At the same time, an event is delivered by the new input neuron 320. This event signals, to the environment of the circuit 310, that the derivative value g′(k·dt), with k=0, can be provided. As soon as this derivative value g′(k·dt) is presented on the input+ neuron 311 or the input− neuron 312, a new integral value is delivered by the output+ neuron 313 or the output− neuron 314, and a new event delivered by the new input neuron 320 signals, to the environment of the circuit 310, that the next derivative value g′((k+1)·dt) can be provided. This process is repeated as many times as derivative values g′(k·dt) are provided (k=0, 1, 2, etc.).
After a (k+1)-th derivative value g′(k·dt) has been provided to the integrator circuit 310, the representation of the following value is found at the output:
x0+Σi=0k g′(i·dt)·dt (23)
which, up to an additive constant, is an approximation of g(T)=∫0Tg′(t)·dt with T=(k+1)·dt.
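The accumulation of formula (23) is a plain Euler sum; it can be sketched as:

```python
def integrate(x0, derivatives, dt):
    # circuit 310 adds g'(i*dt)*dt to the running value each time a new
    # derivative value is supplied (formula (23))
    x = x0
    outputs = []
    for g_prime in derivatives:
        x += g_prime * dt
        outputs.append(x)
    return outputs

# integrating g'(t) = 1 from x0 = 0 approximates g(t) = t
values = integrate(0.0, [1.0, 1.0, 1.0, 1.0], 0.5)
```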
The circuits described above in reference to
In particular,
E.2. First-Order Differential Equation
where τ and X∞ are parameters that can take on various values. The synapses shown in
In order to solve equation (24), the device of
The constant X∞ is provided at one of the input+ and input− neurons 1811 associated with the coefficient α1=1/τ in the linear combination circuit 200 after each activation of the recall neuron 15 that is the receiver node of a synapse 340 coming from the new input neuron 320 of the integrator circuit 310. Two synapses 341, 342 provide feedback from the output node output+ 313 of the integrator circuit 310 to the other input node input+ 1810 of the linear combination circuit 200, and from the output node output− 314 of the circuit 310 to the other input node input− 1810 of the circuit 200. Two synapses 343, 344 go from the output node output+ 203 of the linear combination circuit 200 to the input node input+ 311 of the integrator circuit 310 and, respectively, from the output node output− 204 of the circuit 200 to the input node input− 312 of the circuit 310.
The device of
The init and start neurons 348, 349 allow the process of integration to be initialised and started. The init neuron 348 must be triggered before the integration process in order to load the initial value into the integrator circuit 310. The start neuron 349 is triggered in order to deliver the first value from the circuit 310.
The device of
Results of simulation of this device with various sets of parameters τ, X∞ and with an integration step size dt=0.5 are presented in
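The feedback loop between the circuits 200 and 310 amounts to an Euler iteration. A numerical sketch, assuming equation (24), not reproduced above, has the classical relaxation form τ·dx/dt = X∞ − x (which the coefficient α1=1/τ suggests):

```python
def solve_first_order(x0, x_inf, tau, dt, steps):
    # feedback loop: circuit 200 computes the derivative, circuit 310
    # performs one Euler step per cycle
    x = x0
    trajectory = [x]
    for _ in range(steps):
        dx = (x_inf - x) / tau   # assumed form of equation (24)
        x += dx * dt
        trajectory.append(x)
    return trajectory

traj = solve_first_order(x0=0.0, x_inf=1.0, tau=2.0, dt=0.5, steps=40)
# the trajectory relaxes exponentially towards X_inf
```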
E.3. Second-Order Differential Equation
where ξ and ω0 are parameters that can take on various values. The synapses shown in
In order to solve equation (25), the device of
The constant X∞ is provided at the input neuron 1812 associated with the coefficient α2=ω02 in the linear combination circuit 200 after each activation of the recall neuron 15 that is the receiver node of a synapse 350 coming from the new input neuron 320 of the second integrator circuit 310B. Two synapses 351, 352 provide feedback from the output node output 313 of the second integrator circuit 310B to the input node input 1811 of the linear combination circuit 200 associated with the coefficient α1=−ξ·ω0 and, respectively, from the output node output 313 of the first integrator circuit 310A to the other input node input 1810, of the circuit 200, associated with the coefficient α0=ω02. A synapse 353 goes from the output node output 203 of the linear combination circuit 200 to the input node input 311 of the first integrator circuit 310A. A synapse 354 goes from the output node output 313 of the first integrator circuit 310A to the input node input 311 of the second integrator circuit 310B.
The device of
The init and start neurons 358, 359 allow the process of integration to be initialised and started. The init neuron 358 must be triggered before the integration process in order to load the initial value into the integrator circuits 310A, 310B. The start neuron 359 is triggered in order to deliver the first value from the second integrator circuit 310B.
The device of
Results of simulation of this device with various sets of parameters ξ, ω0 and with an integration step size dt=0.2 and X∞=−0.5 are presented in
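The two chained integrators correspond to a double Euler iteration. A sketch, assuming equation (25) reads x″ = ω0²·(X∞ − x) − ξ·ω0·x′ (a hypothetical form reconstructed from the coefficients ω0² and −ξ·ω0 quoted above):

```python
def solve_second_order(x0, v0, x_inf, xi, omega0, dt, steps):
    # circuit 200 computes the second derivative; integrator 310A produces
    # the first derivative, integrator 310B the solution itself
    x, v = x0, v0
    trajectory = [x]
    for _ in range(steps):
        a = omega0 ** 2 * (x_inf - x) - xi * omega0 * v  # assumed equation (25)
        v += a * dt   # first integrator 310A
        x += v * dt   # second integrator 310B
        trajectory.append(x)
    return trajectory

traj = solve_second_order(0.0, 0.0, -0.5, 1.0, 1.0, 0.2, 300)
# damped oscillation settling near X_inf = -0.5
```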
E.4. Resolution of a System of Non-Linear Differential Equations
In order to make sure that the system modelled has a chaotic behaviour, the device of
The synapses shown in
In order to solve the system (26), the device of
The linear combination circuit 200A is configured with N=2 and coefficients α0=σ and α1=−σ. Its input neuron 181A0 is excited from the output neuron 313A of the integrator circuit 310A, and its input neuron 181A1 is excited from the output neuron 313B of the integrator circuit 310B. Its output neuron 203A is the emitter node of a synapse going to the input neuron 910 of the synchroniser circuit 90.
The linear combination circuit 200B is configured with N=3 and coefficients α0=ρ and α1=α2=−1. Its input neuron 181B0 is excited from the output neuron 313B of the integrator circuit 310B, its input neuron 181B1 is excited from the output neuron 313A of the integrator circuit 310A, and its input neuron 181B2 is excited from the output neuron 295A of the multiplier circuit 290A. Its output neuron 203B is the emitter node of a synapse going to the input neuron 911 of the synchroniser circuit 90.
The linear combination circuit 200C is configured with N=2 and coefficients α0=1 and α1=−β. Its input neuron 181C0 is excited from the output neuron 295B of the multiplier circuit 290B, and its input neuron 181C1 is excited from the output neuron 313C of the integrator circuit 310C. Its output neuron 203C is the emitter node of a synapse going to the input neuron 912 of the synchroniser circuit 90.
Three synapses go, respectively, from the output neuron 920 of the synchroniser circuit 90 to the input neuron 311A of the integrator circuit 310A, from the output neuron 921 of the circuit 90 to the input neuron 311B of the integrator circuit 310B, and from the output neuron 922 of the circuit 90 to the input neuron 311C of the integrator circuit 310C.
The input neuron 291A1 of the multiplier circuit 290A is excited from the output neuron 313A of the integrator circuit 310A, and its input neuron 291A2 is excited from the output neuron 313C of the integrator circuit 310C. The input neuron 291B1 of the multiplier circuit 290B is excited from the output neuron 313A of the integrator circuit 310A, and its input neuron 291B2 is excited from the output neuron 313B of the integrator circuit 310B.
The device of
The device of
The points in
The system behaves in the expected manner, in accordance with the strange attractor described by Lorenz.
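The configuration quoted above corresponds to the Lorenz system x′ = σ(y−x), y′ = ρx − y − xz, z′ = xy − βz. One synchronised update cycle of the device can be sketched as (the mapping of variables to circuits follows the coefficient assignments listed above; the step size and parameters are illustrative):

```python
def lorenz_step(state, sigma, rho, beta, dt):
    # circuits 200A-C compute the derivatives, multipliers 290A-B supply
    # the products x*z and x*y, integrators 310A-C perform the Euler step
    x, y, z = state
    dx = sigma * (y - x)         # 200A: coefficients sigma, -sigma
    dy = rho * x - y - x * z     # 200B: coefficients rho, -1, -1
    dz = x * y - beta * z        # 200C: coefficients 1, -beta
    return (x + dx * dt, y + dy * dt, z + dz * dt)

state = (1.0, 1.0, 1.0)
for _ in range(1000):
    state = lorenz_step(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0, dt=0.005)
# the trajectory remains bounded on the Lorenz attractor
```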
It has been shown that the calculation architecture proposed, with the representation of the data in the form of time intervals between events in a set of processing nodes, allows relatively simple circuits to be designed in order to carry out elementary functions in a very efficient and fast manner. In general, the results of the calculations are available as soon as the various input data have been provided (possibly after a few synaptic delays).
These circuits can then be assembled to carry out more sophisticated calculations. They form building blocks from which powerful calculation structures can be built. Examples of this have been shown with respect to the resolution of differential equations.
When the elementary circuits are assembled, it is possible to optimise the number of neurons used. For example, some of the circuits were described with input neurons, output neurons and/or first and last neurons. In practice, these neurons at the interfaces between elementary circuits can be eliminated without changing the functionality carried out.
The processing nodes are typically organised as a matrix. This organisation lends itself particularly well to an implementation using FPGAs.
A programmable array 400 forming the set of processing nodes, or a portion of this set, in an exemplary implementation of the processing device is illustrated schematically in
Programming or configuration logic 420 is associated with the array 400 in order to adjust the synaptic weights and the delay parameters of the connections between the nodes of the array 400. This configuration is carried out in a manner analogous to that which is routinely practised in the field of artificial neural networks. In the present context, the configuration of the parameters of the connections is carried out according to the calculation program to be executed, taking into account the relationship used between the time intervals and the values that they represent, for example the relationship (11). If the program is broken up into elementary operations, the configuration can result from an assembly of circuits of the type of those described above. This configuration is produced under the control of a control unit 410 provided with a man-machine interface.
Another role of the control unit 410 is to provide the input values to the programmable array 400, in the form of events separated by suitable time intervals, in order for the processing nodes of the array 400 to execute the calculation and deliver the results. These results are then quickly recovered by the control unit 410 in order to be presented to a user or to an application that uses them.
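The conversion performed by the control unit between values and inter-event intervals can be sketched as follows, assuming relationship (11) is the coding f(x) = Tmin + x·Tcod used in the timing derivations above (the parameter values are illustrative):

```python
Tmin, Tcod = 10.0, 100.0  # illustrative coding parameters

def encode(x):
    # interval separating the two events emitted by the control unit 410 for x
    return Tmin + x * Tcod

def decode(delta_t):
    # value recovered from an inter-event interval read back from the array 400
    return (delta_t - Tmin) / Tcod

roundtrip = decode(encode(0.37))
```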
This calculation architecture is well suited for rapidly carrying out massively parallel calculations.
Moreover, it is relatively easy to have a pipelined organisation of the calculations, for the execution of algorithms that are well suited to this type of organisation.
The embodiments described above are illustrations of the present invention. Various modifications can be made to them without departing from the scope of the invention that emerges from the appended claims.
Number | Date | Country | Kind |
---|---|---|---|
1556659 | Jul 2015 | FR | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/FR2016/051717 | 7/6/2016 | WO | 00 |