Multi-input multi-output time encoding and decoding machines

Information

  • Patent Grant
  • 9013635
  • Patent Number
    9,013,635
  • Date Filed
    Friday, August 19, 2011
  • Date Issued
    Tuesday, April 21, 2015
Abstract
Methods and systems for encoding and decoding signals using a Multi-input Multi-output Time Encoding Machine (TEM) and Time Decoding Machine are disclosed herein.
Description
BACKGROUND

1. Field


The present application relates to methods and systems for Multi-input Multi-output (MIMO) Time Encoding Machines (TEMs) and Time Decoding Machines (TDMs), as well as uses of TEMs and TDMs for encoding and decoding video signals.


2. Background Art


Most signals in the natural world are analog, i.e., cover a continuous range of amplitude values. However, most computer systems for processing these signals are binary digital systems. Generally, synchronous analog-to-digital (A/D) converters are used to capture analog signals and present a digital approximation of the input signal to a computer processor. That is, at precise moments in time synchronized to a system clock, the amplitude of the signal of interest is captured as a digital value. When sampling the amplitude of an analog signal, each bit in the digital representation of the signal represents an increment of voltage, which defines the resolution of the A/D converter. Analog-to-digital conversion is used in numerous applications, such as communications where a signal to be communicated can be converted from an analog signal, such as voice, to a digital signal prior to transport along a transmission line.


Applying traditional sampling theory, a band limited signal can be represented with a quantifiable error by sampling the analog signal at a sampling rate at or above what is commonly referred to as the Nyquist sampling rate. It is a continuing trend in electronic circuit design to reduce the available operating voltage provided to integrated circuit devices. In this regard, power supply voltages for circuits are constantly decreasing. While digital signals can be processed at the lower supply voltages, traditional synchronous sampling of the amplitude of a signal becomes more difficult as the available power supply voltage is reduced and each bit in the A/D or D/A converter reflects a substantially lower voltage increment.


Time Encoding Machines (TEMs) can encode analog information in the time domain using only asynchronous circuits. Representation in the time domain can be an alternative to the classical sampling representation in the amplitude domain. Applications for TEMs can be found in low power nano-sensors for analog-to-discrete (A/D) conversion as well as in modeling olfactory systems, vision and hearing in neuroscience.
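As an illustration of the idea (a minimal sketch, not taken from the patent), an ideal integrate-and-fire neuron can encode a sampled analog waveform purely as spike times; the function name and all parameter values below are illustrative:

```python
import numpy as np

def iaf_encode(u, dt, b=1.0, kappa=1.0, delta=0.02):
    """Integrate (u(t) + b)/kappa and emit a spike time whenever the
    running integral reaches the threshold delta (ideal IAF neuron)."""
    spikes, y = [], 0.0
    for k, v in enumerate(u):
        y += dt * (v + b) / kappa
        if y >= delta:
            spikes.append(k * dt)
            y -= delta
    return spikes

# The amplitude of a slow sinusoid is carried entirely by spike timing:
dt = 1e-4
t = np.arange(0.0, 1.0, dt)
spike_times = iaf_encode(0.5 * np.sin(2 * np.pi * 3 * t), dt)
```

No clock and no amplitude quantizer appear anywhere in the loop, which is why such circuits remain usable at low supply voltages.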


SUMMARY

Systems and methods for using MIMO TEMs and TDMs are disclosed herein.


According to some embodiments of the disclosed subject matter, methods for encoding a plurality (M) of components of a signal include filtering the M components into a plurality (N) of filtered signals and encoding each of the N filtered signals using at least one Time Encoding Machine (TEM) to generate a plurality (N) of TEM-encoded filtered signals. In some embodiments, the TEM can be an integrate-and-fire neuron, can have multiplicative coupling, and/or can be an asynchronous sigma/delta modulator. In some embodiments, a bias value can be added to each of the N filtered signals. In some embodiments, the M signals can be irregularly sampled.


In further embodiments, each of the N filtered signals can be represented by the equation v_j = (h^j)^T * u, where h^j = [h_{j1}, h_{j2}, . . . h_{jM}]^T is a filtering vector corresponding to one of the N filtered signals, represented by j. In some embodiments, the N TEM-encoded filtered signals can be represented by q_k^j, where q_k^j = κ_j δ_j − b_j(t_{k+1}^j − t_k^j), for all times represented by a value k ∈ ℤ, and each of the N filtered signals represented by a value j, j=1, 2, . . . N, where κ_j is an integration constant, δ_j is a threshold value, and b_j is a bias value for each of the N filtered signals.
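Numerically, each measurement q_k^j depends only on the TEM constants and two consecutive spike times, so the measurements can be recovered from the spike train alone; a small sketch with illustrative names and values:

```python
def tem_measurements(spike_times, kappa, delta, b):
    """q_k = kappa*delta - b*(t_{k+1} - t_k): the value the TEM is
    guaranteed to have integrated between consecutive spikes."""
    return [kappa * delta - b * (t1 - t0)
            for t0, t1 in zip(spike_times, spike_times[1:])]

q = tem_measurements([0.0, 0.01, 0.025, 0.045], kappa=1.0, delta=0.02, b=1.0)
# one measurement per consecutive spike pair
```

Longer inter-spike intervals yield smaller q values, reflecting a smaller signal contribution between those spikes.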


According to some embodiments of the disclosed subject matter, methods for decoding a TEM-encoded signal include receiving a plurality (N) of TEM-encoded filtered signals, decoding the N TEM-encoded filtered signals using at least one Time Decoding Machine (TDM) to generate a plurality (N) of TDM-decoded signal components, and filtering each of the N TDM-decoded signal components into a plurality (M) of output signal components. In some embodiments, one of the M output signal components, represented by the i-th component of the vector valued signal u, |u_i(t)| ≦ c_i, can be recovered by solving for









u_i(t) = Σ_{j=1}^N Σ_{k∈ℤ} c_k^j ψ_k^{ji}(t),

where

ψ_k^{ji}(t) = (h̃_{ji} * g)(t − s_k^j),

for all i, i=1, 2, . . . M, s_k^j = (t_{k+1}^j + t_k^j)/2, h̃_{ji} is the involution of h_{ji}, and h_{ji} is represented in a TEM filterbank








h(t) = [ h_{11}(t)  h_{12}(t)  …  h_{1M}(t)
         h_{21}(t)  h_{22}(t)  …  h_{2M}(t)
           ⋮          ⋮        ⋱    ⋮
         h_{N1}(t)  h_{N2}(t)  …  h_{NM}(t) ],

and [c^j]_k = c_k^j, j=1, 2, . . . N, where c=[c^1, c^2, . . . , c^N]^T, c = G⁺q, where q=[q^1, q^2, . . . q^N]^T, [q^j]_k = q_k^j, and








[G^{ij}]_{kl} = Σ_{m=1}^M ∫_{t_k^i}^{t_{k+1}^i} (h_{im} * h̃_{jm} * g)(s − s_l^j) ds.

In some embodiments, the M TEM-encoded signals can be irregularly sampled.


According to some embodiments of the disclosed subject matter, methods for encoding a video stream signal include filtering the video stream signal into a plurality (N) of spatiotemporal field signals, and encoding each of the N spatiotemporal field signals with a Time Encoding Machine to generate a plurality (N) of TEM-encoded spatiotemporal field signals. In some embodiments, the spatiotemporal field signals can be described by an equation: v^j(t) = ∫_{−∞}^{+∞} ( ∫∫_X D^j(x, y, s) I(x, y, t−s) dx dy ) ds, where D^j(x, y, s) is a filter function, and I(x, y, t) represents the input video stream.


In some embodiments, the N TEM-encoded spatiotemporal field signals can be represented by a sampling function: ψkj(x, y, t)=D(x, y,−t)*g(t−skj), for k spike times, for each (x, y) in a bounded spatial set, where j corresponds to each of the N TEM-encoded spatiotemporal field signals, and where g(t)=sin(Ωt)/πt.


According to some embodiments of the disclosed subject matter, methods for decoding a TEM-encoded video stream signal include receiving a plurality (N) of TEM-encoded spatiotemporal field signals, decoding each of the N TEM-encoded spatiotemporal field signals using a Time Decoding Machine (TDM) to generate a TDM-decoded spatiotemporal field signal, and combining each of the TDM-decoded spatiotemporal field signals to recover the video stream signal.


In some embodiments, the decoding and combining can be achieved by applying an equation:








I(x, y, t) = Σ_{j=1}^N Σ_{k∈ℤ} c_k^j ψ_k^j(x, y, t),

where ψ_k^j(x, y, t) = D(x, y, t) * g(t − s_k^j), for k spike times, for each (x, y) in a bounded spatial set, where j corresponds to each of the N TEM-encoded spatiotemporal field signals, and where g(t) = sin(Ωt)/πt, and where [c^j]_k = c_k^j and c = [c^1, c^2, . . . c^N]^T, c = G⁺q, where T denotes a transpose, q = [q^1, q^2, . . . q^N]^T, [q^j]_k = q_k^j and G⁺ denotes a pseudoinverse, and a matrix G is represented by







G = [ G^{11}  G^{12}  …  G^{1N}
      G^{21}  G^{22}  …  G^{2N}
        ⋮       ⋮     ⋱    ⋮
      G^{N1}  G^{N2}  …  G^{NN} ],

and [G^{ij}]_{kl} = <D^i(x, y, •) * g(• − t_k^i), D^j(x, y, •) * g(• − t_l^j)>.


According to some embodiments of the disclosed subject matter, methods of altering a video stream signal include receiving a plurality (N) of TEM-encoded spatiotemporal field signals from a plurality (N) of TEM-filters and applying a switching matrix to map the N TEM-encoded spatiotemporal field signals to a plurality (N) of reconstruction filters in a video stream signal TDM. In some embodiments, for rotating the video stream signal, the switching matrix can map each of the N TEM-encoded spatiotemporal field signals from a TEM-filter ([x,y], α, θ) to a reconstruction filter ([x,y], α, θ+lθ_0), where lθ_0 represents a desired value of rotation.


In some embodiments, for zooming the video stream signal, the switching matrix can map each of the N TEM-encoded spatiotemporal field signals from a TEM-filter ([x,y], α, θ) to a reconstruction filter ([x,y], α_0^m α, θ), where α_0^m represents a desired value of zoom.


In some embodiments, for translating the video stream signal by a value [nb_0, kb_0], the switching matrix can map each of the N TEM-encoded spatiotemporal field signals from a TEM-filter ([x,y], α, θ) to a reconstruction filter at ([x+nb_0, y+kb_0], α, θ).


In some embodiments, for zooming the video stream signal by a value α_0^m and translating the video stream signal by a value [nb_0, kb_0], the switching matrix can map each of the N TEM-encoded spatiotemporal field signals from a TEM-filter ([x,y], α, θ) to a reconstruction filter at ([x+α_0^m nb_0, y+α_0^m kb_0], α_0^m α, θ).
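All four manipulations above are pure re-indexings of filter parameters, so they can be sketched as one remapping function. Representing each filter by a tuple (([x, y], alpha, theta)) is a hypothetical encoding chosen for illustration, not the patent's data structure:

```python
import math

def remap(filters, rotate=0.0, zoom=1.0, shift=(0.0, 0.0)):
    """Map TEM-filter parameters ([x, y], alpha, theta) to reconstruction
    filter parameters: rotation adds to theta, zoom scales alpha (and the
    translation), and the shift offsets [x, y]."""
    return [((x + zoom * shift[0], y + zoom * shift[1]),
             zoom * alpha,
             theta + rotate)
            for (x, y), alpha, theta in filters]

bank = [((0.0, 0.0), 1.0, 0.0), ((1.0, 2.0), 0.5, math.pi / 4)]
rotated = remap(bank, rotate=math.pi / 2)
zoomed_shifted = remap(bank, zoom=2.0, shift=(1.0, 0.0))
```

Because only the wiring between TEM outputs and reconstruction filters changes, the spike data itself never has to be re-encoded.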


According to some embodiments of the disclosed subject matter, methods of encoding a video signal include inputting the video signal into a first and second time encoding machine (TEM), the first TEM including a first TEM-input and a first TEM-output, the second TEM including a second TEM-input and a second TEM-output, wherein the first TEM-output is connected to the first TEM-input and the second TEM-input to provide negative feedback and the second TEM-output is connected to the first TEM-input and the second TEM-input to provide positive feedback.


Some embodiments further include outputting a first set of trigger values from the first TEM according to an equation







u(t_k^1) = +δ^1 + Σ_{l<k} h^{11}(t_k^1 − t_l^1) − Σ_l h^{21}(t_k^1 − t_l^2) 1{t_l^2 < t_k^1} = q_k^1

and outputting a second set of trigger values from the second TEM according to an equation







u(t_k^2) = −δ^2 + Σ_{l<k} h^{22}(t_k^2 − t_l^2) − Σ_l h^{12}(t_k^2 − t_l^1) 1{t_l^1 < t_k^2} = q_k^2.

Yet other embodiments further include outputting a first set of trigger values from the first TEM according to an equation










∫_{t_k^1}^{t_{k+1}^1} u(s) ds = κ^1 δ^1 − b^1 (t_{k+1}^1 − t_k^1) + Σ_{l<k} ∫_{t_k^1}^{t_{k+1}^1} h^{11}(s − t_l^1) ds − Σ_l ∫_{t_k^1}^{t_{k+1}^1} h^{21}(s − t_l^2) ds 1{t_l^2 < t_k^1}

and outputting a second set of trigger values from the second TEM according to an equation










∫_{t_k^2}^{t_{k+1}^2} u(s) ds = κ^2 δ^2 − b^2 (t_{k+1}^2 − t_k^2) + Σ_{l<k} ∫_{t_k^2}^{t_{k+1}^2} h^{22}(s − t_l^2) ds − Σ_l ∫_{t_k^2}^{t_{k+1}^2} h^{12}(s − t_l^1) ds 1{t_l^1 < t_k^2}.

According to some embodiments of the disclosed subject matter, methods of decoding a video signal include receiving first and second TEM-encoded signals and applying an equation








u(t) = Σ_{k∈ℤ} c_k^1 ψ_k^1(t) + Σ_{k∈ℤ} c_k^2 ψ_k^2(t),

where ψ_k^j(t)=g(t−t_k^j), for j=1, 2, g(t)=sin(Ωt)/πt, t ∈ ℝ, c=[c^1; c^2] and [c^j]_k=c_k^j, and a vector of coefficients c can be computed as c = G⁺q, where q=[q^1; q^2] with [q^j]_k=q_k^j and







G = [ G^{11}  G^{12}
      G^{21}  G^{22} ],   [G^{ij}]_{kl} = <x_k^i, ψ_l^j>,

for all j=1, 2, and k, l ∈ ℤ.


The accompanying drawings, which are incorporated and constitute part of this disclosure, illustrate preferred embodiments of the disclosed subject matter and serve to explain its principles.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 depicts a multiple input multiple output Time Encoding Machine architecture in accordance with some embodiments of the disclosed subject matter;



FIG. 2 depicts a multiple input multiple output Time Decoding Machine Architecture in accordance with some embodiments of the disclosed subject matter;



FIG. 3 depicts a multiple input multiple output irregular sampling architecture in accordance with some embodiments of the disclosed subject matter;



FIG. 4 depicts a video stream encoding device in accordance with some embodiments of the disclosed subject matter;



FIG. 5 depicts a video stream decoding device in accordance with some embodiments of the disclosed subject matter;



FIG. 6 depicts a second video stream decoding device in accordance with some embodiments of the disclosed subject matter;



FIG. 7 depicts a video stream encoding and decoding device in accordance with some embodiments of the disclosed subject matter;



FIG. 8 depicts a time encoding circuit in accordance with some embodiments of the disclosed subject matter;



FIG. 9 depicts a single neuron encoding circuit that includes an integrate-and-fire neuron with feedback in accordance with some embodiments of the disclosed subject matter;



FIG. 10 depicts two interconnected ON-OFF neurons each with its own feedback in accordance with some embodiments of the disclosed subject matter; and



FIG. 11 depicts two interconnected ON-OFF neurons each with its own feedback in accordance with some embodiments of the disclosed subject matter.





Throughout the drawings, the same reference numerals and characters, unless otherwise stated, are used to denote like features, elements, components or portions of the illustrated embodiments. Moreover, while the present disclosed subject matter will now be described in detail with reference to the Figs., it is done so in connection with the illustrative embodiments.


DETAILED DESCRIPTION

Improved systems, methods, and applications of Time Encoding and Decoding Machines are disclosed herein.


Asynchronous Sigma/Delta modulators as well as FM modulators can encode information in the time domain as described in “Perfect Recovery and Sensitivity Analysis of Time Encoded Bandlimited Signals” by A. A. Lazar and L. T. Toth (IEEE Transactions on Circuits and Systems-I: Regular Papers, 51(10):2060-2073, October 2004), which is incorporated by reference. More general TEMs with multiplicative coupling, feedforward and feedback have also been characterized by A. A. Lazar in “Time Encoding Machines with Multiplicative Coupling, Feedback and Feedforward” (IEEE Transactions on Circuits and Systems II: Express Briefs, 53(8):672-676, August 2006), which is incorporated by reference. TEMs realized as single and as a population of integrate-and-fire neurons are described by A. A. Lazar in “Multichannel Time Encoding with Integrate-and-Fire Neurons” (Neurocomputing, 65-66:401-407, 2005) and “Information Representation with an Ensemble of Hodgkin-Huxley Neurons” (Neurocomputing, 70:1764-1771, June 2007), both of which are incorporated by reference. Single-input multi-output (SIMO) TEMs are described in “Faithful Representation of Stimuli with a Population of Integrate-and-Fire Neurons” by A. A. Lazar and E. A. Pnevmatikakis (Neural Computation), which is incorporated by reference.


Multi-input multi-output (MIMO) TEMs can encode M-dimensional bandlimited signals into N-dimensional time sequences. A representation of M-dimensional bandlimited signals can be generated using an N×M-dimensional filtering kernel and an ensemble of N integrate-and-fire neurons. Each component filter of the kernel can receive input from one of the M component inputs and its output can be additively coupled to a single neuron. (While the embodiments described refer to “neurons,” it would be understood by one of ordinary skill that other kinds of TEMs can be used in place of neurons.)


As depicted in FIG. 1, in one embodiment, M components of a signal 101 can be filtered by a set of filters 102. The outputs of the filters can be additively coupled to a set of N TEMs 104 to generate a set of N spike-sequences or trigger times 105.


Let Ξ be the set of bandlimited functions with spectral support in [−Ω, Ω]. The functions u_i=u_i(t), t ∈ ℝ, in Ξ can model the M components of the input signal. Without any loss of generality the signal components (u_1, u_2, . . . u_M) can have a common bandwidth Ω. Further, it can be assumed that the signals in Ξ have finite energy, or are bounded with respect to the L2 norm. Thus Ξ can be a Hilbert space with a norm induced by the inner product in the usual fashion.


ΞM can denote the space of vector valued bandlimited functions of the form u=[u1, u2, . . . , uM]T, where T denotes the transpose. ΞM can be a Hilbert space with inner product defined by













<u, v>_{Ξ^M} = Σ_{i=1}^M <u_i, v_i>_Ξ   (1)

and with norm given by













‖u‖²_{Ξ^M} = Σ_{i=1}^M ‖u_i‖²_Ξ,   (2)

where u=[u1, u2, . . . , uM]T ε ΞM and v=[v1, v2, . . . , vM]T ε ΞM.


We can let H: ℝ → ℝ^{N×M} be a filtering kernel 102 defined as:










H(t) = [ h_{11}(t)  h_{12}(t)  …  h_{1M}(t)
         h_{21}(t)  h_{22}(t)  …  h_{2M}(t)
           ⋮          ⋮        ⋱    ⋮
         h_{N1}(t)  h_{N2}(t)  …  h_{NM}(t) ]   (3)

where it can be assumed that supp(ĥ_{ij}) ⊆ [−Ω, Ω], for all i, i=1, 2, . . . , M and all j, j=1, 2, . . . N (supp denoting the spectral support and ^ denoting the Fourier transform). Filtering the signal u with the kernel H can lead to an N-dimensional vector valued signal v defined by:










v ≜ H * u = [ Σ_{i=1}^M h_{1i} * u_i
              Σ_{i=1}^M h_{2i} * u_i
                    ⋮
              Σ_{i=1}^M h_{Ni} * u_i ],   (4)

where * denotes the convolution. Equation (4) can be written as

vj=(hj)T*u, j, j=1, 2, . . . N   (5)

where h^j=[h_{j1}, h_{j2}, . . . h_{jM}]^T is the filtering vector of the neuron, or TEM, j, j=1, 2, . . . N. A bias b_j 103 can be added to the component v_j of the signal v and the sum can be passed through an integrate-and-fire neuron with integration constant κ_j and threshold δ_j for all j, j=1, 2, . . . , N. The value (t_k^j), k ∈ ℤ, can be the sequence of trigger (or spike) times 105 generated by neuron j, j=1, 2, . . . , N. In sum, the TEM depicted in FIG. 1 can map the input bandlimited vector u into the vector time sequence (t_k^j), k ∈ ℤ, j=1, 2, . . . , N.
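The filter-bias-integrate pipeline just described can be sketched in discrete time. The impulse responses, constants, and function name below are illustrative assumptions, not values prescribed by the patent:

```python
import numpy as np

def mimo_tem_encode(U, H, dt, b, kappa, delta):
    """N x M MIMO TEM sketch.  U holds the M input components (one row
    each, sampled at dt); H[j][i] is the sampled impulse response h_ji.
    Each neuron j integrates (v_j + b_j)/kappa_j and fires at delta_j."""
    N, M = len(H), len(U)
    L = U.shape[1]
    trains = []
    for j in range(N):
        # v_j = sum_i h_ji * u_i, approximated by discrete convolution
        vj = sum(np.convolve(H[j][i], U[i])[:L] * dt for i in range(M))
        y, spikes = 0.0, []
        for k, v in enumerate(vj):
            y += dt * (v + b[j]) / kappa[j]
            if y >= delta[j]:
                spikes.append(k * dt)
                y -= delta[j]
        trains.append(spikes)
    return trains

dt = 1e-3
U = np.vstack([0.2 * np.ones(1000), np.zeros(1000)])
ident = np.array([1.0 / dt])            # discrete stand-in for a Dirac delta
zero = np.array([0.0])
H = [[ident, zero], [zero, ident]]      # each neuron sees one component
trains = mimo_tem_encode(U, H, dt, b=[1.0, 1.0], kappa=[1.0, 1.0],
                         delta=[0.05, 0.05])
```

With this diagonal kernel, the neuron driven by the positive component fires noticeably more often than the neuron that sees only its bias, illustrating how spike density tracks the filtered input.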


In some embodiments the t-transform can describe the input/output relationship of the TEM, or the mapping of the stimulus u(t), t ∈ ℝ, into the output spike sequence (t_k^j), k ∈ ℤ, j=1, 2, . . . , N. The t-transform for the j-th neuron can be written as














∫_{t_k^j}^{t_{k+1}^j} (v_j(s) + b_j) ds = κ_j δ_j,   (6)

or

Σ_{i=1}^M ∫_{t_k^j}^{t_{k+1}^j} (h_{ji} * u_i)(s) ds = q_k^j,   (7)

where q_k^j = κ_j δ_j − b_j(t_{k+1}^j − t_k^j), for all k ∈ ℤ, and j=1, 2, . . . N.


In some embodiments, as depicted in FIG. 2, recovering the stimuli or signal that was encoded 105 can be achieved by seeking the inverse of the t-transform 201. We can let g(t)=sin(Ωt)/πt, t ∈ ℝ, be the impulse response of a low pass filter (LPF) with cutoff frequency at Ω. From Equation (5), v_j ∈ Ξ and therefore the t-transform defined by Equation (7) can be written in an inner-product form as:













Σ_{i=1}^M <h_{ji} * u_i, g * 1_{[t_k^j, t_{k+1}^j]}> = q_k^j,   (8)

or

Σ_{i=1}^M <u_i, h̃_{ji} * g * 1_{[t_k^j, t_{k+1}^j]}> = q_k^j,

where h̃_{ji} is the involution of h_{ji}. From Equality (8) we can say that the stimulus u=(u_1, u_2, . . . , u_M)^T can be measured by projecting it onto the sequence of functions (h̃^j * g * 1_{[t_k^j, t_{k+1}^j]}), k ∈ ℤ, and j=1, 2, . . . N. The values of the measurements, q_k^j, k ∈ ℤ, and j=1, 2, . . . N, are available for recovery. Thus the TEM can act as a sampler of the stimulus u, and because the spike times can depend on the stimulus, the TEM can act as a stimulus dependent sampler.


With (t_k^j), k ∈ ℤ, the spike times of neuron j, the t-transform, as shown in Equation (8), can be written in an inner product form as










q_k^j = <u, φ_k^j>   (9)

where











φ_k^j = h̃^j * g * 1_{[t_k^j, t_{k+1}^j]} = [h̃_{j1}, h̃_{j2}, . . . , h̃_{jM}]^T g * 1_{[t_k^j, t_{k+1}^j]}   (10)

for all k ∈ ℤ, j=1, 2, . . . N, t ∈ ℝ.


The stimulus u 101 can be recovered from Equation (9) if φ=(φ_k^j), k ∈ ℤ, j=1, 2, . . . N, is a frame for Ξ^M. Signal recovery algorithms can be obtained using the frame ψ=(ψ_k^j), k ∈ ℤ, j=1, 2, . . . N, where

ψ_k^j(t) = (h̃^j * g)(t − s_k^j),   (11)

and skj=(tk+1j+tkj)/2.


A filtering kernel H can be said to be BIBO stable if each of the filters h_{ji}=h_{ji}(t), t ∈ ℝ, j=1, 2, . . . N, and i=1, 2, . . . , M, is bounded-input bounded-output stable, i.e.,










‖h_{ji}‖_1 ≜ ∫_ℝ |h_{ji}(s)| ds < ∞.

Filtering vectors h^j=[h_{j1}, h_{j2}, . . . h_{jM}]^T, j=1, 2, . . . , N, can be said to be BIBO stable if each of the components h_{ji}, i=1, 2, . . . M, is BIBO stable. Further, if Ĥ: ℝ → ℂ^{N×M} is the Fourier transform of the filtering kernel H, then [Ĥ(w)]_{nm} = ∫_ℝ h_{nm}(s) exp(−iws) ds, for all n=1, 2, . . . , N, and m=1, 2, . . . M, where i=√(−1). The filtering vectors h^j can be said to have full spectral support if supp(ĥ_{ij}) = [−Ω, Ω], for all i, i=1, 2, . . . M. A BIBO filtering kernel H can be said to be invertible if Ĥ has rank M for all w ∈ [−Ω, Ω]. Filtering vectors (h^j), j=1, 2, . . . , N, can be called linearly independent if there do not exist real numbers a_j, j=1, 2, . . . N, not all equal to zero, and real numbers α_j, j=1, 2, . . . N, such that










Σ_{j=1}^N a_j (h^j * g)(t − α_j) = 0

for all t, t ∈ ℝ (except on a set of Lebesgue measure zero).


Assuming that the filters h^j=h^j(t), t ∈ ℝ, are BIBO stable, linearly independent and have full spectral support for all j, j=1, 2, . . . N, and that the matrix H is invertible, the M-dimensional signal u=[u_1, u_2, . . . , u_M]^T can be recovered as











u(t) = Σ_{j=1}^N Σ_{k∈ℤ} c_k^j ψ_k^j(t),   (12)

where c_k^j, k ∈ ℤ, j=1, 2, . . . N, are suitable coefficients, provided that:













Σ_{j=1}^N b_j / (κ_j δ_j) ≥ M Ω/π   (13)
and |ui(t)|<ci, i=1, 2, . . . M.


Letting [c^j]_k = c_k^j and c=[c^1, c^2, . . . c^N]^T, the coefficients c can be computed as

c = G⁺q   (14)

where T denotes the transpose, q=[q^1, q^2, . . . q^N]^T, [q^j]_k=q_k^j, and G⁺ 202 denotes the pseudoinverse. The entries of the matrix G can be given by
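Equation (14) is, numerically, a single pseudoinverse solve. The toy below builds a signal directly from a sinc frame so that recovery is exact; the sample times, coefficients, and bandwidth are arbitrary illustrative choices, and the measurements q here are plain amplitude samples rather than the patent's inter-spike integrals:

```python
import numpy as np

Om = 10 * np.pi                            # bandwidth Omega (rad/s)
# sin(Om t)/(Om t); differs from sin(Om t)/(pi t) only by a constant
# factor, which the coefficients c absorb.
g = lambda t: np.sinc(Om * t / np.pi)

sk = np.array([0.03, 0.12, 0.22, 0.33, 0.41,
               0.52, 0.61, 0.72, 0.82, 0.93])   # irregular sample times
a = np.array([0.5, -1.0, 0.3, 0.8, -0.2,
              1.1, -0.7, 0.4, 0.9, -0.5])       # true frame coefficients

def u(t):
    """A signal lying in the span of the frame psi_k(t) = g(t - s_k)."""
    return (a * g(np.asarray(t)[:, None] - sk[None, :])).sum(axis=1)

G = g(sk[:, None] - sk[None, :])           # [G]_{kl} = psi_l(s_k)
q = G @ a                                  # measurements q_k = u(s_k)
c = np.linalg.pinv(G) @ q                  # c = G+ q, as in equation (14)

t = np.linspace(0.0, 1.0, 500)
u_hat = (c * g(t[:, None] - sk[None, :])).sum(axis=1)
```

Because the measurement functionals and the reconstruction frame match, the pseudoinverse returns the original coefficients and the reconstruction coincides with the signal on the whole interval.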










G = [ G^{11}  G^{12}  …  G^{1N}
      G^{21}  G^{22}  …  G^{2N}
        ⋮       ⋮     ⋱    ⋮
      G^{N1}  G^{N2}  …  G^{NN} ],

[G^{ij}]_{kl} = <ψ_l^j, φ_k^i> = Σ_{m=1}^M ∫_{t_k^i}^{t_{k+1}^i} (h_{im} * h̃_{jm} * g)(s − s_l^j) ds   (15)

for all i=1, 2, . . . , N, j=1, 2, . . . , N, k ∈ ℤ, l ∈ ℤ.


Assume that the filtering vectors h^j=h^j(t) are BIBO stable, linearly independent and have full spectral support for all j, j=1, 2, . . . N, and that the matrix H is invertible. If Σ_{j=1}^N b_j/(κ_j δ_j) diverges in N, then there can exist a number N_0 such that for all N ≥ N_0, the vector valued signal u can be recovered as










u(t) = Σ_{j=1}^N Σ_{k∈ℤ} c_k^j ψ_k^j(t)   (16)

and the c_k^j, k ∈ ℤ, j=1, 2, . . . N, are given in matrix form by c = G⁺q.


In some embodiments, the previously disclosed Multiple-Input-Multiple-Output scheme can be applied to an Irregular Sampling problem, as depicted in FIG. 3. While similar to the previously disclosed MIMO-TEM, the integrate-and-fire neurons are replaced by irregular (amplitude) samplers.


The samples of each signal 302, v_j = h_{j1}*u_1 + h_{j2}*u_2 + . . . + h_{jM}*u_M, taken at times (s_k^j), k ∈ ℤ, j=1, 2, . . . N, respectively, can be recovered, with s_k = (t_{k+1} + t_k)/2, k ∈ ℤ.


As with previous embodiments, assuming that the filtering vectors hj are BIBO stable for all j=1, 2, . . . N, and that H is invertible, the vector valued signal u, sampled with the circuit of FIG. 3, can be recovered as:










u(t) = Σ_{j=1}^N Σ_{k∈ℤ} c_k^j φ_k^j(t)   (17)

provided that






D > M Ω/π

holds, where D is the total lower density. Further, for [c^j]_k=c_k^j and c=[c^1, c^2, . . . c^N]^T, the vector of coefficients c can be computed as c = G⁺q, where q=[q^1, q^2, . . . , q^N]^T and [q^j]_k=q_k^j. The entries of the G matrix can be given by:














[G^{ij}]_{kl} = <φ_l^j, ψ_k^i> = Σ_{m=1}^M (h_{im} * h̃_{jm} * g * 1_{[t_l^j, t_{l+1}^j]})(s_k^i).   (18)

In other embodiments, assuming that the filtering vectors hj are BIBO stable and have full spectral support for all j=1, 2, . . . N and that H is invertible, the vector valued signal u, sampled with the circuit of FIG. 3, can be recovered as

u(t) = Σ_{j=1}^N Σ_{k∈ℤ} c_k^j ψ_k^j(t)   (19)

provided that






D > M Ω/π

holds, where D is the total lower density. Further, for [c^j]_k=c_k^j and c=[c^1, c^2, . . . c^N]^T, the vector of coefficients c can be computed as c = G⁺q, where q=[q^1, q^2, . . . q^N]^T and [q^j]_k=q_k^j. The entries of the G matrix can be given by:











[G^{ij}]_{kl} = <ψ_l^j, ψ_k^i> = Σ_{m=1}^M (h_{im} * h̃_{jm} * g)(s_k^i − s_l^j).   (20)

In some embodiments, TEMs and TDMs can be used to encode and decode visual stimuli such as natural and synthetic video streams, for example movies or animations. Encoding and decoding visual stimuli is critical for, among other reasons, the storage, manipulation, and transmission of visual stimuli, such as multimedia in the form of video. Specifically, the neuron representation model and its spike data can be used to encode video from its original analog format into a “spike domain” or time-based representation; store, transmit, or alter such visual stimuli by acting on the spike domain representation; and decode the spike domain representation into an alternate representation of the visual stimuli, e.g., analog or digital. Moreover, in “acting” on the visual stimuli through use of the spike domain, one can dilate (or zoom) the visual stimuli, translate or move the visual stimuli, rotate the visual stimuli, and perform any other linear operation or transformation by acting or manipulating the spike domain representation of the visual stimuli.


Widely used modulation circuits such as Asynchronous Sigma/Delta Modulators and FM modulators have been shown to be instances of TEMs by A. A. Lazar and E. A. Pnevmatikakis in “A Video Time Encoding Machine” (IEEE International Conference on Image Processing, San Diego, Calif., Oct. 12-15, 2008), which is incorporated by reference. TEMs based on single neuron models such as integrate-and-fire (IAF) neurons, as described by A. A. Lazar in “Time Encoding with an Integrate-and-Fire Neuron with a Refractory Period” (Neurocomputing, 58-60:53-58, June 2004), which is incorporated by reference, and more general Hodgkin-Huxley neurons with multiplicative coupling, feedforward and feedback have been described by A. A. Lazar in “Time Encoding Machines with Multiplicative Coupling, Feedback and Feedforward,” which was incorporated by reference above. Multichannel TEMs realized with invertible filterbanks and invertible IAF neurons have been studied by A. A. Lazar in “Multichannel Time Encoding with Integrate-and-Fire Neurons,” which is incorporated by reference above, and TEMs realized with a population of integrate-and-fire neurons have been investigated by A. A. Lazar in “Information Representation with an Ensemble of Hodgkin-Huxley Neurons,” which was incorporated by reference above. An extensive characterization of single-input multi-output (SIMO) TEMs can be found in A. A. Lazar and E. A. Pnevmatikakis' “Faithful Representation of Stimuli with a Population of Integrate-and-Fire Neurons,” which is incorporated by reference above.



FIG. 8 depicts an embodiment of a time encoding circuit. The circuit can model the responses of a wide variety of retinal ganglion cells (RGCs) and lateral geniculate nucleus (LGN) neurons across many different organisms. The neuron can fire whenever its membrane potential reaches a fixed threshold δ 801. After a spike 802 is generated, the membrane potential can reset through a negative feedback mechanism 803 that gets triggered by the just emitted spike 802. The feedback mechanism 803 can be modeled by a filter with impulse response h(t).


As previously described, TEMs can act as signal dependent samplers and encode information about the input signal as a time sequence. As described in “Perfect Recovery and Sensitivity Analysis of Time Encoded Bandlimited Signals” by A. A. Lazar and László T. Tóth, which is incorporated by reference above, this encoding can be quantified with the t-transform, which describes in mathematical language the generation of the spike sequences given the input stimulus. This time encoding mechanism can be referred to as a single neuron TEM. Where (t_k), k ∈ ℤ, is the set of spike times of the output of the neuron, the t-transform of the TEM depicted in FIG. 8 can be written as










u(t_k) = δ + Σ_{l<k} h(t_k − t_l).   (21)

Equation (21) can be written in inner product form as

<u, Xk>=qk,  (22)

where








q_k = u(t_k) = δ + Σ_{l<k} h(t_k − t_l),

X_k(t)=g(t−t_k), k ∈ ℤ, and g(t)=sin(Ωt)/πt, t ∈ ℝ, is the impulse response of a low pass filter with cutoff frequency Ω. The impulse response of the filter in the feedback loop can be causal and decreasing with time.
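The feedback t-transform above can be simulated directly on a time grid; the exponential kernel and the constants below are illustrative stand-ins, not values from the patent:

```python
import numpy as np

def feedback_tem_encode(u, t, delta, h):
    """Level-crossing TEM with spike-triggered feedback: the membrane
    potential is u(t) minus the summed feedback h(t - t_l) of all past
    spikes, and a spike fires when it reaches delta (equation (21) then
    holds at each spike time, up to the grid resolution)."""
    spikes = []
    for k, tk in enumerate(t):
        v = u[k] - sum(h(tk - tl) for tl in spikes)
        if v >= delta:
            spikes.append(tk)
    return spikes

t = np.linspace(0.0, 1.0, 2000)
u = 0.6 + 0.3 * np.sin(2 * np.pi * 2 * t)
h = lambda s: 0.5 * np.exp(-s / 0.05)    # causal, decreasing feedback
tk = feedback_tem_encode(u, t, delta=0.5, h=h)
```

Each spike momentarily pushes the potential away from the threshold, so spike times cluster where the stimulus is large, which is exactly the stimulus-dependent sampling behavior described above.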



FIG. 9 depicts an embodiment of a single neuron encoding circuit that includes an integrate-and-fire neuron with feedback. The t-transform of the encoding circuit can be written as














∫_{t_k}^{t_{k+1}} u(s) ds = κδ − b(t_{k+1} − t_k) − Σ_{l<k} ∫_{t_k}^{t_{k+1}} h(s − t_l) ds,   (23)

or in inner product form as

<u, Xk>=qk,  (24)

with







q_k = κδ − b(t_{k+1} − t_k) − Σ_{l<k} ∫_{t_k}^{t_{k+1}} h(s − t_l) ds

and X_k(t) = g * 1_{[t_k, t_{k+1}]}, for all k, k ∈ ℤ.


In some embodiments, the bandlimited input stimulus u can be recovered as











u(t) = Σ_{k∈ℤ} c_k ψ_k(t),   (25)

where ψ_k(t)=g(t−t_k), provided that the spike density of the neuron is above the Nyquist rate Ω/π. For [c]_k=c_k, the vector of coefficients c can be computed as c = G⁺q, where G⁺ denotes the pseudoinverse of G, [q]_k=q_k and [G]_{kl}=<X_k, ψ_l>.



FIG. 10 depicts an embodiment consisting of two interconnected ON-OFF neurons, each with its own feedback. Each neuron can be endowed with a level crossing detection mechanism 1001 with a threshold that takes a positive value δ^1 for the ON neuron and a negative value −δ^2 for the OFF neuron, respectively. Whenever a spike is emitted, the feedback mechanism 1002 can reset the corresponding membrane potential. In addition, each spike can be communicated to the other neuron through a cross-feedback mechanism 1003. In general, this cross-feedback mechanism can bring the second neuron closer to its firing threshold and thereby increase its spike density. The two neuron model in FIG. 10 can mimic the ON and OFF bipolar cells in the retina and their connections through the non-spiking horizontal cells. This time encoding mechanism can be described as an ON-OFF TEM.
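The ON-OFF behavior can be sketched with a single positive decaying kernel, with signs chosen so that self-feedback resets the firing neuron and cross-feedback pushes the other neuron toward its threshold, as described above; the kernels, thresholds, and function name are all illustrative assumptions:

```python
import numpy as np

def on_off_tem(u, t, d1, d2, h, hc):
    """Two-neuron ON-OFF TEM sketch.  The ON neuron fires when its
    potential reaches +d1, the OFF neuron when it reaches -d2; h is the
    self-feedback (reset) kernel and hc the excitatory cross-feedback."""
    on, off = [], []
    for k, tk in enumerate(t):
        fb_on = sum(h(tk - s) for s in on)      # resets ON downward
        fb_off = sum(h(tk - s) for s in off)    # resets OFF upward
        x_on = sum(hc(tk - s) for s in off)     # OFF spikes excite ON
        x_off = sum(hc(tk - s) for s in on)     # ON spikes excite OFF
        if u[k] - fb_on + x_on >= d1:
            on.append(tk)
        elif u[k] + fb_off - x_off <= -d2:
            off.append(tk)
    return on, off

t = np.linspace(0.0, 1.0, 2000)
u = 0.8 * np.sin(2 * np.pi * 2 * t)
h = lambda s: 0.6 * np.exp(-s / 0.05)
hc = lambda s: 0.1 * np.exp(-s / 0.05)
on, off = on_off_tem(u, t, d1=0.5, d2=0.5, h=h, hc=hc)
```

As in the retinal analogy, the ON neuron spikes during positive excursions of the stimulus and the OFF neuron during negative ones.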


For the TEM depicted in FIG. 10, with (t_k^j), k ∈ ℤ, representing the set of spike times of the neuron j, the t-transform of the ON-OFF TEM can be described by the equations

u(t_k^1) = δ1 + Σ_{l<k} h^{11}(t_k^1 − t_l^1) − Σ_l h^{21}(t_k^1 − t_l^2) 1{t_l^2 < t_k^1} = q_k^1
u(t_k^2) = −δ2 + Σ_{l<k} h^{22}(t_k^2 − t_l^2) − Σ_l h^{12}(t_k^2 − t_l^1) 1{t_l^1 < t_k^2} = q_k^2,   (26)
for all k ∈ ℤ. The equations (26) can be written in inner product form as

⟨u, x_k^j⟩ = q_k^j   (27)

for all k ∈ ℤ and j = 1, 2, where x_k^j(t) = g(t − t_k^j).



FIG. 11 depicts an embodiment consisting of two interconnected ON-OFF integrate-and-fire neurons, each with its own feedback. The t-transforms of the neurons depicted in FIG. 11 can be described by the equations

∫_{t_k^1}^{t_{k+1}^1} u(s) ds = κ1δ1 − b1(t_{k+1}^1 − t_k^1) + Σ_{l<k} ∫_{t_k^1}^{t_{k+1}^1} h^{11}(s − t_l^1) ds − Σ_l ∫_{t_k^1}^{t_{k+1}^1} h^{21}(s − t_l^2) ds 1{t_l^2 < t_k^1}
∫_{t_k^2}^{t_{k+1}^2} u(s) ds = κ2δ2 − b2(t_{k+1}^2 − t_k^2) + Σ_{l<k} ∫_{t_k^2}^{t_{k+1}^2} h^{22}(s − t_l^2) ds − Σ_l ∫_{t_k^2}^{t_{k+1}^2} h^{12}(s − t_l^1) ds 1{t_l^1 < t_k^2},   (28)
or in inner product form as

⟨u, x_k^j⟩ = q_k^j   (29)

with

q_k^j = ∫_{t_k^j}^{t_{k+1}^j} u(s) ds,

for all k ∈ ℤ and j = 1, 2.


In some embodiments, the input stimulus u can be recovered as

u(t) = Σ_{k∈ℤ} c_k^1 ψ_k^1(t) + Σ_{k∈ℤ} c_k^2 ψ_k^2(t),   (30)
where ψ_k^j(t) = g(t − t_k^j), j = 1, 2, provided that the spike density of the TEM is above the Nyquist rate Ω/π. Moreover, with c = [c^1; c^2] and [c^j]_k = c_k^j, the vector of coefficients c can be computed as c = G⁺q, where q = [q^1; q^2] with [q^j]_k = q_k^j and

G = [G^{11} G^{12}; G^{21} G^{22}],   [G^{ij}]_{kl} = ⟨x_k^i, ψ_l^j⟩,

for all i, j = 1, 2 and k, l ∈ ℤ.
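The assembly of this block Gram matrix can be sketched as follows. Beyond the text, the sketch assumes that both x_k^i and ψ_l^j are shifted sinc kernels g(t − ·), in which case each inner product ⟨x_k^i, ψ_l^j⟩ reduces to g(t_k^i − t_l^j) (g is a reproducing kernel for the Ω-bandlimited space); the spike times and the measurement vector q are invented placeholders:

```python
import numpy as np

Omega = 10.0 * np.pi

def g(t):                                 # sinc kernel g(t) = sin(Omega t)/(pi t)
    return (Omega / np.pi) * np.sinc(Omega * t / np.pi)

rng = np.random.default_rng(1)
t1 = np.sort(rng.uniform(0.0, 2.0, 25))   # spike times of the first neuron
t2 = np.sort(rng.uniform(0.0, 2.0, 25))   # spike times of the second neuron

# [G^ij]_kl = <x_k^i, psi_l^j> = g(t_k^i - t_l^j) under the sinc assumption
blocks = [[g(ti[:, None] - tj[None, :]) for tj in (t1, t2)] for ti in (t1, t2)]
G = np.block(blocks)                      # G = [[G11, G12], [G21, G22]]

q = rng.standard_normal(50)               # placeholder stacked measurements
c = np.linalg.pinv(G) @ q                 # c = G^+ q, as in Eq. (30)
```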


In one embodiment, as depicted in FIG. 4, let ℋ denote the space of (real) analog video streams I(x, y, t) 401 which are bandlimited in time, continuous in space, and have finite energy. Assume that the video streams are defined on a bounded spatial set X which is a compact subset of ℝ². Bandlimited in time can mean that for every (x0, y0) ∈ X, I(x0, y0, t) ∈ Ξ, where Ξ is the space of bandlimited functions of finite energy. In some embodiments, ℋ = {I = I(x,y,t) | I(x0,y0,t) ∈ Ξ, ∀(x0,y0) ∈ X, and I(x,y,t0) ∈ L²(X), ∀t0 ∈ ℝ}. The space ℋ, endowed with the inner product ⟨•,•⟩: ℋ × ℋ → ℝ defined by:

⟨I1, I2⟩ = ∫∫∫_{X×ℝ} I1(x,y,t) I2(x,y,t) dx dy dt,   (31)

can be a Hilbert space.


Assuming that each neuron or TEM j, j=1, 2, . . . N has a spatiotemporal receptive field described by the function Dj(x,y,t) 402, filtering a video stream with the receptive field of the neuron j gives the output vj(t) 403, which can serve as the input to the TEM 404:

vj(t)=∫−∞+∞(∫∫XDj(x,y,s)I(x,y,t−s)dxdy)ds.  (32)


In some embodiments, the receptive fields of the neurons can have only spatial components, i.e., D^j(x,y,t) = Ds^j(x,y)δ(t), where δ(t) is the Dirac delta function.
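Eq. (32) specialized to purely spatial receptive fields reduces to v_j(t) = ∫∫_X Ds^j(x,y) I(x,y,t) dx dy, which can be sketched on a discretized video; the Gaussian receptive fields and the random toy stream below are assumptions made for illustration only:

```python
import numpy as np

# With D^j(x,y,t) = Ds^j(x,y) * delta(t), the filter output is a weighted
# spatial average of each frame: v_j(t) = sum over pixels of Ds^j * I(.,.,t).
H, W, T = 32, 32, 16
yy, xx = np.mgrid[0:H, 0:W]

def gaussian_rf(cx, cy, sigma=4.0):
    D = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))
    return D / D.sum()                    # normalized spatial field Ds^j

rfs = [gaussian_rf(cx, cy) for cx in (8, 24) for cy in (8, 24)]   # N = 4

video = np.random.default_rng(2).random((T, H, W))  # toy stream I(x,y,t)

# v_j(t): one scalar per frame per receptive field (Riemann sum over X)
v = np.stack([(video * D[None, :, :]).sum(axis=(1, 2)) for D in rfs])
# v has shape (N, T) = (4, 16); each row would drive one TEM input
```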


Where t_k^j, k ∈ ℤ, represents the spike times of neuron j, j = 1, 2, . . . , N, the t-transform can be written as

v^j(t_k^j) = δ^j + Σ_{l<k} h(t_k^j − t_l^j).

In inner product form, this can be written as ⟨I, ψ_k^j⟩ = q_k^j with q_k^j = δ^j + Σ_{l<k} h(t_k^j − t_l^j). The sampling function can be given by ψ_k^j(x,y,t) = D̃^j(x,y,t) ∗ g(t − s_k^j), where D̃^j(x,y,t) = D^j(x,y,−t). In embodiments with only spatial receptive fields, the sampling functions can be written as ψ_k^j(x,y,t) = Ds^j(x,y) g(t − s_k^j).


In some embodiments, as depicted in FIG. 5, the TEM-encoded signal can be decoded using the same frame ψ_k^j, j = 1, 2, . . . , N, k ∈ ℤ, with ψ_k^j(x,y,t) = D^j(x,y,t) ∗ g(t − s_k^j) 503, 402. Where the filters modeling the receptive fields D^j(x,y,t) are linearly independent and span the whole spatial domain of interest (i.e., for every t0 ∈ ℝ, (D^j(x,y,t0))_j forms a frame for L²(X)), if the total spike density diverges in N, then there can exist a number N₀ such that if N ≥ N₀ the video stream I = I(x,y,t) can be recovered as

I(x,y,t) = Σ_{j=1}^{N} Σ_{k∈ℤ} c_k^j ψ_k^j(x,y,t),   (33)
503, where c_k^j, k ∈ ℤ, j = 1, 2, . . . , N, are suitable coefficients.


Letting [c^j]_k = c_k^j and c = [c^1, c^2, . . . , c^N]^T, the coefficients c can be computed as

c=G+q  (34)
502 where T denotes the transpose, q = [q^1, q^2, . . . , q^N]^T, [q^j]_k = q_k^j, and G⁺ denotes the pseudoinverse. The entries of the matrix G can be given by

G = [G^{11} G^{12} . . . G^{1N}; G^{21} G^{22} . . . G^{2N}; . . . ; G^{N1} G^{N2} . . . G^{NN}],
[G^{ij}]_{kl} = ⟨D^i(x,y,•) ∗ g(• − t_k^i), D^j(x,y,•) ∗ g(• − t_l^j)⟩. Where the receptive field is only in the spatial domain,

ψ_k^j(x,y,t) = Ds^j(x,y) g(t − s_k^j) and [G^{ij}]_{kl} = (∫∫_X D^i(x,y) D^j(x,y) dx dy) g(t_k^i − t_l^j).


In some embodiments, as depicted in FIG. 6, bounded and surjective operations on the elements of a frame can preserve the frame's characteristics. As such, bounded and surjective operations can be performed on the frames while in the spike domain (after being encoded by one or more TEMs and prior to being decoded by one or more TDMs) to alter the characteristics of the video. In accordance with the disclosed subject matter, a method of dilating, translating, and rotating a visual stimulus can be achieved by applying, for example, the following function to the encoded visual stimulus:

ψ_{α,x0,y0}(x,y) = α⁻¹ R_θ ψ(α⁻¹x − x0, α⁻¹y − y0),
601 where α represents the amount of dilation and is a nonzero real number; x0, y0 represent the amount of translation; θ represents the amount of rotation, between 0 and 2π; R_θψ(x,y) = ψ(x cos(θ) + y sin(θ), −x sin(θ) + y cos(θ)); and

ψ(x,y) = (1/(2π)) exp(−(1/8)(4x² + y²)) (e^{ikx} − e^{−k²/2}).

In further embodiments, the alterations to the video can be achieved by using a switching matrix 701, as depicted in FIG. 7. To rotate the video by an angle lθ0, l ∈ ℤ, the spike coming from filter element ([x,y], α, θ) can be mapped to the reconstruction filter at ([x,y], α, θ+lθ0). To dilate, or zoom, the video by a value α0^m, m ∈ ℤ, the spike coming from filter element ([x,y], α, θ) can be mapped to the reconstruction filter at ([x,y], α0^m α, θ). To translate the video by [nb0, kb0], the spike coming from filter element ([x,y], α, θ) can be mapped to the reconstruction filter at ([x+nb0, y+kb0], α, θ). To simultaneously dilate by α0^m and translate by [nb0, kb0], the spike coming from filter element ([x,y], α, θ) can be mapped to the reconstruction filter at ([x+α0^m nb0, y+α0^m kb0], α0^m α, θ).
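The four mappings above amount to re-addressing each spike's filter-bank coordinates; a minimal sketch of such a switching rule, in which the function and parameter names (and the dict-free routing) are illustrative assumptions rather than the patent's implementation:

```python
import math

# Spike-domain switching matrix (cf. FIG. 7): each spike carries the
# coordinates ([x, y], alpha, theta) of the filter element that produced it,
# and altering the video reduces to re-addressing the spike to a different
# reconstruction filter. The four branches mirror the mappings in the text.
def route_spike(coord, op, theta0=math.pi / 8, alpha0=2.0, b0=1.0,
                l=0, m=0, n=0, k=0):
    (x, y), alpha, theta = coord
    if op == "rotate":            # ([x,y], a, th) -> ([x,y], a, th + l*theta0)
        return ((x, y), alpha, theta + l * theta0)
    if op == "zoom":              # ([x,y], a, th) -> ([x,y], alpha0**m * a, th)
        return ((x, y), alpha0 ** m * alpha, theta)
    if op == "translate":         # ([x,y], a, th) -> ([x+n*b0, y+k*b0], a, th)
        return ((x + n * b0, y + k * b0), alpha, theta)
    if op == "zoom+translate":    # simultaneous dilation and translation
        z = alpha0 ** m
        return ((x + z * n * b0, y + z * k * b0), z * alpha, theta)
    raise ValueError(op)

spike = ((2.0, 3.0), 1.0, 0.0)    # a spike from filter element ([2,3], 1, 0)
rotated = route_spike(spike, "rotate", l=2)
moved = route_spike(spike, "zoom+translate", m=1, n=1, k=1)
```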


The disclosed subject matter and methods can be implemented in software stored on computer-readable storage media, such as a hard disk, flash disk, magnetic tape, optical disk, network drive, or other computer-readable medium. The software can be executed by a processor capable of reading the stored software and carrying out the instructions therein.


The foregoing merely illustrates the principles of the disclosed subject matter. Various modifications and alterations to the described embodiments will be apparent to those skilled in the art in view of the teachings herein. It will thus be appreciated that those skilled in the art will be able to devise numerous techniques which, although not explicitly described herein, embody the principles of the disclosed subject matter and are thus within the spirit and scope of the disclosed subject matter.

Claims
  • 1. A multiple-input, multiple-output encoder comprising: a filtering kernel comprising a plurality (N) of sets of filters, each of the N sets of filters having a plurality (M) of filter elements; andat least one time encoding machine (TEM), each of the M filter elements of the N sets of filter elements being additively coupled to a corresponding input of the at least one TEM.
  • 2. The encoder of claim 1 wherein the at least one TEM comprises an integrate-and-fire neuron.
  • 3. The encoder of claim 1 wherein the at least one TEM comprises a neuron having multiplicative coupling.
  • 4. The encoder of claim 1 wherein the at least one TEM comprises an asynchronous sigma/delta modulator.
  • 5. The encoder of claim 1 further comprising a plurality (N) of adders, each adder coupled to a corresponding one of the N sets of filters and one of a plurality (N) of bias values.
  • 6. The encoder of claim 1 wherein the at least one TEM comprises an irregular sampler.
  • 7. The encoder of claim 1 wherein the at least one TEM generates a spike sequence.
  • 8. The encoder of claim 1 wherein the at least one TEM comprises at least two ON-OFF neurons, each neuron being interconnected and having a feedback mechanism.
  • 9. A decoder to decode a signal encoded by a multiple-input, multiple-output TEM encoder comprising: at least one Time Decoding Machine (TDM); anda filtering kernel comprising a plurality (N) of sets of filters, each of the N sets of filters having a plurality (M) of filter elements, a corresponding output of the at least one TDM being coupled to a corresponding one of the N sets of filters, each of the M filter elements of each set of filters being additively coupled to a corresponding M filter element of each other set of the N sets of filters.
  • 10. The decoder of claim 9, wherein the filtering kernel has an overall response selected to at least substantially invert an encoding process of a TEM encoder used to encode the TEM-encoded signal.
  • 11. The decoder of claim 9, wherein the TEM-encoded signal is irregularly sampled.
  • 12. An encoder to encode a video stream signal comprising: a filter to receive the video stream signal, the filter having a plurality (N) of filter elements; anda Time Encoding Machine (TEM) to generate a plurality (N) of TEM encoded spatiotemporal field signals in response to a spatiotemporal field signal received from a corresponding one of the N filter elements.
  • 13. The encoder of claim 12, wherein the spatiotemporal field signals are described by an equation: vj(t)=∫−∞+∞(∫∫XDj(x,y,s)I(x,y,t−s)dxdy)ds, where Dj(x,y,s) is a filter function, and I(x,y,t) represents the video stream signal.
  • 14. The encoder of claim 12 wherein the N TEM-encoded spatiotemporal field signals are represented by a sampling function: ψkj(x,y,t)=D(x,y,−t)*g(t−skj), for k spike times, for each (x, y) in a bounded spatial set, where j corresponds to each of the N TEM-encoded spatiotemporal field signals, and where g(t)=sin(Ωt)/πt.
  • 15. A decoder to decode a TEM-encoded video stream signal comprising: a Time Decoding Machine (TDM) to receive a plurality (N) of TEM-encoded spatiotemporal field signals, and for each TEM-encoded spatiotemporal field signal generate a TDM-decoded spatiotemporal field signal; andan adder to combine each of said TDM-decoded spatiotemporal field signals to recover the video stream signal.
  • 16. A system to alter a video stream signal comprising: a plurality (N) of TEM-filters;a plurality (N) of reconstruction filters;a switching matrix, operatively coupled to the plurality (N) of TEM-filters and the plurality (N) of reconstruction filters, to map a plurality (N) of TEM-encoded spatiotemporal field signals received from the N TEM-filters to the plurality (N) of reconstruction filters in a video stream signal TDM.
  • 17. The system of claim 16, adapted to rotate the video stream signal, wherein the switching matrix maps each of the N TEM-encoded spatiotemporal field signals from a corresponding TEM-filter ([x,y], α, θ) to a corresponding reconstruction filter ([x,y],α,θ+lθ0), where lθ0 represents a desired value of rotation.
  • 18. The system of claim 16, adapted to zoom the video stream signal, wherein the switching matrix maps each of the N TEM-encoded spatiotemporal field signals from a corresponding TEM-filter ([x,y], α, θ) to a corresponding reconstruction filter ([x,y], α0m α, θ), where α0m represents a desired value of zoom.
  • 19. The system of claim 16, adapted to translate the video stream signal by a value [nb0, kb0], wherein the switching matrix maps each of the N TEM-encoded spatiotemporal field signals from a corresponding TEM-filter ([x,y], α, θ) to a corresponding reconstruction filter at ([x+nb0, y+kb0], α, θ).
  • 20. The system of claim 16, adapted to zoom the video stream signal by a value α0m and translate said video stream signal by a value [nb0, kb0], wherein the switching matrix maps each of the N TEM-encoded spatiotemporal field signals from a corresponding TEM-filter ([x,y], α, θ) to a corresponding reconstruction filter at ([x+α0m nb0, y+α0m kb0], α0m α, θ).
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a continuation of Ser. No. 12/645,292, filed Dec. 22, 2009 now U.S. Pat. No. 8,023,046, which is a continuation of International Application PCT/US2008/068790 filed Jun. 30, 2008, which claims priority from: U.S. patent application Ser. No. 11/965,337 filed on Dec. 27, 2007; U.S. Provisional Patent Application No. 60/946,918 filed on Jun. 28, 2007; and U.S. Provisional Patent Application No. 61/037,224 filed on Mar. 17, 2008, the entire disclosures of which are explicitly incorporated by reference herein.

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH

This invention was made with government support under grants CCF-06-35252, awarded by the National Science Foundation, and R01 DC008701-01, awarded by the National Institutes of Health. The government has certain rights in the invention.

US Referenced Citations (54)
Number Name Date Kind
5079551 Kimura et al. Jan 1992 A
5200750 Fushiki et al. Apr 1993 A
5392042 Pellon Feb 1995 A
5392044 Kotzin et al. Feb 1995 A
5393237 Roy et al. Feb 1995 A
5396244 Engel Mar 1995 A
5424735 Arkas et al. Jun 1995 A
5511003 Agarwal Apr 1996 A
5561425 Therssen Oct 1996 A
5568142 Velazquez et al. Oct 1996 A
5761088 Hulyalkar et al. Jun 1998 A
5815102 Melanson Sep 1998 A
6081299 Kesselring et al. Jun 2000 A
6087968 Roza Jul 2000 A
6121910 Khoury et al. Sep 2000 A
6177893 Velazquez et al. Jan 2001 B1
6177910 Sathoff et al. Jan 2001 B1
6332043 Ogata Dec 2001 B1
6369730 Blanken et al. Apr 2002 B1
6441764 Barron et al. Aug 2002 B1
6476749 Yeap et al. Nov 2002 B1
6476754 Lowenborg et al. Nov 2002 B2
6511424 Moore-Ede et al. Jan 2003 B1
6515603 McGrath Feb 2003 B1
6646581 Huang Nov 2003 B1
6744825 Rimstad et al. Jun 2004 B1
6961378 Greenfield et al. Nov 2005 B1
7028271 Matsugu et al. Apr 2006 B2
7336210 Lazar Feb 2008 B2
7346216 Adachi et al. Mar 2008 B2
7479907 Lazar Jan 2009 B2
7750835 Albrecht et al. Jul 2010 B1
7764716 McKnight et al. Jul 2010 B2
7948869 Peter et al. May 2011 B2
7966268 Anderson et al. Jun 2011 B2
8023046 Lazar et al. Sep 2011 B2
8199041 Nakajima Jun 2012 B2
8223052 Kong et al. Jul 2012 B1
8314725 Zepeda et al. Nov 2012 B2
8595157 Albrecht et al. Nov 2013 B2
20010044919 Edmonston et al. Nov 2001 A1
20040071354 Adachi et al. Apr 2004 A1
20040158472 Voessing Aug 2004 A1
20050190865 Lazar et al. Sep 2005 A1
20050252361 Oshikiri Nov 2005 A1
20060261986 Lazar Nov 2006 A1
20090141815 Peter et al. Jun 2009 A1
20090190544 Meylan et al. Jul 2009 A1
20100138218 Geiger Jun 2010 A1
20100303101 Lazar et al. Dec 2010 A1
20120084040 Lazar et al. Apr 2012 A1
20120310871 Albrecht et al. Dec 2012 A1
20120317061 Lakshminarayan et al. Dec 2012 A1
20130311412 Lazar et al. Nov 2013 A1
Foreign Referenced Citations (2)
Number Date Country
WO 2006102178 Sep 2006 WO
WO 2008151137 Dec 2008 WO
Non-Patent Literature Citations (99)
Entry
U.S. Appl. No. 12/628,067, Mar. 6, 2012 Response to Non-Final Office Action.
U.S. Appl. No. 12/645,292, Aug. 17, 2011 Issue Fee payment.
U.S. Appl. No. 12/645,292, May 17, 2011 Notice of Allowance.
U.S. Appl. No. 12/645,292, Apr. 27, 2011 Response to Non-Final Office Action.
U.S. Appl. No. 12/645,292, Jan. 7, 2011 Non-Final Office Action.
U.S. Appl. No. 12/628,067, Nov. 23, 2011 Non-Final Office Action.
Lazar et al., “Perfect recovery and sensitivity analysis of time encoded bandlimited signals”, IEEE Transactions on Circuits and Systems, 51(10): 2060-2073, Oct. 2004.
Lazar et al., “Real-Time Algorithm for Time Decoding Machines”, EUSIPCO '06 (Sep. 2006).
Lazar, A.A., “Time Encoding Machines with Multiplicative Coupling, Feedforward, and Feedback”, Circuits and Systems II: Express Briefs, IEEE Transactions on vol. 53, Issue 8, Aug. 2006, pp. 672-676.
Lazar, A.A., “Time Encoding and Perfect Recovery of Bandlimited Signals”, Acoustics, Speech, and Signal Processing, 2003, Proceedings (ICASSP 2003), 2003 IEEE International Conference on vol. 6, pp. VI-709-712 vol. 6.
Lazar, Aurel A., “Time Encoding Using Filter Banks and Integrate and-Fire Neurons”, Department of Electrical Engineering, Columbia University, Sep. 2003.
U.S. Appl. No. 12/628,067, Aug. 1, 2013 Amendment and Request for Continued Examination (RCE).
U.S. Appl. No. 12/628,067, Jul. 19, 2012 Non-Final Office Action.
U.S. Appl. No. 12/628,067, Mar. 1, 2013 Final Office Action.
U.S. Appl. No. 12/628,067, Jan. 16, 2013 Response to Non-Final Office Action.
U.S. Appl. No. 13/948,615, Dec. 18, 2013 Non-Final Office Action.
Akay, “Time Frequency and Wavelets in Biomedical signal Processing”, Wiley-IEEE Press, Table of Contents (1997) Retrieved at http://www.wiley.com/WileyCDA/WileyTitle/productCd-0780311477,miniSiteCd-IEEE2 on Aug. 6, 2008.
Aksenov, et al., “Biomedical Data Acquisition Systems Based on Sigma-Delta Analogue-To Digital converters”, 2001 Proceedings of the 23rd Annual EMBS International Conference, Istanbul, Turkey, 4:3336-3337 (Oct. 25-28, 2001).
Antoine, et al., “Two-Dimensional Wavelets and Their Relatives”, Cambridge University Press, Table of Contents (2004).
Averbeck, et al., “Neural Correlations, Population Coding and Computation”, Nature, 7:358-366 (2006).
Balan, et al., “Multiframes and Multi Riesz Bases: I. The General theory and Weyl-Heisenberg Case”, Technical Report, Institute for mathematics and its Applications, (18 pages) (1997).
Balan, “Multiplexing of Signals using Superframes”, Wavelets Applications in Signal and Image Processing VIII, 4119:118-130 (2000).
Barr, et al., “Energy Aware Lossless Data Compression”, Proceedings of the 1st International Conference on Mobile Systems, Applications and Services, San Francisco, CA, pp. 231-244 (2003).
Bjorck, et al., “Solution of Vandermonde Systems of Equations”, Mathematics of Computation, 24(112):893-903 (1970).
Butts, et al., “Temporal Precision in the Neural code and the Timescales of Natural Vision”, Nature, 449(7158):92-95 (2007).
Christensen, “Frames, Riesz Bases, and Discrete Gabor/Wavelet Expansions”, American Mathematical Society, 38(3):273-291 (2001).
Christensen, “An Introduction to Frames and Riesz Bases”, Springer, Table of Contents (2003) http://www.springer.com/birkhauser/mathematics/books/978-0-8176-4295-2?detailsPage=toc Retrieved on Aug. 6, 2008.
Dayan et al., “Theoretical Neuroscience”, The MIT Press, Table of Contents (2001) http://mitpress.mit.edu/catalog/item/default.asp?ttyoe=2&tid=8590&mode=toc Retrieved on line Aug. 6, 2008.
Delbruck, “Frame-Free Dynamic Digital Vision”, Proceedings of international Symposium on Secure-Life Electronics, Advanced Electronics for Quality Life and Society, University of Tokyo, pp. 21-26 (Mar. 6-7, 2008).
Deneve, et al., “Reading Population Codes: A Neural Implementation of Ideal Observers”, Nature Neuroscience, 2(8):740-745 (1999).
Dods, et al., “Asynchronous Sampling for Optical Performance Monitoring”, Optical fiber Communication Conference, Anaheim, California (3 pages) (2007).
Eldar, et al., “Sampling with Arbitrary Sampling and Reconstruction Spaces and Oblique dual Frame Vectors”, The Journal of Fourier Analysis and Applications, 9(l):77-96 (2003).
Eldar, et al., “General Framework for Consistent Sampling in Hilbert Spaces”, International journal of Wavelets, Multiresolution and information Processing, 3(3):347-359 (2005).
Fain, “Sensory Transduction”, Sinauer Associates, Inc., Table of contents (2003) Retrieved http://www.sinauer.com/detail.php?id=1716 on Aug. 6, 2008.
Feichter, et al., “Theory and Practice of Irregular Sampling”, Wavelets: Mathematics and Applications, CRC Press, Studies in Advanced Mathematics, pp. 305-363 (1994).
Feichter, et al., “Efficient Numerical Methods in Non-Uniform Sampling Theory”, Numer. Math., 69:423-440 (1995).
Feichter, et al., “Improved Locality for Irregular Sampling Algorithms”, 2000 International Conference on Acoustic, Speech, and Signal Processing, Istanbul, Turkey, Table of Contents (Jun. 5-9, 2000).
Field, et al., “Information Processing in the Primate Retina: Circuitry and Coding”, Annual Reviews Neuroscience, 30(1):1-30 (2007).
Häfliger, et al., “A Rank Encoder: Adaptive Analog to Digital conversion Exploiting Time Domain Spike Signal Processing”, Analog Integrated Circuits and Signal Processing Archive, 40(1):39-51 (2004).
Han, et al., “Memoirs of the American Mathematical Society: Frames, Baese and Group Presentations”, American Mathematical Society, 147(697): Table of Contents (2000).
Harris, et al., “Real Time Signal Reconstruction from Spikes on a Digital Signal Processor”, IEEE International Symposium on Circuits and Systems (ISCAS 2008), pp. 1060-1063 (2008).
Haykin, et al., “Nonlinear Adaptive Prediction of Nonstationary Signals”, IEEE Transactions on Signal Processing, 43(2):526-535 (1995).
Hudspeth, et al., “Auditory Neuroscience: Development, Transduction, and Integration”, PNAS, 97(22):11690-11691 (2000).
Jaffard, “A Density Criterion for Frames of Complex Exponentials”, Michigan Math J., 38:339-348 (1991).
Jones, et al., “An Evaluation of the Two-Dimensional Gabor Filter Model of Simple Receptive Fields on Cat Striate Cortex”, Journal of Neurophysiology, 58(6):1233-1258 (1987).
Jovanov, et al., “A Wireless Body Area network of Intelligent motion sensors for computer Assisted Physical Rehabilitation”, Journal of NeuroEngineering and Rehabiliation, 2:6 (10 pages) (2005).
Kaldy, et al., “Time Encoded Communications for Human Area Network Biomonitoring”, BNET Technical Report #2-7, Department of Electrical Engineering, Columbia University, (8 pages) (2007).
Keat, et al., “Predicting Every Spike: A Model for the Responses of Visual Neurons”, Neuron, 30:803-817 (2001).
Kim, et al., “A Comparison of Optimal MIMO Linear and Nonlinear Models for Brain-Machine Interfaces”, Journal of Neural Engineering, 3:145-161 (2006).
Kinget, et al., “On the Robustness of an Analog VLSI Implementation of a Time Encoding Machine”, IEEE International Symposium on Circuits and Systems, pp. 4221-4224 (2005).
Kong, et al., “A Time-Encoding Machine Based High-Speed Analog-to-Digital Converter”, IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2(3):552-563 (2012).
Kovacevic, et al., “Filter Bank Frame Expansions with Erasures”, IEEE Transactions on Information Theory, 48(6):1439-1450 (2002).
Krishnapura, et al., “A Baseband Pulse Shaping Filter for Gaussian Minimum Shift Keying”, ISCAS '98, IEEE International Symposium on circuits and Systems, vol. 1:249-252 (1998).
Lazar, et al., “Encoding, Processing and Decoding of Sensory Stimuli with a Spiking Neural Population”, Research in encoding and Decoding of Neural Ensembles, Santiori, Greece, (1 page) (Jun. 26-29, 2008).
Lazar, “A Simple Spiking Retina Model for Exact Video Stimulus Representation”, The Computational Neuroscience Meeting, CNS 2008, Portland, Oregon (1 page) (Jul. 19-24, 2008).
Lazar, et al., “Time Encoding and Time Domain Computing of Video Streams”, Department of Electrical engineering Columbia University, (20 pages) (Mar. 14, 2008).
Lazar, et al., “A MIMO Time Encoding Machine”, submitted for publication Jan. 2008, (28 pages).
Lazar, et al., “Encoding of Multivariate Stimuli with MIMO Neural Circuits”, Proceedings of the IEEE International Symposium on Information Theory, Saint Petersburg, Russia, (5 pages) (Jul. 31-Aug. 5, 2011).
Lazar, et al., “Video Time Encoding Machines”, submitted for publication Oct. 2008, (27 pages).
Lazar, et al., “Channel Identification Machines”, Computational Intelligence and Neuroscience, 2012:209590 (20 pages) (2012).
Lazar, “Multichannel Time Encoding with Integrate-and-Fire Neurons”, Neurocomputing, 65-66:401-407 (2005).
Lazar, “Recovery of Stimuli Encoded with Hodgkin-Huxley Neurons”, Computational and Systems Neuroscience Meeting, COSYNE 2007, Salt Lake City, UT, Feb. 22-25, 2007, Cosyne Poster 111-94, p. 296.
Lazar, “Time Encoding with an Integrate-and-Fire Neuron with a Refractory Period”, Neurocomputing, 58-60:53-58 (2004).
Lazar, et al., “Video Time Encoding Machines”, IEEE Transactions on Neural Networks, 22(3):461-473 (2011).
Lazar, “Population Encoding with Hodgkin-Huxley Neurons”, IEEE Transactions on Information Theory, 56(2):821-837 (2010).
Lazar, “Information Representation with an Ensemble of Hodgkin-Huxley Neurons”, Neurocomputing, 70:1764-1771 (2007).
Lazar, et al., “Encoding Natural Scenes with Neural circuits with Random Thresholds”, Vision Research, Special Issue on Mathematical Models of Visual Coding, 50(22):2200-2212 (2010).
Lazar, “A Simple Model of Spike Processing”, Neurocomputing, 69:1081-1085 (2006).
Lazar, et al., “Faithful Representation of Stimuli with a Population of Integrate-and-Fire Neurons”, Neural Computers, 20(11):2715-2744 (2008).
Lazar, et al., “An Overcomplete Stitching Algorithm for Time Decoding Machines”, IEEE Transactions on Circuits and Systems—I, (11 pages) (2008).
Lazar, et al., “Fast Recovery Algorithms for Time Encoded Bandlimited Signals”, Proceeding of the International Conference on acoustics, Speech and Signal Processing (ICASSP '05), Philadelphia, PA, Mar. 19-23, 2005, 4:237-240 (2005).
Lee, “Image Representation using 2D Gabor Wavelets”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(10):959-971 (1996).
Lichtsteiner, et al., “A 128X128 120db 15 μs Latency Asynchronous Temporal Contrast Vision Sensor”, IEEE Journal of Solid-State Circuits, 43(2):566-576 (2008).
Masland, “The Fundamental Plan of the Retina”, Nature Neuroscience, 4(9):877-886 (2001).
MIT-BIH Arrhythmia Database, http://www.physionet.org/physiobank/database/mitd Retrieved on Aug. 5, 2008 (3 pages).
Olshausen, “Sparse Codes and Spikes”, In R.P.N. Rao, B.A. Olshausen and M.S. Lewicki, editors, Probabilistic Models of Perception and Brian Function, MIT Press, (15 pages) (2002).
Olshausen, et al., “Sparse Coding with an Overcomplete Basis Set: A Strategy Employed by V1?”, Vision research, 37(23):3311-3325 (1997).
Ouzounov, et al., “Analysis and Design of High-Performance Asynchronous Sigma-Delta Modulators with a Binary Quantizer”, IEEE Journal of Solid-State Circuits, 41(3):588-596 (2006).
Papoulis, “Generalized Sampling Expansion”, IEEE Transactions on Circuits and Systems, CAS-24(11):652-654 (1977).
“Parks-McClellan FIR filter Design” (Java 1.1 version) http://www.dsptutor.freeuk.com/remez/RemezFIRFilterDesign.htlm. Retrieved on Aug. 5, 2008.
Patterson, et al., “Complex Sounds and Auditory Images”, Advances in the biosciences, 83:429-446 (1992).
Pillow, et al., “Prediction and Decoding of Retinal Ganglion Cell Responses with a Probabilistic Spiking Model”, The Journal of Neuroscience, 25(47):11003-11013 (2005).
Roza, “Analog-to-Digital conversion Via Duty-Cycle Modulation”, IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 44(11):907-917 (1997).
Sanger, “Neural Population Codes”, Current Opinion in Neurobiology, 13:238-249 (2003).
Seidner, et al., “Vector Sampling Expansion”, IEEE Transactions on Signal Processing, 48(5):1401-1416 (2000).
Shang, et al., “Vector Sampling Expansions in shift Invariant Subspaces”, J. Math. Anal. Appl., 325:898-919 (2007).
Sheung, “A Continuous-Time Asynchronous Sigma Delta Analog to Digital Converter for Broadband Wireless Receiver with Adaptive Digital Calibration Technique”, PhD Thesis, Department of Electrical and Computer Engineering, Ohio State University, (137 pages) (2009).
Shinagawa, et al., “A Near-Field-Sensing Transceiver for Intrabody Communication Based on the Electrooptic Effect”, IEEE Transactions on Instrumentation and Measurement, 53(6):1533-1538 (2004).
Slaney, “Auditory Toolbox”, Technical Report #1998-010, Interval Research Corporation, (52 pages) (1998).
Strohmer, “Numerical Analysis of the Non-Uniform Sampling Problem”, Journal of Computational and Applied Mathematics, 122:297-316 (2000).
Strohmer, “Irregular Sampling, Frames and Pseudoinverse”, Master Thesis, dept. Math. Univ., Vienna, Austria, (Abstract) (1991).
Teolis, “Computational signal Processing with Wavelets”, Applied and Numerical Harmonic Analysis, Chapter 4-6, pp. 59-167 (1998).
Topi, et al., “Spline Recurrent Neural Networks for Quad-Tree Video Coding”, WIRN Vietri, Springer-Verlag, LNCS 2486, pp. 90-98 (2002).
Venkataramani, et al., “Sampling Theorems for Uniform and Periodic Nonuniform MIMO Sampling of Multiband Signals”, IEEE Transactions on Signal Processing, 51(12):3152-3163 (2003).
Wei, et al., “Signal Reconstruction from Spiking Neuron Models”, Proceedings of the ISCAS '04, vol. V:353-356 (May 23-26, 2004).
Wolfram, “Mathematic 5.2”, The Mathematica Book Online, http://documents.wolfram.com/mathematica Retrieved on Aug. 5, 2008.
Yang, et al., “A Bio-Inspired Ultra-Energy-Efficient Analog-to-Digital converter for Biomedical Applications”, IEEE Transactions on Circuits and Systems-I: Regular Papers, 53(11):2349-2356 (2006).
Zimmerman, “Personal Area Networks (PAN): Near-field Intra-Body Communication”, MS Thesis, MIT, (81 pages) (1995).
Zimmerman, “Personal Area Networks: Near-field Intrabody Communication”, IBM systems Journal, 35(3&4):609-617 (1996).
Related Publications (1)
Number Date Country
20120039399 A1 Feb 2012 US
Provisional Applications (2)
Number Date Country
60946918 Jun 2007 US
61037224 Mar 2008 US
Continuations (2)
Number Date Country
Parent 12645292 Dec 2009 US
Child 13214041 US
Parent PCT/US2008/068790 Jun 2008 US
Child 12645292 US