METHOD FOR CALIBRATING ON-LINE AND WITH FORGETTING FACTOR A DIRECT NEURAL INTERFACE WITH PENALISED MULTIVARIATE REGRESSION

Information

  • Patent Application
  • Publication Number
    20220110570
  • Date Filed
    October 12, 2021
  • Date Published
    April 14, 2022
Abstract
The present invention relates to a method for calibrating on-line a direct neural interface implementing a REW-NPLS regression between an output calibration tensor and an input calibration tensor. The REW-NPLS regression comprises a PARAFAC iterative decomposition of the cross covariance tensor between the input calibration tensor and the output calibration tensor, each PARAFAC iteration comprising a sequence of M elementary steps (2401, 2402, . . . , 240M) of minimisation of a metric according to the alternating least squares method, each elementary minimisation step relating to a projector and considering the others as constant, said metric comprising a penalisation term that is a function of the norm of this projector, the elements of this projector not subjected to a penalisation during a PARAFAC iteration f no longer being penalisable during the following PARAFAC iterations. Said calibration method makes it possible to obtain a predictive model whose non-zero coefficients are sparse blockwise.
Description
TECHNICAL FIELD

The present invention relates to the field of direct neural interfaces, also called BCI (Brain Computer Interfaces) or BMI (Brain Machine Interfaces). It is notably applied to the direct neural command of a machine, such as an exoskeleton or a computer.


PRIOR ART

Direct neural interfaces use the electrophysiological signals emitted by the cerebral cortex to elaborate a command signal. These neural interfaces have been the subject of much research, notably with the aim of restoring a motor function to a paraplegic or tetraplegic subject using a motorised prosthesis or orthosis.


Neural interfaces may be of invasive or non-invasive nature. Invasive neural interfaces use intracortical electrodes (that is to say implanted in the cortex) or cortical electrodes (arranged on the surface of the cortex), collecting in this latter case electrocorticographic (ECoG) signals. Non-invasive neural interfaces use electrodes placed on the scalp to collect electroencephalographic (EEG) signals. Other types of sensors have also been envisaged, such as magnetic sensors measuring the magnetic fields induced by the electrical activity of the neurones of the brain. One then speaks of magnetoencephalographic (MEG) signals.


Direct neural interfaces advantageously use ECoG type signals, which offer a good compromise between biocompatibility (matrix of electrodes implanted on the surface of the cortex) and quality of the collected signals.


The ECoG signals thus measured must be processed in order to estimate the trajectory of the movement desired by the subject and to deduce therefrom the command signals of the computer or of the machine. For example, when it involves commanding an exoskeleton, the BCI estimates the trajectory of the desired movement from measured electrophysiological signals and deduces therefrom the control signals enabling the exoskeleton to reproduce the trajectory in question. Similarly, when it involves commanding a computer, the BCI estimates for example the desired trajectory of a pointer or a cursor from electrophysiological signals and deduces therefrom the command signals of the cursor/pointer.


The estimation of trajectory, and more specifically that of kinematic parameters (position, speed, acceleration), is also designated neural decoding in the literature. Neural decoding notably makes it possible to command a movement (of a prosthesis or a cursor) from ECoG signals.


The estimation of trajectory and the calculation of signals for controlling the exoskeleton or the effector generally require a prior learning or calibration phase, designated off-line. During this phase, the subject imagines, observes or makes a movement according to a determined trajectory during a given calibration interval. The electrophysiological signals measured during this interval are exploited in relation with this trajectory to construct a predictive model, and more specifically to calculate the parameters of this model.


The validity of the predictive model is however limited in time on account of the non-stationarity of the neural signals. For this reason, it is necessary to carry out an on-line calibration of the predictive model, that is to say as and when the neural signals are observed and the command applied.


Different methods for calibrating on-line a neural interface have been described in the prior art. However, these on-line calibration methods may require the manipulation of a considerable amount of data resulting from the juxtaposition of data from prior calibration steps and those from a new calibration step.


An on-line BCI calibration method has been described in the article of A. Eliseyev et al. entitled “Recursive exponentially weighted N-way Partial Least Squares regression with recursive validation of hyper-parameters in Brain-Computer Interface applications” published in Scientific Reports, vol. 7, no 1, p. 16281, November 2017 as well as in the patent application FR-A-3 061 318. This method will be designated hereafter under the acronym REW-NPLS (Recursive Exponentially Weighted N-way Partial Least Squares).


Furthermore, different alternatives of off-line calibration method have been proposed in the literature, with the aim of improving the robustness of the prediction and of increasing the sparsity of non-zero coefficients in the predictive model tensor. In particular, the sparsity of non-zero coefficients in the spatial mode of this tensor makes it possible to only select certain relevant electrodes for the prediction of the trajectory and the calculation of the command. Such an alternative using a penalised multiway regression with such an aim has been described in the article of A. Eliseyev et al. entitled “L1-penalised N-way PLS for subset of electrodes selection in BCI experiments” published in J. Neural Eng. vol. 9, no 4, p. 045010, as well as in the patent application FR-A-3 046 471.


However, such a calibration method using a penalised multiway regression cannot be implemented on-line on account of the considerable amount of calculations that it requires.


The subject matter of the present invention is consequently to propose a method for calibrating on-line a direct neural interface that can also increase the sparsity of non-zero coefficients of the prediction tensor, notably in such a way as to only select at each observation window a relevant sub-set of electrodes.


DESCRIPTION OF THE INVENTION

The present invention is defined by a method for calibrating on-line a direct neural interface intended to receive a plurality of electrophysiological signals during a plurality of observation windows associated with observation times, to form an input tensor and to deduce therefrom, by means of a predictive model, an output tensor providing command signals intended to command one or more effectors, said calibration being carried out during a plurality of calibration steps, each calibration step u being carried out from input calibration data, represented by an input calibration tensor, Xu∈ℝ^(I1× . . . ×IN), and from output calibration data represented by an output calibration tensor Yu∈ℝ^(J1× . . . ×JM), said calibration step implementing a REW-NPLS regression comprising:


the calculation of a covariance tensor XXu from a covariance tensor XXu−1 obtained during a preceding calibration step and the tensor Xu, as well as the calculation of a cross covariance tensor XYu from a cross covariance tensor XYu−1 obtained during said preceding calibration step and the tensors Xu and Yu;


a PARAFAC iterative decomposition of the cross covariance tensor XYu, each PARAFAC iteration being associated with a dimension f of the latent variables space on which said regression is carried out, each iteration providing a plurality of projectors wfi, i=1, . . . , M of respective sizes I1, I2, . . . , IM and a predictive model defined by a prediction coefficients tensor Buf∈ℝ^((I1× . . . ×IM)×(J1× . . . ×JM)) and a bias tensor βuf∈ℝ^(J1× . . . ×JM).


Said calibration method is original in that each PARAFAC iteration comprises a sequence of M elementary steps of minimisation of a metric according to the alternating least squares method, each elementary minimisation step relating to a projector and considering the others as constant, said metric comprising a penalisation term that is a function of the norm of this projector, the elements of this projector not subjected to a penalisation during a PARAFAC iteration f no longer being penalisable during the following PARAFAC iterations.


Advantageously, an elementary minimisation step relating to a projector wfi aims to minimise the penalised metric







$$\min_{\tilde{w}_f^i}\left(\left\|\underline{XY}_{u(i)}-\tilde{w}_f^i\left(w_f^M\otimes\dots\otimes w_f^{i+1}\otimes w_f^{i-1}\otimes\dots\otimes w_f^1\right)^T\right\|^2+\lambda_i\left\|\tilde{w}_f^i\right\|_{q,\Omega_{i,f}}\right)$$





where $\tilde{w}_f^i$ is a non-normalised projector and
$$w_f^m=\frac{\tilde{w}_f^m}{\left\|\tilde{w}_f^m\right\|},\quad m=1,\dots,M,\ m\neq i$$
are the normalised projectors, $\underline{XY}_{u(i)}$ is a matrix of size
$$I_i\times\prod_{\substack{n=1\\ n\neq i}}^{N} I_n$$
obtained by deployment of the cross covariance tensor $\underline{XY}_u$ while taking the mode $i$ as reference, $\otimes$ is the Kronecker product, $\lambda_i$ is a strictly positive coefficient, and $\left\|w\right\|_{q,\Omega_{i,f}}$, $q=0,\tfrac{1}{2},1$, is a norm defined by











$$\left\|w_f^i\right\|_{0,\Omega_{i,f}}=\sum_{k\in\Omega_{i,f}}\left(1-\delta_{0,w_{f,k}^i}\right),\qquad\left\|w_f^i\right\|_{1/2,\Omega_{i,f}}=\sum_{k\in\Omega_{i,f}}\sqrt{\left|w_{f,k}^i\right|},\qquad\left\|w_f^i\right\|_{1,\Omega_{i,f}}=\sum_{k\in\Omega_{i,f}}\left|w_{f,k}^i\right|$$

where $w_{f,k}^i$ is the $k$-th element of the projector $w_f^i$, $\delta$ is the Kronecker symbol, and $\Omega_{i,f}$ is the set of indices corresponding to the penalisable elements of the projector $w_f^i$ at the PARAFAC iteration $f$.


Alternatively, an elementary minimisation step relating to a projector wfi aims to minimise the penalised metrics








$$\min_{w_{f,k}^i}\left(\left\|z_i^k-w_{f,k}^i\left(w_f^M\otimes\dots\otimes w_f^{i+1}\otimes w_f^{i-1}\otimes\dots\otimes w_f^1\right)^T\right\|^2+\lambda_i\,g_q\!\left(w_{f,k}^i\right)\right),\quad k=1,\dots,I_i$$

where $z_i^k$ is the $k$-th column vector of the matrix $\underline{XY}_{u(i)}$ of size $I_i\times\prod_{n=1,\,n\neq i}^{N}I_n$ obtained by deployment of the cross covariance tensor $\underline{XY}_u$ while taking the mode $i$ as reference, $\otimes$ is the Kronecker product, $\lambda_i$ is a strictly positive coefficient, and the functions $g_q$, $q=0,\tfrac{1}{2},1$, are defined by:






$$g_0\!\left(w_{f,k}^i\right)=1-\delta_{0,w_{f,k}^i}\ \text{ if }k\in\Omega_{i,f}\ \text{ and }\ g_0\!\left(w_{f,k}^i\right)=0\ \text{ otherwise}$$
$$g_{1/2}\!\left(w_{f,k}^i\right)=\sqrt{\left|w_{f,k}^i\right|}\ \text{ if }k\in\Omega_{i,f}\ \text{ and }\ g_{1/2}\!\left(w_{f,k}^i\right)=0\ \text{ otherwise}$$
$$g_1\!\left(w_{f,k}^i\right)=\left|w_{f,k}^i\right|\ \text{ if }k\in\Omega_{i,f}\ \text{ and }\ g_1\!\left(w_{f,k}^i\right)=0\ \text{ otherwise}$$

where $\Omega_{i,f}$ is the set of indices $k$ corresponding to the penalisable elements $w_{f,k}^i$ of the projector $w_f^i$ at the PARAFAC iteration $f$.


In a particular embodiment, q=0 and the elements wf,ki resulting from the minimisation are given by:





$$\left(w_{f,k}^i\right)_{L_0}=0\ \text{ if }k\in\Omega_{i,f}\text{ and }\left|\left(w_{f,k}^i\right)_{LS}\right|\le Th_{L_0}^i,\qquad\left(w_{f,k}^i\right)_{L_0}=\left(w_{f,k}^i\right)_{LS}\ \text{ otherwise}$$

where $\left(w_{f,k}^i\right)_{LS}$ represents the minimisation solution in the least squares sense:








$$\left(w_{f,k}^i\right)_{LS}=\frac{z_i^k\left(w_f^M\otimes\dots\otimes w_f^{i+1}\otimes w_f^{i-1}\otimes\dots\otimes w_f^1\right)}{\left\|w_f^M\otimes\dots\otimes w_f^{i+1}\otimes w_f^{i-1}\otimes\dots\otimes w_f^1\right\|^2}$$

and the threshold $Th_{L_0}^i$ is given by
$$Th_{L_0}^i=\frac{\lambda_i}{\left\|w_f^M\otimes\dots\otimes w_f^{i+1}\otimes w_f^{i-1}\otimes\dots\otimes w_f^1\right\|}$$





In a second particular embodiment, q=½ and the elements wf,ki resulting from the minimisation are given by:





$$\left(w_{f,k}^i\right)_{L_{0,5}}=0\ \text{ if }k\in\Omega_{i,f}\text{ and }\left|\left(w_{f,k}^i\right)_{LS}\right|\le Th_{L_{0,5}}^i$$
$$\left(w_{f,k}^i\right)_{L_{0,5}}=\arg\min\left(h_{0,5}(0),\,h_{0,5}\!\left(B_i\cdot\left(w_{f,k}^i\right)_{LS}\right)\right)\ \text{ if }k\in\Omega_{i,f}\text{ and }\left|\left(w_{f,k}^i\right)_{LS}\right|>Th_{L_{0,5}}^i$$
$$\left(w_{f,k}^i\right)_{L_{0,5}}=\left(w_{f,k}^i\right)_{LS}\ \text{ otherwise}$$

where $h_{0,5}\!\left(w_{f,k}^i\right)=\left\|z_i^k-w_{f,k}^i\left(w_f^M\otimes\dots\otimes w_f^{i+1}\otimes w_f^{i-1}\otimes\dots\otimes w_f^1\right)^T\right\|^2+\lambda_i\sqrt{\left|w_{f,k}^i\right|}$ and where $\left(w_{f,k}^i\right)_{LS}$ represents the minimisation solution in the least squares sense:








$$\left(w_{f,k}^i\right)_{LS}=\frac{z_i^k\left(w_f^M\otimes\dots\otimes w_f^{i+1}\otimes w_f^{i-1}\otimes\dots\otimes w_f^1\right)}{\left\|w_f^M\otimes\dots\otimes w_f^{i+1}\otimes w_f^{i-1}\otimes\dots\otimes w_f^1\right\|^2}$$

and the threshold $Th_{L_{0,5}}^i$ is given by
$$Th_{L_{0,5}}^i=\frac{3}{4}\left(\frac{\lambda_i}{\left\|w_f^M\otimes\dots\otimes w_f^{i+1}\otimes w_f^{i-1}\otimes\dots\otimes w_f^1\right\|^2}\right)^{2/3}$$






In a third particular embodiment, q=1 and the elements wf,ki resulting from the minimisation are given by:





$$\left(w_{f,k}^i\right)_{L_1}=0\ \text{ if }k\in\Omega_{i,f}\text{ and }\left|\left(w_{f,k}^i\right)_{LS}\right|\le Th_{L_1}^i$$
$$\left(w_{f,k}^i\right)_{L_1}=\mathrm{sgn}\!\left(\left(w_{f,k}^i\right)_{LS}\right)\left(\left|\left(w_{f,k}^i\right)_{LS}\right|-Th_{L_1}^i\right)\ \text{ if }k\in\Omega_{i,f}\text{ and }\left|\left(w_{f,k}^i\right)_{LS}\right|>Th_{L_1}^i$$
$$\left(w_{f,k}^i\right)_{L_1}=\left(w_{f,k}^i\right)_{LS}\ \text{ otherwise}$$

where $\left(w_{f,k}^i\right)_{LS}$ represents the minimisation solution in the least squares sense:








$$\left(w_{f,k}^i\right)_{LS}=\frac{z_i^k\left(w_f^M\otimes\dots\otimes w_f^{i+1}\otimes w_f^{i-1}\otimes\dots\otimes w_f^1\right)}{\left\|w_f^M\otimes\dots\otimes w_f^{i+1}\otimes w_f^{i-1}\otimes\dots\otimes w_f^1\right\|^2}$$

and the threshold $Th_{L_1}^i$ is given by
$$Th_{L_1}^i=\frac{\lambda_i}{\left\|w_f^M\otimes\dots\otimes w_f^{i+1}\otimes w_f^{i-1}\otimes\dots\otimes w_f^1\right\|}$$










Typically, said on-line calibration method may apply to electrophysiological signals consisting of ECoG signals.





BRIEF DESCRIPTION OF THE DRAWINGS

Other characteristics and advantages of the invention will become clear on reading the description of a preferential embodiment of the invention, given with reference to the appended figures, among which:



FIG. 1A schematically represents the decomposition of a tensor by parallel factorial analysis (PARAFAC);



FIG. 1B schematically represents the development of a tensor with respect to a mode;



FIG. 1C schematically represents a decomposition using an alternating least squares (ALS) method;



FIG. 2 schematically represents the flowchart of a recursive method for calibrating a direct neural interface with penalised multivariate regression, according to an embodiment of the present invention;



FIG. 3 schematically represents the mechanism for updating the predictive model in the calibration method of FIG. 2.





DETAILED DESCRIPTION OF PARTICULAR EMBODIMENTS

A direct neural interface with continuous decoding will be considered hereafter.


The electrophysiological signals coming from the different electrodes are sampled and grouped together by data blocks, each block corresponding to a sliding observation window of width ΔT. Each observation window is defined by an observation time (epoch) at which the window in question starts.


The electrophysiological signals may be subjected to a pre-processing. This pre-processing may notably comprise removal of the average taken over the set of electrodes, after which a time-frequency analysis is carried out on each of the observation windows.


The time-frequency analysis is based on a decomposition into wavelets, notably Morlet wavelets. Those skilled in the art will understand, however, that other types of time-frequency analysis could be envisaged.


The results of these time-frequency analyses may further be subjected to frequency smoothing or decimation.


Thus, with each observation window, or observation time t, is associated a tensor of order 3 of observation data, hence the generation of an input tensor of order 4: the first mode corresponds to the successive observation windows, the second mode corresponds to the space, in other words to the sensors, the third mode corresponds to the time within an observation window, in other words to the positions of the wavelets, and the fourth mode corresponds to the frequency, in other words to the number of frequency bands used for the decomposition into wavelets over an observation window.
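By way of illustration only, the construction of such an order-4 input tensor may be sketched as follows. This is a minimal Python sketch assuming numpy and SciPy's morlet2 continuous wavelet transform (scipy.signal.cwt, available in SciPy versions prior to 1.15); the sampling rate and frequency bands are arbitrary placeholder values, not values prescribed by the method:

```python
import numpy as np
from scipy.signal import cwt, morlet2   # cwt/morlet2 removed in SciPy >= 1.15

fs = 500.0                               # sampling rate (assumed)
freqs = np.linspace(10.0, 150.0, 15)     # frequency bands (assumed)
widths = 5.0 * fs / (2 * np.pi * freqs)  # Morlet widths matching those bands

def input_tensor(windows):
    """windows: (n_epochs, n_electrodes, n_samples) observation windows;
    returns an (epochs, electrodes, time, frequency) order-4 tensor."""
    n_ep, n_el, n_s = windows.shape
    X = np.empty((n_ep, n_el, n_s, len(freqs)))
    for e in range(n_ep):
        for c in range(n_el):
            coef = cwt(windows[e, c], morlet2, widths)  # (freq, time)
            X[e, c] = np.abs(coef).T                    # magnitude, (time, freq)
    return X
```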


More generally, the input tensor (or observation tensor) will be of order N, the first mode being in all cases that relative to the observation times (epochs). The input tensor (or observation tensor) is noted X and is of dimension I1× . . . ×IN.


In the same way, the trajectory of the movement imagined, observed or made is described by an output tensor (or command tensor) of order M, noted Y, of dimension J1× . . . ×JM, of which the first mode corresponds to the successive times to which the commands will apply (as a general rule, this first mode also corresponds to the observation windows), the other modes corresponding to the commands of different effectors or to the other degrees of freedom of a multi-axis robot.


More specifically, the output tensor provides command data blocks, each of the blocks making it possible to generate command signals relative to the different effectors or degrees of freedom. Thus, it will be understood that the dimension of each data block could depend on the envisaged case of use and notably the number of degrees of freedom of the effector.


One will note hereafter X the observation tensor at the time t. This tensor is consequently of order N and of dimension I1× . . . ×IN. It takes its values in a space 𝒳⊂ℝ^(I1× . . . ×IN), where ℝ is the set of real values. Similarly, one will note Y the command tensor at the time t. This output tensor is of order M and of dimension J1× . . . ×JM. It takes its values in a space 𝒴⊂ℝ^(J1× . . . ×JM).


The calibration or, in an equivalent manner, the mechanism for updating the predictive model according to the REW-NPLS (Recursive Exponentially Weighted N-way Partial Least Squares) algorithm is recalled hereafter.


If one notes {Xu, Yu} the calibration data blocks (or training data sets) during the iteration u, the basic idea of the REW-NPLS algorithm is to update the predictive model from the covariance and cross covariance tensors obtained at the preceding calibration step, respectively noted XXu−1 and XYu−1. These two tensors make it possible to obtain a condensed representation of the predictive model at the preceding step, without there being a need to store the history of preceding calibration data blocks: {X1, Y1}, {X2, Y2}, . . . {Xu−1, Yu−1}.


More specifically, the covariance and cross covariance tensors are updated by means of recurrence relationships:







$$\underline{XX}_u=\gamma\,\underline{XX}_{u-1}+\underline{X}_u\times_1\underline{X}_u\tag{1-1}$$
$$\underline{XY}_u=\gamma\,\underline{XY}_{u-1}+\underline{X}_u\times_1\underline{Y}_u\tag{1-2}$$


where ×1 designates the tensor product of mode 1 and γ is a forgetting coefficient with 0<γ<1.


One thus understands that a current calibration period, carried out during the iteration u, takes account of a preceding calibration period carried out during the iteration u−1 and of the tensors of the observation data of the current calibration period, the history being weighted by the forgetting coefficient γ, with 0&lt;γ&lt;1.


This operation of updating the covariance tensors is carried out in an iterative manner over the calibration periods.
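A minimal Python sketch of this recursive update according to (1-1) and (1-2), assuming numpy and flattening all modes other than the epoch mode, so that the mode-1 tensor product reduces to a matrix contraction over the epochs (all names are illustrative):

```python
import numpy as np

def update_covariances(XX_prev, XY_prev, X_u, Y_u, gamma=0.99):
    """Exponentially weighted update of XX_u and XY_u, eq. (1-1)/(1-2).

    X_u: input calibration tensor (n_epochs, ...); Y_u: output calibration
    tensor (n_epochs, ...); gamma: forgetting coefficient, 0 < gamma < 1.
    """
    Xf = X_u.reshape(X_u.shape[0], -1)   # flatten all modes but the epochs
    Yf = Y_u.reshape(Y_u.shape[0], -1)
    XX_u = gamma * XX_prev + Xf.T @ Xf   # contraction over the epoch mode
    XY_u = gamma * XY_prev + Xf.T @ Yf
    return XX_u, XY_u
```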


For each update, the predictive model is next obtained in an iterative manner by means of a PARAFAC decomposition (generalisation to the tensorial case of a decomposition into singular values) of the cross covariance tensor XYu. It will be recalled that, generally speaking, a PARAFAC decomposition of a tensor Z of order M, i.e. Z∈ℝ^(I1×I2× . . . ×IM), makes it possible to represent this tensor in the form of a linear combination of outer products of vectors (tensors of order 1), to within a residual tensor:










$$\underline{Z}=\sum_{r=1}^{R}\theta_r\, w_r^1\circ w_r^2\circ\dots\circ w_r^M+\underline{E}\tag{2}$$







with ∥wrm∥=1, m=1, . . . , M, where ∘ is the outer product, E is a residual tensor, r is the index of the PARAFAC iteration and R is a given integer. A PARAFAC decomposition of a tensor Z has been represented schematically in FIG. 1A.


In the case of a REW-NPLS calibration step, each PARAFAC iteration only relates to an elementary decomposition, that is to say R=1. It makes it possible to extract from the tensor XYu a set of projection vectors, also designated projectors, wf1, . . . , wfM, of respective sizes I1, I2, . . . , IM, where the iteration number f gives the dimension of the latent variables space onto which the input tensor Xu is projected.


In order to simplify the presentation and without prejudice of generalisation to a tensor of any order, we will assume hereafter that the order of the tensor XYu is equal to 3 (M=3), that is to say XYu∈ℝ^(I1×I2×I3) with ∥XYu∥=1. For example, the first mode corresponds to the epoch, the second to the frequency and the third to the space (that is to say to the electrodes).


At the iteration f of the PARAFAC decomposition, one searches for the projection vectors wf1, wf2, wf3 which verify:







$$\min\left\|\underline{XY}_u-\widehat{\underline{XY}}_u\right\|^2\quad\text{with}\quad\widehat{\underline{XY}}_u=\theta_f\, w_f^1\circ w_f^2\circ w_f^3\quad\text{and}\quad\left\|w_f^1\right\|=\left\|w_f^2\right\|=\left\|w_f^3\right\|=1\tag{3}$$






This minimisation problem may be resolved by an alternating least squares method in which one carries out sequentially a minimisation for each of the vectors of the decomposition, i.e.:










$$\min_{w_f^1}\left\|\underline{XY}_{u(1)}-w_f^1\left(w_f^3\otimes w_f^2\right)^T\right\|^2\tag{4-1}$$
$$\min_{w_f^2}\left\|\underline{XY}_{u(2)}-w_f^2\left(w_f^3\otimes w_f^1\right)^T\right\|^2\tag{4-2}$$
$$\min_{w_f^3}\left\|\underline{XY}_{u(3)}-w_f^3\left(w_f^2\otimes w_f^1\right)^T\right\|^2\tag{4-3}$$







where ⊗ is the Kronecker product and Z(n) is a matrix obtained by deployment of the tensor Z while taking the mode n as reference mode (tensor unfolding or flattening along mode n). For example, the matrix Z(1) is defined by (z1^1| . . . |z1^I1)∈ℝ^(I1×I2I3), where the z1^j are the columns of the matrix Z(1).


The minimisation operations (4-1), (4-2) and (4-3) are repeated up to convergence. The minimisation solutions in the least squares (LS) sense for each of these operations are given by the projectors:










$$w_f^1=\frac{\underline{XY}_{u(1)}\left(w_f^3\otimes w_f^2\right)}{\left\|w_f^3\otimes w_f^2\right\|^2}\tag{5-1}$$
$$w_f^2=\frac{\underline{XY}_{u(2)}\left(w_f^3\otimes w_f^1\right)}{\left\|w_f^3\otimes w_f^1\right\|^2}\tag{5-2}$$
$$w_f^3=\frac{\underline{XY}_{u(3)}\left(w_f^2\otimes w_f^1\right)}{\left\|w_f^2\otimes w_f^1\right\|^2}\tag{5-3}$$







Each iteration f of the PARAFAC decomposition provides a plurality M of projectors wf1, . . . , wfM and a predictive model of the direct neural interface, defined by a prediction coefficients tensor Buf∈ℝ^((I1× . . . ×IN)×(J1× . . . ×JM)) and a bias tensor βuf∈ℝ^(J1× . . . ×JM), making it possible to predict the command tensor from the observation tensor. One thus obtains a plurality of models, Buf, f=1, . . . , F, corresponding to different successive dimensions of the latent variables space.


One next determines, at the following calibration step u+1, among the models Buf, f=1, . . . , F, the predictive model Buf* corresponding to the smallest prediction error, f* denoting the index f achieving this smallest error. This determination of the predictive model at the following calibration step is designated in the literature under the term recursive validation.
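A hedged sketch of this recursive validation, assuming numpy and an assumed helper predict that applies a model tensor and its bias to a new input block (both names are illustrative):

```python
import numpy as np

def recursive_validation(models, X_new, Y_new, predict):
    """models: list of (B_f, beta_f) pairs, f = 1..F, kept from step u;
    returns the model with the smallest prediction error on the new block."""
    errors = [np.linalg.norm(Y_new - predict(B, beta, X_new))
              for (B, beta) in models]
    f_star = int(np.argmin(errors))       # 0-based index of f*
    return models[f_star], f_star + 1     # best model and its dimension f*
```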



FIG. 1C schematically represents a decomposition using an alternating least squares (ALS) method.


The decomposition by alternating least squares has been illustrated for a REW-NPLS calibration step and a PARAFAC iteration f.


At step 110, one estimates the vector wf1 (corresponding to the epoch mode) from the matrix XYu(1) and the current vectors wf2 and wf3. This operation may be considered as a projection of the column vectors of the matrix XYu(1) onto the tensorial space generated by the modes 2 and 3.


At step 120, one estimates the vector wf2 (corresponding to the frequency mode) from the matrix XYu(2) and the current vectors wf1 and wf3. This operation may be considered as a projection of the column vectors of the matrix XYu(2) onto the tensorial space generated by the modes 1 and 3.


At step 130, one estimates the vector wf3 (corresponding to the spatial mode) from the matrix XYu(3) and the current vectors wf1 and wf2. This operation may be considered as a projection of the column vectors of the matrix XYu(3) onto the tensorial space generated by the modes 1 and 2.


Steps 110, 120, 130 are iterated until a convergence criterion is met, for example when the sum of the deviations between the vectors wfj of two successive ALS iterations is less than a predetermined threshold value.
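These three steps may be transcribed as follows: a minimal numpy sketch of one unpenalised ALS sweep according to the expressions (5-1) to (5-3). The deployment convention is chosen here so that it matches np.kron; it is one possible choice among several, not the only valid one:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n deployment; Fortran-order reshape so columns match np.kron."""
    return np.reshape(np.moveaxis(T, mode, 0), (T.shape[mode], -1), order="F")

def als_sweep(XY, w1, w2, w3):
    """One pass of steps 110-130 on a third-order tensor XY (I1, I2, I3)."""
    def ls_update(Z, a):                    # least-squares solution, eq. (5-i)
        w = Z @ a / (a @ a)
        return w / np.linalg.norm(w)        # re-normalise the projector
    w1 = ls_update(unfold(XY, 0), np.kron(w3, w2))   # epoch mode
    w2 = ls_update(unfold(XY, 1), np.kron(w3, w1))   # frequency mode
    w3 = ls_update(unfold(XY, 2), np.kron(w2, w1))   # spatial mode
    return w1, w2, w3
```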


The basic idea of the present invention is to carry out a penalisation by section at each iteration of the PARAFAC algorithm, a penalisation by section resulting in the elimination, in the tensor of the prediction model, of the elements relative to an index value. The elements not subjected to a penalisation during an iteration f are no longer penalisable for the following iterations, f+1, f+2, . . . , F. On the other hand, the elements penalised during an iteration are again taken into account during the PARAFAC decomposition of the following iteration. One thus promotes a blockwise sparsity of non-zero elements in the tensor of the prediction model, which considerably simplifies the calculation of the command.


More specifically, during an iteration f of the PARAFAC decomposition, the minimisation of the quadratic deviations according to (4-1), (4-2) and (4-3) in the alternating least squares method is replaced by a minimisation taking into account a penalisation term favouring this sparsity, namely:










$$\min_{\tilde{w}_f^1}\left(\left\|\underline{XY}_{u(1)}-\tilde{w}_f^1\left(w_f^3\otimes w_f^2\right)^T\right\|^2+\lambda_1\left\|\tilde{w}_f^1\right\|_{q,\Omega_{1,f}}\right)\tag{6-1}$$
$$\min_{\tilde{w}_f^2}\left(\left\|\underline{XY}_{u(2)}-\tilde{w}_f^2\left(w_f^3\otimes w_f^1\right)^T\right\|^2+\lambda_2\left\|\tilde{w}_f^2\right\|_{q,\Omega_{2,f}}\right)\tag{6-2}$$
$$\min_{\tilde{w}_f^3}\left(\left\|\underline{XY}_{u(3)}-\tilde{w}_f^3\left(w_f^2\otimes w_f^1\right)^T\right\|^2+\lambda_3\left\|\tilde{w}_f^3\right\|_{q,\Omega_{3,f}}\right)\tag{6-3}$$







where $\tilde{w}_f^1$, $\tilde{w}_f^2$ and $\tilde{w}_f^3$ are non-normalised projectors and
$$w_f^1=\frac{\tilde{w}_f^1}{\left\|\tilde{w}_f^1\right\|},\qquad w_f^2=\frac{\tilde{w}_f^2}{\left\|\tilde{w}_f^2\right\|},\qquad w_f^3=\frac{\tilde{w}_f^3}{\left\|\tilde{w}_f^3\right\|}$$
are the corresponding normalised projectors, and $\lambda_1$, $\lambda_2$, $\lambda_3$ are coefficients that are strictly positive and less than 1. It will be noted that, due to the presence of penalisation terms in the expressions (6-1), (6-2) and (6-3), the minimisation can no longer be carried out on normalised projection vectors as in the classic ALS method. The norms $\left\|w_f^i\right\|_{q,\Omega_{i,f}}$, $q=0,\tfrac{1}{2},1$, are defined by:













$$\left\|w_f^i\right\|_{0,\Omega_{i,f}}=\sum_{k\in\Omega_{i,f}}\left(1-\delta_{0,w_{f,k}^i}\right)\tag{7-1}$$
$$\left\|w_f^i\right\|_{1/2,\Omega_{i,f}}=\sum_{k\in\Omega_{i,f}}\sqrt{\left|w_{f,k}^i\right|}\tag{7-2}$$
$$\left\|w_f^i\right\|_{1,\Omega_{i,f}}=\sum_{k\in\Omega_{i,f}}\left|w_{f,k}^i\right|\tag{7-3}$$







where wf,ki is the kth element of the projection vector wfi, δ is the Kronecker symbol, and Ωi,f is the set of indices corresponding to the penalisable elements, it being understood that Ωi,f+1⊂Ωi,f since the non-penalised elements of a projector at an iteration f are not penalisable at the iteration f+1.


It will notably be understood that the norm of the expression (7-1) gives the number of non-zero elements of which the indices belong to Ωi,f. Thus, in this case, the penalisation will be all the greater as the number of non-zero elements of which the indices belong to Ωi,f is higher.


In order to simplify the minimisation calculation in the expressions (6-1), (6-2), (6-3), one could advantageously operate element by element, namely:











$$\min_{w_{f,k}^1}\left(\left\|z_1^k-w_{f,k}^1\left(w_f^3\otimes w_f^2\right)^T\right\|^2+\lambda_1\,g_q\!\left(w_{f,k}^1\right)\right),\quad k=1,\dots,I_1\tag{8-1}$$
$$\min_{w_{f,k}^2}\left(\left\|z_2^k-w_{f,k}^2\left(w_f^3\otimes w_f^1\right)^T\right\|^2+\lambda_2\,g_q\!\left(w_{f,k}^2\right)\right),\quad k=1,\dots,I_2\tag{8-2}$$
$$\min_{w_{f,k}^3}\left(\left\|z_3^k-w_{f,k}^3\left(w_f^2\otimes w_f^1\right)^T\right\|^2+\lambda_3\,g_q\!\left(w_{f,k}^3\right)\right),\quad k=1,\dots,I_3\tag{8-3}$$







where the vectors z1k, z2k, z3k are the kth column vectors of the matrices XYu(1), XYu(2), XYu(3) obtained by deployment of the cross covariance tensor XYu according to modes 1, 2 and 3, respectively.


The functions gq, q=0, ½, 1 apply to scalar values and are defined by:






$$g_0\!\left(w_{f,k}^i\right)=1-\delta_{0,w_{f,k}^i}\ \text{ if }k\in\Omega_{i,f}\ \text{ and }\ g_0\!\left(w_{f,k}^i\right)=0\ \text{ otherwise}\tag{9-1}$$
$$g_{1/2}\!\left(w_{f,k}^i\right)=\sqrt{\left|w_{f,k}^i\right|}\ \text{ if }k\in\Omega_{i,f}\ \text{ and }\ g_{1/2}\!\left(w_{f,k}^i\right)=0\ \text{ otherwise}\tag{9-2}$$
$$g_1\!\left(w_{f,k}^i\right)=\left|w_{f,k}^i\right|\ \text{ if }k\in\Omega_{i,f}\ \text{ and }\ g_1\!\left(w_{f,k}^i\right)=0\ \text{ otherwise}\tag{9-3}$$
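These penalty functions transcribe directly into code. A minimal sketch assuming numpy; the boolean flag k_in_omega (an illustrative name) indicates whether the index k belongs to Ωi,f:

```python
import numpy as np

def g(q, w, k_in_omega):
    """Penalty g_q applied to a scalar projector element w, eq. (9-1)-(9-3)."""
    if not k_in_omega:
        return 0.0                        # non-penalisable index: no penalty
    if q == 0:
        return 0.0 if w == 0 else 1.0     # counts non-zero elements
    if q == 0.5:
        return np.sqrt(abs(w))
    if q == 1:
        return abs(w)
    raise ValueError("q must be 0, 0.5 or 1")
```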


It may be shown that the result of the penalised minimisation operation according to (8-1), (8-2), (8-3) for the norm L0 (that is to say q=0) gives:





$$\left(w_{f,k}^i\right)_{L_0}=0\ \text{ if }k\in\Omega_{i,f}\text{ and }\left|\left(w_{f,k}^i\right)_{LS}\right|\le Th_{L_0}^i\tag{10-1}$$
$$\left(w_{f,k}^i\right)_{L_0}=\left(w_{f,k}^i\right)_{LS}\ \text{ otherwise}\tag{10-2}$$


where (wf,ki)LS represents the minimisation solution in the least squares sense (5-1), (5-2), (5-3), that is to say:











$$\left(w_{f,k}^1\right)_{LS}=\frac{z_1^k\left(w_f^3\otimes w_f^2\right)}{\left\|w_f^3\otimes w_f^2\right\|^2}\tag{11-1}$$
$$\left(w_{f,k}^2\right)_{LS}=\frac{z_2^k\left(w_f^3\otimes w_f^1\right)}{\left\|w_f^3\otimes w_f^1\right\|^2}\tag{11-2}$$
$$\left(w_{f,k}^3\right)_{LS}=\frac{z_3^k\left(w_f^2\otimes w_f^1\right)}{\left\|w_f^2\otimes w_f^1\right\|^2}\tag{11-3}$$







and the thresholds ThL0i, i=1, 2, 3 are given by:










$$Th_{L_0}^1=\frac{\lambda_1}{\left\|w_f^3\otimes w_f^2\right\|}\tag{12-1}$$
$$Th_{L_0}^2=\frac{\lambda_2}{\left\|w_f^3\otimes w_f^1\right\|}\tag{12-2}$$
$$Th_{L_0}^3=\frac{\lambda_3}{\left\|w_f^2\otimes w_f^1\right\|}\tag{12-3}$$







The penalised minimisation operation consequently comes down to carrying out a thresholding on the elements of the projectors obtained at each step of the ALS method.
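A minimal sketch of this thresholding for the norm L0 and the mode 1, assuming numpy. Z1 stands for the mode-1 deployment of the cross covariance tensor and omega for a boolean mask of the penalisable indices; both names are illustrative:

```python
import numpy as np

def hard_threshold_mode1(Z1, w2, w3, lam, omega):
    """Penalised projector elements for mode 1, eqs. (10-1)/(10-2), (12-1).

    Z1: mode-1 deployment of XY_u (rows indexed by k); w2, w3: current
    normalised projectors; omega: boolean mask of penalisable indices."""
    a = np.kron(w3, w2)                    # Kronecker product w_f^3 (x) w_f^2
    w_ls = Z1 @ a / (a @ a)                # least-squares elements, eq. (11-1)
    th = lam / np.linalg.norm(a)           # threshold Th_L0^1, eq. (12-1)
    w = w_ls.copy()
    w[omega & (np.abs(w_ls) <= th)] = 0.0  # zero only the penalisable indices
    return w
```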


Similarly, it may be shown that the penalised minimisation operation according to (8-1), (8-2), (8-3) for the norm L1/2 (that is to say q=½) has for solution:





$$\left(w_{f,k}^i\right)_{L_{0,5}}=0\ \text{ if }k\in\Omega_{i,f}\text{ and }\left|\left(w_{f,k}^i\right)_{LS}\right|\le Th_{L_{0,5}}^i\tag{13-1}$$
$$\left(w_{f,k}^i\right)_{L_{0,5}}=\arg\min\left(h_{0,5}(0),\,h_{0,5}\!\left(B_i\cdot\left(w_{f,k}^i\right)_{LS}\right)\right)\ \text{ if }k\in\Omega_{i,f}\text{ and }\left|\left(w_{f,k}^i\right)_{LS}\right|>Th_{L_{0,5}}^i\tag{13-2}$$
$$\left(w_{f,k}^i\right)_{L_{0,5}}=\left(w_{f,k}^i\right)_{LS}\ \text{ otherwise}\tag{13-3}$$

where $h_{0,5}\!\left(w_{f,k}^1\right)=\left\|z_1^k-w_{f,k}^1\left(w_f^3\otimes w_f^2\right)^T\right\|^2+\lambda_1\sqrt{\left|w_{f,k}^1\right|}$ (with the analogous Kronecker products for the modes 2 and 3) and the thresholds $Th_{L_{0,5}}^i$, $i=1,2,3$ are given by:










$$Th_{L_{0,5}}^1=\frac{3}{4}\left(\frac{\lambda_1}{\left\|w_f^3\otimes w_f^2\right\|^2}\right)^{2/3}\tag{14-1}$$
$$Th_{L_{0,5}}^2=\frac{3}{4}\left(\frac{\lambda_2}{\left\|w_f^3\otimes w_f^1\right\|^2}\right)^{2/3}\tag{14-2}$$
$$Th_{L_{0,5}}^3=\frac{3}{4}\left(\frac{\lambda_3}{\left\|w_f^2\otimes w_f^1\right\|^2}\right)^{2/3}\tag{14-3}$$







The coefficients Bi, i=1, 2, 3 are defined as the respective solutions of the cubic equations $x(1-x)^2=C_i$, i=1, 2, 3, with:










$$C_1=\frac{\lambda_1^2}{16\left\|w_f^3\otimes w_f^2\right\|^4\left(\left(w_{f,k}^1\right)_{LS}\right)^3}\tag{15-1}$$
$$C_2=\frac{\lambda_2^2}{16\left\|w_f^3\otimes w_f^1\right\|^4\left(\left(w_{f,k}^2\right)_{LS}\right)^3}\tag{15-2}$$
$$C_3=\frac{\lambda_3^2}{16\left\|w_f^2\otimes w_f^1\right\|^4\left(\left(w_{f,k}^3\right)_{LS}\right)^3}\tag{15-3}$$







Finally, similarly, it may be shown that the penalised minimisation operation according to (8-1), (8-2), (8-3) for the norm L1 (that is to say q=1) has for solution:





$$\left(w_{f,k}^i\right)_{L_1}=0\ \text{ if }k\in\Omega_{i,f}\text{ and }\left|\left(w_{f,k}^i\right)_{LS}\right|\le Th_{L_1}^i\tag{16-1}$$
$$\left(w_{f,k}^i\right)_{L_1}=\mathrm{sgn}\!\left(\left(w_{f,k}^i\right)_{LS}\right)\left(\left|\left(w_{f,k}^i\right)_{LS}\right|-Th_{L_1}^i\right)\ \text{ if }k\in\Omega_{i,f}\text{ and }\left|\left(w_{f,k}^i\right)_{LS}\right|>Th_{L_1}^i\tag{16-2}$$
$$\left(w_{f,k}^i\right)_{L_1}=\left(w_{f,k}^i\right)_{LS}\ \text{ otherwise}\tag{16-3}$$


where the thresholds ThL1i, i=1, 2, 3 are given by:










$$Th_{L_1}^1=\frac{\lambda_1}{\left\|w_f^3\otimes w_f^2\right\|}\tag{17-1}$$
$$Th_{L_1}^2=\frac{\lambda_2}{\left\|w_f^3\otimes w_f^1\right\|}\tag{17-2}$$
$$Th_{L_1}^3=\frac{\lambda_3}{\left\|w_f^2\otimes w_f^1\right\|}\tag{17-3}$$







It will be understood that, whatever the norm used, the penalised minimisation is advantageously reduced to a simple thresholding of the elements of a projector at each step of the ALS method. This penalised minimisation makes it possible to promote blockwise sparsity of non-zero coefficients in the predictive model and consequently to simplify the calculation. It is important to note that this may be carried out on-line with the aim of carrying out an incremental calibration of the predictive model.
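For the norm L1, the thresholding of the expressions (16-1) to (16-3) takes the familiar soft-thresholding form; a minimal numpy sketch under the same mask convention as above:

```python
import numpy as np

def soft_threshold(w_ls, th, omega):
    """w_ls: least-squares projector elements; th: threshold Th_L1^i;
    omega: boolean mask of the penalisable indices."""
    w = w_ls.copy()
    small = omega & (np.abs(w_ls) <= th)      # (16-1): zeroed
    large = omega & (np.abs(w_ls) > th)       # (16-2): shrunk towards zero
    w[small] = 0.0
    w[large] = np.sign(w_ls[large]) * (np.abs(w_ls[large]) - th)
    return w                                  # (16-3): other elements unchanged
```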



FIG. 2 schematically represents the flow chart of a recursive method for calibrating a direct neural interface with penalised multivariate regression, according to an embodiment of the present invention.


At step 200, one takes into account new input and output calibration data, respectively in the form of an input tensor Xu and an output tensor Yu. The input calibration data result from the measurement of electrophysiological signals. The output calibration data result for example from the measurement of kinematic parameters of the trajectory or from command signals of effectors. The input and output tensors may, if appropriate, be subjected to a pre-processing aiming to centre and normalise them, in a manner known per se.


At step 210, one determines among the set of predictive models Bu−1f, βu−1f, f=1, . . . , F obtained at the preceding step, the predictive model {Bu−1f*, βu−1f*} corresponding to the lowest prediction error. This model is used to calculate the command tensor (output tensor) from the observation tensor (input tensor).


At step 220, one updates the covariance XXu and cross covariance, XYu, tensors according to the relationships (1-1) and (1-2), while taking account of the forgetting factor.


One next enters into a first NPLS iterative loop, in which one iterates on the dimension f of the latent variables space, with f=1, . . . , F where F is a maximum dimension. This iterative loop makes it possible to generate successively a plurality of predictive models of the direct neural interface, each predictive model being defined by a prediction coefficients tensor Buf∈ℝ^((I1× . . . ×IM)×(J1× . . . ×JM)) and a bias tensor βuf∈ℝ^(J1× . . . ×JM).


At step 230, one initialises f with f=1 and the sets Ωi,1, i=1, . . . , M with Ωi,1={1, 2, . . . , Ii} for i=1, . . . , M.


For each value of f, one carries out a PARAFAC decomposition of the cross covariance tensor using an alternating least squares method. Unlike a conventional PARAFAC decomposition (cf. expressions (4-1), (4-2), (4-3)) used in the REW-NPLS method, a PARAFAC decomposition regularised by means of a penalised metric is used here, the penalisation term aiming to promote blockwise sparsity of non-zero coefficients within the tensor representative of the predictive model.


More specifically, in 240, for each value of f, one searches for the M-tuple of projectors wfi, i=1, . . . , M of respective dimensions I1, I2, . . . , IM making it possible to minimise said penalised metric, as expressed in (6-1), (6-2), (6-3) in the case M=3. This minimisation takes place sequentially by ALS, projector by projector in 2401, . . . , 240M, the other projectors being considered as constant.


Advantageously, at each of these steps, the minimisation relates to the elements of the projector in question, as described in relation with the expressions (8-1) to (8-3) and (9-1) to (9-3).


More specifically, at each step, the elements of the projector making it possible to minimise the penalised metric are obtained by means of a thresholding of the elements of the projector minimising the non-penalised metric, as described in relation with the expressions (10-1), (10-2); (13-1) to (13-3); (16-1) to (16-3).


It is important to note that at each iteration f, only the penalisable elements are taken into account for the minimisation of the metric. To this end, one eliminates from the set of penalisable indices Ωi,f, the indices k∈Ωi,f for which the elements wf,ki have not been penalised at the iteration f, to generate the set of penalisable indices relative to the following iteration, Ωi,f+1.
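A minimal sketch of this update of the set of penalisable indices, assuming the boolean-mask convention used in the sketches above:

```python
import numpy as np

def next_omega(omega_f, w_thresholded):
    """omega_f: boolean mask of penalisable indices at iteration f;
    w_thresholded: projector after the penalised minimisation.

    Indices whose elements survived (were not zeroed) at iteration f
    leave the set; only the zeroed indices remain penalisable at f+1."""
    return omega_f & (w_thresholded == 0)
```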


The tensor of the predictive model, Buf, is obtained from the projectors wφi, i=1, . . . , M, φ=1, . . . , f, as in the conventional REW-NPLS regression method. Details of the calculation of Buf from the projectors in question may be found in appendix A of the article of A. Eliseyev et al. published in 2017, cited in the introduction. The elements of the predictive model Buf corresponding to the indices for which the elements wf,ki have been penalised are zeroed in 250.


After each iteration f, one eliminates in 260, from XXu and XYu the redundant information already taken into account in the regression on the latent variables space, at the preceding iterations 1, . . . , f. The cross-covariance tensor XYu in which this redundancy has been eliminated is used for the new PARAFAC decomposition iteration. Step 260 is known by the name deflation in the NPLS regression.


At the end of the F iterations, in 270, one consequently has available F predictive models, Buf, βuf, f=1, . . . , F.


After the acquisition of new input calibration data, Xu+1, and output calibration data, Yu+1, blocks at the following calibration step, 280, one determines in 290 the optimal predictive model, {Buf*, βuf*}. This predictive model is used to calculate the command tensor from the observation tensor, up to the following calibration step, u+2.



FIG. 3 schematically represents the mechanism for updating the predictive model in the calibration method of FIG. 2.


In 310 is represented the taking into account of a new input calibration data block, represented by the tensor Xu, and of a new output calibration data block, represented by the tensor Yu.


The covariance XXu and cross covariance, XYu, tensors are deduced therefrom in 320.


For each value of f, a PARAFAC decomposition of the cross-covariance tensor is carried out using an alternating least squares (ALS) method in 330, each step of the PARAFAC decomposition relating to a projector, this decomposition being regularised by means of a penalisation term relating to this projector. The regularisation results in a thresholding of certain elements of the projector obtained by the conventional least squares method, that is to say non-penalised.


At each PARAFAC iteration represented by the loop 335, one deduces therefrom a predictive model {Buf, βuf} in 340.


For each predictive model, one eliminates in 350 the redundant information in the cross-covariance tensor (deflation) and one carries out a new PARAFAC iteration on the covariance tensor XXu and the new cross covariance tensor XYu (noted XYn+1 in the figure) deflated of said redundancy, 360.


The recursive validation loop is represented in 380. It makes it possible to determine among the predictive models {Bu−1f, βu−1f}, f=1, . . . , F, obtained at the preceding calibration step, that which minimises the prediction error on the new input, Xu+1, and output, Yu+1 calibration data blocks.

Claims
  • 1. Method for calibrating on-line a direct neural interface intended to receive a plurality of electrophysiological signals during a plurality of observation windows associated with observation times, to form an input tensor and to deduce therefrom, by means of a predictive model, an output tensor providing command signals intended to command one or more effectors, said calibration being carried out during a plurality of calibration steps, calibration step u being carried out from input calibration data, represented by an input calibration tensor, Xu∈ℝ^(I1× . . . ×IN), and output calibration data represented by an output calibration tensor Yu∈ℝ^(J1× . . . ×JM), said calibration step implementing a REW-NPLS regression comprising: the calculation (220) of a covariance tensor XXu from a covariance tensor XXu−1 obtained during a preceding calibration step and the tensor Xu, as well as the calculation of a cross covariance tensor XYu from a cross covariance tensor XYu−1 obtained during said preceding calibration step and the tensors Xu and Yu; a PARAFAC iterative decomposition of the cross covariance tensor XYu, each PARAFAC iteration being associated with a dimension f of the latent variables space on which said regression is carried out, each iteration providing a plurality of projectors wfi, i=1, . . . , M of respective sizes I1, I2, . . . , IM and a predictive model defined by a prediction coefficients tensor Buf∈ℝ^((I1× . . . ×IM)×(J1× . . . ×JM)) and a bias tensor βuf∈ℝ^(J1× . . . ×JM); characterised in that each PARAFAC iteration comprises a sequence of M elementary steps (2401, 2402, . . . , 240M) of minimisation of a metric according to the alternating least squares method, each elementary minimisation step relating to a projector and considering the others as constant, said metric comprising a penalisation term that is a function of the norm of this projector, the elements of this projector not subjected to a penalisation during a PARAFAC iteration f no longer being penalisable during the following PARAFAC iterations.
  • 2. Method for calibrating on-line a direct neural interface according to claim 1, characterised in that an elementary minimisation step relating to a projector wfi aims to minimise the penalised metric
  • 3. Method for calibrating on-line a direct neural interface according to claim 1, characterised in that an elementary minimisation step relating to a projector wfi aims to minimise the penalised metrics
  • 4. Method for calibrating on-line a direct neural interface according to claim 3, characterised in that q=0 and that the elements wf,ki resulting from the minimisation are given by: (wf,ki)L0=0 if k∈Ωi,f and |(wf,ki)LS|≤ThL0i (wf,ki)L0=(wf,ki)LS if not
  • 5. Method for calibrating on-line a direct neural interface according to claim 3, characterised in that q=½ and that the elements wf,ki resulting from the minimisation are given by: (wf,ki)L0,5=0 if k∈Ωi,f and |(wf,ki)LS|≤ThL0,5i (wf,ki)L0,5=arg min(h0,5(0),h0,5(Bi·(wf,ki)LS)) if k∈Ωi,f and |(wf,ki)LS|>ThL0,5i (wf,ki)L0,5=(wf,ki)LS if not
  • 6. Method for calibrating on-line a direct neural interface according to claim 3, characterised in that q=1 and that the elements wf,ki resulting from the minimisation are given by: (wf,ki)L1=0 if k∈Ωi,f and |(wf,ki)LS|≤ThL1i (wf,ki)L1=sgn((wf,ki)LS)(|(wf,ki)LS|−ThL1i) if k∈Ωi,f and |(wf,ki)LS|>ThL1i (wf,ki)L1=(wf,ki)LS if not
  • 7. Method for calibrating on-line a direct neural interface according to claim 1, characterised in that the electrophysiological signals are ECoG signals.
Priority Claims (1)
Number Date Country Kind
20 10497 Oct 2020 FR national