The present invention is related to a method for determining artificial limb movement from electroencephalographic (EEG) signals.
Current prostheses dedicated to disabled people or amputees generally use electromyographic (EMG) signals arising from the skin surface of the stump and do not integrate the latest advances in the fields of neurophysiology, microelectronics and signal processing.
Such prostheses are described by G. Cheron et al. in "A dynamic recurrent neural network for multiple muscles electromyographic mapping to elevation angles of the lower limb in human locomotion", Journal of Neuroscience Methods, 129(2):95-104, 2003. In that original context, the authors used the DRNN for simulating lower limb coordination in human locomotion. They demonstrated that the DRNN was able to establish a mapping between the electromyographic (EMG) signals from six muscles and the elevation angles of the three main lower limb segments (thigh, shank and foot).
The use of such EMG signals is unfortunately not always possible, for example in the case of disabled patients suffering from spinal cord or motor nerve diseases.
The present invention aims to provide a method for determining an artificial limb movement not based on EMG signals.
A first aspect of the present invention is related to a method to determine an artificial limb movement comprising the steps of:
According to particular preferred embodiments, the method of the invention further discloses at least one or a suitable combination of the following features:
A second aspect of the invention is related to a prosthetic limb system comprising:
Preferably, the artificial neural network of the prosthetic limb system of the invention is a dynamic recurrent neural network.
Advantageously, the prosthetic limb corresponds to a lower limb prosthesis.
The present invention is also related to a computer readable medium having computer readable code embodied therein, said computer readable code, when executed on a computer, implementing the method of the invention.
The present invention is related to a method for determining an artificial limb movement from electroencephalographic (EEG) measurements. The determined limb movement may then be used, for example, to drive a prosthetic limb. This determined movement may also be used for other applications, such as driving an avatar in a virtual reality simulation, or the like.
The method of the invention may for example advantageously be used for driving a lower limb prosthesis.
Preferably, the EEG signal is pre-processed before being used for determining the artificial limb movement.
Advantageously, the pre-processing comprises an artefact removal step, a filtering step and a relevant information extraction step based on Independent Component Analysis (ICA).
The artefact removal is preferably a blind source separation for filtering EMG and EOG artefacts. Then, a high pass filter (0.1 Hz) is preferably applied and relevant information is then advantageously obtained by using ICA.
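A minimal sketch of such a pre-processing chain (high-pass filtering followed by ICA) could look as follows, assuming scipy and scikit-learn are available; the BSS artefact-removal step is omitted for brevity, and all function names, channel counts and component counts are illustrative rather than taken from the invention.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

def preprocess_eeg(eeg, fs, cutoff_hz=0.1, n_components=10):
    """High-pass filter raw EEG channels, then extract ICA component
    activations. `eeg` is an (n_samples, n_channels) array. The BSS
    artefact-removal step is omitted; all parameter values are illustrative."""
    # 0.1 Hz high-pass (2nd-order Butterworth), applied forward-backward
    b, a = butter(2, cutoff_hz / (fs / 2.0), btype="highpass")
    filtered = filtfilt(b, a, eeg, axis=0)
    # ICA to unmix the filtered channels into component activations
    ica = FastICA(n_components=n_components, random_state=0, max_iter=500)
    return ica.fit_transform(filtered)  # (n_samples, n_components)

# Example: 10 s of 64-channel EEG-like data sampled at 250 Hz
rng = np.random.default_rng(0)
eeg = rng.standard_normal((2500, 64))
components = preprocess_eeg(eeg, fs=250)
print(components.shape)  # (2500, 10)
```

In practice the relevant components would then be selected by weight and scalp location, as described above.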
Relevant ICA decompositions are chosen on the basis of both their weight and their location on the scalp. For example, for walking applications, the central motor area is of particular importance, as shown by the homunculus in
Moreover, the use of high pass filtering on the ICA component activations, simply called ICA components in the following sections, has proven beneficial to the final results.
The EEG signals, advantageously pre-processed to extract the chosen ICA components, are fed to a dynamic recurrent neural network (DRNN). The targets of the DRNN outputs may advantageously be the principal components of the different joint angles involved in the movement to be determined. They could also directly be the angular accelerations or speeds of the target movement; the use of principal component analysis (PCA), however, reduces the number of variables.
In a first step, a learning dataset is provided to determine the DRNN parameters, i.e. the synaptic weights and preferably also the time constants and biases. This learning dataset comprises an input EEG signal, preferably pre-processed (ICA components), and the corresponding target movement of the artificial limb.
The DRNN used in the invention preferably uses a neural network model governed by the following equation:

Ti dyi/dt=−yi+F(xi)+Ii (1)

where F(α) is the squashing function F(α)=1/(1+e−α), yi is the state or activation level of unit i, Ii is an external input (or bias), and xi is given by:
xi=Σj wij yj (2)
which is the propagation equation of the network (xi is called the total or effective input of the neuron, and wij is the synaptic weight between units i and j). The time constants Ti act as a relaxation process. The correction of the time constants is included in the learning process in order to increase the dynamical features of the method.
The synaptic weights wij, time constants Ti and biases Ii are the free parameters of the DRNN.
Introduction of Ti allows more complex frequential behaviour, improves the non-linearity effect of the sigmoid function and the memory effect of time delays.
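As an illustration, this network dynamics (leaky units with time constants Ti, sigmoid squashing F and total input xi=Σj wij yj) can be integrated with a simple Euler scheme. This is a sketch assuming the standard continuous-time recurrent form Ti dyi/dt=−yi+F(xi)+Ii; the step size, network size and parameter values are illustrative.

```python
import numpy as np

def sigmoid(a):
    # Squashing function F(a) = 1 / (1 + exp(-a))
    return 1.0 / (1.0 + np.exp(-a))

def drnn_step(y, W, T, I, dt=0.01):
    """One Euler integration step of a fully connected dynamic recurrent
    network: T_i * dy_i/dt = -y_i + F(x_i) + I_i, with x_i = sum_j w_ij y_j.
    A sketch of the assumed continuous-time form; values are illustrative."""
    x = W @ y                       # total (effective) input of each neuron
    dy = (-y + sigmoid(x) + I) / T  # relaxation governed by time constants
    return y + dt * dy

n = 5
rng = np.random.default_rng(1)
W = rng.standard_normal((n, n)) * 0.5  # synaptic weights (incl. self-connections)
T = np.full(n, 0.1)                    # time constants
I = np.zeros(n)                        # external inputs / biases
y = np.zeros(n)                        # initial activations
for _ in range(100):
    y = drnn_step(y, W, T, I)
print(y.shape)  # (5,)
```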
The network consists of n fully connected neurons. Therefore, each neuron in an n-neuron network has n connections (including a self-connection). In order to make the temporal behaviour of the network explicit, an error function is defined as:
E=∫t0t1 q(y(t),t)dt (3)
where t0 and t1 give the time interval during which the correction process occurs. The function q(y(t),t) is the cost function at time t, which depends on the vector of neuron activations y and on time. New variables pi (called adjoint variables) are then introduced, determined by the following system of differential equations:
with boundary conditions pi(t1)=0.
After the introduction of these new variables, the learning equations can be determined:
Because the system (4) is integrated backward through time, this algorithm is sometimes called 'backpropagation through time'.
More details on this preferred DRNN are given by Cheron et al. in "A dynamic recurrent neural network for multiple muscles electromyographic mapping to elevation angles of the lower limb in human locomotion", Journal of Neuroscience Methods, 129(2):95-104, 2003. The DRNN described in that document will be referred to hereafter as the original DRNN. The learning phase of this original DRNN is preferably modified as described hereafter; the modified DRNN will be referred to hereafter as the new DRNN.
In a preferred method of the invention, the synaptic weights are then adapted using a separate learning rate εi,j for each connection (i.e. all the synaptic weights have their own adaptive learning rate).
In order to have a converging learning procedure within a realistic timeframe, and with a limited learning dataset, a convergence acceleration algorithm is used during the learning phase.
Preferably, in the convergence acceleration algorithm, the adaptation of these learning rates is done by observing the sign of the gradient of the error function E over the last two iterations. As long as no change of sign is detected, the corresponding learning rate is increased by a factor u, u being a number greater than 1. If the sign changes, the learning rate is decreased by a factor d, d being a number comprised between 0 and 1. More formally, the algorithm can be written:
If sign(∂E/∂wi,j(n))=sign(∂E/∂wi,j(n−1))
then εi,j(n)=εi,j(n−1)·u (8)
else εi,j(n)=εi,j(n−1)·d (9)
The connections wi,j are then computed using the increment:
Preferably, the same procedure is applied at each iteration to the time constants Ti and the biases Ii, with additional learning rates, corresponding to each time constant Ti and bias Ii.
Preferably, u is comprised between 1.1 and 1.5, more preferably about 1.3. Preferably, d is comprised between 0.5 and 0.9, more preferably about 0.7, the selected u and d giving the best convergence results.
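A minimal sketch of this sign-based adaptation, using the preferred values u=1.3 and d=0.7, could look as follows (function and variable names are illustrative, not from the invention):

```python
import numpy as np

def update_learning_rates(eps, grad, prev_grad, u=1.3, d=0.7):
    """Per-connection adaptive learning rates: increase eps_ij by factor u
    while the gradient of E keeps its sign between iterations, decrease it
    by factor d when the sign flips. u and d follow the preferred values
    in the text; everything else here is illustrative."""
    same_sign = np.sign(grad) == np.sign(prev_grad)
    return np.where(same_sign, eps * u, eps * d)

eps = np.full((3, 3), 0.01)            # one learning rate per synaptic weight
prev_grad = np.array([[ 1.0, -1.0,  2.0],
                      [ 0.5, -0.5,  1.0],
                      [-2.0,  1.0, -1.0]])
grad = np.array([[ 2.0, -2.0, -1.0],
                 [ 1.0,  0.5,  2.0],
                 [-1.0, -1.0, -0.5]])
eps = update_learning_rates(eps, grad, prev_grad)
print(np.round(eps, 4))  # entries grow to 0.013 or shrink to 0.007
```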
It was observed that this methodology could accelerate the convergence of the DRNN, but could also lead to abnormal behaviour, such as a monotonic increase of the error E as a function of the iteration number, also called bifurcation (see in
A new procedure was therefore developed (as part of the convergence acceleration algorithm), wherein it is checked at each iteration that the new learning rates εi,j do not give rise to bifurcations during the learning process. If they do, all the learning rates are divided by a constant factor c larger than 1, preferably comprised between 1.5 and 5, more preferably about 2. For iteration number n, this test procedure can be mathematically described as:
If E(n+1)>E(n)
then εi,j(n+1)=εi,j(n)/c, for all i, j.
This reduction is also preferably applied to the learning rates associated with the time constants and the biases.
This technique prevents the error of the DRNN from increasing indefinitely. A typical behaviour of the error function during the learning phase is shown in
In addition to this test procedure, the synaptic weight, time constant and bias values giving the lowest error throughout the whole learning procedure are also stored.
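The test procedure and the best-parameter memory described above can be sketched as follows (class and variable names are illustrative; c=2 follows the preferred value):

```python
import numpy as np

class ConvergenceGuard:
    """Sketch of the test procedure: if the error increases between
    iterations, divide every learning rate by c (> 1); also remember the
    parameters that gave the lowest error so far. c = 2 follows the
    preferred value in the text; everything else is illustrative."""
    def __init__(self, c=2.0):
        self.c = c
        self.prev_error = np.inf
        self.best_error = np.inf
        self.best_params = None

    def step(self, error, eps, params):
        if error > self.prev_error:        # bifurcation detected
            eps = eps / self.c             # shrink all learning rates
        if error < self.best_error:        # keep the best model seen so far
            self.best_error = error
            self.best_params = {k: v.copy() for k, v in params.items()}
        self.prev_error = error
        return eps

guard = ConvergenceGuard()
eps = np.full(4, 0.1)
params = {"w": np.zeros((2, 2))}
for err in [1.0, 0.8, 0.9, 0.6]:   # simulated error per iteration
    eps = guard.step(err, eps, params)
print(round(guard.best_error, 2), np.round(eps, 3))  # 0.6 [0.05 0.05 0.05 0.05]
```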
The present invention has been evaluated for determining a lower limb movement, comprising the elevation angles of the shank, the knee and the thigh.
In a first step, a large set of recorded EEG signals with corresponding target movements were provided.
Then, pre-processing was performed on the EEG signals in order to extract relevant information from them. In this example, the pre-processing consists of artefact removal, filtering and relevant information extraction based on Independent Component Analysis (ICA). The artefact removal is a common blind source separation (BSS) filtering for EMG and EOG artefacts. Then, a high pass filter (0.1 Hz) is applied and relevant information is obtained by using ICA, as depicted in
Then, the chosen ICA components are given as input to the DRNN. The targets of the DRNN outputs are, for this example, the principal components of the elevation angles of the shank, the knee and the thigh. They could also be the relative angles between shank, knee and thigh, or the angular accelerations or speeds. However, in order to reduce the dimensionality of the DRNN, PCA can be used to reduce the number of variables by one. Indeed, it has been shown that these three angles are linked together and not independent, as depicted in
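This dimensionality reduction can be illustrated with synthetic data: three correlated elevation-angle trajectories (sinusoidal stand-ins for the real kinematics, not actual gait data) are compressed to two principal components that capture essentially all the variance, so the DRNN needs one output fewer.

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative correlated trajectories over one simulated gait cycle;
# the phase lags and amplitudes are arbitrary stand-ins.
t = np.linspace(0, 2 * np.pi, 500)
thigh = np.sin(t)
shank = 0.8 * np.sin(t - 0.4)
knee  = 0.6 * np.sin(t - 0.9)
angles = np.column_stack([thigh, shank, knee])  # (500, 3)

pca = PCA(n_components=2)
scores = pca.fit_transform(angles)              # reduced DRNN targets
print(scores.shape, pca.explained_variance_ratio_.sum())
```

Because the three signals covary, two components are enough; the DRNN targets can then be these scores, mapped back to angles after prediction with `pca.inverse_transform`.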
Because no optimization method has so far been proven to reach the global minimum or to choose the best topology (number of hidden neurons), a high number of trainings (typically 200) was tested for each topology of the DRNN.
By topology, we mean the number of hidden neurons (the input and output numbers are fixed by the problem). For instance, for the results hereafter, 200 trainings were used for each topology. Each tested topology had a number of hidden neurons between 1 and 20 (this number depends on the complexity of the system; the periodicity of the signal allows this number to be reduced). For each topology, the best network in terms of error is saved; the best of those best networks is then used for the application.
In order to avoid overtraining problems, the data was split into a training set and a testing set. The selection of the best network is thus performed on the testing set. This is called the learning procedure.
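The learning procedure described above (many trainings per topology, best network selected on the testing set) can be sketched as follows; `train_fn` stands for a hypothetical routine that trains one DRNN from a random initialisation and returns its testing-set error:

```python
import numpy as np

def search_topology(train_fn, n_trainings=200, hidden_range=range(1, 21)):
    """Sketch of the topology search: for each number of hidden neurons,
    run many trainings from random initialisations, keep the best network
    per topology by testing-set error, then keep the best of those.
    `train_fn(n_hidden, seed)` is a hypothetical callable returning
    (network, test_error)."""
    best_net, best_err = None, np.inf
    for n_hidden in hidden_range:
        for seed in range(n_trainings):
            net, err = train_fn(n_hidden, seed)
            if err < best_err:
                best_net, best_err = net, err
    return best_net, best_err

# Toy stand-in for the real DRNN training, just to exercise the loop;
# it pretends 7 hidden neurons is the optimal topology.
def fake_train(n_hidden, seed):
    rng = np.random.default_rng(seed + 1000 * n_hidden)
    return ("net", abs(n_hidden - 7) + rng.random())

net, err = search_topology(fake_train, n_trainings=20, hidden_range=range(1, 11))
print(err < 1.0)  # True: best error comes from a topology near 7 hidden neurons
```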
In order to illustrate the improved performance of the new DRNN with respect to the original version (Cheron et al.), identical input signals and target output signals were used to compare both DRNNs.
The generalization ability of the new preferred DRNN is clearly improved, as can be seen in
A similar improvement was observed in the simulation of electromyographic signals (EMG) by the DRNN on the basis of the corresponding EEG signals (see
The results of the obtained DRNN can be analyzed on an independent testing data set, with good or bad initial conditions, and compared with the results obtained using white noise as input, in order to assess the added value of the EEG signals.
The intrinsic properties of the DRNN and the link with the Central Pattern Generator (CPG) approach will then be shown. Explanations of why this system works are given on the basis of FFT and coherence analyses.
First, it is clear that the DRNN is able to generalize for an independent set.
However, it can be noticed that the first point of the output of the DRNN is the correct measurement.
Moreover, if the first kinematic point is still further away, the EEG-based DRNN takes more time to recover the phase, as shown in 15, whereas the white noise output is completely wrong, as shown in
Actually, the DRNN, by its recurrent approach, is able to automatically generate a periodic signal with zero input, as shown in
Afterward, the FFT of the ICA components presents frequencies similar to those of the kinematics, as shown in
Finally, in
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/EP11/50260 | 1/11/2011 | WO | 00 | 11/7/2012 |
Number | Date | Country
---|---|---
61293893 | Jan 2010 | US