This invention relates to a framework for de-noising neural signals, and more particularly to a unified artifact removal framework for removing various artifacts.
Electroencephalography (EEG) is an electrophysiological monitoring method to record electrical activity of the brain. It is typically noninvasive, with the electrodes placed along the scalp, although invasive electrodes are sometimes used, as in electrocorticography. EEG measures voltage fluctuations resulting from ionic current within the neurons of the brain. EEG has several applications, including nonlimiting examples, such as utilizing EEG to control exoskeletons or prosthetics (e.g. U.S. Pat. Nos. 9,468,541, 10,092,205, WO 2017/218661).
The brain signals from an EEG are prone to artifacts, such as motion or ocular artifacts, in many different types of clinical and technological applications. EEG is perhaps the most widely used non-invasive brain wave measurement system for various purposes. However, all EEG systems are prone to artifacts, which make the analysis of true brain signals very difficult. Motion artifact contamination especially affects results when mobile applications, such as walking or free body movement, are considered, as motion artifacts greatly impact the EEG signals. There is currently no adequate method of identifying and removing (i.e., de-noising) these artifacts.
Accurate implementation of real-time neural interfaces requires handling major physiological and non-physiological artifacts that are associated with the measurement modalities. Because EEG measurements are prone to excessive motion artifacts and other types of artifacts that contaminate the EEG recordings, accurate implementation is very difficult to achieve. Although the magnitude of such artifacts heavily depends on the task and the setup, complete minimization or isolation of such artifacts is extremely difficult.
A clear consensus among researchers is that motion artifacts are highly dynamic in nature and are affected by the setup and movement dynamics. They are also highly variable among subjects, within the same session, and with respect to the scalp spatial location of the EEG sensors of the same subjects. While research has been conducted on such artifacts, there is still no consensus on how to handle these artifacts when they manifest themselves, how to characterize their conditional variabilities, and, perhaps even less examined, how to remove/suppress them in real-time.
In one embodiment, a method of removing artifacts from neural signals comprises receiving electroencephalography (EEG) data from an EEG system and providing the EEG data to a unified artifact removal framework for cleaning artifacts. The EEG data is provided to a first cleaning framework utilizing a first reference to clean first artifacts from the EEG data, and the EEG data is output from the first cleaning framework to a second cleaning framework. The second cleaning framework may operate in a similar manner, utilizing a second reference to clean second artifacts from the EEG data. This general process may be repeated as desired to clean various artifacts from the EEG data, provided a suitable reference for the artifacts to be removed is utilized. The frameworks utilize an H∞ method or filtering involving an H∞ adaptation rule to properly weight the reference, and combining the resulting output with the incoming EEG data results in the desired removal of artifacts.
In yet another embodiment, a method of removing artifacts from neural signals may be similar to the methods above. The first cleaning framework may be an ocular cleaning framework, and the second cleaning framework may be a motion artifact cleaning framework. Further, the unified artifact removal framework may comprise a BCG artifact removal framework, a tACS artifact removal framework, or some combination of the various frameworks.
A system for removing artifacts from neural signals may provide an electroencephalography (EEG) system comprising a plurality of electrodes that gather EEG data. A unified artifact removal framework for cleaning artifacts from the EEG data may also be provided. The unified artifact removal framework may include one or more artifact removal frameworks, each providing an H∞ module receiving a reference utilized to clean artifacts from the EEG data, and a combiner combining an output of the H∞ module with the EEG data to clean the artifacts from the EEG data. Fully cleaned EEG data is output from the unified artifact removal framework. The system may also include a processor controlling various operations of the unified artifact removal framework.
The foregoing has outlined rather broadly various features of the present disclosure in order that the detailed description that follows may be better understood. Additional features and advantages of the disclosure will be described hereinafter.
For a more complete understanding of the present disclosure, and the advantages thereof, reference is now made to the following descriptions to be taken in conjunction with the accompanying drawings describing specific embodiments of the disclosure, wherein:
Refer now to the drawings wherein depicted elements are not necessarily shown to scale and wherein like or similar elements are designated by the same reference numeral through the several views.
Referring to the drawings in general, it will be understood that the illustrations are for the purpose of describing particular implementations of the disclosure and are not intended to be limiting thereto. While most of the terms used herein will be recognizable to those of ordinary skill in the art, it should be understood that when not explicitly defined, terms should be interpreted as adopting a meaning presently accepted by those of ordinary skill in the art.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only, and are not restrictive of the invention, as claimed. In this application, the use of the singular includes the plural, the word “a” or “an” means “at least one”, and the use of “or” means “and/or”, unless specifically stated otherwise. Furthermore, the use of the term “including”, as well as other forms, such as “includes” and “included”, is not limiting. Also, terms such as “element” or “component” encompass both elements or components comprising one unit and elements or components that comprise more than one unit unless specifically stated otherwise.
An inertial measurement unit (IMU) is an electronic device that measures and reports a body's specific force, angular rate, and sometimes the orientation of the body, using a combination of accelerometers, gyroscopes, and sometimes magnetometers.
Unified De-Noising Systems
In some embodiments, the system may be utilized for operating a prosthetic or exoskeleton. EEG data may be utilized to operate a prosthetic or exoskeleton via a brain-to-machine interface or BMI (e.g. U.S. Pat. No. 10,092,205 and WO 2017/218661).
Unified Artifact Removal Framework
The unified artifact removal framework utilizes the H∞ methods or filtering methods discussed herein, which are formulated around robustness properties under exogenous effects and modeling errors, and provides a comprehensive framework for real-time applications. H∞ methods may be generally characterized as adaptive cleaning or filtering methods in which artifacts or disturbances are cleaned from an input, and feedback is utilized to adaptively adjust the filtering.
The systems and methods discussed herein expand on H∞ filtering methods utilized for the unified artifact removal framework, such as for ocular, motion, ballistocardiographic (BCG), transcranial Alternating Current Stimulation (tACS), and/or other artifact removal. It is noted that successful implementation of the unified artifact removal framework, characterized as adaptive to remove artifacts, requires a good reference source of the contaminants, disturbances, or artifacts. Any neural activity that is superimposed on or combined with the reference noise sources would also be filtered out by an optimal framework. As a nonlimiting example, EEG electrodes (such as FP1, FP2, FT9, FT10) are expected to include some neural activity in addition to the artifacts that are desirable to eliminate. In some embodiments, dedicated artifact tools (e.g. EOG or IMU) may be desirable to generate a reference signal for the framework and/or obtain the optimal filter performance. As discussed previously, the electrodes of an EEG gathering neural activity data may be coupled to a unified artifact removal framework. The framework may be provided by a memory for storing data, software, or the like; a processor for implementing the de-noising/artifact removal discussed further herein; and/or input/output port(s) for receiving/sending data.
The H∞ filter or H∞ adaptation rule may utilize a number of key parameters to approach its optimal filter behavior. These parameters tell the filter the level of contamination to handle, and how fast to adapt to the contaminants. In a multi-channel recording paradigm, the level of contamination and the contamination profile vary dramatically among electrodes. Therefore, keeping the estimator parameters the same for the entire scalp hinders the true overall performance of the robust filter. In some embodiments, a parameter selection method may be used for automatic identification of filter parameters. The framework may be used to remove or attenuate various artifacts or EEG contaminants, such as ocular contamination, motion artifacts, BCG artifacts, tACS artifacts, local/global drifts, and/or signal amplitude biases from EEG sources in a real-time setting. The simultaneous filtering of all these unwanted effects is discussed in detail herein.
In the unified framework, successful filtering of artifacts requires a good identification of artifact projections over all scalp areas or the reference utilized. This gives us the opportunity to assess the level of contamination for different scalp locations and also their sample-by-sample change over time. This property may be used to comment on the level of contamination due to volume conduction and get a clear snapshot of it over all scalp areas.
The real-time usability and performance characteristics of an H∞ filter were investigated, which takes advantage of a full characterization of ocular and motion contaminants in a sample-by-sample adaptation scheme. Coupled with the robustness characteristics of the H∞ formulation in general, this method is inherently less sensitive (or more robust) to dynamically changing characteristics of artifacts or contaminants. Importantly, this method does not rely on the definition of clean EEG segments, and does not require the acquisition of clean EEG segments, as in artifact subspace reconstruction (ASR) methods. Furthermore, the method discussed may allow operators to selectively remove what they desire to remove from EEG data.
The non-stationary nature of the EEG, as well as of the artifacts superimposed on it, makes the selection of cleaning methodology exceedingly important. The changes in the characteristics of the artifacts over short periods of time require an adaptation scheme that can effectively handle these dynamics. An adaptive robust framework or H∞ filter formulation with a time-varying weight assumption as an estimator is desirable to compensate for the artifact variability, and to identify the projections of the artifacts per channel of EEG data in a sample-adaptive fashion. The unified framework may include different individual frameworks corresponding to different artifacts to be removed from EEG data. For example, an ocular artifact cleaning framework is based on the robust H∞ filter/framework with time-varying weight formulation. To accomplish fast and effective cleaning, a single weight value per reference input may be utilized, and the weights may be estimated with the H∞ formulation in some embodiments. The effectiveness of the robust-adaptive filter framework for the ocular artifacts, signal drift, and biases, as well as motion and other select artifacts, is shown by experimentation discussed further below. A linear implementation was found only partially effective, mainly limited to the major frequency of contamination, which is locked to the primary head movement frequency.
In some embodiments, the noise or artifact source (or reference that is input to the framework) is measured with dedicated sensors (e.g. EOG or IMU), and the components of these sources that are represented in the raw EEG data are identified for each sample of data. Although the dedicated sensors might seem like a disadvantage at first, they provide the means to be very specific in terms of the identified components and allow for the selective removal of artifacts from the EEG data. Methods that work only with the clean EEG and expected artifact statistics cannot guarantee the cleaning of outlier contamination, and might even risk the removal of underlying neural components that are statistically similar to the artifacts. In some embodiments, it may be desirable to pre-process the reference or artifact source provided to the framework. As a nonlimiting example, to accomplish a better estimation of the head movement projection onto each individual EEG channel, a 2nd order Volterra-series expansion of the reference inputs may be introduced, e.g. having up to 3 sample time taps. Even though this implementation handled the major frequency contamination with great time and frequency domain suppression, the harmonics were not properly represented in experiments. Thus, a cascade filtering framework for all target frequencies may be utilized in some embodiments.
As a nonlimiting example, in some embodiments, raw EEG data or signal(s) 110 may be initially provided to an ocular cleaning framework 100. For example, the ocular artifacts data (e.g. EOG and bias reference) 120 may be processed by an H∞ module 130 utilizing an H∞ adaptation rule. H∞ modules 130 generally apply an H∞ adaptation rule (discussed in detail below), which may also be referred to as H∞ filtering or H∞/TV (TV indicating the time-varying nature). The negative output of the module may be combined by combiner 140 with the incoming EEG data or signal 110 to provide clean EEG data or signal(s) s0, particularly cleaned of ocular artifacts. In some embodiments, the output of the ocular cleaning framework may be provided to another artifact cleaning framework, such as a motion artifact framework or cascade filtering framework. For example, the EEG signal s0 is provided to a cascade filtering framework 140 for removal of other artifacts, such as motion artifacts. The cascade filtering framework 140 receives IMU data or reference 150. As shown, the cascade filtering framework 140 may provide multiple stages n, where the input EEG progresses through the multiple stages. EEG input generally refers to the input signal to be cleaned, which may be raw EEG data or EEG data previously cleaned of other artifacts in earlier stages. In some embodiments, each stage of the cascade filtering framework 140 may perform pre-processing of reference data, such as a 2nd order Volterra-series expansion with Volterra module/kernel 145 on received reference signal(s) or IMU data 150, prior to further H∞ filtering. The corresponding output resulting from module/kernel 145 may then be processed by H∞ modules 130 (the output of which may be referred to as V-H∞/TV) to clean the received EEG signal in a similar manner as discussed previously. For example, a received EEG signal s0 is fed to a first stage, and IMU data 150 undergoes Volterra 145 and H∞ processing 130. The received EEG signal (e.g. s0 in the 1st stage, s1 in the 2nd stage, etc.) is combined with the negative output of the processed IMU data 150, and an EEG output signal (s1 . . . sn) that is cleaned of a portion of the artifacts is output. The cleaned EEG outputs from earlier stages (s1 . . . sn−1) are fed to subsequent stages, where similar processing is repeated. Once the initial EEG signal s0 has progressed through the various stages of the cascade filtering framework 140, a clean EEG signal sn is output from the framework.
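The staged pipeline above can be sketched structurally in code. The following Python fragment is an illustrative, non-limiting sketch only (not the claimed implementation): `clean_stage` stands in for one H∞ cleaning block, and `references` for the per-stage reference signals (EOG, then one narrowband IMU reference per cascade stage); these names are assumptions introduced for illustration.

```python
import numpy as np

def unified_clean(eeg, references, clean_stage):
    """Structural sketch of the staged pipeline: the output s_j of stage j
    becomes the input of stage j+1 (s0 -> s1 -> ... -> sn).
    `clean_stage(s, ref)` is a placeholder for one H-infinity cleaning block."""
    s = np.asarray(eeg, dtype=float)
    for ref in references:
        s = clean_stage(s, ref)   # each stage removes a portion of the artifacts
    return s
```

For example, with a trivial stand-in stage that subtracts its reference directly, the frameworks compose by simple chaining; a real stage would instead apply the H∞ adaptation rule described below.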
The H∞ processing and formulation is revisited in further detail below, as it applies to ocular artifacts, motion artifacts, or other artifact problems. Detailed discussion on the H∞ adaptation rule is also provided herein.
The H∞ Adaptation Rule
H∞ methods or filtering methods generally refer to any suitable methods for adaptively cleaning artifacts or disturbances from an input signal, such as EEG data. In some embodiments, such H∞ methods may involve the H∞ adaptation rule discussed further below. In some embodiments, H∞ methods may involve the H∞ adaptation rule with time-varying weight assumptions. In some embodiments, H∞ methods may also involve pre-processing, e.g. a Volterra-series expansion of reference inputs, such as IMU data.
As previously mentioned, a number of adaptation schemes could be used for the unified de-noising system. Due to its robustness properties under modeling errors and unknown exogenous effects, an H∞ method was used for the sample-by-sample filter weight adaptation problem. The H∞ filtering formulation guarantees robustness such that small modeling errors and exogenous noises do not cause large estimation errors. There are a number of different ways to formulate the H∞ filter weight estimation problem, depending on the assumed convergence properties of the weights. In some embodiments, one can simply assume a fixed weight per EEG channel and request a linear or exponential convergence to the true weights. However, because of the inherent properties of EEG electrode measurements, such as changes in the percentage of artifact contamination on EEG channels over time due to external factors, convergence to a fixed weight poses limitations to the general robustness of such a framework. Nevertheless, a decent level of filtering could still be observed due to its sample-adaptive formulation. A safer and more robust way to adapt the weights is the assumption of time-varying weights per channel in some embodiments.
The H∞ adaptation rule with time-varying weight assumption is given as follows (B. Hassibi, T. Kailath, H∞ adaptive filtering, in: Acoustics, Speech, and Signal Processing, 1995. ICASSP-95, 1995 International Conference on, Vol. 2, IEEE, 1995, pp. 949-952):

ŵi+1 = ŵi + [P̃i ri / (1 + riT P̃i ri)]·(si − riT ŵi)

P̃i+1 = [P̃i^−1 + (1 − γ^−2) ri riT]^−1 + qI

Here ŵi represents the estimated weight vector of reference values, ri the reference vector at sample i, si the raw EEG data, and P̃i the noise covariance matrix initialized with P̃0=μI, where μ is a constant. T represents a matrix transpose operation, and I represents an identity matrix. The parameters γ and q play an important role in the behavior of the adaptive filter. γ determines the bound on the energy-to-energy gain from the disturbance to the estimation error, roughly determining the amount of disturbance that can be tolerated. For the time-varying weight formulation, it should be selected as γ>1. This defines a sub-optimal filter as a trade-off for allowing the weights to vary. For ocular artifact removal, γ≅1.15 is found to be effective, in general. The parameter q reflects the a priori information of how rapidly the weights will vary in time. Larger values cover faster variations. For slow signals, q≅10⁻⁸ is usually a good starting point. It should be noted that these parameters are application dependent.
In the above formulation, the weights are assumed to be time-varying with unknown variation dynamics, and the change δŵ is formulated as an unknown disturbance on the system. Hence, the above formulation is valid for the inequality γ² ≤ 1 + q.
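The adaptation rule above can be illustrated in code. The following Python function is a minimal, assumption-level sketch of the cited time-varying H∞ formulation for one EEG channel (function and variable names are illustrative, not from the source); γ, q, and the P̃0=μI initialization follow the text.

```python
import numpy as np

def hinf_tv_filter(s, r, gamma=1.15, q=1e-8, mu=1.0):
    """Sample-by-sample H-infinity filtering with time-varying weights.
    s: (N,) contaminated EEG samples for one channel.
    r: (N, m) reference vectors (e.g. EOG channels, constant +1 for bias).
    Returns the cleaned signal and the final weight vector."""
    n, m = r.shape
    w = np.zeros(m)                      # estimated weight vector w_hat_i
    P = mu * np.eye(m)                   # P_tilde_0 = mu * I
    clean = np.empty(n)
    for i in range(n):
        ri = r[i]
        err = s[i] - ri @ w              # a priori error: s_i - ri^T w_hat_i
        clean[i] = err                   # artifact-removed sample
        gain = P @ ri / (1.0 + ri @ P @ ri)
        w = w + gain * err               # weight update toward artifact projection
        # Riccati-like update; the q*I term lets the weights keep drifting
        P = np.linalg.inv(np.linalg.inv(P)
                          + (1.0 - gamma**-2) * np.outer(ri, ri)) + q * np.eye(m)
    return clean, w
```

On synthetic data where the contaminated signal is a neural oscillation plus a scaled reference, the estimated weight converges toward the true mixing coefficient, and the per-sample error recovers the neural component.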
Studies examining the use of the H∞ filter for artifact suppression indicate that selecting a single γ by trial and error, finding the value that works best by visually examining the filter behavior, may be viable. Here, the best pair of γ and q values was investigated for a given data set, with values identified per electrode. The reflection of ocular artifacts on different electrode locations is not expected to be the same due to volume conduction, nor can the electrodes be assumed to be coupled systems, due to possibly different exogenous effects. Each electrode measurement is treated as a separate sub-system in some embodiments. Although, in general, the weights of the H∞ filter will be adjusted accordingly in an adaptive setting, the performance of the overall filter can be improved by identifying the best pair of parameters per electrode. In experimentation, for this offline step of searching for a pair of γ and q values, a constrained optimization problem (e.g. using the Matlab® fmincon function) is formulated. In each iteration, the values γ and q may be updated using the cross-correlation values of the filtered EEG data to the raw EEG data (both windowed) as a cost function in some embodiments. When the filter is actively filtering the ocular artifacts, the correlation values would be low by definition—thus, maximizing this metric alone would mean less filtering of the ocular sources. However, the percentage of ocular artifacts in EEG is considered to be less than the actual EEG signals—thus, increasing the overall correlation value would still serve as a partial metric. A second term is added to the cost function Λ, which is the 2-norm square of the difference between the raw EEG signal and the identified ocular component (this difference is ultimately defined as the clean EEG signal). By minimizing Λ, the amplitude of the cleaned EEG is ultimately kept as low as possible, while maintaining high correlation with the raw data due to the first metric.
Constant weights 0<[α1 α2]<1 for these two partial metrics were also introduced to be able to adjust the level of their contributions. The γ and q values were also bounded, in which the upper bounds are determined to generate a decent filtering performance as examined from an a priori data set. With a running window (w) of 200 ms and 0% overlap, the cost function to be minimized for channel i is defined as:

Λi = −α1·(1/n)·Σw Ciw + α2·(1/n)·Σw ∥Eiw−ziw∥2²
where Ciw is the vector of correlation coefficients for the running windows over all data used for optimization, Eiw is the vector of windowed raw EEG data for channel i and ziw is the vector of windowed identified ocular component that is reflected onto channel i. Finally, ∥Eiw−ziw∥22 is the 2-norm square of the error term and n is the total number of windows in the data set.
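The windowed cost computation described above may be sketched as follows. This is a hypothetical Python illustration of the two partial metrics only (the constrained search over γ and q itself, done with fmincon in the text, is not reproduced); the function name, window handling, and default weights α1=α2=0.5 are assumptions.

```python
import numpy as np

def cost_lambda(raw, cleaned, fs=500, win_ms=200, a1=0.5, a2=0.5):
    """Per-channel cost for the offline (gamma, q) search: reward windowed
    correlation of cleaned EEG to raw EEG (first metric) and penalize the
    windowed energy of the cleaned signal ||E - z||_2^2 (second metric)."""
    w = int(fs * win_ms / 1000)          # 200 ms running window, 0% overlap
    n = len(raw) // w
    corrs = np.empty(n)
    energies = np.empty(n)
    for k in range(n):
        E = raw[k * w:(k + 1) * w]       # windowed raw EEG
        F = cleaned[k * w:(k + 1) * w]   # windowed cleaned EEG, i.e. E - z
        corrs[k] = np.corrcoef(E, F)[0, 1]
        energies[k] = np.sum(F ** 2)     # 2-norm square of the cleaned window
    return -a1 * corrs.mean() + a2 * energies.mean()
```

Under this cost, a cleaning that strips a large artifact while preserving the underlying signal scores lower (better) than leaving the data untouched.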
Ocular Artifacts: The system updates an adaptive filter framework in real-time, having the raw EEG signal as the noise contaminated source and the secondary signal as the reference contaminating source (in this case the artifacts). By adapting its weights in real-time, the filter seeks to identify and separate the noise source component that was reflected onto the signal in question (cleaned EEG). Therefore, having a good measurement of the contaminating source (e.g. ocular and motion artifacts) is crucial. It is noted that a well calibrated adaptive framework would effectively eliminate, especially from neighboring electrode locations, neural signal components that may be present in these sources. Applications that require only a few electrode channels (e.g., brain-computer interfaces that rely on EEG signals from mid-central areas), may benefit from the usage of standard channel locations that are close to ocular artifact generators as noise sources, assuming that they share no common neural components. However, for other applications that require a wider coverage of scalp areas, noise profiles are best observed by using dedicated measurement channels. Another disadvantage of using standard electrode locations as noise sources would be rendering these channels obsolete for real-time decoding purposes and discounting the information they would carry for relevant applications.
Eye blinks and eye movements: Eye blink and eye movement artifacts occur due to the electric potential changes around the eyes and are one of the main sources of artifact contamination in EEG recordings. They can be dominant in the amplitude modulation of EEG channels and, due to volume conduction, can contaminate the entire scalp EEG recording with varying amplitudes and profiles. The noise propagation profile can be highly variable not only across different sessions, but also within a recording session, due to experimental conditions such as excessive electrode contact impedance changes from sweating of the scalp, cable tugging/pulling, or simply because of the blink and eye motion potential intensities.
Different methodologies and ocular electrode sites can be used to capture ocular artifacts; however, to the best of our knowledge, there is not a consensus on the sites to be used for this purpose. In some embodiments, a subset of electrodes, either of the EEG or a separate system that is time synchronized, may be placed above and below the eye and/or at left and right temples. In some embodiments, signals that are measured in pairs (for V-EOG and H-EOG) may be utilized to capture a better artifact profile and suppress any common low amplitude components that might be present in the measurement channels, resulting in two noise reference measurements calculated as:
VEOG(t)=VEOGU(t)−VEOGL(t)

HEOG(t)=HEOGR(t)−HEOGL(t)
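The bipolar differencing above can be expressed directly in code. A minimal sketch (function and argument names are illustrative only), showing how a component common to the paired channels cancels:

```python
import numpy as np

def eog_references(veog_upper, veog_lower, heog_right, heog_left):
    """Bipolar EOG noise references: differencing the paired channels
    suppresses low-amplitude components common to both electrodes."""
    veog = np.asarray(veog_upper, dtype=float) - np.asarray(veog_lower, dtype=float)
    heog = np.asarray(heog_right, dtype=float) - np.asarray(heog_left, dtype=float)
    return veog, heog
```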
Low frequency drifts and bias on EEG channels: Real-time brain-machine interface (BMI) applications can benefit not only from the removal of the ocular artifact related components by an ANC filter scheme, but also from the removal of low frequency drifts and signal biases on EEG channels. Very low frequency drifts can be caused by many factors, such as slow electrode-scalp impedance changes and amplifier characteristics. Depending on the cause, drifts found in different electrode locations can be uncoupled and uncorrelated. One frequently used method to eliminate drifts is high-pass filtering of the EEG measurements. Although this is a clear solution for offline applications and signal analysis, under real-time conditions such a filter introduces a phase delay in the filtered signal, dependent also on the filter order and cut-off frequency.
Electrode signal bias is often caused by the measurement system and the initial measurement point before data logging starts. Ideally, a neural signal should be logged as a zero-mean oscillatory profile. However, due to scalp-location-dependent artifact, drift, and bias characteristics, signals could be recorded with intermittent or continuously drifting profiles with sudden amplitude changes. Again, in an offline setting, deriving a zero-mean signal does not pose a problem and can be handled with classic filters. However, short drifts caused by ocular noise profiles, or step-like amplitude shifts caused by electrode cable tugging, cannot be easily handled and require expert investigation in an offline setting. A sample-by-sample drift and bias removing scheme can also be used for quick removal of these components from EEG source measurements. It should be noted that within this framework, we do not make assumptions on the drift characteristics (i.e. linear or nonlinear, persistent or intermittent). In some embodiments of the ocular cleaning framework, a constant signal with amplitude +1 may be added as an external noise reference to the system, and an additional weight may be adapted to minimize the overall gain of this reference per channel, thus eliminating any bias and drift present in the EEG measurement channels. A full framework with drift and bias removal architecture is provided by the ocular cleaning framework 100 discussed herein.
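The constant +1 noise reference described above amounts to appending a column of ones to the reference matrix fed to the adaptive rule, so that one extra weight tracks the per-channel bias/drift. A minimal sketch, assuming a reference layout of one column per noise source (the function name is illustrative):

```python
import numpy as np

def add_bias_reference(references):
    """Append a constant +1 column to the reference matrix so an extra
    adapted weight absorbs (and removes) per-channel bias and drift."""
    references = np.atleast_2d(np.asarray(references, dtype=float))
    if references.shape[0] == 1:
        references = references.T        # accept a single 1-D reference
    ones = np.ones((references.shape[0], 1))
    return np.hstack([references, ones])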
Adaptive Filter Implementation for Motion Artifacts: Motion artifacts are perhaps the most challenging contaminants of EEG signals to handle. It is generally agreed that motion artifacts vary widely across subjects, and even within sessions. Motion artifacts manifest themselves when there is relative motion between the electrode tip of an EEG and the scalp. The changes in the electrode/conductive electrolyte and electrolyte/scalp interfaces create sudden changes in impedance values, and thus add associated transient dynamics to the recorded EEG signals. Any additional disturbance due to the continuation of the motion before the previous disturbance reaches its steady state adds an additional layer of complexity to the problem, as both transients start interacting. We have found these complex dynamics to be highly non-linear, and a statistical or linear framework is not ideal for handling these non-linear projections, as it would at best help to reduce the level of contamination, with significant residuals remaining in the data set. For these reasons, the adaptive framework or methods discussed herein allow for a nonlinear projection. Another difficulty comes from the harmonics of the fundamental contamination frequency (head movement frequency), which are non-existent in neural data. Similar to using the EOG channels as reference signals for filtering ocular artifacts, IMU data, gravity compensated, may be utilized as reference signals for motion artifacts in some embodiments. In some embodiments, the frequency peaks on the vector norm of the acceleration values may be found. Although the acceleration-to-individual-EEG-sensor projections are expected to be non-linear, the frequency peaks may be found to be very similar to those of the contaminated EEG. In some embodiments, these frequency peaks may be used to generate a narrowband filter-bank.
This process is done to generate a new reference signal to target not only the fundamental frequency, but also its harmonics in the EEG. The passband boundaries for each filter may be selected as [fj−0.6, fj+0.6] Hz, where fj is the jth frequency peak. Each narrowband filtered reference signal may then be passed through a second-order Volterra series representation.
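The filter-bank construction above can be sketched as follows. This is an assumption-level Python illustration, not the claimed implementation: the function name and peak-picking details are hypothetical, and the zero-phase `sosfiltfilt` used here is an offline convenience, whereas a causal filter would be required in real time.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, find_peaks

def narrowband_imu_references(acc, fs, n_peaks=3, half_bw=0.6):
    """Locate spectral peaks of the acceleration vector norm and band-pass
    the norm around each peak ([f-0.6, f+0.6] Hz) to build one reference
    per cascade stage, targeting the fundamental and its harmonics."""
    a = np.linalg.norm(np.asarray(acc, dtype=float), axis=1)
    spec = np.abs(np.fft.rfft(a - a.mean()))
    freqs = np.fft.rfftfreq(len(a), 1.0 / fs)
    idx, _ = find_peaks(spec)
    top = idx[np.argsort(spec[idx])[::-1][:n_peaks]]   # strongest peaks
    refs = []
    for f in np.sort(freqs[top]):
        lo = max(f - half_bw, 0.01)
        hi = min(f + half_bw, fs / 2.0 - 0.01)         # stay below Nyquist
        if lo >= hi:
            continue
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        refs.append(sosfiltfilt(sos, a))
    return refs
```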
Volterra series modeling of non-linear systems is an approach used in many disciplines. For the purpose of adaptive filtering, the model allows us to use the linear adaptive filter formulations. A second-order Volterra series expansion is given below:

d(i)=Σl1 wo1(l1)·r(i−l1)+Σl1 Σl2 wo2(l1,l2)·r(i−l1)·r(i−l2)   (2)
Here d(i) represents the unknown representation of the motion artifact in the EEG signal at sample i, and r(i) is the reference value used to identify the motion artifact projection, such as IMU data. The terms wok(l1, . . . , lk), k=1, 2 represent the Volterra kernels to be identified via the adaptation rule. This is a tapped delay line filter representation, for which taps [1, 2, 3] may be used. For multiple reference signals, all references are tapped and fed into the Volterra representation. One very important aspect is the selection of the reference signal that is used to identify the motion artifacts in the EEG signals. In some embodiments, the acceleration values, after gravity compensation using the quaternion of the IMU, may be used as the IMU data or reference for motion artifacts. In some embodiments, the IMU data or reference may be subjected to a Volterra expansion, such as a second-order Volterra series expansion, prior to application of an H∞ adaptation rule and combining with incoming EEG data to remove artifacts.
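The expansion in (2) can be implemented as a feature construction step: delayed reference samples feed the first-order kernel, and their pairwise products feed the second-order kernel, so that a linear adaptive rule can model the nonlinear projection. The following Python sketch (function name illustrative) uses the taps [1, 2, 3] mentioned above.

```python
import numpy as np

def volterra2_features(r, taps=(1, 2, 3)):
    """Second-order Volterra expansion of a reference signal: tapped
    delayed samples plus their pairwise products, suitable as the
    reference matrix for a linear adaptive filter."""
    r = np.asarray(r, dtype=float)
    n = len(r)
    delayed = []
    for l in taps:
        d = np.zeros(n)
        d[l:] = r[:n - l]                  # r(i - l), zero-padded at the start
        delayed.append(d)
    feats = list(delayed)                  # first-order terms
    for a in range(len(delayed)):          # products r(i-l1)*r(i-l2), l1 <= l2
        for b in range(a, len(delayed)):
            feats.append(delayed[a] * delayed[b])
    return np.stack(feats, axis=1)
```

With three taps this yields 3 linear plus 6 product columns; each column receives its own adapted weight in the V-H∞/TV stage.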
In some embodiments, prior to implementing the motion artifact removal method, the ocular artifacts may be cleaned using our H∞ method, implemented on the raw EEG data.
The parameters of the series were identified using the abovementioned H∞ criterion, with the generic second-order Volterra series expansion given in (2), using taps [1, 2, 3]. For multiple reference signals, all references were tapped and fed into the Volterra representation. Before feeding the EEG signal into the filtering framework, the data was cleaned of ocular artifacts using our ocular cleaning framework discussed above. In some embodiments, all scalp EEG data may then be common average referenced. The EEG signal is then filtered using our method in a cascade manner, where the input of adaptive filtering process j+1 is the output of process j.
BCG/EEG artifacts: Methods that utilize the average template of the artifacts for subtracting it from the measurements are known in neural signal de-noising. The biggest problem for these methods is eliminating the residual artifacts that remain after cleaning due to the time domain amplitude variations in the signal, and thus the (short duration) mismatch between the template and the artifacts. The unified artifact removal framework discussed above may be utilized to accommodate these variations for the BCG signals. Instead of using the common approach, the general H∞ method may be utilized, which also provides adaptation to amplitude variations simultaneously. In some embodiments, EEG data may be cleaned of gradient artifacts using standard tools. Afterwards, ocular artifacts may be cleaned from the EEG data using our abovementioned method. In some embodiments, an electrocardiography (ECG) average may optionally be calculated using the contaminated channels to serve as a template or reference, if necessary. This template, without further processing, may be used as a reference signal to our general H∞ framework or method discussed in detail previously above. In other words, some embodiments of the H∞ methods may utilize an ECG average provided to an H∞ module as a reference to clean a received EEG signal of BCG artifacts, in a similar manner as for ocular or motion artifacts.
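The heartbeat-locked average template described above could be built, for example, by epoching around detected R-peaks and averaging; the half-maximum detection threshold and refractory distance below are illustrative assumptions, not prescribed values.

```python
import numpy as np
from scipy.signal import find_peaks

def bcg_template(ecg, fs, win=0.6):
    """Average heartbeat-locked template from a BCG/ECG-contaminated channel.

    R-peaks are detected with an illustrative half-maximum threshold and a
    ~0.4 s refractory distance; `win` is the epoch length in seconds.
    """
    peaks, _ = find_peaks(ecg, height=0.5 * np.max(ecg),
                          distance=int(0.4 * fs))
    half = int(win * fs / 2)
    epochs = [ecg[p - half:p + half] for p in peaks
              if p - half >= 0 and p + half <= len(ecg)]
    return np.mean(epochs, axis=0)
```

The returned template would then be passed, without further processing, as the reference signal to the H∞ module.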
tACS/EEG artifacts: The tACS stimulation artifacts may manifest themselves as very high amplitude contaminants (e.g. ~10⁴ times the expected EEG amplitude). Similar to the BCG removal method or other methods described above, a template of the contamination is generated to be used as a reference signal for the H∞ method. In some embodiments, first the nonlinear drifts in a data set may be removed using a 5th order polynomial (e.g. corresponding to oscillations lower than 0.1 Hz). EEG data amplitudes may then be normalized to the [−1 1] range, while saving the scaling coefficients (calculated using a clean segment) for later use. Then each EEG channel's data may be smoothed using a Savitzky-Golay filter in some embodiments (e.g. order 7 and frame length of 301). This smoothing is intended to eliminate actual neural signal oscillations superimposed onto the high amplitude artifacts. In some embodiments, a Hilbert transform may then be implemented; the instantaneous phase and amplitude of the signal may be derived; and a synthetic data set that has an amplitude of 1 and a phase identical to the values derived from the Hilbert transform may be generated. At this point, the synthetic data has no amplitude modulation information, but the phase information matches that of the average artifacts. This phase-matched reference may be used in the H∞ method to clean the EEG data in a similar manner as discussed previously. Further, the output may be scaled back to original levels using the saved amplitude scaling coefficients.
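The tACS reference-building steps above can be sketched as follows, under the stated example parameters (5th-order polynomial detrend, Savitzky-Golay order 7 with frame length 301, Hilbert phase). The function name and the toy signal are assumptions; a real implementation would compute the scale from a clean segment as described.

```python
import numpy as np
from scipy.signal import savgol_filter, hilbert

def tacs_phase_reference(eeg):
    """Build a unit-amplitude, phase-matched reference for the H-infinity
    filter from a tACS-contaminated channel (illustrative parameter values).
    """
    # 1) remove slow nonlinear drift with a 5th-order polynomial fit
    t = np.linspace(-1.0, 1.0, len(eeg))
    drift = np.polyval(np.polyfit(t, eeg, 5), t)
    x = eeg - drift
    # 2) normalize to [-1, 1], keeping the scale to restore amplitudes later
    scale = np.max(np.abs(x))
    x = x / scale
    # 3) smooth away neural oscillations riding on the large artifact
    x = savgol_filter(x, window_length=301, polyorder=7)
    # 4) unit-amplitude synthetic signal carrying the artifact's instantaneous phase
    phase = np.angle(hilbert(x))
    return np.cos(phase), scale

fs = 1000
sig = 1e4 * np.sin(2 * np.pi * 6 * np.arange(5 * fs) / fs)  # toy 6 Hz artifact
ref, scale = tacs_phase_reference(sig)
```

After H∞ cleaning against `ref`, the output would be multiplied by `scale` to restore the original amplitude levels.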
The following examples are included to demonstrate particular aspects of the present disclosure. It should be appreciated by those of ordinary skill in the art that the methods described in the examples that follow merely represent illustrative embodiments of the disclosure. Those of ordinary skill in the art should, in light of the present disclosure, appreciate that many changes can be made in the specific embodiments described and still obtain a like or similar result without departing from the spirit and scope of the present disclosure. Generally, the experimental examples discussed herein were conducted with setups similar to embodiments of the systems and methods discussed above.
Results Ocular Artifact Removal: Subjects: Three healthy adults, including one with paraplegia due to spinal cord injury, participated in this study. One able-bodied subject walked on a treadmill at 1 km/h, whereas the paraplegic and second able-bodied participants performed a repeated walk-stance-walk task with the assistance of a powered lower-body robotic exoskeleton (REX Bionics Inc., New Zealand).
EEG Measurements: EEG data were recorded in a real-time experimental session. Experimental protocol, decoding methodology and initial performance evaluation are not discussed here, but were performed in accordance with prior work. Raw EEG and ocular artifact source measurements were used for the analyses in this study. To capture the dynamically changing eye blink and eye motion profiles, four electrooculography (EOG) electrodes from a 64 channel active EEG electrode cap (actiCAP, Brain Products GmbH) were placed on the head of the subject according to the international 10-20 system, having FCz as reference and AFz as ground channels. A wireless interface (MOVE system, Brain Products GmbH) was used to record data, sampled at 100 Hz. Ocular electrodes were placed above and below the left eye to measure the vertical ocular artifacts (PO10 for VEOGU and PO9 for VEOGL), and at the left and right temples to measure the horizontal ocular artifacts (TP9 for HEOGL and TP10 for HEOGR). Locations of these electrodes in an experimental setting were in accordance with
Time domain analysis: Analysis was conducted in accordance with the framework discussed above, particularly for ocular artifact removal. The simultaneous real-time removal of ocular artifact, drift and bias was first investigated in time domain analysis using correlation coefficients between raw and processed data. EEG data was logged in DC mode with no pre-processing applied. In the sample data set used in these analyses, channel F7 was recorded having around 4 μV of drift, where the mean signal amplitude was recorded as −1281 μV. This is a rather large mean amplitude recording; however, it is also a good example to test the performance of the proposed simultaneous filtering framework. It should be remembered that the proposed filtering scheme is a real-time sample-by-sample filter; therefore, it is responsive to local biases and drifts, as well as drifts and biases that occur during the entire data recording. The time course of real-time filtering results is summarized in
The uppermost plot (shaded) of panel-a raster plot of
The two inset plots (
The V-EOG profile shows another very low frequency and low amplitude artifact, starting as a linear trend around the 1st second and peaking at 1.7 seconds. This lower amplitude contamination has visible components only on very frontal electrodes (FP1 and FP2). Both the ICA and H∞ methods are successful in removing this component; however, the ASR method does not clear this EEG section, which is contaminated by this low amplitude artifact. Also, for channel AF8, the ICA and H∞ methods keep the EEG data intact between 0-0.8 seconds, which has a minimum of −23 μV (which can be seen as a low-frequency neural signal, and is also not present in EOG source measurements). The ASR method, however, filters this section and suppresses the low frequency components. This highlights the importance of being selective about what is filtered in a raw EEG recording. The ICA method relates ocular artifact components to frontal electrodes, and thus is selective of the ocular artifacts. However, it suffers from the identification of shape-distorted components on electrode locations away from artifact generators. The ASR method covers this problem; however, it is not as selective as the ICA method on ocular components. The H∞ filter takes advantage of the reference signal measurements and is very selective in that sense. It also covers the local changes on all electrode locations by its sample adaptive formulation, and is inherently robust by definition, minimizing the error-to-disturbance gain.
Finally,
Frequency domain analysis:
Processed signal properties: The ICA method relies on weight identification to form a mixing matrix to determine unique components in the data set. The H∞ filtering method used is also formulated around the weight identification problem, with the assumption of time variation of the weights and also robustness properties. The sample-adaptive application results in continuously adapting weights for all channels. The ICA-identified fixed weights and the time-varying H∞ weights are compared in
Inset histograms of
Volume conduction and topographical maps of the contaminants: Effective filtering of the artifacts from EEG signals refers to identification of the artifact components that are reflected onto the EEG signal to be recovered. Referring to equation 5, the artifact components can be derived as riTŵi for reference measurement r and estimated weight ŵ at sample i. This gives the opportunity to dynamically examine the artifact component distribution over scalp areas at any given time point. Summarizing these analyses,
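In code, the artifact component defined above is simply the inner product of the tapped reference with the current per-channel weight estimate, which can be evaluated for all electrodes at once to obtain a scalp map at a given sample. All names and sizes below are hypothetical illustrations.

```python
import numpy as np

def artifact_component(r_taps, w_hat):
    """Artifact projection r_i^T w_hat_i for one channel at one sample."""
    return float(np.dot(r_taps, w_hat))

# e.g. a 9-dimensional Volterra regressor and per-channel weight estimates
rng = np.random.default_rng(0)
r_i = rng.normal(size=9)
w_per_channel = rng.normal(size=(64, 9))   # one weight vector per electrode
scalp_map = w_per_channel @ r_i            # artifact amplitude per electrode
```

Plotting `scalp_map` over the electrode layout at successive samples would give the dynamic topographical maps of the contaminants discussed in the text.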
The impedance values at the beginning of the session, having a maximum of 18 kΩ for select electrodes, were higher compared to the impedance values at the end of the session. In
Discussion: The real-time applicability and performance of an artifact removal technique is of great importance to BMI applications. EEG signals are inherently prone to artifacts with different amplitude and frequency characteristics. A robust artifact removal technique is proposed that removes ocular artifacts, DC shifts and biases. These artifacts are present in almost all experimental sessions, and as shown, are dominant in EEG measurements in both the time and frequency domains. An H∞ ANC framework was developed to remove eye blink, eye motion and EEG signal bias and drift simultaneously using a real-time sample adaptive formulation. Successful removal of artifacts suggests a good identification of the contaminant components that are superimposed onto the neural EEG signals. Analysis of these components over all scalp areas gives a clear picture of the contamination spatial distribution due to volume conduction. It has been found that, when the blink amplitudes become dominant, the raw EEG and the contaminants alone share similarities both in amplitude and spatial distribution.
The presence of such dominant artifactual components can hinder the true performance of a real-time BMI decoder. Analysis of the power spectral densities of the identified contaminants also shows similarities in the low frequency regions. These results suggest that during ocular contamination, the amplitude, frequency and spatial scalp distribution based features that are extracted from EEG signals for decoding purposes share the same adverse effect from ocular contamination. In the present study, the raw EEG recordings were filtered of ocular artifacts and drifts/biases. In general, decoding methodologies that focus on frequency bands less than 8 Hz could be highly affected by ocular artifacts, as even with a bandpass filter used to capture slow cortical oscillations, the ocular artifacts would have high magnitude components in the lower band EEG activity. The present study addresses the negative effects of ocular artifacts for any real-time implementation of BMI decoders. A BMI designer that focuses on these frequency bands has limited options under these circumstances. First, one could design a decoder offline after pruning the EEG signal clean of artifacts. In this case the real-time performance of the same decoder would be adversely affected by these unmodeled artifacts. Second, one could dismiss the frontal electrode locations that are close to ocular sources for real-time decoding; however, our analyses show that the contamination amplitude and scalp distribution are dominant in almost all electrode locations. Another option would be not to use the decoder output as a BMI control signal when blinks are present. However, further analysis of the data set shows over 12.4% ocular artifact contamination in a 10 minute data collection session. Ocular artifact profiles are also subject specific, and occurrences are not periodic or predictable. One interesting result of the analysis is that the ocular artifact densities can also be task specific.
As summarized in the prior discussion, the data collection session includes periodic walks and stops with a lower body exoskeleton. It was found that among the 12.4% contamination, the blinks of dominant amplitudes registered as 64.3% when the subject walks, and 35.7% when the subject is in the stance position. More interestingly, the densities of the blinks among the walk sections of the data show an increasing trend as the session progresses. This of course cannot be generalized to a wider subject population at this stage; however, the results show that the possibility of having such a pattern greatly hinders the applicability of real-time decoders in the presence of ocular artifacts. The highly dynamic nature of these artifacts also suggests that the usage of the same decoder for multiple sessions can also lead to wrong conclusions about the decoder performance.
The comparisons with offline applied ICA show the importance of optimizing filter performance parameters per electrode location and also the local search properties of the artifact removal tool. ICA identified ocular components usually relate to the frontal electrode locations and may be removed by an expert user or an automated method accordingly from all electrode locations. This analysis shows that the artifact projection onto electrode sources can vary greatly due to volume conduction, and the definitive properties of the artifacts (such as their profile or amplitude modulation) can be reflected as a distorted version on electrode locations away from artifact generators (such as CZ and POZ electrodes, see
The overall performance characteristics of the offline applied ASR method are close to the real-time implemented H∞ method discussed. Similar to the sample search structure, it uses a moving window of EEG data, and therefore actively searches for localized artifact contamination. The problem, however, is the selectivity of the removed/suppressed components from raw EEG recordings. The ASR method uses threshold values to identify any undesired occurrences in the EEG. The analysis showed that EEG components having −23 μV amplitude, which are also not present in the raw EOG measurement, are also removed from the EEG data.
Application of this framework to other types of artifact contamination in EEG recordings, such as movement related artifacts, is possible and should be carefully formulated as discussed herein.
Our results show over 95-99.9% correlation between the raw and processed signals at non-ocular artifact regions, and depending on the contamination profile, 40-70% correlation when ocular artifacts are dominant. We also compare our results with the offline Independent Component Analysis (ICA) and Artifact Subspace Reconstruction (ASR) methods, and show that some local quantities are handled better by our sample-adaptive real-time framework. The proposed method therefore allows real-time artifact removal for EEG-based closed-loop BMI applications and EEG studies in general, thereby increasing the range of tasks that can be studied in action and context while reducing the need for discarding data due to artifacts.
Results of Motion Artifact Removal: Subjects: 11 healthy able-bodied adults with no known gait deficiencies participated in this study after giving informed consent.
Subjects were asked to walk on a treadmill.
All subjects were equipped with a 64 channel gel based EEG system (actiCAP, Brain Products GmbH) with active electrodes and 10-20 distribution. 4 electrodes were placed around the eyes of the subject to measure the ocular artifacts in bipolar configuration (TP10-TP9 for Vertical-EOG and PO10-PO9 for Horizontal-EOG). Peripheral electrodes FT9 and FT10 were moved to FPz and FCz locations accordingly for a denser scalp coverage. Reference and Ground electrodes were moved to the ears.
No external layer over the EEG electrodes (i.e., a medical mesh) was used. An external mesh covering the electrodes is a standard setup item, as it has been found to dramatically reduce the motion of the electrodes and cable sway. For the scope of this work, however, a clear presence of motion artifacts was needed for us to be able to characterize and assess the validity of our cleaning method.
5 of the select electrodes (Fz, Cz, Pz, C5, C6) were equipped with different reflective marker configurations, each containing 4 markers. Markers were placed on a transparent plastic medium with negligible weight and attached to each sensor. The surface of the forehead IMU sensor was also equipped with a reflective marker configuration.
The EEG (1000 Hz), IMU (256 Hz), and OptiTrack (120 Hz) data were synchronized with a manual logic signal at run-time. For the velocity and acceleration estimation from the OptiTrack system, per electrode, we have used a linear Kalman filter with a second order nominal model and optimized the model parameters using a constrained optimization tool with the IMU acceleration as an input (the fmincon function in Matlab®).
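A linear Kalman filter with a second-order kinematic nominal model, as referenced above, can be sketched as follows. The noise parameters `q` and `r` are illustrative placeholders; in the study they were tuned via constrained optimization (fmincon) against the IMU acceleration, which is not reproduced here.

```python
import numpy as np

def kalman_pos_to_acc(z, dt, q=1.0, r=1e-4):
    """Estimate velocity and acceleration from position samples z using a
    linear Kalman filter with a constant-acceleration nominal model.

    q, r : illustrative process/measurement noise magnitudes (assumptions).
    Returns an array with columns [position, velocity, acceleration].
    """
    F = np.array([[1, dt, 0.5 * dt**2],
                  [0, 1,  dt],
                  [0, 0,  1]])
    H = np.array([[1.0, 0.0, 0.0]])
    Q = q * np.eye(3)
    R = np.array([[r]])
    x = np.zeros(3)
    P = np.eye(3)
    out = []
    for zi in z:
        # predict with the kinematic model
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the position measurement
        S = H @ P @ H.T + R
        K = P @ H.T / S
        x = x + (K * (zi - H @ x)).ravel()
        P = (np.eye(3) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)
```

Fed with per-electrode marker positions, the second and third state columns would provide the velocity and acceleration estimates used in the analyses.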
Non-linear properties of motion artifacts: Analysis was conducted in accordance with the framework discussed above, particularly for motion artifact removal. The application of linear correlation analysis alone to gauge the transmission of motion artifacts to EEG signals can mask the true level of the artifact contamination. In a gel-based EEG setup, the disturbance caused by the movement between the skin-electrolyte and electrolyte-gel interfaces can be a repetitive action, manifesting as artifacts with repetitive transient dynamics. Coupled with the electrode cable bundle and associated sway dynamics, we expect the projection of the artifact from the actual head kinematics to the EEG recordings to be inherently highly non-linear. We would also expect extensive differences in the projection levels of the motion artifacts for different electrode sites. The non-linearity and dynamic characteristics could be reduced to more manageable levels by the use of a head-mesh and limiting the cable sway, essentially coupling the electrodes mechanically. However, for the purpose of this study, we have avoided any application that could reduce the artifact contamination and utilized an EEG setup that is a standard implementation along the lines of the most generic uses.
The linear correlation between measured quantities and the EEG signals are summarized in
Therefore, for a conclusive test of whether or not the EEG is under the influence of motion artifacts, and if there is contamination, the level and time domain characteristics, and even the discrete vs. continuous existence, should be investigated by methods that allow for sample-by-sample non-linear mapping. This essentially suggests identification of the artifactual components. We investigate the applicability of identifying the nonlinear representation (Volterra kernel coefficients) through a sample adaptive filter to characterize the motion artifacts. This sample-by-sample adaptation, contrary to the statistical methods, can not only tell us of the existence of motion artifacts at different instances in a session, but can also help us define the abovementioned characteristics.
Removal of motion artifacts: As detailed in the previous section, the nonlinear characteristics of the artifactual signal require methods to assess the level of contamination as well as the characteristics of the EEG motion artifacts. The V-H∞/TV formulation allows for the non-linear projection from the reference sources to the contaminated signals (EEG). To test the effectiveness of the method, we have used an individual electrode's (Pz) position, captured by the OptiTrack motion capture system for 4 mph subject walking speed. The fastest walking speed is chosen to ensure the presence of motion artifacts.
The reference signal to the adaptive filter plays a fundamental role in removing the artifactual components. For the purpose of this work, we have tracked the precise positions of 5 electrodes for different subjects and motion speeds. For our offline analysis, the availability of this detailed information is valuable in understanding the motion artifact components of the measured signals. However, for a wide-scale application, this information is unlikely to be available; thus we seek to understand the characteristics of the artifacts and determine if another measurement modality can be used instead of the precise electrode position. To accomplish this, we have first used the 3-axis position of the individual sensorized electrodes to remove their respective artifactual components. The position measurements provide us with the oscillatory (regardless of being continuous or intermittent) electrode dynamics, which is technically the main cause of the artifacts (electrode position shift causing the disturbance in the skin/electrolyte and electrolyte/metal interfaces). As summarized in
The final clean EEG signal (sni) is then used to calculate the non-linear projection of the movement reference to the specific EEG channel, for all samples i, as: ai = soi − sni, where soi is the ocular, bias and drift cleaned EEG. The properties of ai are very informative regarding the identified artifactual signal characteristics.
Furthermore, the time domain modulation of the identified artifact signals closely resembles the vertical head acceleration values measured simultaneously with the EEG. To summarize the significance: the precise measurements of the position of each electrode were mapped via a nonlinear transformation to the measured EEG signals. These final position projections were then linearly correlated with the IMU-measured head acceleration signals. It is found that the identified nonlinear transformation yields high linear correlation values with the acceleration sensor data. The linear correlation between the raw EEG and the head acceleration, without any processing, is found to be small compared to what has been transformed from the position data. An acceleration sensor is a very low-cost device which is already an integral part of many commercial EEG systems. Thus, accessing the synchronized acceleration data from any of those systems poses no difficulties. Even systems without acceleration sensors can be used by placing an external sensor on the forehead of the subject and synchronizing the data with the EEG signals, as has been done in this study. The high linear correlation between these quantities is a promising prospect for using the acceleration values for a sample-by-sample, real-time cleaning of the EEG signal from motion artifacts. One very important property of using the forehead acceleration (IMU) sensor measurements as a reference signal is that it allows for filtering the entire set of scalp EEG locations. We would also like to stress that the values reported are linear correlations. Therefore, with high levels of confidence, we can say that the usage of acceleration sensor values as a reference input to the adaptive noise canceller is possible. The projection of the forehead acceleration values, via our nonlinear method, to identify a projection that is already highly linearly correlated with the acceleration values should yield similar or better results.
Another observation, confirming the literature on movement artifact removal methods as discussed, is the variability of the identified artifacts by channel locations. The contamination levels (as judged by the spectral peaks and harmonics) are also variable by sensor location as expected. The time domain characteristics of a single channel artifact within the same recording session also show variability. Sample adaptive formulations in this sense allow us to identify informed projections with varying levels of contamination in time domain.
Our next step is to use all 3-axis head acceleration values (measured by the forehead IMU and gravity compensated) as reference signals to our framework and compare the cleaning performance and linear correlation values with the cleaning using position reference.
Note that by using the 3-axis acceleration values as a common reference to all EEG locations, we are able to plot a scalp distribution of values.
Another important property to notice is the distribution of relevance among scalp locations. As stated in the introduction, due to the high dynamic interaction between the cable bundle and the electrode, bundle sway and other factors that cause non-linear mapping, the contamination level and correlation sign vary greatly across all scalp locations, and do not follow a clear distribution. The power spectral comparison of raw and clean EEG signals shows a very effective cleaning process, which is selective of the individual signal contamination level. We would like to stress that the same reference signal and frequency bins were used for all scalp areas. Comparing the harmonic peak appearances and prominences, it is clear that each electrode experiences different non-linear contamination dynamics. As an example, the Cz raw signal peaks show a sparse distribution compared to, say, the C5 and C6 electrodes, and the ~5 Hz harmonic peak is not visible in Cz, Fz or Pz. Yet the robust adaptive nature of our framework is capable of identifying when there is a relevant peak and when there is not, keeping the frequency information in the signal intact when needed (see the inset plots for C6 and Pz in
We have also tested our motion artifact cleaning framework for the same representative subject for simplicity, but for all walking speed conditions. The variability of the artifact harmonics is also prominent in all walking conditions. Plots were generated for 4 continuous minutes of treadmill walking data. Note that the slower walking speeds (especially 1 mph) have far less artifact contamination. The clean EEG spectra do not show any clear sign of remaining artifacts. It should be noted that the presence of an artifact peak or harmonics does not necessarily mean contamination for the entire experimental duration. Rather, what we have observed is that the artifacts, especially for slower speeds, manifest themselves in a discontinuous manner, showing a stronger appearance in some sections and no apparent contamination in others.
The variability of artifact appearance instances and dynamics prevents us from generating group statistics for a continuous, long segment time/frequency analysis, and from gauging the performance of our framework. One way of accomplishing this is investigating the event dependent nature of the artifacts to rule out the cancellation of artifactual features. For a walking task, the Event-Related Spectral Perturbations (ERSP) analysis is a good way of determining the extent of our framework. Per subject and walking speed, we have calculated the time-frequency spectrum of EEG data from all scalp locations and segmented them with respect to the gait events. We have then time warped the segments to the mean gait duration and excluded the gait durations that are above or below 3 std of the mean gait duration.
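The gait segmentation, outlier exclusion, and time warping steps above can be sketched as follows; linear interpolation to a common length is used here as a simple stand-in for the time warping described, and the function name and output length are assumptions.

```python
import numpy as np

def warp_gait_segments(x, events, n_out=200):
    """Cut a 1-D time course at gait event samples, drop cycles whose
    duration is more than 3 std from the mean, and linearly time-warp the
    remaining cycles to a common length for trial averaging.
    """
    durs = np.diff(events)
    keep = np.abs(durs - durs.mean()) <= 3 * durs.std()
    warped = []
    for (a, b), ok in zip(zip(events[:-1], events[1:]), keep):
        if not ok:
            continue                      # exclude outlier gait durations
        seg = x[a:b]
        t_old = np.linspace(0, 1, len(seg))
        t_new = np.linspace(0, 1, n_out)
        warped.append(np.interp(t_new, t_old, seg))
    return np.array(warped)               # cycles x n_out
```

Applying this per frequency bin of the time-frequency spectrum would yield the gait-locked ERSP matrices compared before and after cleaning.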
As can be seen from the 1-mph walking condition, the raw data has gait locked events around 2.5 Hz and 14 Hz range. However, checking the entire power spectrum of the same channel (
These examples can be extended to all scalp electrode locations, subjects and speeds as we do not bound our framework, or the input data to any electrode, subject, or condition-specific variable, which are hard to measure. Instead, as justified before, the IMU data is found to be applicable for all conditions and scalp spatial locations.
Discussion: We have provided a comprehensive filtering framework for handling one of the most significant, and yet to be solved, problems associated with all EEG recording paradigms, especially ones that require mobile tasks. Of course, the term mobile is used to represent any EEG recording session that results in head motion which causes motion artifacts. Our framework can be used as an offline post-processing tool for any recordings that provide a synchronized set of IMU sensor data with the EEG data. We used the acceleration data of the IMU unit together with the quaternions calculated using the gyroscope and magnetometer. Although users can calculate the quaternions via many known and very well established methods, most IMU systems provide this information as an already calculated output signal, as in the APDM OPAL system used in this study. The quaternions are, of course, used for compensating the gravitational acceleration constant from all axes. The resulting acceleration data is proven to be adequate as an input signal to our method. One significant advantage of our method is its real-time applicability. The gravity compensated acceleration can be measured from many commercial systems. Our method utilized this data and updated the parameters of the non-linear projection from the acceleration to each EEG channel separately, on a sample adaptive basis. This means that for each sample recorded, a modified (cleaned) signal is outputted from our framework. Of course, the Volterra series expansion is not the most computationally efficient way of representing the nonlinearity. But we believe there is a good tradeoff between the performance and computational load, as we have used a 2nd order representation with 3 input signals.
Our implementation (in Matlab C-MEX) on a Windows PC with dual 2.39 GHz processors is able to handle the cleaning of EEG data from all 60 electrode locations in real-time, with a safety margin of 6 times the real-time recording rate. Users looking for even faster processing can use the embedded coding version of our framework, or utilize a multi-threading approach, at least electrode-wise. Another alternative is to use the bilinear filter representation instead of the Volterra representation, with somewhat reduced performance, as discussed. Another option can be the use of a reduced number of spectral target peaks. We have used all the identified spectral peaks as calculated from the IMU sensor; however, one can also use every second (or nth) spectral peak, by choosing the bandwidth of the filter bank member to capture the overlapping frequencies.
For the purpose of this discussion, introducing our method as a solution to the motion artifact problem, we have focused on targeted applications and frequency ranges that most suffer from the motion artifacts. We have not done any prior processing, except for the ocular artifacts, that cleans muscle artifacts, electrode pops, and missing samples. Rather, our implementation is very specific to the motion artifacts. By using our method for ocular artifacts, as we have done here, we have a unified framework for real-time filtering of two of the major EEG contaminants. We have limited our efforts to the 0.3-15 Hz range as, above 15 Hz, muscle artifacts are expected, which is beyond our target implementation. To represent our method's performance, we had to limit our efforts to a frequency range that ensures a specific contamination type and a targeted implementation for the motion artifact problem. We have, however, already used our framework beyond 15 Hz, and we believe there is no limitation to an effective cleaning of motion artifacts for the higher frequency ranges.
Additional uses of the method base: The H∞ robust adaptive filter cleaning methods are also applicable to other physiological or non-physiological artifacts that contaminate neural signal measurements in general. Sample usages of either the original H∞/TV formulation used for ocular artifact cleaning or the method presented are summarized in further discussion. The cleaning applied to a simultaneous Transcranial Alternating Current Stimulation (tACS) and EEG measurement application is summarized. The stimulation-on phases in the plots show only the clean data for simplicity, as the raw data amplitude is several orders of magnitude higher, and makes the comparison impossible. Stimulation-off phases show both the clean and the raw data. Note that the data set is continuous, with only the change in stimulation intensity and on/off times. Our filter is active at all times. As an example of recovering the underlying EEG features, the recovered eye-blink waveforms on 6 EEG channels are presented. The input to the filter is the reference stimulation waveform from the stimulation device. Examples are shown for 10 Hz, 6 Hz and 20 Hz stimulations. The cleaning of ballistocardiographic (BCG) artifacts from EEG data, when measured simultaneously with functional Magnetic Resonance Imaging (fMRI), is also summarized. The figure shows the before and after cleaning power spectra of a sample channel. The artifact peaks and harmonics are present at multiple frequencies in the raw data, whereas the clean data shows no contamination and no residual after cleaning. Here the reference signal is selected as a reference ECG sensor, or the template of the artifact created from all scalp sensor data.
Our method can also be generalized to the case where no reference data are present. In general, EEG/fMRI gradient artifacts, BCG artifacts, and many other artifact types are handled by building a template of the artifact via various methods, the most general being the average of the EEG sensor data over all channels. This average template is then subtracted from the scalp EEG sensors one by one. Although some level of cleaning is achieved with this method, residual artifacts often remain because the artifact amplitude varies and thus does not follow the mean template perfectly. Our robust adaptive method can receive this template as a reference input and adapt to the amplitude changes automatically, sample by sample, leaving no residual artifacts behind and resulting in far better performance and precision cleaning.
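The benefit of sample-by-sample amplitude adaptation over plain template subtraction can be sketched as follows. A single-weight LMS gain tracker is used here purely as an illustrative stand-in for the H∞ update (whose exact formulation is not reproduced); the signal model, drift profile, and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3000
# Hypothetical mean artifact template, plus a slow amplitude drift so the
# true artifact does not follow the template exactly.
template = np.sin(2 * np.pi * np.arange(n) / 50)
gain = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(n) / 1000)
eeg = 0.05 * rng.standard_normal(n)          # stand-in neural signal
x = eeg + gain * template                    # contaminated measurement

# Plain template subtraction: leaves a residual wherever gain != 1.
resid_plain = x - template

# Sample-by-sample gain adaptation: the single weight tracks the slowly
# drifting artifact amplitude, shrinking the residual.
w, mu = 0.0, 0.1
resid_adapt = np.empty(n)
for i in range(n):
    e = x[i] - w * template[i]               # artifact-subtracted output
    w += mu * e * template[i]                # LMS weight update
    resid_adapt[i] = e
```

After a short convergence period, the adaptive residual approaches the underlying "EEG" floor, while the fixed-template residual retains the full amplitude-mismatch error.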
Results of Removal of Other Artifacts (BCG & tACS): Subjects and Setup: All subjects provided informed consent before participating. For all experiments, 64-channel Brain Products GmbH wet electrode systems were used. For the ocular (100 Hz), BCG (5000 Hz), and motion (1000 Hz) artifact removal paradigms, 4 channels were allocated for capturing bipolar eye-blink and eye-movement activity (EOG). For the motion artifact removal paradigm, subjects were also equipped with a forehead-mounted Inertial Measurement Unit (IMU) to capture the overall head dynamics. Finally, for the tACS artifact removal paradigm (5000 Hz), 5 channels (T7, CP2, CP5, FC3, CP3) were assigned for stimulation. An intermittent, 6 Hz stimulation frequency was used.
Sample Adaptive Filter for Additional Artifact Types:
BCG/EEG artifacts: Methods that subtract an average template of the artifacts from the measurements are very common in neural signal denoising. The biggest problem for these methods is eliminating the residual artifacts that remain after cleaning due to time-domain amplitude variations in the signal, and thus the (short-duration) mismatch between the template and the artifacts. We have utilized our adaptation scheme to accommodate these variations for the BCG signals. Instead of the common approach of handling residuals with an ANC method after standard template subtraction, we used our H∞ method as the subtraction step, simultaneously exploiting its adaptation to amplitude variations. After cleaning the EEG of gradient artifacts using standard tools, we cleaned the EEG data of ocular artifacts using our abovementioned method. An ECG average was then calculated from the contaminated channels to serve as a template. This template, without further processing, was used as the reference signal for our H∞ framework.
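Reference-based subtraction in the spirit of the above can be sketched with a normalized LMS (NLMS) adaptive filter standing in for the H∞ update; the toy spike-train "ECG" reference, tap count, and artifact transfer are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n, taps = 4000, 8

# Toy ECG reference: one unit spike per "beat" (hypothetical; in practice a
# dedicated ECG channel or scalp-averaged template serves as the reference).
ecg = np.zeros(n)
ecg[::400] = 1.0

# BCG artifact modeled as an unknown FIR-shaped, delayed copy of the ECG.
h_true = np.array([0.0, 0.8, 1.2, 0.6, 0.2, 0.0, 0.0, 0.0])
eeg = 0.02 * rng.standard_normal(n)          # stand-in neural signal
x = eeg + np.convolve(ecg, h_true)[:n]       # contaminated EEG channel

# NLMS adaptive filter: learns the reference-to-artifact transfer and
# subtracts the artifact estimate sample by sample.
w = np.zeros(taps)
mu, eps = 0.5, 1e-8
clean = np.empty(n)
for i in range(n):
    u = ecg[max(0, i - taps + 1):i + 1][::-1]    # most recent sample first
    u = np.pad(u, (0, taps - u.size))
    e = x[i] - w @ u                             # artifact-subtracted output
    w += mu * e * u / (u @ u + eps)              # normalized weight update
    clean[i] = e
```

Because the filter adapts continuously, it follows beat-to-beat amplitude changes that a fixed average template would miss.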
tACS/EEG artifacts:
The tACS stimulation artifacts manifest as very high amplitude contaminants (~10^4 times the expected EEG amplitude). Similar to the BCG removal method described above, we generated a template of the contamination to be used as a reference signal for our method. To accomplish this, the nonlinear drifts in the data set were first removed using a 5th-order polynomial (corresponding to oscillations below 0.1 Hz). The EEG data amplitudes were then normalized to the [−1, 1] range, while the scaling coefficients (calculated using a clean segment) were saved for later use. Each EEG channel's data were then smoothed using a Savitzky-Golay filter (order 7, frame length 301). This smoothing is intended to eliminate the actual neural signal oscillations superimposed on the high-amplitude artifacts. We then applied a Hilbert transform, derived the instantaneous phase and amplitude of the signal, and generated a synthetic data set with an amplitude of 1 and a phase identical to the values derived from the Hilbert transform. At this point, the synthetic data carry no amplitude modulation information, but their phase matches that of the average artifacts. Using this phase-matched reference signal, we applied our H∞ method to clean the EEG and scaled the output back to the original levels using the saved amplitude scaling coefficients.
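The reference-generation pipeline above (polynomial detrending, normalization, Savitzky-Golay smoothing, Hilbert phase extraction) can be sketched as follows; the synthetic channel, amplitudes, and drift are assumptions chosen so each stage's role is visible.

```python
import numpy as np
from scipy.signal import savgol_filter, hilbert

fs = 5000.0                                   # tACS paradigm sampling rate
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(2)

# Synthetic channel (hypothetical amplitudes): a huge 6 Hz stimulation
# artifact, a small "EEG" background, and a slow nonlinear drift.
eeg = 0.5 * rng.standard_normal(t.size)
artifact = 1e4 * np.sin(2 * np.pi * 6 * t)
x = eeg + artifact + 50 * t**2

# 1) Remove slow nonlinear drift with a 5th-order polynomial fit.
x_d = x - np.polyval(np.polyfit(t, x, 5), t)

# 2) Normalize to [-1, 1], saving the scale to restore amplitudes later.
scale = np.max(np.abs(x_d))
x_n = x_d / scale

# 3) Savitzky-Golay smoothing (order 7, frame length 301, as in the text)
#    to suppress neural oscillations riding on the artifact.
x_s = savgol_filter(x_n, 301, 7)

# 4) Hilbert transform -> instantaneous phase; build a unit-amplitude
#    synthetic reference carrying only the artifact's phase.
phase = np.angle(hilbert(x_s))
reference = np.cos(phase)

# The reference should track the true artifact waveform (edges excluded to
# avoid Hilbert end effects).
r = np.corrcoef(reference[1000:-1000],
                np.sin(2 * np.pi * 6 * t)[1000:-1000])[0, 1]
```

The resulting `reference` would then be fed to the adaptive filter, and the cleaned output rescaled by `scale`.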
Conclusion: Our novel processing framework and its sample-adaptive formulation can be applied to multiple types of artifactual neural measurements. Our implementation is already real-time compatible for the ocular and motion artifacts. For the BCG artifacts, a dedicated ECG channel can be used as the reference instead of the template, which makes the practical implementation real-time compatible. For the tACS/EEG methodology, the signal can be buffered to include at least two artifactual oscillations to generate an initial template, and this template can be updated using a circular buffer implementation, which allows for real-time use. Our method can also be used as an offline tool for post-processing, allowing recovery of the valuable underlying information content. Considering the statistical similarities, our method can also be applied to a multitude of neural measurement paradigms, including fNIRS and MEG, and to other types of artifacts, including EMG and gradient artifacts.
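The circular-buffer template update mentioned above could be sketched as follows; a fixed, known artifact period and buffer depth are simplifying assumptions for illustration.

```python
import numpy as np

class CircularTemplate:
    """Running artifact template over a circular buffer of whole cycles.

    Illustrative sketch only: the artifact period is assumed known and
    fixed (e.g., one stimulation cycle), which is a simplification.
    """

    def __init__(self, period, n_cycles=4):
        self.buf = np.zeros((n_cycles, period))
        self.count = 0

    def push_cycle(self, cycle):
        # Overwrite the oldest stored cycle (circular indexing).
        self.buf[self.count % self.buf.shape[0]] = cycle
        self.count += 1

    def template(self):
        # Average over however many cycles have been stored so far.
        k = min(self.count, self.buf.shape[0])
        return self.buf[:k].mean(axis=0) if k else np.zeros(self.buf.shape[1])

# Usage: after two pushed cycles the template is their mean; a third push
# displaces the oldest stored cycle.
ct = CircularTemplate(period=4, n_cycles=2)
ct.push_cycle(np.full(4, 1.0))
ct.push_cycle(np.full(4, 3.0))
mean_two = ct.template()
ct.push_cycle(np.full(4, 5.0))
mean_latest = ct.template()
```

Keeping only the most recent cycles lets the template follow slow changes in the artifact while remaining constant-memory, which is what makes the scheme real-time compatible.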
Embodiments described herein are included to demonstrate particular aspects of the present disclosure. It should be appreciated by those of skill in the art that the embodiments described herein merely represent exemplary embodiments of the disclosure. Those of ordinary skill in the art should, in light of the present disclosure, appreciate that many changes can be made in the specific embodiments described, including various combinations of the different elements, components, steps, features, or the like of the embodiments described, and still obtain a like or similar result without departing from the spirit and scope of the present disclosure. From the foregoing description, one of ordinary skill in the art can easily ascertain the essential characteristics of this disclosure, and without departing from the spirit and scope thereof, can make various changes and modifications to adapt the disclosure to various usages and conditions. The embodiments described hereinabove are meant to be illustrative only and should not be taken as limiting of the scope of the disclosure.
This application claims the benefit of U.S. Provisional Patent Application Nos. 62/791,564 filed on Jan. 11, 2019 and 62/801,242 filed on Feb. 5, 2019, which are incorporated herein by reference.
This invention was made with government support under Grant Nos. R01NS075889 from the National Institute of Neurological Disorders and Stroke (NINDS) and 1827769 from the National Science Foundation (NSF). The government has certain rights in the invention.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US20/13285 | 1/13/2020 | WO | 00
Number | Date | Country
---|---|---
62801242 | Feb 2019 | US
62791564 | Jan 2019 | US