The applicants appreciate that Alzheimer's is a complex disease, progressing over many years, with limited treatment options. In this context, improved neurophysiological assessments could accelerate both basic understanding of Alzheimer's disease and the development of novel therapeutics, monitor patients receiving treatment, stratify patient populations, or function as a companion diagnostic for treatment selection. In some instances, it may be challenging to statistically observe a therapeutic effect unless a smaller subset of the study population is selected. It may also be the case that specific therapeutic mechanisms target subsets of Alzheimer's patients with specific disease features, physiology or likelihood of progression. For these instances, additional data may be collected using a “companion diagnostic” to help stratify patients by their likelihood of responding to a given therapy. Companion diagnostics may also be used to stratify patients by their likelihood of experiencing an adverse event, which helps clinicians select which patients should not receive a given therapy.
One of the most challenging types of data to collect and interpret is electroencephalography (EEG) data. EEG is a method of capturing the electrical impulses generated by the nervous system via sensing devices (‘electrodes’ or other technologies) that are placed on the scalp. EEG signals are useful for the assessment of patients with Alzheimer's disease not only in clinical trials, but also clinically as a diagnostic assessment of typical or atypical neural function. For example, it is increasingly recognized that subsets of patients with Alzheimer's disease demonstrate overt epileptiform seizures with significant epileptiform discharges present between seizure events. Other patients have been noted to have epileptiform discharges in the absence of clinical seizures. Assessment of electrophysiologic biomarkers to identify such patient segments would aid in the design of targeted clinical trials, to develop improved prognostic assessments or as an indication for therapy.
The collection and analysis of EEG signals and other biomarkers associated with EEG data collection require specialized equipment, skilled technicians and medical experts to perform data acquisition and interpretation. This standard approach often limits both the clinical utility of EEG and its utility in clinical trials due to extended technical set-up time, ongoing monitoring of the state of the system by trained experts, significant time and expense to arrange for expert interpretation, and delays in returning final interpretations to a study sponsor or to the primary healthcare provider responsible for the care of a patient who must make any relevant clinical decision(s). Furthermore, certain electrophysiologic biomarkers may be identified by statistical models that would not otherwise represent standard signatures recognized by trained experts. Standard clinical workflows also require medical experts to spend significant time scanning through large volumes of EEG data with frequent non-diagnostic artifacts in order to identify clinically-relevant signals.
The applicants appreciate that EEG interpretation methods and apparatuses are not optimal in their ease of use, ability to handle large volumes of EEG data, ability to deliver rapid turn-around reporting, ability to identify novel electrophysiologic signatures or ability to be implemented as a companion diagnostic, among other issues. In particular, it is appreciated that clinical trials designed to evaluate novel medical devices, pharmaceutical interventions or other therapeutic interventions may benefit from EEG-based methods to determine trial eligibility, stratify patient populations, detect adverse events, identify endpoints or otherwise monitor trial participants for assessments at a given time point or over a period of time. It is also appreciated that human clinicians often demonstrate variability in interpretation of the same raw EEG data, where obtaining expert consensus is time-consuming and requires additional investment. In many cases, standard-of-care clinical interpretations are delivered without the clinician having read every data point available from the study. Given the volume of data generated by routine EEG assessment, it is almost impossible for a clinician to manipulate the data into multiple montage perspectives for all data points, even though particular montage displays may be the key view to make the diagnosis.
Some conventional systems provide for automated interpretation of EEG signals. However, the vast majority of these systems are implemented for the purposes of direct clinical intervention, such as modulation of an implanted electrostimulation device. In contrast, in some embodiments described herein, a system and method is optimized for EEG signal detection and the interpretation of electrophysiologic signatures in Alzheimer's disease patients. It should be appreciated that the implementation of such a system in the setting of a clinical trial for a novel therapeutic could also provide high-quality clinical data to support subsequent implementation more broadly in relevant clinical paradigms.
Some embodiments described herein include systems and methods comprising an EEG examination device, a display device, cloud-enabled processing and data accessibility, and a distributed computing environment that can be shared amongst one or more of the EEG examination device, the display device, and/or a remote computing device implemented for the purposes of assessing patients with Alzheimer's disease. In one embodiment the EEG examination device is a traditional EEG examination device commonly available for clinical or research use. In another embodiment, said EEG examination device is a low-profile head-mounted wireless recording device that may be rapidly applied to the individual being examined.
Data collected by the EEG examination device is ingested into an analytics system which is capable of identifying features, signatures or patterns that have significance for diagnosis, prognosis, risk-stratification or other medically-pertinent observations, estimates or predictions. The analytics system may employ a distributed computing environment in which computing is distributed across a traditional EEG examination device, a local computing device, head-mounted recording device, a display device, a remote computing device, or some combination thereof. This analysis may be performed with or without the use of other additional data such as electronic medical record data, ingestion of other data streams (e.g., heart rate, temperature or pulse oximetry) and/or a corpus of previously collected EEG data. A display device enables the operator to provide input to set the parameters of the exam, if required. The same or a different display device may provide real-time data feeds or summary analysis to an expert and/or a healthcare provider responsible for care of the subject and/or the subject themselves for interpretation. In some embodiments, a display device provides data to a clinician for interpretation. In other embodiments, some or all interpretation may be performed through the use of said analytics system, which may employ statistical modeling, machine learning algorithms, or other mathematical interpretation. Said statistical or machine learning models may be trained, stored and implemented to perform a range of analysis tasks. 
A machine learning or statistical model may be implemented to perform one or more of seven exemplary tasks: 1) identify patients with Alzheimer's disease and/or discriminate between patients with Alzheimer's disease and other neurodegenerative diseases such as frontotemporal dementia or Lewy body dementia, 2) identify or predict patient subgroups with regard to neurophysiology, likelihood of cognitive decline or other clinically-relevant grouping, 3) screen patients prior to enrollment for inclusion or exclusion criteria, 4) identify or predict correlations with other clinically-relevant biomarkers such as protein-based biomarkers or imaging features, or clinical assessments such as cognitive scoring systems, 5) identify an indication for therapeutic intervention, 6) identify signals associated with clinical benefit or predict clinical benefit following exposure to one or more therapies, and/or 7) identify signals associated with adverse events or predict adverse events following exposure to one or more therapies. Said methods may optionally include comparison to a disease-free, healthy normal distribution or other relevant data set. The analytics system may perform fully-automated interpretation or generate virtual guide elements to aid a human expert in performing the interpretation. The system is also capable of generating comprehensive static or dynamic reports describing the results of relevant analyses that have been performed, with or without confirmation by a licensed practitioner. In one embodiment, said system may perform data collection, analysis, and reporting in real-time. In another embodiment, one or a combination of these actions may be performed asynchronously with respect to data collection.
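By way of non-limiting illustration of the first exemplary task, a cohort-discrimination model can be sketched with a simple nearest-centroid rule. The feature values below (relative theta power and hourly discharge rate) are hypothetical stand-ins; a trained model in practice would use far richer inputs and a validated architecture.

```python
import numpy as np

# Hypothetical feature vectors: [relative theta power, epileptiform discharges/hour]
# Labels: 1 = Alzheimer's disease cohort, 0 = control cohort. Values illustrative only.
X_train = np.array([
    [0.32, 4.1], [0.35, 5.0], [0.30, 3.7],   # AD cohort
    [0.18, 0.2], [0.15, 0.0], [0.20, 0.4],   # control cohort
])
y_train = np.array([1, 1, 1, 0, 0, 0])

def fit_centroids(X, y):
    """Nearest-centroid model: mean feature vector per diagnostic class."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(centroids, x):
    """Assign the class whose centroid is closest in feature space."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

model = fit_centroids(X_train, y_train)
print(predict(model, np.array([0.33, 4.5])))  # resembles the AD cohort
```

The same discrimination scheme extends directly to multi-class settings (e.g., Alzheimer's disease versus frontotemporal dementia versus Lewy body dementia) by adding a centroid per diagnostic class.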
Further, various embodiments described herein may be performed alone or in combination with any of the embodiments described in U.S. patent application Ser. No. 17/863,803, entitled “SYSTEMS AND METHODS FOR RAPID NEUROLOGICAL ASSESSMENT OF CLINICAL TRIAL PATIENTS,” filed Jul. 13, 2022, which is incorporated by reference in its entirety.
According to one aspect, an electroencephalography (EEG) processing system for training one or more statistical models to analyze neurophysiology associated with Alzheimer's disease is provided. The system comprises an EEG detector device comprising an array of sensors, one or more processors, one or more computer-readable storage media coupled to the one or more processors, and an analysis pipeline implemented by the one or more processors configured to ingest and process electrical signals from the EEG detector device, wherein said one or more computer-readable storage media are further configured to store processor-executable instructions that, when executed by the at least one computer hardware processor, cause the at least one computer hardware processor to perform: retrieving a trained statistical model from at least one storage device, wherein the trained statistical model is trained on a plurality of annotated EEG data, wherein the plurality of EEG training data includes at least one annotation describing an entity of interest selected from a group comprising: specific dementia diagnosis, cognitive scores, behavioral scores, rate of cognitive decline, survival time, drug response, patient-level phenotype, genetic mutations, protein biomarkers, imaging biomarkers, likelihood of adverse reaction, and inclusion or exclusion criteria for a clinical trial; processing, using the trained statistical model, EEG data from a subject to generate relevant output labels for said entity of interest; and storing the predicted entity of interest on the at least one storage device.
According to one embodiment, said one or more computer-readable storage media are further configured to store processor-executable instructions that, when executed by the at least one computer hardware processor, cause the at least one computer hardware processor to perform: retrieving a first trained statistical model from at least one storage device, wherein the first trained statistical model is trained on a plurality of annotated EEG data, wherein the plurality of EEG training data includes at least one annotation describing EEG features or signatures for at least one segment of EEG waveforms; processing, using the first trained statistical model, EEG data from a subject to generate relevant output labels for EEG features or signatures; extracting values for one or more features or signatures from the EEG data annotated by the first trained statistical model; retrieving a second trained statistical model from the at least one storage device, wherein the second trained statistical model is trained on extracted values for the one or more features from the plurality of annotated EEG data; processing, using the second trained statistical model, the values for the one or more features extracted from the annotated EEG data to predict an entity of interest from said group of entities described above; and storing the predicted entity of interest on the at least one storage device.
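The two-stage arrangement described above can be made concrete with a schematic, non-limiting sketch. The heuristics below are hypothetical placeholders for the two trained models; they are shown only to illustrate the data flow from segment-level annotation, through value extraction, to an entity-of-interest prediction.

```python
# Stage 1: a (hypothetical) annotator labels waveform segments with features.
def stage1_annotate(segments):
    """Stand-in for the first trained model: tag each segment that
    resembles an epileptiform discharge (amplitude heuristic only)."""
    return ["discharge" if max(abs(v) for v in seg) > 100 else "background"
            for seg in segments]

def extract_values(annotations):
    """Aggregate stage-1 labels into per-recording feature values."""
    return {"discharge_count": annotations.count("discharge")}

# Stage 2: a (hypothetical) second model maps feature values to an entity of interest.
def stage2_predict(values, cutoff=2):
    """Stand-in for the second trained model, here yielding a coarse risk label."""
    return "high-risk" if values["discharge_count"] >= cutoff else "low-risk"

segments = [[5, -8, 120], [3, 2, -4], [140, -6, 1], [2, 1, 0]]
labels = stage1_annotate(segments)
print(stage2_predict(extract_values(labels)))  # → high-risk
```

In a deployed system, each stage would be a separately trained and versioned statistical model retrieved from the storage device, rather than a fixed heuristic.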
According to one embodiment said EEG features or signatures are one or more from the group comprising: epileptiform discharges, seizures, power spectral frequencies, small sharp spikes, and sleep spindles. According to one embodiment said values for one or more features or signatures are one or more from the group comprising: number of epileptiform discharges, rate of epileptiform discharges, topographic distribution of epileptiform discharges, amplitude of epileptiform discharges, number of seizures, duration of seizures, changes in power spectral frequency distributions, number of small sharp spikes, rate of small sharp spikes, topographic distribution of small sharp spikes, amplitude of small sharp spikes, number of sleep spindles, rate of sleep spindles, and topographic distribution of sleep spindles.
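As one hedged example of computing such a value, the sketch below estimates a rate of epileptiform discharges from a synthetic single-channel trace via a simple amplitude threshold. The threshold, sampling rate, and event-merging window are illustrative assumptions; validated clinical detectors are considerably more sophisticated.

```python
import numpy as np

fs = 256  # sampling rate in Hz (illustrative)
t = np.arange(0, 10, 1 / fs)
# Synthetic trace: 9 Hz background plus three injected high-amplitude transients
eeg = 20 * np.sin(2 * np.pi * 9 * t) + np.random.default_rng(0).normal(0, 2, t.size)
for i in (500, 1200, 2000):
    eeg[i] += 150  # transients standing in for epileptiform discharges

def discharge_rate(signal, fs, threshold_uV=100):
    """Count threshold crossings and express them as events per minute."""
    events = np.flatnonzero(np.abs(signal) > threshold_uV)
    # Collapse adjacent supra-threshold samples into single events
    n_events = 1 + np.count_nonzero(np.diff(events) > fs // 10) if events.size else 0
    return 60 * n_events / (signal.size / fs)

print(discharge_rate(eeg, fs))  # → 18.0 (3 events in 10 s)
```

Analogous aggregation yields the other listed values, e.g., counts and topographic distributions computed per electrode rather than per recording.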
According to one embodiment, said prediction of said entity of interest is provided to a clinician as a companion diagnostic providing an indication for a therapy for Alzheimer's disease. According to one embodiment, at least one of the first or second trained statistical models is a convolutional neural network. According to one embodiment, at least one of the first or second trained statistical models includes a generalized linear model, a random forest, a support vector machine, and/or a gradient boosted tree.
According to one aspect, an electroencephalography (EEG) processing system for training one or more statistical models to analyze neurophysiology associated with Alzheimer's disease is provided. The system comprises an EEG detector device comprising an array of sensors, one or more processors, one or more computer-readable storage media coupled to the one or more processors, and an analysis pipeline implemented by the one or more processors configured to ingest and process electrical signals from the EEG detector device, wherein said one or more computer-readable storage media are further configured to store processor-executable instructions that, when executed by the at least one computer hardware processor, cause the at least one computer hardware processor to perform: accessing a plurality of training annotated EEG recordings associated with a group of patients in a randomized controlled clinical trial or an observational study, wherein each of the plurality of training annotated EEG recordings is associated with diagnosis data for a respective patient, wherein each of the plurality of training EEG recordings includes at least one annotation describing an EEG feature, signature, or characteristic category for a portion of the recording, wherein the plurality of training annotated EEG recordings includes: a first plurality of annotated EEG recordings associated with a first group of patients with Alzheimer's disease, and a second plurality of annotated EEG recordings associated with a second group of patients belonging to a control group without Alzheimer's disease or with a specific alternate diagnosis; training one or more statistical models based on said plurality of training EEG data and said plurality of annotations; storing the one or more trained models on at least one storage device; and processing, using the one or more trained models, EEG data to predict the diagnosis of a new individual or new group of patients not previously used to train said one or more statistical models.
According to one embodiment said plurality of annotated training EEG recordings includes: a first plurality of annotated EEG recordings associated with a first group of patients with Alzheimer's disease with accelerated cognitive decline, and a second plurality of annotated EEG recordings associated with a second group of patients belonging to a control group with Alzheimer's disease but without accelerated cognitive decline, and said computer-readable storage media are further configured to store processor-executable instructions that, when executed by the at least one computer hardware processor, cause the at least one computer hardware processor to process, using the one or more trained models, EEG data to predict accelerated cognitive decline in a new individual or new group of patients not previously used to train said one or more statistical models.
According to one embodiment said plurality of annotated training EEG recordings includes: a first plurality of annotated EEG recordings associated with a first group of patients with Alzheimer's disease that meet inclusion and exclusion criteria for a clinical trial, and a second plurality of annotated EEG recordings associated with a second group of patients belonging to a control group with Alzheimer's disease but that do not meet inclusion and exclusion criteria for a clinical trial, and said computer-readable storage media are further configured to store processor-executable instructions that, when executed by the at least one computer hardware processor, cause the at least one computer hardware processor to process, using the one or more trained models, EEG data to predict whether a new individual or new group of patients not previously used to train said model meet inclusion and exclusion criteria for a clinical trial.
According to one embodiment said plurality of annotated training EEG recordings includes: a first plurality of annotated EEG recordings associated with a first group of patients with Alzheimer's disease that are treated with a therapy, and a second plurality of annotated EEG recordings associated with a second group of patients belonging to a control group with Alzheimer's disease but that are not treated with the same therapy, and said computer-readable storage media are further configured to store processor-executable instructions that, when executed by the at least one computer hardware processor, cause the at least one computer hardware processor to process, using the one or more trained models, EEG data to predict whether a new individual or new group of patients not previously used to train said model is treated with said therapy.
According to one embodiment said plurality of annotated training EEG recordings includes: a first plurality of annotated EEG recordings associated with a first group of patients with Alzheimer's disease that are treated with a therapy, and a second plurality of annotated EEG recordings associated with a second group of patients belonging to a control group with Alzheimer's disease but that are treated with a different therapy, and said computer-readable storage media are further configured to store processor-executable instructions that, when executed by the at least one computer hardware processor, cause the at least one computer hardware processor to process, using the one or more trained models, EEG data to predict the clinical response of a new individual or new group of patients not previously used to train said model to either said therapy used to treat the first group of patients or said therapy used to treat the second group of patients.
According to one embodiment said plurality of annotated training EEG recordings includes: a first plurality of annotated EEG recordings associated with a first group of patients with Alzheimer's disease that are treated with a therapy, and a second plurality of annotated EEG recordings associated with a second group of patients belonging to a control group with Alzheimer's disease but that are treated with a different therapy, and said computer-readable storage media are further configured to store processor-executable instructions that, when executed by the at least one computer hardware processor, cause the at least one computer hardware processor to process, using the one or more trained models, EEG data to predict the likelihood of adverse events for a new individual or new group of patients not previously used to train said model in response to either said therapy used to treat the first group of patients or said therapy used to treat the second group of patients.
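Across these cohort-based embodiments, the common requirement is that predictions be made for patients not previously used to train the model. A leave-one-out sketch with hypothetical per-patient feature rows illustrates this held-out evaluation; the features, labels, and nearest-centroid rule are illustrative assumptions only.

```python
import numpy as np

# Hypothetical per-patient feature rows; label 1 = treated cohort, 0 = control cohort.
X = np.array([[0.9, 3.0], [1.1, 2.8], [1.0, 3.2],
              [0.2, 0.9], [0.1, 1.1], [0.3, 1.0]])
y = np.array([1, 1, 1, 0, 0, 0])

def centroid_predict(X_tr, y_tr, x):
    """Assign x to the cohort whose mean feature vector is closest."""
    cents = {c: X_tr[y_tr == c].mean(axis=0) for c in np.unique(y_tr)}
    return min(cents, key=lambda c: np.linalg.norm(x - cents[c]))

# Leave-one-out: every prediction is for a patient excluded from training,
# mirroring the "not previously used to train said model" requirement.
correct = sum(
    centroid_predict(np.delete(X, i, 0), np.delete(y, i), X[i]) == y[i]
    for i in range(len(y))
)
print(correct / len(y))  # → 1.0 on this toy, well-separated data
```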
According to one aspect a method is provided. The method comprises receiving electroencephalography (EEG) signals associated with a subject, processing, by a statistical model, the EEG signals associated with the subject, and performing a determination, by the statistical model, of an annotation of the EEG signals associated with the subject relevant to neurophysiology associated with Alzheimer's disease.
According to one embodiment the annotation includes one or more of a group comprising: specific dementia diagnosis, cognitive scores, behavioral scores, rate of cognitive decline, survival time, drug response, patient level phenotype, genetic mutations, protein biomarkers, imaging biomarkers, likelihood of adverse reaction, and inclusion or exclusion criteria for a clinical trial.
According to one embodiment, the method further comprises an act of identifying, responsive to the determination, whether the subject has Alzheimer's disease. According to one embodiment, the method further comprises an act of identifying, responsive to the determination, whether the subject more likely has a neurodegenerative disease other than Alzheimer's disease. According to one embodiment, the method further comprises an act of identifying, responsive to the determination, whether the subject should be categorized in a subgroup of subjects having similar neurophysiology. According to one embodiment, the method further comprises an act of identifying, responsive to the determination, whether the subject is predicted to have clinically relevant biomarkers, imaging features, or clinical assessments.
According to one embodiment, the method further comprises an act of identifying, responsive to the determination, whether the subject is indicated for a therapeutic intervention. According to one embodiment, the method further comprises an act of identifying, responsive to the determination, whether the subject is predicted to have a clinical benefit from one or more therapies. According to one embodiment, the method further comprises an act of identifying, responsive to the determination, whether the subject is predicted to have an adverse event responsive to receiving one or more therapies.
This Summary is a simplified and condensed description of the underlying conceptual framework, enabling technology, and possible embodiments, which are further explained in the Detailed Description below. This Summary is not intended to provide a comprehensive description of essential features of the invention, nor is it intended to define the scope of the claimed subject matter.
It should be appreciated that all combinations of the foregoing concepts and additional concepts discussed in greater detail below (provided such concepts are not mutually inconsistent) are contemplated as being part of the inventive subject matter disclosed herein. In particular, any and all combinations of the subject matter (both claimed and not) are contemplated as being part of the inventive subject matter disclosed herein.
Various non-limiting embodiments of the technology will be described with reference to the following figures. It should be appreciated that the figures are not necessarily drawn to scale. In the figures:
Some embodiments discussed herein comprise a system and method of obtaining, processing and analyzing an EEG signal in order to do one or more of the following: simplify the application of an EEG acquisition device, reduce examinee discomfort, improve the quality or quantity of the data acquired during the examination, reduce inter-operator variability, decrease interpretation turn-around time, function as a qualified companion diagnostic, and/or identify features, signatures, patterns or characterizations that were previously difficult or impossible to reproducibly assess by qualified experts.
In some embodiments, a system is provided that includes a head-mounted EEG detector device, including an array of electrodes that adhere to the forehead, as well as zero, one, or a plurality of medical examination devices that include but are not limited to, pulse oximeters, temperature probes, accelerometers, gyroscopes etc. One or more components of the head-mounted device may be disposable. Such a system may also include a display device to gather input from the subject or the operator about zero, one, or more of the following: the exam being performed, specific symptoms the subject is experiencing, biometric data or other relevant medical or personal data. During an examination, the head-mounted EEG detection device ingests, processes and transmits EEG data and/or data from other sensors. The method by which local processing is performed may or may not be updated throughout the examination based on feedback from the local processor or via a communications network such as Bluetooth, Wi-Fi, or some other communication method or protocol.
At least some methods utilized herein include analysis of raw or processed EEG data to generate virtual guiding elements to guide review, annotation or interpretation of the EEG data by a non-expert or a qualified expert such as a medical doctor. These virtual guiding elements may include but are not limited to instruction in the form of plain text or otherwise, one or more arrows, targets, circles, color-changing elements, progress bars, transparent elements, angles, projections of real-time or stored EEG or sensor data, or other virtual items. These virtual items may be related to features or signatures that may be previously known to the field via publication, training, word-of-mouth or expert annotations. Alternatively, these virtual items may be related to novel features, signatures or elements that are extracted algorithmically from individual EEG exam data or from a corpus of previous or simultaneous EEG exams. Other embodiments of the system are capable of generating static or dynamic reports that describe clinical outputs of interest, such as likelihood of disease progression, presence or absence of specific features, identification of patient subgroups, indications for a given therapeutic, likelihood of response to one or more therapeutics or likelihood of an adverse effect from one or more therapeutics among other read-outs.
To use this system, the examinee or an examiner, who may be a healthcare provider, technician or other operator would apply traditional EEG detection electrodes 103 to the examinee's head.
In one embodiment, the traditional EEG detection system 102 includes a plurality of sensors in addition to EEG electrodes 103 including but not limited to: oximetry, sound sensors, light sensors, temperature sensors, gyroscopes, accelerometers and heart-rate sensors. In another embodiment, the device includes a minority of these sensors in addition to the EEG electrodes, or only includes the EEG electrodes with no additional sensors.
To use this system, the examinee or an examiner, who may be a healthcare provider, technician or other operator would apply the EEG detection device electrode array 128 to the examinee's forehead. In one embodiment, the device would initiate monitoring of the subject or patient's EEG data autonomously. In some embodiments, the system may be capable of monitoring more than one device at a time (e.g., multiple subjects).
In another embodiment, the operator could use the personal display device 136 to enter data and/or select and initiate an exam, where display device 136 may receive data and/or information to display via receiving CPU 134. A computing system to manage data ingestion, processing, management, and/or annotation may include a local processing module 130 which may use Wi-Fi, Bluetooth or other protocol to access a communication network 110 and interface with a remote processing module 112, other computing system(s) 114, other detector system(s) 116, or database(s) or data repositories 118 to perform the acquisition, ingestion and processing of EEG or other sensor data. In one embodiment of the system, the interpretation of the EEG data could be performed by an algorithm stored on a computer-readable medium either locally on the device's local processing module 130 or on a remote processing module 112. In another embodiment, a qualified human expert may review the raw or processed data and may perform the interpretation via personal display device 136 or other interface that would enable annotation of features, signatures or other data elements with or without an overall interpretation and summary of the findings. In any of the above embodiments, the final interpretation could be returned to the examinee, the operator or another individual requiring the information via the display device, some other method of electronic communication, or by uploading the report to an electronic medical record associated with the subject.
In one embodiment, the head-mounted EEG detection system 126 includes a plurality of sensors in addition to EEG electrodes including but not limited to: oximetry, sound sensors, light sensors, temperature sensors, gyroscopes, accelerometers and heart-rate sensors. In another embodiment, the device includes a minority of these sensors in addition to the EEG electrodes, or only includes the EEG electrodes with no additional sensors.
Said operating system 212 establishes the underlying software infrastructure that provides configuration instructions to the hardware components and the local processing module 108 and processes, analyzes, stores and/or otherwise manages data collected by the exam device, wherein the device may be a traditional EEG system 102 or a reduced-montage wireless device 126. Components of said processing environment 210 and said operating system 212 may be implemented across multiple processing modules described in
Said data pre-processing engine 220 accesses sensor data that may include but is not limited to EEG, cameras, gyroscopes, accelerometers, oximeters, temperature probes, heart-rate monitors, depth-sensing systems and other sensors or systems. In an exemplary embodiment, an EEG low-pass filter (LPF) and high-pass filter (HPF) module pre-processes data collected by the EEG electrode array. In this embodiment, the data pre-processing engine also includes processing modules for impedance 224 and accelerometer 226 data. In other embodiments, additional processing modules are implemented to process data collected from cameras, gyroscopes, oximeters, temperature probes, heart-rate monitors, depth-sensing systems or other sensors or systems.
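A minimal stand-in for the LPF/HPF stage can be sketched with a first-order recursive smoother; the cutoff frequencies, sampling rate, and synthetic drift below are illustrative assumptions, and a production pre-processing module would use properly designed digital filters.

```python
import numpy as np

def lowpass(x, fs, cutoff_hz):
    """First-order IIR low-pass (exponential smoother), a minimal stand-in
    for the LPF stage of the pre-processing engine."""
    alpha = 1 - np.exp(-2 * np.pi * cutoff_hz / fs)
    y = np.empty_like(x, dtype=float)
    acc = x[0]
    for i, v in enumerate(x):
        acc += alpha * (v - acc)  # smooth toward the incoming sample
        y[i] = acc
    return y

def highpass(x, fs, cutoff_hz):
    """High-pass as the residual after low-pass filtering (removes slow drift)."""
    return x - lowpass(x, fs, cutoff_hz)

fs = 256
t = np.arange(0, 2, 1 / fs)
drift = 50 * t                          # slow baseline drift (artifact)
alpha_wave = 10 * np.sin(2 * np.pi * 10 * t)  # 10 Hz physiological content
cleaned = highpass(drift + alpha_wave, fs, cutoff_hz=0.5)
# After the HPF the slow drift is largely removed while the 10 Hz content remains
```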
Said feedback generation engine 230 controls modules that generate feedback signals that are implemented by the EEG hardware components. In an exemplary embodiment of the disclosed technology, the feedback generation engine 230 enables an LED activation module 232 to control illumination, frequency, duration, color or other parameter for one or a plurality of light-emitting diodes (LEDs) or other light-emitting indicators on the EEG exam device 128 described in
Said signal processing, capture and storage engine 240 enables management of pre-processed data that are generated by the data pre-processing engine 220, or, in other embodiments, data that are collected directly from the EEG electrodes or other relevant sensors. Said signal processing, capture and storage engine 240 enables implementation of a data parsing module 2402 and a data processing module 2404, which prepare data for further analysis or storage. The data logging and storage module 2406 and events logging and storage module 2408 interact with the storage layer 290 to capture and store data and events respectively, as determined by the requirements of the exam, or input from the examiner, examinee or other operator.
Said data analysis engine 250 allows real-time or asynchronous assessment of output from the EEG detector device or other sensor. The data quality analysis module 2502 evaluates the data that is being generated by the EEG detector device or other sensors compared to predetermined thresholds or real-time calculated thresholds that indicate sufficient data quality to enable interpretation or other data processing, storage or analysis tasks. In one embodiment, said evaluation is performed with a real-time updated estimator of data quality and an associated confidence estimator.
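A real-time updated estimator of data quality with an associated confidence estimator, as described above, might be sketched as follows. The amplitude-range quality metric, the exponentially weighted moving average, and the confidence formula are all illustrative assumptions:

```python
import numpy as np

class QualityEstimator:
    """Running estimate of data quality with a simple confidence value.
    Per-chunk quality = fraction of samples inside a plausible amplitude
    range; the running estimate is an exponentially weighted moving
    average (EWMA) updated as each chunk arrives."""

    def __init__(self, amp_limit=200.0, alpha=0.2):
        self.amp_limit = amp_limit   # plausible amplitude bound (assumed)
        self.alpha = alpha           # EWMA smoothing factor (assumed)
        self.estimate = None
        self.n_chunks = 0

    def update(self, chunk):
        q = float(np.mean(np.abs(chunk) <= self.amp_limit))
        self.estimate = q if self.estimate is None else \
            self.alpha * q + (1 - self.alpha) * self.estimate
        self.n_chunks += 1
        return self.estimate

    @property
    def confidence(self):
        # Crude confidence that saturates as more chunks are observed
        return 1.0 - 1.0 / (1 + self.n_chunks)

    def adequate(self, threshold=0.95):
        """Compare the running estimate to a predetermined threshold."""
        return self.estimate is not None and self.estimate >= threshold

est = QualityEstimator()
est.update(np.array([10.0, -15.0, 30.0]))    # clean chunk: quality 1.0
est.update(np.array([500.0, 20.0, -10.0]))   # one out-of-range sample
```

After the second chunk the EWMA estimate drops below a 0.95 adequacy threshold, which a system of this kind could use to defer interpretation until quality recovers.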
The feature extraction and identification module 2504 enables raw, pre-processed or processed data to be evaluated for the presence, frequency, characteristics or other parameters of clinical features. One example of a clinical feature that may be detected is an epileptiform spike. The pattern extraction and identification module 2506 enables raw, pre-processed or processed data to be evaluated for the presence, frequency, or characteristics of patterns that are constructed from multiple features, consist of observable changes over time, or represent a statistical association with a parameter or outcome of interest. One example of a pattern that may be used is burst-suppression, a brain state seen in stages of general anesthesia, coma, or hypothermia. The patient stratification module 2508 enables patient-level data to be evaluated in the context of one or a plurality of other patient-level data in order to identify commonalities, differences, sub-group definitions, sub-group characteristics, or other analyses relevant to features, patterns, statistical associations or other analytical conclusions that may be made about a group of individuals.
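As a sketch of the kind of clinical-feature detection performed by a module such as 2504, a minimal epileptiform-spike detector could flag amplitude excursions far outside the background distribution. The z-score threshold and refractory window are illustrative assumptions; real detectors typically use template matching or learned models:

```python
import numpy as np

def detect_spikes(signal, fs=250, z_thresh=5.0, refractory_ms=100):
    """Flag candidate spikes as samples whose z-score magnitude
    exceeds `z_thresh`, enforcing a refractory window so one event
    is not counted multiple times."""
    z = (signal - signal.mean()) / (signal.std() + 1e-12)
    candidates = np.flatnonzero(np.abs(z) > z_thresh)
    refractory = int(fs * refractory_ms / 1000)
    spikes = []
    for idx in candidates:
        if not spikes or idx - spikes[-1] > refractory:
            spikes.append(int(idx))
    return spikes

# Synthetic trace: Gaussian background with two injected spike events
rng = np.random.default_rng(0)
x = rng.normal(0, 1, 2500)
x[500] += 30.0
x[1800] -= 30.0
found = detect_spikes(x)
```

Downstream modules could then compute per-epoch spike frequency ("spike burden") from the returned indices.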
The data annotation module 2510 creates data annotations that capture the results of data analysis performed by other elements of the data analysis engine 250, aid the system in future data retrieval and/or aid the examinee, examiner, other operator or qualified expert in reading, surveying, interpreting or otherwise interacting with the data. Said data annotation module may communicate with the visual guide element generation module 276 in order to make said annotations available to the examiner, examinee, other operator or qualified expert. The report element generation module 2512 converts the analyses performed by other elements of the data analysis engine 250 into elements that are represented in a report describing the results of the exam, in summary format, with detail about specific analytical results or with a combination thereof. Generated report elements may be created as a printed element, electronically generated in a static or dynamic format, or may be modified into any other format by which the results of the exam may be communicated to the examiner, examinee, other operator, qualified expert, family member or other authorized party.
Said data management engine 260 enables data ingestion and relay from multiple sources. In an exemplary embodiment, the data ingestion 262 and relay services 264 modules enable data and analysis results from the signal processing capture and storage engine 240 and data analysis engine 250 to be relayed to the storage layer 290. In said embodiment, the data ingestion module 262 also enables ingestion of EEG or other sensor, laboratory, imaging or other clinical data such that these data may be added to relevant databases in the storage layer 290 and/or accessed by other elements of the processing environment to aid in the performance of analyses, determination of relevant configurations, creation of report elements, or any other action performed by said system.
Said user display generation and input registration engine 270 enables the examinee, examiner, other operator, qualified expert or other authorized party to interact with the system. The data display module 272 generates an interface whereby one or a plurality of said users can view elements including, but not limited to: real-time data streams, results of analyses, warnings or alarms, metrics related to exam quality, metrics related to connectivity, technical information about the system or other information relevant to the performance, management or interpretation of the exam. The user input detection and registration module 274 enables the system to capture a range of inputs from the examinee, examiner, other operator, qualified expert or other authorized party in order to aid the user in making one or a plurality of selections, inputting data, modifying the exam or performing other functions that require input from the user, or to aid the system in setting exam parameters, setting configuration, data collection, data processing, data analysis, or any other function performed by the system. One or a plurality of said users can provide information via said user input detection and registration module 274 via one or a plurality of interactions including but not limited to: pressing a button, moving a switch, performing a touch-sensitive interaction with a screen, typing information, speaking a command or creating another sound, eye tracking, gesture detection or any other mechanism by which the user communicates information to the system.
Said user input detection and registration module 274 also enables one or a plurality of said users to provide information based on prior knowledge, expertise or active problem solving to interpret the data, analysis, report or other artifact generated by the system. In some cases, the information provided by one or a plurality of said users may be to verify the accuracy of the data, analyses or report generated by the system. The visual guide element generation module 276 may utilize data such as information about the exam environment, the examinee, features or signatures within the EEG waveform data or other data available about the patient, exam or data feeds, data collected from the subject's electronic medical record, data stored in the storage layer or other supplementary data. Using these data sources, the visual guide element generation module 276 is able to produce the design, content, appearance and other features of virtual exam guide elements for the purpose of helping the examinee, examiner, other operator, qualified expert or other authorized user to identify relevant features, signatures or other observable elements of the data in order to perform an accurate interpretation.
In another embodiment, the interpretation is performed autonomously by an algorithm stored in a computer-readable format or generated de novo for the specified analysis, and the visual elements are used to explain the interpretation that was generated automatically (e.g., through machine learning/AI, or other statistical analysis technique). The visual elements generated by said visual guide element generation module 276 may include but are not limited to text instructions, guiding graphics such as one or more arrows, targets, circles, color-changing elements, progress bars, transparent elements, angles, 'ghost' outlines of comparator data signatures, projections of real-time or stored imaging data, or other virtual items. The guiding elements may exist and change according to a predetermined set of instructions, or in response to feedback elements such as navigation through data feeds or new annotations performed by one or a plurality of authorized users, passage of time, acquisition of new exam data, other user input or examinee input. The guiding elements generated by the method may be updated before, during, or after the exam via any of these inputs alone or in combination. The disclosed method may generate or adapt visual or graphical elements in accordance with completion or lack of completion of the exam or a subset thereof.
Storage layer 290 is an integrated data layer that allows high-throughput ingestion, retrieval, and inference of signal data. Encoding is performed by the signal processing, capture and storage engine 240, which in one embodiment of the disclosed method, implements a compression codec that utilizes block-based and pre-entropy-coding inter-channel decorrelation. Said data stored in the storage layer 290 may be used as an input for other functional components of the system. In particular, data may be retrieved from said storage layer 290 for inference, interpretation or other subsequent analysis by the data analysis engine 250.
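The idea behind pre-entropy-coding inter-channel decorrelation can be illustrated with a toy codec: adjacent EEG channels are highly correlated, so storing non-reference channels as differences from a reference channel yields small residuals that entropy-code well. The reference-channel scheme and the use of zlib as the entropy coder are assumptions for illustration, not the disclosed codec:

```python
import zlib
import numpy as np

def compress_block(block):
    """Block-based compression sketch: subtract channel 0 from the
    other channels (inter-channel decorrelation), then entropy-code
    the residuals with zlib's DEFLATE."""
    block = np.asarray(block, dtype=np.int16)
    residual = block.copy()
    residual[1:] -= block[:1]            # decorrelate against channel 0
    return zlib.compress(residual.tobytes(), 9)

def decompress_block(payload, n_channels, n_samples):
    """Invert the codec: entropy-decode, then undo decorrelation."""
    residual = np.frombuffer(zlib.decompress(payload), dtype=np.int16)
    residual = residual.reshape(n_channels, n_samples).copy()
    residual[1:] += residual[:1]         # re-add the reference channel
    return residual

# Two strongly correlated synthetic channels
t = np.arange(1000)
ch0 = (100 * np.sin(t / 20)).astype(np.int16)
ch1 = (ch0 + (t % 3)).astype(np.int16)   # nearly identical channel
block = np.stack([ch0, ch1])
payload = compress_block(block)
restored = decompress_block(payload, 2, 1000)
```

The round trip is lossless; a production codec would also handle blocking, bit depth, and channel-ordering metadata.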
In one exemplary embodiment, said storage layer 290 maintains a notion of semantically versioned processes that interact with and generate database content (reports, annotations, derived signals/artifacts), a request/response-driven task management workflow for extracting insight from domain experts, a notion of programmable votes/ballots for aggregating expert consensus, and a browser application that facilitates task-oriented machine-learning augmented workflows for biosignal recording labeling, monitoring, and review. Stored data may include, but is not limited to, exam profile data 2902, examinee profile data 2904, a data log repository 2906, an exam report repository 2910, one or a plurality of databases for quality comparison 2912, one or a plurality of databases for feature identification 2914, one or a plurality of databases for pattern identification 2916 and one or a plurality of databases for patient stratification and population analysis 2918, or other stored data types or categories. The statistical model architecture and parameter repository 2920 stores architectures, parameters and any other variables necessary to implement a machine learning or statistical model to identify features, patterns, patient stratification or other clinically relevant read-outs and/or other outputs. Said components of the storage layer may be stored in one or a plurality of distributed machine-readable media connected by a communications network.
In some embodiments, the software architecture described in
In said exemplary embodiments where the receiver CPU and display devices are integrated into the same device, the Bluetooth Low Energy (BLE) transmitter module 222 communicates data 312 to the receiving device computer processing unit 106 or 134, which implements the signal processing, capture and storage engine 240. Although BLE is used in these embodiments, other communication protocols may be substituted.
As previously described in
One or a plurality of analysis results generated by said data analysis modules working alone or in combination may alternately or simultaneously be communicated to the signals display 342 or to the visual guide element display 344. In an exemplary embodiment of the disclosed technology, the visual guide elements are displayed in conjunction with the underlying signal such that the visual guide elements highlight, annotate, or otherwise identify portions of the underlying data that contribute to the summary results of the analysis. Said visual guide elements may also indicate the confidence, strength of association or other indication of relativity with regards to the resulting assessment. One or a plurality of analysis results generated by said data analysis modules working alone or in combination may alternately or simultaneously be processed to supply configuration instructions 334 to said signal processing, capture and storage engine 240, or subsequently as configuration instructions 310 to the Bluetooth Low Energy transmitter 222 to update the instructions, function, methods, algorithms or other settings implemented by the local processing modules 108 or 130 described in
In the aforementioned exemplary embodiment where the receiver CPU and display devices are integrated into the same device, the device is configured with sufficient local data storage for at least 14 days of continuous recording within the storage layer 290. Alternatively, the receiving and display device is configured to offload portions of the data using communications network(s) 110 to a cloud storage solution to reduce the amount of data that is stored locally. In said embodiment where data is offloaded to a cloud storage solution, the duration of data that may be stored by the system is limited only by the availability of cloud storage. For said local data storage, said cloud data storage, or other form of data storage, clinical exam data are stored using a common standardized format and encryption protocol such that the same data ingestion and interpretation methods may be used to post-process data stored by multiple instances or embodiments of the disclosed system. The receiver and display devices 132 or 102 as disclosed herein are also capable of initiating and maintaining real-time data streams of signals that are collected from the acquisition device which are then communicated to a cloud-based computing and storage platform, either directly over an encrypted connection 110 if external network access is available, or via protected intra-network endpoints that securely relay necessary outgoing/incoming connections. Said device is enabled with encryption algorithms to reconstitute data communicated by the acquisition device and/or to encrypt data that is subsequently communicated to other components of the system, such as a cloud-based remote computing module or data storage solution 118.
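The 14-day local-storage figure implies a concrete capacity budget that can be estimated from the recording parameters. The channel count, sampling rate, and sample width below are assumptions for illustration, not specifications of the disclosed device:

```python
def recording_storage_bytes(n_channels, fs_hz, bytes_per_sample, days):
    """Uncompressed storage needed for continuous multichannel
    recording over the given number of days."""
    seconds = days * 24 * 60 * 60
    return n_channels * fs_hz * bytes_per_sample * seconds

# Assumed: 8-channel reduced montage, 250 Hz sampling, 16-bit samples
total = recording_storage_bytes(n_channels=8, fs_hz=250,
                                bytes_per_sample=2, days=14)
gib = total / 2**30   # about 4.5 GiB before compression
```

Under these assumptions, 14 days of raw recording fits comfortably in a few gigabytes, and the compression codec described for the storage layer would reduce this further.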
Said encryption algorithms are of sufficient quality to prevent any unauthorized third party from intercepting and reconstituting transmitted data into a usable form that would in any way contain personally-identifiable health data or any data on the status or function of any aforementioned components of some embodiments. In the circumstance that a third party attempts to intercept said encrypted data transmission, the system is capable of recognizing such interference, halting data transmission and notifying the examinee, examiner, other operator or system administrator of said interference. Said device is enabled with Wi-Fi, a cellular SIM card, or any other communication device to communicate with said cloud storage solution and for other communication with central data repositories 118 or computing systems 114 as described in
Method 400 may be performed locally on an electronic device such as local processing module 108 depicted in
If a specific exam maneuver is required by the examiner, examinee, or other user at step 424, then at step 434 the system may optionally generate instructions to correct or guide performance of said maneuver. In some embodiments, maneuvers may include but are not limited to cognitive tasks, such as picture naming, visualization, speaking, remembering a number, word, or shape, or tracing a figure by hand. Maneuvers may include physical movements, repositioning, or other maneuvers that may benefit the exam as well. In some embodiments, instructions may include written word or text, while in other embodiments said instructions may include visual guide elements shown on a display device. Other embodiments may include some combination thereof. If a maneuver is required, data is collected at step 422, following optional instruction generation at step 434. If multiple maneuvers are required, step 424, optional step 434, and step 422 may be repeated until no additional maneuvers are required at step 424. If no additional maneuver is required, then the method proceeds to step 430 at which the quality of subsequent data streams is assessed in accordance with exam parameters to determine if the quality of said data is adequate for the desired outcome. Additional detail for step 430 is provided in
In one embodiment, some or all of the analysis occurs online via a cloud system. In another embodiment, the analysis may be performed by remote processing module 112 or computer system(s) 114. In another embodiment some or all of the analysis occurs on a local processing unit (e.g., local processing unit 108 or 130 as detailed in
Following analysis at step 440, the system, exam instructions, examiner, other user(s), analysis result, or data quality may dictate that additional data is required at step 442. If additional data is required, the system proceeds to step 432. Otherwise, if no additional data is required, the system proceeds to step 444 where a report of the exam and its results may be optionally generated. In some embodiments, report generation occurs via report element generation module 2512 described in
Method 420 may be performed locally on an electronic device such as local processing module 108 depicted in
In some embodiments, analyses of electrode signals may include one or more machine learning algorithms known to one skilled in the art. In one embodiment a neural network trained to recognize common EEG artifacts may be employed. A neural network may recognize very low-frequency oscillations consistent with sweat artifact, small spike-like discharges associated with rapid eye movements, glossokinetic potential from tongue movement, spiky waveforms associated with talking, chewing artifact, movement artifact, pulse artifact, electrode “pop” artifact, or a variety of other EEG artifacts known to one skilled in the art. In some embodiments, a machine-learning algorithm could be designed to function as an integrated classifier and grade EEG signals as “adequate” or “inadequate” based on features learned from a training database composed of EEG signals previously assessed by individuals skilled in EEG interpretation. Said methods could similarly be applied to data streams from one or a plurality of sensors, such as in step 4312, including but not limited to accelerometers, oximeters, temperature probes, gyroscopes or other sensors. In another embodiment, said methods are used to create mixed models that generate outputs based on a plurality of sensor inputs or clinical metadata such as lab values, demographics, radiology reports, pathology reports, clinical assessments or other clinical data.
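The integrated "adequate"/"inadequate" grading described above can be sketched with a deliberately simple stand-in for the learned model: extract a few per-recording features and classify against centroids fit on expert-graded training recordings. The feature choices, the nearest-centroid rule, and all parameter values are illustrative assumptions; a deployed system would more plausibly use a trained neural network:

```python
import numpy as np

def signal_features(x):
    """Crude per-recording features: amplitude spread and
    sample-to-sample roughness (a rough artifact proxy)."""
    return np.array([np.std(x), np.mean(np.abs(np.diff(x)))])

class AdequacyClassifier:
    """Nearest-centroid grader fit on recordings previously assessed
    by individuals skilled in EEG interpretation."""

    def fit(self, recordings, labels):
        feats = np.array([signal_features(r) for r in recordings])
        self.mu = feats.mean(axis=0)
        self.sd = feats.std(axis=0) + 1e-12
        z = (feats - self.mu) / self.sd
        labels = np.asarray(labels)
        self.c_ok = z[labels == 1].mean(axis=0)    # centroid: adequate
        self.c_bad = z[labels == 0].mean(axis=0)   # centroid: inadequate
        return self

    def grade(self, x):
        z = (signal_features(x) - self.mu) / self.sd
        ok = np.linalg.norm(z - self.c_ok) < np.linalg.norm(z - self.c_bad)
        return "adequate" if ok else "inadequate"

# Synthetic training database standing in for expert-graded recordings
rng = np.random.default_rng(1)
clean = [rng.normal(0, 1, 500) for _ in range(10)]   # graded adequate
noisy = [rng.normal(0, 8, 500) for _ in range(10)]   # graded inadequate
clf = AdequacyClassifier().fit(clean + noisy, [1] * 10 + [0] * 10)
```

The same pattern extends to the mixed models mentioned above by concatenating features from accelerometer, oximeter, or other sensor streams into the feature vector.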
In one embodiment, recognition of these artifacts may be done in conjunction with an EEG safety read examination, particularly when artifacts can simulate seizure-like activity. Recognition of false seizure-like activity is particularly important when testing the safety of a new therapeutic, where false safety alarms may result in unnecessarily halting a clinical trial. At step 4306, variability and noise in the signal are analyzed. Variability and noise may be present if an electrode connection is loose, one or more electrodes are faulty, one or more electrodes are making poor surface contact with the skin, or for a plurality of other reasons. Analyzing variability and noise helps ensure the fidelity of the signal. Other common calculations known to one skilled in the art may be performed at 4306, such as calculating a signal-to-noise ratio. At step 4308, signal drift over time is analyzed. Signal drift may occur if the gel used at the interface of the electrode and the skin is not stable, or if there is poor skin contact. At step 4310 the system may search for common or expected patterns and/or waveforms for a given exam. In one embodiment, the system may search for waveforms expected in conjunction with a maneuver, specific patterns based on the given exam, or other commonly studied waveforms such as delta, theta, alpha, sigma, and beta waves. At step 4312 the system may optionally analyze other sensor data including but not limited to accelerometer, oximeter, temperature probe, gyroscope or other sensor data. In one embodiment, gyroscope and/or accelerometer data may be used in conjunction with EEG data to increase confidence when identifying movement artifact. In some embodiments, a moisture sensor may be used to increase confidence in sweat artifact. At step 4314, the system uses the analyses performed in previous steps to generate a quality score, metric, or decision.
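The signal-to-noise calculation of step 4306 and the drift analysis of step 4308 are standard computations that might look like the following. The decibel formulation and the least-squares slope as a drift proxy are conventional choices, offered here as illustrative assumptions:

```python
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels from power estimates."""
    return 10 * np.log10(np.mean(signal**2) / np.mean(noise**2))

def drift_per_second(x, fs):
    """Linear drift estimate (signal units per second) via the
    least-squares slope, a simple proxy for electrode-gel or
    skin-contact drift."""
    t = np.arange(x.size) / fs
    slope, _ = np.polyfit(t, x, 1)
    return slope

# 10 Hz oscillation plus a 0.5 units/second linear drift
fs = 250
t = np.arange(fs * 10) / fs
x = np.sin(2 * np.pi * 10 * t) + 0.5 * t
drift = drift_per_second(x, fs)
```

A quality module could compare the recovered drift rate and SNR against exam-specific thresholds when producing the step 4314 quality score.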
In some embodiments, data may be compared to predetermined thresholds or real-time calculated thresholds that indicate sufficient data quality to enable interpretation or other data processing, storage, or analysis tasks. In some embodiments, said evaluation is performed with a real-time updated estimator of data quality and an associated confidence estimator. In another embodiment, data pertaining to the quality of exam data or to the analysis of features, patterns, signatures or patient stratification interpretation may be communicated via a communication network to experts skilled in the art, such as neurologists, for manual quality interpretation or for verification of the accuracy of automated interpretation or results. In said embodiment, the automated quality interpretation and results could alternatively be used to calibrate or score the accuracy of expert interpreters, or to train other individuals to perform quality interpretation. At 4316 the method ends and the system returns to step 440 or step 442 described in
Method 440 may be performed locally on an electronic device such as local processing module 108 depicted in
In step 4406, the system performs signal processing and data parsing methods commonly known to one skilled in the art. In an exemplary embodiment, step 4406 includes parsing data via one or more data transforms. An example of such a data transform includes a Fourier transform in which the signal is decomposed into frequency components for further analysis. Said decomposition and analysis may be performed by the signal processing, capture and storage engine 240 described as a component of the system's processing environment 210. In said exemplary embodiment or in another exemplary embodiment of step 4406, the system may parse data into discretized components and/or convert one data file type to another. The system may perform pre-processing techniques to filter, sort, denoise, clean or organize the data for further analysis and display. At 4408, the system may perform data normalization. Data normalization is commonly employed when assessing multiple data inputs with varying scales and/or absolute values. In one such embodiment, the system may normalize various frequency bands, pulse oximeter data, accelerometer data, and/or other data. At 4410, the system optionally performs feature extraction. In some embodiments, ML algorithms could be incorporated for various functions such as, for example, spike burden, population-level interpretation at steps 4222 and 4224, or other aspects of functionality. In an exemplary embodiment, the system may calculate the derivative of a given signal, reduce the dimensionality of a feature-rich dataset to decrease future processing requirements, and/or employ other common feature extraction techniques known to one skilled in the art. At 4420, the system optionally defines a baseline and/or "normal" comparator for further analysis of the index patient or patient population. Details of sub-method 4420 are provided in
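The Fourier decomposition of step 4406 and the normalization of step 4408 can be sketched together: compute power in conventional EEG frequency bands via the FFT, then express each band as a fraction of total power so recordings with different amplitude scales are comparable. Band edges follow common convention but are assumptions here:

```python
import numpy as np

# Conventional EEG band edges in Hz (illustrative; conventions vary)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "sigma": (12, 16), "beta": (13, 30)}

def band_powers(x, fs):
    """Decompose a signal into frequency components with the FFT
    and sum spectral power inside each band."""
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

def normalize(powers):
    """Relative band power (fractions summing to 1)."""
    total = sum(powers.values()) or 1.0
    return {k: v / total for k, v in powers.items()}

# A pure 10 Hz tone concentrates its power in the alpha band
fs = 250
t = np.arange(fs * 4) / fs
x = np.sin(2 * np.pi * 10 * t)
rel = normalize(band_powers(x, fs))
```

The same normalization pattern applies to the heterogeneous inputs mentioned at step 4408 (oximeter, accelerometer, and other streams).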
At 4424, data are ingested for analysis by one or multiple machine learning or statistical models implemented by the data analysis engine 250 described as a component of the system's processing environment 210. In one embodiment, said one or multiple machine learning or statistical models are trained using method 460 from
Once the system has identified one or more clinical features, the system may optionally generate visual guide elements, annotations, and/or notes showing and/or describing said features at step 4426. Several example embodiments of visual guide elements and/or annotations are shown in
Each human expert may additionally be provided with the interpretations of the other human experts in order to consider alternate interpretations. In another embodiment, the system may enable digital communication mechanisms such as peer-to-peer chat, audio voice call technology, video call technology or asynchronous written communication to enable the multiple human experts to provide the rationale for their interpretation such that the group of multiple experts may discuss the data and determine an adjudicated final consensus interpretation. In other embodiments, an automated consensus measure may be performed, and in some implementations, this measure may evaluate a consensus threshold (e.g., similar to the confidence threshold). In some embodiments, contacting of human experts may be based on the threshold and/or other measures (e.g., cost).
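The automated consensus measure over expert votes might be sketched as a modal-vote rule with an explicit consensus threshold; interpretations falling short of the threshold are flagged for adjudication among the experts. The threshold value and the dictionary result shape are illustrative assumptions:

```python
from collections import Counter

def consensus(interpretations, threshold=0.75):
    """Accept the modal expert interpretation if its vote share meets
    the consensus threshold; otherwise flag for adjudication (e.g.,
    discussion among the experts via the communication mechanisms
    described above)."""
    votes = Counter(interpretations)
    label, count = votes.most_common(1)[0]
    share = count / len(interpretations)
    if share >= threshold:
        return {"status": "consensus", "label": label, "share": share}
    return {"status": "adjudicate", "label": None, "share": share}

agreed = consensus(["spike", "spike", "spike", "artifact"])    # 3/4 share
split = consensus(["spike", "artifact", "spike", "artifact"])  # 2/4 share
```

The returned share could also feed the cost-based decision, mentioned above, about whether to contact additional human experts.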
In another embodiment of the disclosed system and method, human experts are provided with digital tools to create and share virtual guide elements to identify components of the underlying data that contributed to their interpretation and communicate these findings to the one or more additional human experts. Said virtual guide elements may appear as static or dynamic and may be integrated with said audio, video or written communication systems. In each of the aforementioned embodiments in which human interpretation and/or user input is required, the system would implement step 4438 following implementation of step 4430 and determination of whether user input is required. In any embodiment in which human interpretation is not required, the system would proceed to step 4450 (i.e., analysis without user input, fully autonomous).
If the system has determined that user input is required and proceeds to step 4430, then data is subsequently displayed to one or more users. Data may include but is not limited to EEG data, results of previous analyses, other sensor data, other user interpretations and/or annotations, and/or guide elements and/or annotations generated in previous steps by the system claimed herein. Data may be displayed via display device 104 and/or personal display device 136 as shown in
Following step 4404, if the system determines that analysis without algorithmic assistance is desired, then at step 4442 data is subsequently displayed to one or more users. Optionally, at or before step 4442, a data presentation step may be performed that presents data for functions such as unit conversions, friendly names, and data visualization related transformations (e.g., gain, band filtering, log scale). Data may be displayed via display device 104 and/or personal display device 136 as shown in
At step 4444, one or more users may optionally highlight features of interest in the data using visual guide elements and/or annotations. In one exemplary embodiment, said user(s) may employ guide elements similar to those displayed in
At step 4450 the system optionally generates notification(s) to alert an examiner, examinee, and/or other operator of specific features, diagnoses, and/or anomalies. In one exemplary embodiment, the system may identify an epileptiform spike and subsequently generate an alert. This finding may be included in report generation step 444 in method 400; however, an alert such as an audible noise, visual alert, or otherwise may be employed to more promptly alert an examiner, examinee, or other operator to said finding in order to facilitate appropriate action such as providing a therapeutic intervention or cessation of a causative intervention. At step 4452, the system returns to step 442.
Method 450 starts at 4502. In one embodiment a computer-readable storage medium stores a pre-defined baseline and/or “normal” database. Said database may be downloaded or accessed in step 4504 for further analysis. Alternately, in step 4506, the system may access previously collected EEG data and/or annotated labels from a computer-readable storage medium. In step 4508, the system may select a subset of the data relevant to characteristics of the patient data currently being analyzed. Subsequently in step 4510, the data are ingested by the machine learning or statistical algorithm with or without said annotations. In said exemplary embodiment, user input may be collected in step 404 or 406 of method 400 or elsewhere. In another embodiment, the baseline and/or “normal” comparator and/or comparator based on previously collected EEG is a statistical summary such as a high-dimensional vector, matrix or array representation of underlying comparator data.
In step 4512, the system may optionally update the machine learning or statistical algorithm or model with a new definition of baseline and/or “normal”. In said exemplary embodiment, said operation involves updating parameter weights in a neural net, updating data clusters or performing analyses to determine characteristics of the test population versus the comparator population established in step 4504 or 4508. In step 4514, the system returns to step 4420.
In step 4610, the statistical model is trained on the plurality of EEG data and the one or more annotations or labels. EEG data may be segmented into smaller segments (e.g., 1 second) during training. For example, models may be trained to interpret features such as those that create a burst suppression severity index, model changes in spike burden over time, stratify patients based on electrophysiologic subgroup, or other clinically-relevant output. Further, predictive models may be trained to generate an alert output when a change in the burst suppression severity index (for example) is likely to increase or when a subject is likely to enter a certain sleep stage, predict progression of cognitive decline over time, predict correlations with protein-based biomarkers, predict correlations with imaging findings, predict response to one or more therapies, predict adverse effects from one or more therapies or other clinically-relevant predictions or prognoses.
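The segmentation into fixed-length training examples mentioned above is a routine preparation step and might be sketched as follows; the sampling rate and segment length are assumptions for illustration:

```python
import numpy as np

def segment_eeg(x, fs, seconds=1.0):
    """Split a continuous single-channel recording into fixed-length
    training segments (e.g., 1 second each), dropping any trailing
    partial segment."""
    seg_len = int(fs * seconds)
    n = x.size // seg_len
    return x[:n * seg_len].reshape(n, seg_len)

fs = 250
recording = np.arange(fs * 10 + 37, dtype=float)  # 10 s plus a partial tail
segments = segment_eeg(recording, fs)             # shape (10, 250)
```

Each row then serves as one training example, optionally paired with the per-segment annotations or labels retrieved in step 4606.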
Models may be trained using a plurality of EEG data from step 4604 in a supervised or unsupervised manner depending on the zero, one or more annotations or labels that are retrieved from step 4606. Machine learning model(s) may be prespecified or selected by one or more users for a specific analysis. In one embodiment, said features may be identified autonomously by an algorithm stored in a computer-readable format or identified de novo for the specified analysis. In an exemplary embodiment, said machine learning models and/or statistical models may identify epileptiform spikes indicative of seizure-like activity, generalized slowing, atypical triphasic waves, and/or multifocal sharp waves indicative of neurotoxicity, 60 Hz artifact, interictal epileptiform spikes, burst suppression, small sharp spikes and/or a plurality of other EEG features indicative of normal and/or pathologic disease states known to one skilled in the art of EEG interpretation. Some number of said EEG features may be referred to by synonyms known to one skilled in the art, for example small sharp spikes may alternately be referred to as benign sporadic sleep spikes (BSSS) or benign epileptiform transient of sleep (BETS). Said machine learning models and/or statistical models may also identify EEG features or signatures that may not be recognized by an expert skilled in the art of EEG interpretation, but may be identified to be predictive of, correlate with or are otherwise related to prognosis, therapeutic response, adverse events or other clinically-relevant observations.
The statistical or machine learning model trained using method 460 or some other method may be employed in a clinical trial or general clinical setting according to seven exemplary embodiments: 1) Identify patients with Alzheimer's disease and/or discriminate between patients with Alzheimer's disease and other neurodegenerative diseases, 2) identify or predict patient subgroups with regard to neurophysiology, likelihood of cognitive decline or other clinically-relevant grouping, 3) screen patients prior to enrollment for inclusion or exclusion criteria, 4) identify or predict correlations with other clinically-relevant biomarkers such as protein-based biomarkers or imaging features, or clinical assessments such as cognitive scoring systems, 5) identify indication for therapeutic intervention, 6) identify signals associated with clinical benefit or predict clinical benefit following exposure to one or more therapies, and/or 7) identify signals associated with adverse events or predict adverse events following exposure to one or more therapies.
In the first embodiment of said group of exemplary embodiments, wherein the system and method is implemented for the purpose of identifying patients with Alzheimer's disease and/or discriminating between patients with Alzheimer's disease and other neurodegenerative diseases, said statistical or machine learning model trained in method 460 and implemented in method 470 is configured to identify EEG features, signatures or patient stratifications that are associated with the relevant disease state. For example, in one embodiment said statistical or machine learning model is trained on sleep EEG such that differences in sleep architecture (e.g., the duration of time spent in relevant REM, N1 or N2 sleep stages, or the frequency or pattern of changes between stages) may be used to distinguish between Alzheimer's disease, frontotemporal dementia and other related neurodegenerative diseases.
In the second embodiment of said group of exemplary embodiments, wherein the system and method is implemented for the purpose of identifying or predicting patient subgroups, said statistical or machine learning model trained in method 460 and implemented in method 470 is configured to identify EEG signatures that are statistically correlated with or predictive of neurophysiologic sub-groups, likelihood of cognitive decline or other clinically-relevant grouping. Said subgroups may include, but are not limited to, patients with more rapid clinical or cognitive decline, patients with slower clinical or cognitive decline, patients more or less likely to exhibit or develop clinical seizures, patients more or less likely to exhibit or develop behavioral disturbances and/or patients more or less likely to develop sleep disturbances.
In the third embodiment of said group of exemplary embodiments, wherein the system and method is implemented for the purpose of screening patients for inclusion or exclusion criteria, said statistical or machine learning model trained in method 460 and implemented in method 470 is configured to identify clinical features, signatures or patient stratifications that are associated with relevant disease features (e.g., clinical or sub-clinical epilepsy). In one embodiment, a disease state may be pre-specified by trial investigators as criteria either for inclusion or exclusion in an investigational study (e.g., to test an investigational drug for the treatment of Alzheimer's patients with known epilepsy, or to exclude Alzheimer's patients with epilepsy who may be at higher risk of adverse effects from a therapy aimed at treating a different condition). Alternatively, the machine learning or statistical model(s) may be trained or otherwise configured to identify patients most likely to benefit from an investigational therapeutic. For example, if a mutation in a given gene locus is known to increase risk of rapid cognitive decline by inducing either gain-of-function or loss-of-function, then the machine learning algorithm may be trained to segment these populations in order to determine likelihood of response to a given therapy aimed at inhibiting or enhancing the activity of the protein encoded by said gene locus. Implementation of one or more machine learning algorithms of this nature in a clinical trial setting would support development of a companion diagnostic that would ultimately identify patients likely to benefit from a given therapeutic intervention as an indication for treatment, as discussed in other embodiments in this section.
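A minimal sketch of the automated inclusion/exclusion screening described above, assuming hypothetical field names (`age`, `epileptiform_on_eeg`) and protocol limits chosen purely for illustration:

```python
def meets_inclusion_criteria(patient, min_age=50, max_age=85,
                             require_epileptiform=True):
    """Hypothetical protocol check combining demographics with an
    automatically derived EEG flag. Field names and age limits are
    illustrative assumptions, not an actual trial protocol."""
    if not (min_age <= patient["age"] <= max_age):
        return False
    if require_epileptiform and not patient["epileptiform_on_eeg"]:
        return False
    return True

# Example screening decisions for two hypothetical candidates.
print(meets_inclusion_criteria({"age": 72, "epileptiform_on_eeg": True}))   # → True
print(meets_inclusion_criteria({"age": 72, "epileptiform_on_eeg": False}))  # → False
```

In practice the EEG flag would itself be the output of the trained model(s) described in method 460, and the criteria would be pre-specified by trial investigators.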
In the fourth embodiment of said group of exemplary embodiments, wherein the system and method is implemented for the purpose of identifying or predicting correlations with other clinically-relevant biomarkers, said statistical or machine learning model trained in method 460 and implemented in method 470 is configured to identify EEG signatures that are statistically correlated with or predictive of other biomarkers such as protein-based biomarkers (for example, amyloid-beta detected in cerebrospinal fluid), imaging features (for example, grey or white matter volume determined by MRI), functional imaging features (for example, fMRI activity during a cognitive task), pathologic features (for example, amyloid-beta or tau deposition noted on examination of brain tissue), and/or clinical cognitive scoring systems (for example, the standardized Montreal Cognitive Assessment). Said predictive EEG biomarkers may be utilized, for example, as surrogate biomarkers to avoid the need for frequent invasive or logistically-burdensome exams, to predict future findings at an earlier timepoint, or to improve predictive accuracy for an outcome of interest in combination with one or more of said biomarkers.
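The correlation step described above can be illustrated with a plain Pearson coefficient between a per-patient EEG metric and a cognitive score. The slowing-index and score values below are invented for illustration and are not study data:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-patient values: an EEG slowing index versus a
# cognitive screening score (illustrative numbers only).
slowing_index = [0.2, 0.4, 0.5, 0.7, 0.9]
cognitive_score = [28, 26, 24, 20, 16]
r = pearson_r(slowing_index, cognitive_score)
print(round(r, 3))  # strongly negative: more slowing, lower score
```

A real analysis would use many more subjects, adjust for covariates, and quantify significance; this fragment shows only the basic statistical relationship the text refers to.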
In the fifth embodiment of said group of exemplary embodiments, wherein the system and method is implemented for the purpose of identifying indications for therapy, said statistical or machine learning model trained in method 460 and implemented in method 470 is configured to identify neurophysiologic features that may be used as an indication for initiation, switching or discontinuing therapy and/or may be used in the creation of an overall plan of care. Said indication may be predicated on inclusion or exclusion in a given sub-group, or may be determined on an individual basis. Said indication may be for the initiation of therapy in the general sense, may specify a given therapeutic class defined by mechanism of action or other grouping, may specify a specific therapeutic agent, or may specify a combination therapy defined either by single or multiple therapeutic classes or single or multiple therapeutic agents.
In the sixth embodiment of said group of exemplary embodiments, wherein the system and method is implemented for the purpose of identifying efficacy of a therapeutic intervention, said statistical or machine learning model trained in method 460 and implemented in method 470 is configured to identify clinical features, signatures, patient stratification or other signals that are associated with a response or lack of response to the intervention. Said determination may take into account differences between an individual's baseline and treated states, differences between treated and untreated individuals, baseline control data, a normal-comparative database or other comparator. Said assessment of therapeutic response may be performed in the context of a clinical trial or in the context of broader clinical practice.
In the seventh embodiment of said group of exemplary embodiments, wherein the system and method is implemented for the purpose of identifying adverse events, said statistical or machine learning model trained in method 460 and implemented in method 470 is configured to identify clinical features, signatures or other signals that are associated with an adverse effect of a therapeutic intervention, an adverse effect of the patient's underlying disease condition, an adverse effect of some other medication or exposure, or any other adverse effect. Said adverse events could represent a risk to the patient or a reportable event identified in a clinical trial's protocol or safety monitoring plan, identified in a healthcare institutional safety program, or identified in a national or international safety program. In an exemplary embodiment, one or more machine learning algorithms and/or statistical models may be configured to identify generalized periodic discharges, generalized rhythmic delta activity, slowing, and/or generalized spike-and-wave discharges, any of which may indicate drug-induced neurotoxicity.
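One way such monitoring could work is to compare pre- and post-exposure event rates; the sketch below does exactly that. The counts, durations and ratio threshold are illustrative assumptions, not a validated safety rule:

```python
def flag_safety_signal(pre_events, post_events, pre_hours, post_hours,
                       ratio_threshold=2.0):
    """Return True when the post-exposure event rate (events/hour)
    exceeds the pre-exposure rate by at least ratio_threshold.
    The threshold is a hypothetical placeholder."""
    pre_rate = pre_events / pre_hours
    post_rate = post_events / post_hours
    if pre_rate == 0:
        return post_rate > 0
    return post_rate / pre_rate >= ratio_threshold

# Hypothetical counts of automatically detected discharges in the
# 8 hours before and after a first dose.
print(flag_safety_signal(pre_events=4, post_events=20,
                         pre_hours=8, post_hours=8))  # → True
```

A deployed system would apply statistical tests rather than a fixed ratio and would feed flagged events into the reporting pathways described above.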
The system and method described herein is capable of generating a wide range of virtual guide elements. Said elements may be generated as the output of a statistical model trained on a large volume of EEG data in order to identify segments of data of likely diagnostic importance, or to directly annotate said segment with a diagnostic interpretation. Alternatively, an expert human interpreter may generate a virtual guide element to mark his or her own expert interpretation or annotation such that another operator, another expert, the patient or the original expert may subsequently view the virtual guide element. Virtual guide elements may appear in a wide range of form factors, including but not limited to the exemplary embodiments illustrated in the accompanying figures.
In all of said exemplary embodiments of virtual guide elements, said elements may be colored, textured, sized, or otherwise highlighted or enhanced to make them visually appealing, easy to locate, or to communicate information such as event category. In another exemplary embodiment of the technology, said virtual guide elements are not static, but dynamically respond to the collection of data, addition of new annotations or other user or automated inputs.
In said exemplary embodiment, an expert is presented with a virtual dashboard that enables visualization and navigation of the raw recorded data 602, optionally segmented or organized by the relevant electrode 612 or alternate data stream, such as an electrocardiogram lead 624. The interface displays virtual guide elements, such as automated annotations 604 that are created by statistical models to identify segments of raw data that have a given probability of identifying diagnostically-relevant information. Other virtual tools 606 enable the operator to manually annotate segments of interest to add expert interpretation or to confirm automatically annotated interpretations. The data is viewed with relevant time stamps 608 enabling ease of navigation. In said exemplary embodiment, the interface also displays a consolidated log of events or spans of diagnostic relevance 614, which may include automated annotations, manual annotations or both. Said element in the interface may also display a diagnostic interpretation such as “seizure” 616, which may similarly be generated by automated interpretation, manual interpretation or a combination of automated and manual inputs. An additional interface element 618 summarizes overall statistics from the relevant collected data or any defined subset thereof. In said embodiment, the interface also includes additional tools to define key signal processing parameters 622 including but not limited to: gain, high-pass filter value, low-pass filter value or time window to analyze.
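The gain and filter controls mentioned above can be illustrated with a one-pole low-pass filter. Real EEG software would use higher-order filters (e.g., Butterworth) with separate high-pass and notch stages, so this is only a minimal stand-in with hypothetical parameter values:

```python
from math import pi, sin

def single_pole_lowpass(samples, cutoff_hz, fs_hz, gain=1.0):
    """One-pole low-pass filter with an output gain: a simplified
    stand-in for the gain / filter controls described above."""
    rc = 1.0 / (2 * pi * cutoff_hz)
    dt = 1.0 / fs_hz
    alpha = dt / (rc + dt)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)   # first-order exponential smoothing
        out.append(gain * y)
    return out

fs = 500.0
t = [i / fs for i in range(500)]
mains_noise = [sin(2 * pi * 60 * ti) for ti in t]  # 60 Hz artifact
filtered = single_pole_lowpass(mains_noise, cutoff_hz=5.0, fs_hz=fs)
print(max(abs(v) for v in filtered))  # well below the 1.0 input amplitude
```

Selecting a 5 Hz cutoff here strongly attenuates the 60 Hz component, mirroring how the dashboard's low-pass control 622 would suppress mains artifact in the displayed trace.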
In an exemplary embodiment of the report, element 710 represents a graphical summary of electroencephalographic events that have occurred during the recording period. In said exemplary embodiment, the graphical summary identifies clinically-relevant EEG events that have occurred prior to administration of a therapeutic agent and after administration of said agent such that the reader may interpret the summary effect of said therapeutic agent. Exemplary events include seizure activity, interictal epileptiform discharges, power spectral band slowing, generalized slowing, or other electroencephalographic features or signatures that are interpretable to a specialist or non-specialist clinician. In an exemplary embodiment of the report, element 712 provides additional detail on specific EEG signatures of interest. Element 714 describes quantitative metrics associated with the analysis, such as pre-defined spectral band ranges. Element 716 provides a graphical snapshot with primary EEG data that were used to construct the interpretation. In one embodiment element 716 presents selected examples of EEG data representative of the interpretation. In another embodiment, element 716 presents all examples of EEG data that were used to construct the interpretation.
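The pre-defined spectral band metrics of element 714 can be sketched by summing DFT power over integer-hertz bins of a one-second epoch. The 2 Hz-dominant synthetic epoch and band edges below are illustrative; production code would use an FFT-based estimator such as Welch's method:

```python
from math import sin, cos, pi

def band_power(epoch, f_lo, f_hi):
    """DFT power in [f_lo, f_hi] Hz for a one-second epoch, so that
    bin k corresponds to k Hz. Direct (slow) DFT for illustration."""
    n = len(epoch)
    total = 0.0
    for k in range(f_lo, f_hi + 1):
        re = sum(x * cos(2 * pi * k * i / n) for i, x in enumerate(epoch))
        im = sum(-x * sin(2 * pi * k * i / n) for i, x in enumerate(epoch))
        total += re * re + im * im
    return total

fs = 250
t = [i / fs for i in range(fs)]  # exactly one second of samples
epoch = [sin(2 * pi * 2 * ti) + 0.2 * sin(2 * pi * 10 * ti) for ti in t]
delta = band_power(epoch, 1, 4)    # delta band, 1-4 Hz
alpha = band_power(epoch, 8, 12)   # alpha band, 8-12 Hz
print(delta > alpha)  # → True: the synthetic epoch is slowing-dominant
```

A delta-to-alpha power ratio of this kind is one conventional way to quantify the "power spectral band slowing" the report summarizes.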
In one embodiment of the report, element 718 demonstrates a summary time-based EEG activity profile for the relevant patient, subject or patient group. In one embodiment, said EEG activity profile demonstrates the time relationships of interpretations presented in elements 710, 712 and 716. In another embodiment, the EEG activity profile is a sleep profile. In yet another embodiment, the EEG activity profile is an interactive timeline that the user can manipulate to highlight the time points associated with specific interpretations in relation to other timepoints over the course of the study period.
In said exemplary embodiment of the report, element 720 is a text summary of the findings. Said text may be automatically generated by a statistical or machine learning algorithm. Said text may alternately be written or dictated by a clinician skilled in the art of EEG interpretation consistent with the interpretation delivered in the report. Element 722 provides a signature or alternate affirmation of the authenticity of the report. In an exemplary embodiment, said affirmation is a signature of a clinician who has reviewed and approved the interpretation automatically generated by a statistical or machine learning model. In another embodiment, said affirmation is a signature of a clinician who has generated the report based on said clinician's expert skill in the art of EEG interpretation and interpretation of associated data.
In some embodiments, the report is stored in a computer-readable memory in a static format such as a .pdf document or equivalent digital document. In such embodiments, the report may be converted to a physical document using standard printing technology. In another embodiment, the report may be stored in a computer-readable memory with a dynamic format in which elements may be programmed to be responsive to user input. Said input may be performed using a standard mouse-based graphical user interface where user input may include hovering a cursor, clicking, dragging, typing hotkey commands, typing text, or manipulating controls such as buttons, scroll bars, text fields, zoom tools, selection tools, or tab navigation tools. In another embodiment, said input may alternately be performed using touchscreen-based technology, where user input may include tapping, swiping, pinching, scrolling or other hand gestures.
The system may additionally or alternately be configured to identify correlations between EEG-based neurobiomarkers and other established biomarkers such as MRI imaging-based biomarkers, protein biomarkers such as amyloid-beta in cerebrospinal fluid (CSF) or other biomarkers. In another embodiment, the system is configured to assess one or more of EEG-based neurobiomarkers, cognitive scoring tests, demographic data, protein-based biomarkers, imaging or any other relevant clinical data in order to evaluate for trial eligibility, including assessment of inclusion or exclusion criteria. In another embodiment, the system may be configured and implemented to perform monitoring for adverse safety events in a clinical trial setting (e.g., verifying that initial doses of a drug did not cause seizure-like activity) or in a routine clinical setting. In another embodiment the system may be configured to monitor for efficacy of a given intervention, for example reduction in epileptiform spike burden.
In various embodiments, the system may perform other functions using EEG and/or other types of signals. For instance, the system may perform compliance functions such as user verification, security processes, activity logging features and other compliance features. Further, the system may be used during different use cases in general clinical practice or in the setting of clinical trials. For instance, the system may be used to perform safety reads, provide companion diagnostics, measure clinical features of interest, assess enrollment criteria, or support other clinical uses.
The above-described embodiments can be implemented in any of numerous ways. For example, the embodiments may be implemented using hardware, software or a combination thereof. When implemented in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers. It should be appreciated that any component or collection of components that perform the functions described above can be generically considered as one or more controllers that control the above-discussed functions. The one or more controllers can be implemented in numerous ways, such as with dedicated hardware or with one or more processors programmed using microcode or software to perform the functions recited above.
In some embodiments, a machine learning model, artificial intelligence model or other type of statistical model may be used to determine environmental parameters to be used to process one or more EEG or other signal types. In one implementation, a model may be trained based on EEG signals. Certain outcomes, diagnoses, or other outputs may also be used to train the model and permit the model to predict outcomes based on the input parameters. The model may be part of a computer system used to provide indications to one or more users. Other implementations and systems that take EEG signals as input may be used to predict one or more outcomes.
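As a toy stand-in for such a model (not the system's actual training procedure), a nearest-centroid classifier over hypothetical per-recording features illustrates the train/predict flow; the feature names and values are invented for illustration:

```python
def train_nearest_centroid(features, labels):
    """Compute the mean feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for x, label in zip(features, labels):
        acc = sums.setdefault(label, [0.0] * len(x))
        for j, v in enumerate(x):
            acc[j] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, x):
    """Assign x to the label with the nearest centroid (Euclidean)."""
    def dist2(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], x))
    return min(centroids, key=dist2)

# Toy feature vectors: [theta/alpha power ratio, spikes per hour]
X = [[0.5, 0.1], [0.6, 0.2], [1.8, 3.0], [2.1, 2.5]]
y = ["control", "case", "case", "case"][:0] or ["control", "control", "case", "case"]
model = train_nearest_centroid(X, y)
print(predict(model, [2.0, 2.8]))  # → case
```

Any of the supervised approaches described above (trained on outcomes, diagnoses or other labels) would replace this trivial classifier in a real implementation.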
In this respect, it should be appreciated that one implementation of the embodiments comprises at least one non-transitory computer-readable storage medium (e.g., a computer memory, a portable memory, a compact disk, etc.) encoded with a computer program (i.e., a plurality of instructions), which, when executed on a processor, performs the above-discussed functions of the embodiments. The computer-readable storage medium can be transportable such that the program stored thereon can be loaded onto any computer resource to implement the aspects discussed herein. In addition, it should be appreciated that the reference to a computer program which, when executed, performs the above-discussed functions, is not limited to an application program running on a host computer. Rather, the term computer program is used herein in a generic sense to reference any type of computer code (e.g., software or microcode) that can be employed to program a processor to implement the above-discussed aspects.
Various aspects may be used alone, in combination, or in a variety of arrangements not specifically discussed in the embodiments described in the foregoing and are therefore not limited in their application to the details and arrangement of components set forth in the foregoing description or illustrated in the drawings. For example, aspects described in one embodiment may be combined in any manner with aspects described in other embodiments.
Also, embodiments may be implemented as one or more methods, of which an example has been provided. The acts performed as part of the method(s) may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
Use of ordinal terms such as “first,” “second,” “third,” etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed. Such terms are used merely as labels to distinguish one claim element having a certain name from another element having the same name (but for use of the ordinal term).
The phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” “having,” “containing,” “involving,” and variations thereof, is meant to encompass the items listed thereafter and additional items.
Having described several embodiments of the invention in detail, various modifications and improvements will readily occur to those skilled in the art. Such modifications and improvements are intended to be within the spirit and scope of the invention. Accordingly, the foregoing description is by way of example only, and is not intended as limiting.
The terminology utilized herein is intended to describe specific embodiments only and is in no way intended to be limiting of the invention. The term “and/or”, as used herein, includes any and all combinations of one or more of the associated listed items. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well as the singular forms unless the context clearly indicates otherwise. As used herein, the terms “comprises” and/or “comprising” are intended to specify the presence of features, elements, or steps, but do not preclude the presence or addition of one or more additional features, elements, and/or steps thereof. The term “device” as used herein should be interpreted as a broad term defined as a thing adapted for a particular purpose. This includes but is not limited to medical examination tools such as EEG electrodes, EKG electrodes, oximetry probes, temperature probes, but also other things a user may utilize in performing an exam such as a user's own hands, ears, nose, or eyes. The terms “electroencephalography electrode” or “EEG electrode” should be understood to mean an electrode capable of detecting electrical activity, including depolarization and repolarization of neuronal tissue spatially arranged in the central nervous system, including various regions of the brain. Such electrical activity detected by an EEG electrode is generally thought by those skilled in the art to represent the summed electrical activity of a plurality of neurons acting synchronously or asynchronously individually, in groups, or in networks that are spatially arranged in the area detected by the electrode.
Unless defined otherwise, all terms used herein have the same meaning as commonly understood by one having ordinary skill in the art to which the invention relates. Furthermore, it will be understood that terms should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure and will not be interpreted in an overly formal or idealized sense unless expressly defined herein.
To avoid confusion, this description will refrain from repeating every possible combination of features, elements, and/or steps. However, it will be understood that any and all of the listed features, elements and/or steps each has individual benefit and can also be used in conjunction with one or more, or in some cases all, of the other disclosed items, particularly in the claims. Thus, the specification and claims should be read with the understanding that any of these combinations are entirely within the scope of the invention and the claims.
This Application is a Non-Provisional of Provisional (35 USC 119(e)) of U.S. Application Ser. No. 63/392,333, filed Jul. 26, 2022, entitled “SYSTEM AND A METHOD FOR DETECTING AND QUANTIFYING ELECTROENCEPHALOGRAPHIC BIOMARKERS IN ALZHEIMER'S DISEASE” and this Application is a Continuation-in-part of U.S. application Ser. No. 17/863,803, filed Jul. 13, 2022, entitled “SYSTEMS AND METHODS FOR RAPID NEUROLOGICAL ASSESSMENT OF CLINICAL TRIAL PATIENTS”, which is a Non-Provisional of Provisional (35 USC 119(e)) of U.S. Application Ser. No. 63/221,746, filed Jul. 14, 2021, entitled “SYSTEMS AND METHODS FOR RAPID NEUROLOGICAL ASSESSMENT OF CLINICAL TRIAL PATIENTS”, each of these applications is herein incorporated by reference in its entirety.
Provisional Applications:

| Number | Date | Country |
| --- | --- | --- |
| 63/392,333 | Jul. 2022 | US |
| 63/221,746 | Jul. 2021 | US |

Parent Case Data:

| Relation | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 17/863,803 | Jul. 2022 | US |
| Child | 18/358,333 | | US |