Any and all applications for which a foreign or domestic priority claim is identified in the Application Data Sheet as filed with the present application are hereby incorporated by reference under 37 CFR 1.57.
The present disclosure relates to the field of patient monitoring. In some examples, the disclosure relates to monitoring the depth of consciousness of a patient under anesthetic sedation.
Sedation indicators based on a single quantitative electroencephalogram (QEEG) feature have been criticized for limited performance as estimates of a patient's sedation level. Thus, an improved way to estimate sedation level may be desirable. There is a need for robust sedation level monitoring systems that accurately track sedation levels across drug classes, sexes, and age groups.
The present disclosure describes systems and methods for estimating sedation level of a patient using machine learning. For example, the integration of multiple quantitative electroencephalogram (QEEG) features into a single sedation level estimation system 200 using machine learning could result in a significant improvement in the predictability of the levels of sedation, independent of the sedative drug used.
In one example, 102 subjects were studied: 36 subjects given propofol (N=36), 36 given sevoflurane (N=36), and 30 given dexmedetomidine (N=30). Subject sedation level was assessed using the Modified Observer's Assessment of Alertness/Sedation (MOAA/S) score. Forty-four QEEG features estimated from the EEG data were used in a logistic regression based automated system to predict the level of sedation. The elastic-net regularization method was used for feature selection and model training. Evaluation was performed using a leave-one-out cross validation methodology. The area under the receiver operator characteristic curve (AUC) and the Spearman rank correlation coefficient (ρ) were used to assess the performance of the logistic regression model. The following performances were obtained when the system was trained and tested in drug-dependent mode to distinguish between awake and sedated states (mean AUC±SD): propofol—0.97 (0.03), sevoflurane—0.74 (0.25), and dexmedetomidine—0.77 (0.10). The drug-independent system resulted in a mean AUC=0.83 (0.17) to discriminate between awake and sedated states, and a mean Spearman rank correlation ρ=0.54 (0.21) to predict continuous levels of sedation.
The present disclosure advantageously allows for the incorporation of large numbers of QEEG features and machine learning into the next-generation monitors of sedation level. Different QEEG features may be selected for different sedation drugs, such as propofol, sevoflurane and dexmedetomidine groups. However, the sedation level estimation system can maintain a high performance for detecting level of sedation, independent of the drug used.
In some embodiments, disclosed herein is a method for generating a sedation level estimate. The method can include receiving an electroencephalography (EEG) signal from a sensor electrode attached to the patient. The EEG can include a plurality of channels. The method can further include segmenting the EEG signal into smaller epochs for each channel. The method can also include extracting features of the EEG signal in each epoch. The method can further include determining a median of features among the plurality of channels for each epoch. In some instances, the method can include determining, by a classifier, a probabilistic estimate of a patient sedation. The method can also include generating, using a determined correlation, a sedation level estimate, the sedation level estimate comprising a continuous sedation score. In some instances, the method can include displaying an indication of the sedation level estimate. The method can be performed by one or more hardware processors.
In some instances of the preceding method, the epoch can be 4 seconds. In some instances, the features include quantitative electroencephalogram (QEEG) features. In some instances, extracting features includes extracting at least 44 QEEG features. In some instances, extracting features includes extracting at least some of the 44 QEEG features. In some instances, the classifier includes a binary classifier trained by a machine learning model. Further, in some instances, the binary classifier is trained using awake and sedated epoch data, and the awake and sedated epoch data can include a plurality of epochs having sedation scores. The sedation scores can include a score on a scale of 0 to 5. The sedation scores can also include a score between 0 and 100. The sedation scores can also include MOAA/S scores. Furthermore, the determined correlation can include a correlation between the probabilistic estimate of the patient sedation and a sedation score. In some instances, the determined correlation includes a Spearman rank correlation.
In some embodiments, disclosed herein is a system for generating a sedation level estimate. The system can include one or more hardware processors. The one or more hardware processors can receive an electroencephalography (EEG) signal from a sensor electrode attached to the patient. The EEG can include a plurality of channels. The one or more hardware processors can segment the EEG signal into smaller epochs for each channel. The one or more hardware processors can extract features of the EEG signal in each epoch. The one or more hardware processors can determine a median of features among the plurality of channels for each epoch. In some instances, the one or more hardware processors can determine, by a classifier, a probabilistic estimate of a patient sedation. The one or more hardware processors can also generate, using the determined correlation, a sedation level estimate, the sedation level estimate including a continuous sedation score. In some instances, the one or more hardware processors can cause a display to display an indication of the sedation level estimate.
In some instances of the preceding system, the epoch can be 4 seconds. In some instances, the features include quantitative electroencephalogram (QEEG) features. In some instances, extracting features includes extracting at least 44 QEEG features. In some instances, extracting features includes extracting at least some of the 44 QEEG features. In some instances, the classifier includes a binary classifier trained by a machine learning model. Further, in some instances, the binary classifier is trained using awake and sedated epoch data, and the awake and sedated epoch data can include a plurality of epochs having sedation scores. The sedation scores can include a score on a scale of 0 to 5. The sedation scores can also include a score between 0 and 100. The sedation scores can also include MOAA/S scores. Furthermore, the determined correlation can include a correlation between the probabilistic estimate of the patient sedation and a sedation score. In some instances, the determined correlation includes a Spearman rank correlation.
In some embodiments, disclosed herein is a method of selecting a subset of EEG features for use in an electronic determination of sedation state across a plurality of drugs. The method can include receiving EEG signal data from an EEG sensor for a plurality of administered drugs. The method can also include associating a human determined sedation score corresponding to a patient's sedation state with the received EEG signal data. The method can further include selecting EEG signals whose associated human determined sedation scores have a high degree of confidence. The method can also include extracting a plurality of features from the selected EEG signals. The method can further include training the plurality of features with the corresponding human determined sedation scores. In some instances, the method can include identifying a set of features with a high degree of correlation based on the training.
In some embodiments, disclosed herein is a system of selecting a subset of EEG features for use in an electronic determination of sedation state across a plurality of drugs. The system can include one or more hardware processors. The one or more hardware processors can receive EEG signal data from an EEG sensor for a plurality of administered drugs. The one or more hardware processors can associate a human determined sedation score corresponding to a patient's sedation state with the received EEG signal data. The one or more hardware processors can select EEG signals whose associated human determined sedation scores have a high degree of confidence. The one or more hardware processors can extract a plurality of features from the selected EEG signals. The one or more hardware processors can further train the plurality of features with the corresponding human determined sedation scores. In some instances, the one or more hardware processors can identify a set of features with a high degree of correlation based on the training.
In some embodiments, disclosed herein is a method of determining sedation state of a patient. The method can include receiving EEG signal data from an EEG sensor. The method can further include extracting features from the received EEG signal data. The method can also include applying a non-linear machine learning model to the received EEG signal data. The method can further include determining the sedation state based on the application of the non-linear machine learning model.
The preceding method can include any combination of the following features: wherein the non-linear machine learning model includes an ensemble tree with bagging model; wherein the non-linear machine learning model includes a random forest model; wherein the non-linear machine learning model includes a support vector machine with Gaussian kernel model; wherein the machine learning model comprises elastic net logistic regression; wherein the features include at least power in the alpha band and power in the beta band; wherein the features include at least BSR, standard deviation of FM, SVDE, and FD.
In some embodiments, disclosed herein is a system of determining sedation state of a patient. The system can include one or more hardware processors. The one or more hardware processors can receive EEG signal data from an EEG sensor. The one or more hardware processors can extract features from the received EEG signal data. The one or more hardware processors can apply a non-linear machine learning model to the received EEG signal data. The one or more hardware processors can determine the sedation state based on the application of the non-linear machine learning model.
The preceding system can include any combination of the following features: wherein the non-linear machine learning model includes an ensemble tree with bagging model; wherein the non-linear machine learning model includes a random forest model; wherein the non-linear machine learning model includes a support vector machine with Gaussian kernel model; wherein the machine learning model comprises elastic net logistic regression; wherein the features include at least power in the alpha band and power in the beta band; wherein the features include at least BSR, standard deviation of FM, SVDE, and FD.
Optimal management of the level of sedation becomes increasingly important during surgical procedures to ensure minimal side effects and rapid recovery of the patient. Current practice for monitoring the sedation state during anesthesia relies mainly on behavioral assessments of the patient's response to a verbal and/or tactile stimulus. However, these behavioral assessments can only be performed intermittently by the anesthesiologist. Moreover, it may be difficult to quantify sedative effect once all visible responsiveness of the patient to verbal, tactile or noxious stimuli has disappeared. Without the availability of continuous information on the sedation level, anesthetic drug administration can result in over- or under-dosage, leading to a variety of complications. For the past few decades, developing electroencephalogram (EEG) based sedation level monitoring techniques has been an active area of research, and many such techniques have been developed. However, their performance is limited due to drug specificity and inter- (and intra-) subject variability. Neurophysiological distinctions between sedation drugs, together with age- and sex-dependent EEG changes, highlight the need for more robust techniques to monitor sedation levels or a patient sedation index.
Anesthetic-induced oscillatory dynamics (e.g., time-varying amplitude and frequency characteristics) are readily observed on the EEG, and can be used to monitor patients receiving general anesthesia in real time. Each anesthetic drug class induces drug-specific oscillatory dynamics that can be related to circuit level mechanisms of anesthetic actions. Further, anesthetic induced oscillatory dynamics change with age. Thus, an EEG-based sedation level monitoring system should be robust to drug- and/or age-specific variations.
Different sedatives produce specific EEG signatures. An important design factor in any sedation monitoring algorithms is the choice of relevant features to extract from the EEG signal. The present disclosure relates to a machine learning based drug-independent sedation level monitoring system and sedation level estimate using a large set of QEEG features. In some examples, several features derived from the frontal EEG, commonly reported for EEG and sedation level analysis, can be used as input to a logistic regression for automatically predicting a patient's sedation level. Furthermore, the systems and methods described herein can enable determination of a specific set of features from the larger set that are suitable for particular conditions—including patient characteristics and drug characteristics. For example, the systems and methods described herein enable feature selection to be used in determination of sedation for a particular group of drugs. Furthermore, once the features are selected, the systems and methods described herein can store the features and corresponding weights that enable determination of level of sedation.
This disclosure describes systems and methods for a machine learning-based automatic drug independent sedation level estimation system 200. The sedation level estimation system 200 can use a large set of QEEG features to generate a sedation level estimate. Example QEEG features are described below. The set of QEEG features can include features that may capture EEG dynamics in time, frequency and entropy domains. The set of QEEG features may be derived from frontal EEG. The level estimation system 200 can be based on the probability output of the logistic regression evaluated on healthy subjects during anesthesia with propofol (N=36), sevoflurane (N=36) and dexmedetomidine (N=30) infusion. The model can be assessed using AUC as a metric. In one example, the level estimation system 200 resulted in a mean AUC=0.97 (0.03), 0.74 (0.25), 0.77 (0.10) for propofol, sevoflurane and dexmedetomidine, respectively, to discriminate between awake and sedated states in drug dependent mode. In another example, the sedation level estimation system 200, when used in a drug independent mode, resulted in an overall mean AUC=0.83 (0.17). Thus, by pooling the dataset from multiple anesthetic drugs, it is possible to obtain a reliable sedation level estimation system 200 with the help of machine learning algorithms.
The systems and methods disclosed take a multidimensional approach using machine learning techniques and a large set of QEEG features to predict levels of sedation. The performance of the sedation level estimation system 200 can depend on the type of anesthetic drug used for training the classifier model. For example, different features can be selected by the prediction system for propofol, sevoflurane and dexmedetomidine. This may be due to different anesthetics targeting different molecules and engaging different neural circuit mechanisms, which in turn relate to different drug-specific EEG signatures.
An ideal sedation level estimate should be easy to interpret and should not be influenced by the type of anesthetic used. With the help of a large set of QEEG features and machine learning algorithms, it is possible to develop a drug independent sedation level estimation system 200. To implement the disclosed system in clinical settings, features selected by an elastic-net (EN) algorithm on the training set can help predict sedation levels in a new subject. For a new patient, for each 4-second incoming EEG segment, the system may estimate only those features selected by the EN algorithm and input them into the optimal model to predict the probability of being in a sedated state.
In some examples, the performance of the sedation level estimation system 200 was lower with dexmedetomidine sedation when compared to propofol and sevoflurane. One possible reason is the difference in the mechanism of dexmedetomidine-induced sedation from that of propofol and sevoflurane induced sedation. Patients receiving dexmedetomidine can be in a “moderate sedation” state, often remain cooperative, and can be awakened/aroused easily. In contrast, propofol and sevoflurane induce a general anesthetic state, and therefore the machine learning algorithm can more easily discriminate between different sedation levels when compared to dexmedetomidine.
The disclosed sedation level estimation system 200 has several advantages: 1) It can be objective, free from human behavioral assessment error, 2) it can provide an approximately continuous probabilistic measure for meaningful clinical interpretation, 3) it can have good time-resolution (in other words, it can provide a sedation level estimate once every 4 s), and 4) it can be used across multiple drugs.
In one example, EEG data for propofol and sevoflurane were recorded using a 16-channel EEG monitor in healthy volunteers at a sampling frequency of 5 kHz and were later downsampled to 1 kHz during export to the extraction file. In another example, 32-channel EEG data for dexmedetomidine was recorded using an amplifier with a recorder at a sampling rate of 5 kHz. In some examples, subjects were asked to keep their eyes closed for the entire study duration. Subjects can be excluded in cases of body weight being less than 70% or more than 130% of ideal body weight, pregnancy, neurological disorder, diseases involving the cardiovascular, pulmonary, gastric, and endocrinological system, or recent use of psycho-active medication or intake of more than 20 g of alcohol daily. The collection and processing of EEG data can be performed by the controller 1400 as discussed in more detail with respect to
Propofol can be administered through an intravenous line. The pharmacokinetic-dynamic (PKPD) model of Schnider can be used to predict the effect-site concentration (CePROP). After 2 minutes of baseline measurements, a “staircase” step-up and step-down infusion of propofol can be administered. For example, the initial CePROP can be set to 0.5 μg mL−1 followed by successive steps toward target concentrations of 1, 1.5, 2.5, 3.5, 4.5, 6 and 7.5 μg mL−1.
Sevoflurane can be titrated to target or maintain an approximately constant end-tidal sevoflurane (ETSEVO). For example, the initial ETSEVO can be set to 0.2 vol % followed by successive ETSEVO of 0.5, 1, 1.5, 2.5, 3.5, 4, 4.5 vol %. The upwards staircase can be followed until tolerance/no motoric response to all stimuli is observed and a significant burst suppression ratio of at least 40% is attained (for both propofol and sevoflurane). Next, the downward staircase steps can be initiated, using identical targets but in reverse order. In order to obtain a pharmacological steady state at each new step in drug titration, a 12 minutes equilibration time can be maintained once the desired predicted effect-site concentration of propofol or the measured end-tidal vol % of sevoflurane was reached.
Dexmedetomidine can be delivered by using effect site target controlled infusion. Before dexmedetomidine is administered, and after checking adequate function of all monitors, a 5-minute baseline measurement can be performed during which the volunteer is asked not to talk, to close the eyes, and to relax. After gathering initial baseline data, dexmedetomidine infusion can be initiated, with the following targeted effect site concentrations: 1.0 ng/ml (0-40 mins), 3.0 ng/ml (40-90 mins), 4.0 ng/ml (90-130 mins), 5.0 ng/ml (130-170 mins), and 6.0 ng/ml (170-200 mins). Advantageously, this protocol can allow all effect sites to reach an approximately steady state. Fifty minutes after increasing to a concentration of 6.0 ng/ml, dexmedetomidine infusion can be ceased.
The Modified Observer's Assessment of Alertness/Sedation (MOAA/S) score can be used to score patient sedation levels. MOAA/S scores can range from 5 (responsive to auditory stimuli) to 0 (deeply sedated/unresponsive/comatose). In some examples, the performance of the disclosed system was tested to discriminate between two MOAA/S groups that are clearly distinguishable from a clinical viewpoint in terms of the patient's level of consciousness: awake [MOAA/S 5 and 4] versus sedated [MOAA/S 1 and 0]. An example illustrating the behavioral response and its corresponding EEG spectrogram for an example propofol induced subject is shown in
A sedation level estimation system 200 can include a number of processes implemented by a controller 1400 as discussed with respect to
In one example, the controller 1400 may perform a preprocessing step on signals from EEG channels, segmenting them into short duration epochs. Several QEEG features can be extracted from each EEG epoch and combined by taking the median across channels. The feature vectors can then be fed to the classifier, and the probability of each sedation level can be obtained for each EEG epoch. An automated probabilistic sedation level estimate can then be obtained. The architecture can be implemented as programmed instructions executed by one or more hardware processors. The architecture can be implemented with the Masimo Sedline® Brain Function Monitor (sold by Masimo of Irvine, CA). An example architecture is described in more specificity with respect to
In some examples, four frontal EEG channels arranged in bipolar configuration may be used to collect EEG signals. For example, EEG signals from Fp1, Fp2, F7, and F8 electrodes in a Fp1-F7 and Fp2-F8 configuration may be used. However, other configurations of EEG electrodes may be used.
As shown in
1. EEG Preprocessing and Epoch Extraction
Referring to
At an EEG epoch segmentation block 212, the controller 1400 can segment the output from preprocessor block 210 into smaller epochs. For example, output from preprocessor block 210 may be one-minute EEG segments. The controller 1400 can segment the one minute EEG segments into 4-second epochs with 0.1 s shift. The output of the EEG epoch segmentation block 212 can be segmented EEG signals.
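The segmentation step above can be sketched as follows. This is a minimal illustration only: the 100 Hz sampling rate and plain-Python list handling are assumptions of the sketch, not the disclosed implementation (the example recordings were sampled at 1 kHz or higher).

```python
def segment_epochs(signal, fs, epoch_s=4.0, shift_s=0.1):
    """Slide a 4 s window over the signal in 0.1 s steps,
    returning the list of overlapping epochs."""
    epoch_len = int(epoch_s * fs)
    shift = int(shift_s * fs)
    epochs = []
    start = 0
    while start + epoch_len <= len(signal):
        epochs.append(signal[start:start + epoch_len])
        start += shift
    return epochs

# A one-minute segment at an illustrative 100 Hz sampling rate.
fs = 100
one_minute = [0.0] * (60 * fs)
epochs = segment_epochs(one_minute, fs)
# (60 - 4) / 0.1 + 1 = 561 overlapping 4 s epochs
```

With a 4-second window and a 0.1-second shift over a one-minute segment, the window fits 561 times, which is why the system can refresh its estimate roughly every 0.1 s.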
2. Feature Extraction
With continued reference to
At a median block 216, the controller 1400 can obtain a median across channels. The output of the median block 216 can include a set of features for the subject. For example, where the controller 1400 extracted 44 features across two channels for the subject at feature extraction block 214, the controller 1400 can determine a median of the features in the two channels, resulting in a dataset of 44 features for the subject. The controller 1400 can also use other averaging operators instead of the median.
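The per-epoch median across channels can be illustrated with a minimal sketch; the two channels and three feature values below are hypothetical stand-ins for the 44 QEEG features.

```python
from statistics import median

def median_across_channels(features_per_channel):
    """features_per_channel: one feature vector per EEG channel.
    Returns a single vector holding, for each feature, its
    median value across the channels."""
    return [median(vals) for vals in zip(*features_per_channel)]

# Two channels, three illustrative feature values each.
ch1 = [0.8, 12.5, 3.0]
ch2 = [1.0, 11.5, 3.5]
combined = median_across_channels([ch1, ch2])  # one value per feature
```

Replacing `median` with `mean` (or another robust average) gives the alternative averaging operators mentioned above.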
3. Classification and Post-Processing
With continued reference to
where λ is the penalty parameter, ∥⋅∥1, ∥⋅∥2 are the l1-norm and l2-norm, respectively, and α controls the relative contribution of the ∥⋅∥1, ∥⋅∥2 penalties. Setting α=1 can result in the least absolute shrinkage and selection operator (LASSO), and α=0 can be equivalent to ridge regression. The elastic-net regularization feature selection method is well suited for datasets with multiple features that are highly correlated. In applications where several highly correlated features are used, LASSO performs sparse selection, tending to select one feature and discard the rest. Ridge regression can shrink the model coefficients towards each other for highly correlated features. The elastic-net produces a sparse feature representation through the l1 penalty and selects groups of correlated features through the l2 penalty. In short, for a given λ, as α increases from 0 to 1, the number of sparse model coefficients (coefficients equal to 0) monotonically increases from 0 to the LASSO sparse estimation. For example, α=0.5 can be set for equal contribution from the ∥⋅∥1, ∥⋅∥2 penalties.
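For reference, a standard form of the elastic-net regularized objective consistent with the description of λ and α above can be written as follows. This is an illustrative reconstruction using a common convention; the exact display equation of the disclosure may differ in scaling.

```latex
\hat{b} = \underset{b}{\arg\min}\;
  \mathcal{L}(b)
  + \lambda \left( \alpha \,\lVert b \rVert_{1}
  + \frac{1-\alpha}{2} \,\lVert b \rVert_{2}^{2} \right)
```

where \(\mathcal{L}(b)\) denotes the negative log-likelihood of the logistic regression model. With α=1 only the \(\lVert b\rVert_1\) term remains (LASSO); with α=0 only the \(\lVert b\rVert_2^2\) term remains (ridge), matching the behavior described above.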
At a DTC transform block 222, the controller 1400 can convert the predicted output Ŷ=bTX of the model to posterior probabilities via the logistic (inverse logit) transform to obtain the patient sedation level as:
This patient sedation level can indicate the sedation state of a patient. The patient sedation level can be a probability score on a continuous scale of 0 to 1. Thus, the output of the DTC transform block 222 can be a continuous score representative of patient sedation level.
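This probability conversion can be sketched minimally as follows; the intercept b0, coefficients b, and feature values x are placeholders, not values from the disclosure (in practice they would come from the trained elastic-net model).

```python
import math

def sedation_probability(b0, b, x):
    """Map the linear predictor y = b0 + b.x to a probability in
    (0, 1) with the logistic (inverse-logit) function, giving a
    continuous sedation score."""
    y = b0 + sum(bi * xi for bi, xi in zip(b, x))
    return 1.0 / (1.0 + math.exp(-y))

# A zero linear predictor yields exactly 0.5: maximal
# uncertainty between the awake and sedated states.
p = sedation_probability(0.0, [0.0, 0.0], [1.2, -0.7])  # 0.5
```

Scores near 1 indicate a sedated state and scores near 0 an awake state, giving the continuous 0-to-1 scale described above.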
4. Metrics
The area under the receiver operator characteristic curve (AUC) can be used to assess the performance of the logistic regression model. In addition, the Spearman rank correlation coefficient (ρ) can be obtained between the sedation level estimate and continuous MOAA/S assessments.
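AUC can be computed directly from its rank interpretation: the probability that a randomly chosen positive (sedated) epoch receives a higher score than a randomly chosen negative (awake) epoch, counting ties as 0.5. The scores and labels below are toy values for illustration.

```python
def auc(scores, labels):
    """AUC via the rank (Mann-Whitney) interpretation: fraction
    of positive/negative pairs where the positive outscores the
    negative, with ties worth 0.5."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfectly separated toy scores give AUC = 1.0.
perfect = auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
```

An AUC of 0.5 corresponds to chance-level discrimination, which is the baseline against which the reported values (e.g., 0.97 for propofol) should be read.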
5. Cross Validation
Leave-one-out cross validation (LOOCV) can be used to assess performance of the prediction system, to provide an unbiased estimate of out-of-sample model performance. In one example, data from (N−1) subjects were used for parameter selection and model training and the left-out subject was used for testing. This process was repeated until the recording from each subject was used once for testing (N iterations in total). This LOOCV process can be referred to as outer cross validation to distinguish it from a separate inner cross validation process performed on each fold of the training data (see below). Whereas the purpose of outer cross validation may be to obtain an unbiased estimate of model performance, the purpose of inner cross validation may be optimization of model parameters.
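The outer LOOCV loop can be sketched generically as follows; `train_fn` and `test_fn` are placeholders for the model-fitting and evaluation steps described above, and the toy "model" in the usage example is just a mean.

```python
def loocv(subjects, train_fn, test_fn):
    """Outer leave-one-out loop: each subject is held out once;
    a model is fit on the remaining N-1 subjects and scored on
    the held-out subject, for N iterations in total."""
    results = []
    for i, held_out in enumerate(subjects):
        training = subjects[:i] + subjects[i + 1:]
        model = train_fn(training)
        results.append(test_fn(model, held_out))
    return results

# Toy example: the "model" is the mean of the training values
# and the "score" is the absolute error on the held-out value.
mean = lambda xs: sum(xs) / len(xs)
out = loocv([1.0, 2.0, 3.0],
            train_fn=mean,
            test_fn=lambda m, x: abs(m - x))
```

Because the held-out subject never influences training, averaging `out` gives the unbiased out-of-sample performance estimate referred to above.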
6. Training
The training data can be selected for training based on a quality metric for the training data. For example, the training data may include MOAA/S scores. Since MOAA/S scores are subjective measures, scores closer to the extremes of 0 and 5 (e.g. scores 0, 1, and 4, 5) are more likely to be correct than scores in the center of the scale (e.g. scores 2, 3). Thus, to more accurately train the model, data relating to MOAA/S scores closer to the center of the MOAA/S scale may be excluded from the training data.
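This exclusion of ambiguous mid-scale labels can be sketched as a simple filter. Representing each epoch as a (features, MOAA/S score) pair is an assumption of this sketch, as is the binary relabeling of the retained epochs.

```python
def select_confident_epochs(epochs):
    """Keep only epochs whose MOAA/S score lies at the ends of
    the scale (0-1 = sedated, 4-5 = awake); drop mid-scale
    scores (2-3), whose subjective labels are least reliable.
    Returns (features, binary_label) pairs, 1 meaning sedated."""
    return [(feats, 1 if score <= 1 else 0)
            for feats, score in epochs
            if score <= 1 or score >= 4]

epochs = [("e1", 5), ("e2", 3), ("e3", 0), ("e4", 2), ("e5", 4)]
kept = select_confident_epochs(epochs)  # e2 and e4 are dropped
```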
The training data (i.e., data from (N−1) subjects) can be used to select features for inclusion in the model and to estimate the optimal values for the coefficients in the logistic regression model. Feature selection can involve selecting the hyperparameter λ in the elastic-net regularization function, for example via 10-fold cross validation on the training data (i.e., dividing the training data into 10 parts). In each fold of inner cross validation, a series of models can be trained using a range of penalty values λ, and each model tested on the remaining fold of the training data. The value of λ that minimizes the mean deviance across all 10 inner cross validation folds can be selected as the optimal λ and later used to train a single final model on all of the training data.
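The inner k-fold search for λ can be sketched with pluggable callbacks. The shrunken-mean "model" and mean-squared-error "deviance" in the usage example are toy stand-ins, not the elastic-net fit or binomial deviance themselves.

```python
def select_lambda(folds, fit, deviance, lambdas):
    """Inner k-fold cross validation sketch: for each candidate
    penalty lambda, fit on k-1 folds, record the deviance on the
    held-out fold, and return the lambda whose mean deviance
    across folds is lowest."""
    def mean_deviance(lam):
        total = 0.0
        for i in range(len(folds)):
            training = [x for j, fold in enumerate(folds)
                        if j != i for x in fold]
            model = fit(training, lam)
            total += deviance(model, folds[i])
        return total / len(folds)
    return min(lambdas, key=mean_deviance)

# Toy stand-ins: the "model" is a lambda-shrunken mean and the
# "deviance" is mean squared error on the held-out fold.
fit = lambda data, lam: (sum(data) / len(data)) / (1.0 + lam)
mse = lambda m, fold: sum((x - m) ** 2 for x in fold) / len(fold)
folds = [[5.0, 5.0], [5.0, 5.0], [5.0, 5.0]]
best = select_lambda(folds, fit, mse, lambdas=[0.0, 0.5, 1.0])
```

The chosen λ is then used to refit one final model on all of the training data, exactly as the inner/outer split above prescribes.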
7. Testing
A model resulting from internal cross validation can be tested in the outer loop of LOOCV by evaluating its performance on the held-out testing data. The trained regression model with parameters that were optimal for the training data can be used to predict the probability of patient responses on the left-out testing data. Thus, the model selection routine can be performed only on the training data (using inner cross validation) and be independent from the testing data.
In some examples, nine different training and testing combinations were tested to demonstrate the effect of drug dependency (drug dependent system): 1) PP—train and test on propofol data, 2) SP—train on sevoflurane and test on propofol data, 3) DP—train on dexmedetomidine and test on propofol data, 4) SS—train and test on sevoflurane data, 5) PS—train on propofol and test on sevoflurane data, 6) DS—train on dexmedetomidine and test on sevoflurane data, 7) DD—train and test on dexmedetomidine data, 8) PD—train on propofol and test on dexmedetomidine data, and 9) SD—train on sevoflurane and test on dexmedetomidine data. In some examples, to test the performance of the system in drug independent mode (drug independent system), a cross-anesthetic model by pooling data from all three anesthetics was developed. For example, propofol, sevoflurane and dexmedetomidine data can be combined as one dataset and a LOOCV can be performed.
8. Statistical Analysis
In some examples, analysis of variance (ANOVA) followed by post-hoc testing with the Tukey Honest Significant Difference test was used to assess group differences. In some examples, tests were two-sided with alpha=0.05. In some examples, coding and analysis were performed using MATLAB 2018a (MathWorks, Natick, USA).
1. Performance of Individual Features
Example distributions of AUC values for individual features across three drugs are shown in
2. Drug Dependent System
Example performance of the sedation level estimation system 200 using different feature domains is given in Table 2. As shown in Table 2, performance was significantly better (p<0.05) using combined QEEG features (time+frequency+entropy) when compared to other individual domain features. In some examples, the following performances were obtained for different training and testing combinations using QEEG features: PP—0.97 (0.03), SS—0.74 (0.25), DD—0.77 (0.10), SP—0.93 (0.06), DP—0.82 (0.11), PS—0.73 (0.23), DS—0.66 (0.18), PD—0.74 (0.09), and SD—0.71 (0.10). In some examples, the performance of the system when trained and tested on the same drug outperformed the system when trained and tested on different drugs. Example distributions of AUC values using QEEG features are shown in
3. Drug Independent System
In one example of a drug independent system, data was combined from three anesthetic drugs (a total of 102 iterations), resulting in slight decrease in the performance of 2% for propofol (mean AUC=0.97 (0.03) to 0.95 (0.05), p=3.5E-4), 1% for sevoflurane (mean AUC=0.74 (0.25) to 0.73 (0.22), p=0.40), 1% in the case of dexmedetomidine (mean AUC 0.77 (0.10) to 0.76 (0.10), p=0.48), and an overall mean AUC=0.83 (0.17) using QEEG features.
Table 3 shows an example list of the top 20 features selected (reported as
where FTN=number of times a given feature was chosen across all subjects during training process, and N refers to number of patients in each drug class) by an EN algorithm across all subjects for propofol, sevoflurane and dexmedetomidine. Not all 44 QEEG features may be significant to the system and the features may vary across each patient.
An example distribution of performance values of an example of the system in drug independent mode across different drugs is shown in
Using a multi-class classification or a multinomial regression may not be efficient due to annotation noise and the limited dataset in intermediate sedation states, and it would provide only a discrete score. To overcome these limitations, a model may be trained by the controller 1400 to discriminate between two extreme levels of sedation, with a logit transform used to convert its output into a continuous probability sedation level score or patient state index. This approach is beneficial because MOAA/S scores are not continuous response assessments (i.e., they are performed intermittently), which can limit the number of assessments for individual scores, creating an imbalanced dataset. Additionally, the approach is beneficial because it can reduce the score “annotation-noise” due to interobserver variability during sedation level assessment.
A continuous sedation level or patient state index can be obtained by developing a binary classifier, trained only on awake and sedated epochs, to assign a probability score to all EEG epochs corresponding to all MOAA/S scores (0 to 5). For example, the controller 1400, through a binary classifier, may determine a probability score of 0.6, or a 60% probability of the patient being sedated, for an epoch having a MOAA/S score of 3. The Spearman rank correlation (ρ) can then be obtained between the binary classifier sedation level and all MOAA/S scores. For example, the controller 1400 may determine a probability score for a number of epochs that span a range of MOAA/S scores, and then determine the correlation between that dataset and the MOAA/S scores. Once determined, the correlation can be used to generate a continuous sedation level estimate as part of a patient monitoring system.
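A rough sketch of this approach is shown below, using hypothetical synthetic data in place of real QEEG features: a logistic model is trained only on the two extreme MOAA/S subgroups, its probability output is applied to all epochs as a continuous score, and the Spearman rank correlation against the full range of MOAA/S scores is computed. Feature construction and sizes here are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for QEEG features: one feature that drifts with the
# depth of sedation. Each epoch carries a MOAA/S score from 0 (deep) to 5 (awake).
moaas = rng.integers(0, 6, size=600)
features = moaas[:, None] + rng.normal(0, 1.0, size=(600, 1))

# Train only on the extreme classes: awake (MOAA/S 4-5) vs sedated (0-1),
# discarding intermediate scores, as described above.
extreme = (moaas >= 4) | (moaas <= 1)
y = (moaas[extreme] >= 4).astype(int)
clf = LogisticRegression().fit(features[extreme], y)

# The model's probability output serves as a continuous sedation-level
# score for *all* epochs, including the intermediate ones it never saw.
prob_awake = clf.predict_proba(features)[:, 1]

# Spearman rank correlation between the continuous score and MOAA/S.
rho, _ = spearmanr(prob_awake, moaas)
```

With the strong synthetic signal above, the rank correlation is high; on real EEG the study reports a mean ρ of about 0.54.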
Table 4 summarizes the performance of the system 200 in predicting a continuous sedation level. In one example, this approach provided promising results, with a mean ρ=0.54 (0.21), significantly better than random chance (ρ=0.12 (0.09)), suggesting that the system trained as a binary model may ultimately provide a continuous estimate of sedation level.
In addition to the models discussed above, the controller 1400 can also use, alone or in combination, non-linear machine learning models. In total, 204 EEG recordings from 66 healthy volunteers were used to determine the performance of several nonlinear machine learning algorithms in predicting sedation levels or a patient state index. In some instances, the following disclosure can be used to develop a robust and reliable real-time automatic sedation level prediction system, implemented by the controller 1400, that is invariant across all conditions.
Each volunteer was scheduled to receive four sessions of anesthesia with different drug combinations in a random order, with a minimal interval of one week between sessions. The four sessions were named "propofol alone", "sevoflurane alone", "propofol combined with remifentanil", and "sevoflurane combined with remifentanil". In each session that required blood sampling, the volunteer received an arterial line before any drug was administered. Propofol and remifentanil were administered through an intravenous line by a Fresenius Base Primea docking station (Fresenius-Kabi, Bad Homburg, Germany) carrying two Fresenius Module DPS pumps, which were controlled by RUGLOOPII software (Demed, Temse, Belgium) to steer target-controlled infusion (TCI). RUGLOOPII is a computer-controlled drug delivery and data collection software package. Both drugs were titrated towards a target effect-site concentration using a three-compartment pharmacokinetic-dynamic (PKPD) model enlarged with an effect-site compartment. For propofol, the PKPD model of Schnider et al. 2 was used to predict the effect-site concentration (CePROP). For remifentanil, the effect-site concentration (CeREMI) was predicted by the PKPD model of Minto et al. 3. Sevoflurane was titrated using the proprietary closed-loop algorithm of the Zeus® ventilator (Software version 4.03.35, Dräger Medical, Lübeck, Germany) to target and maintain a constant end-tidal sevoflurane concentration (ETSEVO).
Each session contained two separate titration phases: a stepwise up and down administration of drugs towards consecutive steady-state conditions. The sequence of events and study observations are shown in the original study 1. After 2 minutes of baseline measurements, a "staircase" step-up and step-down infusion of anesthetic drugs was administered. For the propofol alone group, the initial CePROP was set to 0.5 μg mL−1, followed by successive steps toward target concentrations of 1, 1.5, 2.5, 3.5, 4.5, 6 and 7.5 μg mL−1. For the sevoflurane alone group, the initial ETSEVO was set to 0.2 vol %, followed by successive ETSEVO targets of 0.5, 1, 1.5, 2.5, 3.5, 4 and 4.5 vol %. The upwards staircase was followed until tolerance/no motoric response to all stimuli was observed and a significant burst suppression ratio of at least 40% was attained. Next, the downward staircase steps were initiated, using identical targets but in reverse order. For the sessions with remifentanil, the same procedure was conducted, although 2 minutes prior to the start of propofol or sevoflurane, a CeREMI of 2 or 4 ng mL−1 was targeted in accordance with the stratification and maintained during the entire study. After the predicted CePROP or ETSEVO reached the target at each step, an equilibration time of 12 minutes was maintained to allow optimal equilibration between the plasma or end-tidal concentration and the corresponding effect-site concentration, resulting in a pharmacological condition called a "pseudo-steady state."
After the 12 minutes of equilibration time, an additional minute of baseline electroencephalographic and hemodynamic measurements was performed. Thereafter, the Modified Observer's Assessment of Alertness/Sedation (MOAA/S) scale 4 was scored, followed by two minutes of response time. This score ranges from 5 (responding readily to name spoken in normal tone) to 0 (not responding to a painful trapezius squeeze). The observed MOAA/S was followed by drawing an arterial blood sample for analysis of the concentration of propofol and/or remifentanil. For sevoflurane, the measured ETSEVO at this steady-state condition was registered. A schematic diagram of the order of events can be found in the supplements of the original study 1. Two minutes after the MOAA/S assessment, an electrical stimulus was administered for a maximum of 30 seconds, and the tolerance/motoric responsiveness to electrical stimulation was observed, again followed by two minutes of response time.
In each session, the volunteers started by breathing spontaneously through a tight-fitting face mask connected to an anesthesia ventilator (Zeus®, Software version 4.03.35, Dräger Medical, Lübeck, Germany). End-tidal sevoflurane (ETSEVO), carbon dioxide and oxygen concentration were measured using a gas-analyzer of the anesthesia ventilator. If deemed necessary, respiratory support was applied to guarantee an unobstructed airway, adequate oxygenation (SpO2>92%) and CO2 (35-45 mmHg) homeostasis. Throughout the study, all volunteers were connected to a vital signs monitor to monitor the oxygen saturation (measured by pulse oximetry), electrocardiogram (ECG) and intermittently measured non-invasive blood pressure at 1-min intervals.
Raw electro-encephalographic waves were measured using a 16-channel Neuroscan® EEG monitor (Compumedics USA, Limited, Charlotte, NC, USA) consisting of a SynAMPRT headbox (model 9032) and system unit, which respectively collected and amplified the raw electro-encephalographic signals towards a laptop computer running SCAN4 proprietary recording software (Compumedics, Charlotte, USA). The volunteer wore a cap over the head mounted with a standard 10-20 electrode montage. Through these electrodes, raw EEG was recorded from locations A1, A2 (references on the left and right earlobe), F3, Fz, F4 (electrodes in the cap above the motor cortex and frontal lobes), T7, C3, Cz, C4 and T8 (electrodes above the temporal parietal lobes and the somatosensory cortex), P3, Pz, P4 (electrodes above the occipital region), and Oz (a cranial and centrally located reference point). In addition, a bilateral PSI electrode was attached on the forehead of the volunteer and connected to a Masimo Root Monitor (Model RDS-7, Masimo, Irvine, USA) running SEDLine® brain function software, in concordance with the manufacturer's instructions (Masimo Inc., Irvine, USA). This frontal adhesive electrode recorded 4 additional channels: L1, R1, L2, R2 (L1 and R1 are the Fp1 and Fp2 leads of the standard 10-20 system). Even and uneven numbers in the standard 10-20 electrode montage correspond respectively to the right and left sides of the brain.
An example Masimo SEDLine connection cable used here is described with respect to
Propofol (2,6-diisopropylphenol) plasma concentrations were measured using a validated gas chromatography-mass spectrometric analysis, whereas remifentanil plasma concentrations were measured using liquid chromatography-tandem mass spectrometry 5.
Patient Inclusion
Subjects were included age- and sex-stratified into 3 age categories (18-34, 35-49 and 50-70 years). Volunteers were excluded if their body mass index was greater than 30 kg/m2 or less than 18 kg/m2, if they had any significant cardiovascular risk factor or disease, if they had neurological disorders, suffered from dementia, schizophrenia, psychosis, drug or alcohol abuse or daily use of more than 20 g of alcohol, or depression requiring anti-depressive drug treatment, or if they had recently used psycho-active medication. Furthermore, volunteers could not participate in this trial if they were pregnant or breastfeeding, had a bilateral non-patent arteria ulnaris, or suffered from any relevant medical condition.
Study Design
Each volunteer underwent two study sessions with at least one week between sessions. On the first study day, volunteers received dexmedetomidine administered through target-controlled infusion (TCI) with consecutive targeted effect-site concentrations of 1, 2, 3, 5 and 8 ng/ml. On their second study day (>1 week later), subjects received a similar stepwise increasing infusion of remifentanil with effect-site targets of 1, 2, 3, 5 and 7 ng/ml. Subsequently, after allowing the remifentanil to wash out, volunteers received dexmedetomidine TCI with a targeted effect-site concentration of 2 ng/ml. As soon as this effect-site concentration was reached, an infusion of remifentanil was added with increasing effect-site target concentrations of 0.5, 1.0, 1.5, 2.0, 2.5, 3.0 and 4.0 ng/ml.
Study Procedures
Study participants were instructed to fast from 6 h before the start of their scheduled study sessions. Furthermore, they were instructed not to consume alcohol for 2 days prior to the study, not to smoke tobacco for 1 week and not to use recreational drugs for 2 weeks prior to their study days. When they arrived at the research unit, a 20-gauge intravenous cannula was placed and subjects were connected to a vital signs monitor. Under ultrasound guidance and after injection of a local anesthetic, a 20-gauge arterial cannula was placed for blood sampling and hemodynamic monitoring. During the study sessions a nurse anesthetist and anesthesiologist were present and responsible for the drug administration, monitoring, respiratory support and providing emergency care when needed. A complete anesthetic workstation was present as well as an anesthesia ventilator (Zeus Infinity C500 ventilator, Dräger Medical, Lübeck, Germany). A research physician and nurse performed all other study procedures. Volunteers were connected to the ventilator using a tight-fitting face mask with an inspired oxygen fraction set to 25%. The anesthesiologist supported respiration, if deemed necessary, by verbal stimulation, jaw thrust or chin lift, or by adding pressure support or positive pressure ventilation using the anesthesia ventilator.
Assessment of Cerebral Drug Effect
The cerebral drug effect was measured using 17-channel electroencephalography (EEG), with a BrainAmp DC32 amplifier and a Brainvision recorder (Brain Products GmbH, Gilching, Germany) recording at a sampling rate of 5 kHz. Furthermore, the level of sedation was tested at baseline, at each infusion step and during the recovery phase, using the MOAA/S score. In addition, prior to each increase in infusion step, a laryngoscopy was performed if the MOAA/S score was less than 2. Before the start of the infusions, volunteers were placed in supine position and they were asked to close their eyes. Except for the MOAA/S assessments, volunteers were not stimulated and ambient noise was kept low throughout the study session. Prior to the start of the drug infusions, baseline measurements of EEG and vital sign parameters were performed during a 5-minute period.
Drug Administration
Dexmedetomidine and remifentanil were both administered using target-controlled infusion. For dexmedetomidine, the TCI was based on the pharmacokinetic and pharmacodynamic (PKPD) models developed by Hannivoort and Colin et al. 6, using the effect site of the MOAA/S. For the first 3 infusion targets, the infusion rate was limited to 6 μg·kg−1·h−1, and for the highest two targets to 10 μg·kg−1·h−1. This was done in order to decrease the initial hypertensive reactions seen with bolus administration. To target remifentanil effect-site concentrations, a PKPD model developed by Eleveld et al. 7 was used.
Recovery Phase
Drug infusion was stopped after completion of all infusion steps or after a subject tolerated a laryngoscopy. In addition, when one of the safety criteria was met and deemed relevant by the anesthesiologist, drug infusion was also ceased.
Those safety criteria were:
Rescue medication with 0.5 mg atropine was administered if deemed necessary and then dexmedetomidine and/or remifentanil infusion was ceased. For maintaining acceptable blood pressure, volunteers were put in a slight Trendelenburg position. A rescue dose of 5 mg ephedrine was administered if needed and then drug infusion was ceased.
After cessation of drug infusion, the recovery phase began. Measurements and monitoring continued until the volunteer met the criteria of our hospital's post-anesthesia care unit; once the last blood sample had also been taken, he or she was discharged home.
Blood Sampling, Storage and Analysis
Blood samples were drawn from the arterial line at baseline, prior to changing the targeted effect-site concentration (at steady state), and at predefined time points during the recovery period. The samples were collected in EDTA tubes and stored on ice for a maximum of 15 minutes for remifentanil or 1 hour for dexmedetomidine. Samples were centrifuged at 4° C. for 5 minutes at 1,754×g (Labofuge 400R, Heraeus Holding GmbH, Hanau, Germany). The plasma was then transferred into cryovials. Sample stability for remifentanil was improved by adding 1.5 μL of formic acid per milliliter of plasma. The cryovials were then stored at or below −20° C.
Analysis of plasma concentrations was done using ultra-high-performance liquid chromatography-mass spectrometry (UPLC-MS/MS by a Xevo Triple Quadrupole mass spectrometer, Waters Corporation, Milford, Massachusetts, USA). The upper and lower limits of quantification were 20 ng/ml and 0.05 ng/ml respectively for both drugs. Samples thought to be above the upper quantification limit were diluted prior to sample treatment. The coefficient of variation was below 9% for remifentanil and below 8% for dexmedetomidine.
As discussed above, the Modified Observer's Assessment of Alertness/Sedation (MOAA/S) score 15 was used to assess the level of sedation. A MOAA/S score of 5 indicates wakefulness, and MOAA/S=0 corresponds to a deeply sedated state. A binary classification was performed between two MOAA/S subgroups, awake [MOAA/S 5 and 4] versus sedated [MOAA/S 1 and 0], discarding the remaining MOAA/S scores.
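The binary labeling described above can be sketched as follows (a minimal illustrative helper; the function name and return convention are not from the source):

```python
def moaas_to_binary(score):
    """Map a MOAA/S score (0-5) to 1 (awake), 0 (sedated), or None (discard)."""
    if score in (4, 5):   # responds readily / after name spoken -> awake
        return 1
    if score in (0, 1):   # no response to voice or painful stimulus -> sedated
        return 0
    return None           # intermediate scores are excluded from training

scores = [5, 4, 3, 2, 1, 0]
labels = [moaas_to_binary(s) for s in scores]
# labels -> [1, 1, None, None, 0, 0]
```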
Sedation Level Prediction System
The controller 1400 can extract the following 44 quantitative EEG (QEEG) features from each 4 s EEG epoch in this study:
These features can be extracted separately for each bipolar frontal montage channel; a median can then be taken across channels to combine the channel information. The features were then used to train a machine learning algorithm to obtain the probability of the sedated state for each 4 s EEG epoch.
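The per-channel extraction and median combination might be sketched as below. The three features shown (amplitude spread and relative alpha/beta power) are illustrative stand-ins for the 44 QEEG features, and the sampling rate is an assumption for this sketch:

```python
import numpy as np

FS = 250  # assumed sampling rate (Hz); not specified here

def epoch_features(epoch):
    """A few illustrative QEEG-style features for one 4 s single-channel epoch."""
    freqs = np.fft.rfftfreq(epoch.size, d=1 / FS)
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    total = psd.sum()
    return np.array([
        epoch.std(),                                      # amplitude spread
        psd[(freqs >= 8) & (freqs < 12)].sum() / total,   # relative alpha power
        psd[(freqs >= 12) & (freqs < 25)].sum() / total,  # relative beta power
    ])

def combine_channels(epoch_multichannel):
    """Extract features per channel, then take the median across channels."""
    per_channel = np.array([epoch_features(ch) for ch in epoch_multichannel])
    return np.median(per_channel, axis=0)

# 4 bipolar frontal channels x 4 s of signal at FS Hz
epoch = np.random.randn(4, 4 * FS)
feats = combine_channels(epoch)   # one feature vector per epoch
```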
Metrics
The controller 1400 can use the area under the receiver operator characteristic curve (AUC) to evaluate the model performance. The controller 1400 can also report sensitivity, specificity, and F1-score for the best performing machine learning model.
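For instance, these metrics can be computed from a model's predicted probabilities with scikit-learn and a confusion matrix. The labels and probabilities below are toy values, not data from the study:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score, confusion_matrix

y_true = np.array([0, 0, 0, 1, 1, 1, 1, 0])          # true awake(0)/sedated(1) labels
y_prob = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.9, 0.6, 0.2])
y_pred = (y_prob >= 0.5).astype(int)                  # threshold at 0.5

auc = roc_auc_score(y_true, y_prob)                   # uses the raw probabilities
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                          # true positive rate
specificity = tn / (tn + fp)                          # true negative rate
f1 = f1_score(y_true, y_pred)
```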
Machine Learning Model Development
The controller 1400 can evaluate the performance of four machine learning algorithms: elastic net logistic regression (EN-LR), support vector machine with Gaussian kernel (SVM-G), random forest (RF), and ensemble tree with bagging (ET-B). The controller 1400 can evaluate the performance of the proposed system using a leave-one-out cross-validation technique: in each iteration, the controller 1400 used N−1 EEG recordings for training the machine learning model and the left-out unseen recording for testing, resulting in a total of N iterations. In each fold, features in the training data were Z-score standardized (by subtracting the mean and dividing by the standard deviation), and the testing data features were normalized with respect to the Z-score normalization factors of the training data before being used for classification. The controller 1400 performed a grid search to identify the optimal hyper-parameters of these models (summarized in supplementary table 1) through 10-fold cross-validation within the training data, and the final optimal model was then used to estimate the sedation level probability on the testing data. This was repeated until each recording was used once for testing, and is illustrated in
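A compact sketch of this leave-one-recording-out loop with per-fold Z-scoring is shown below, on synthetic data; the inner hyper-parameter grid search is omitted for brevity, and the sizes are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(1)
# Synthetic data: 10 "recordings" (subjects), 20 epochs each, 5 features.
groups = np.repeat(np.arange(10), 20)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + rng.normal(0, 0.5, 200) > 0).astype(int)

probs = np.zeros(len(y))
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    # Z-score using training-fold statistics only; apply the same
    # normalization factors to the held-out recording.
    mu, sd = X[train_idx].mean(axis=0), X[train_idx].std(axis=0)
    X_tr, X_te = (X[train_idx] - mu) / sd, (X[test_idx] - mu) / sd
    model = LogisticRegression().fit(X_tr, y[train_idx])
    probs[test_idx] = model.predict_proba(X_te)[:, 1]
# Each recording is tested exactly once; probs holds out-of-sample scores.
```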
First, the controller 1400 performed binary classification to differentiate between the awake and sedated states using the pooled dataset from propofol, sevoflurane and dexmedetomidine infusion. Then the controller 1400 added remifentanil data to this pooled dataset to evaluate the robustness and stability of the machine learning models. In this way, the controller 1400 identified the machine learning model that is invariant after the addition of a new drug (remifentanil in this case).
Performance of Individual QEEG Features
Performance of Machine Learning Models
The performance of different machine learning models to predict sedation levels using the proposed architecture is summarized in Table 1, shown below. The ensemble tree with bagging outperformed the other machine learning models and was stable after the inclusion of remifentanil.
All models had AUCs above 0.8 without remifentanil, but the AUCs dropped significantly when remifentanil was added. However, the performance of the tree-based methods was not sensitive to the addition of remifentanil, and the ET-B model achieved the highest AUC of 0.88 (0.84-0.89). All subsequent results are based on the performance of ET-B including remifentanil.
Discriminative Features
Effect of Age
To evaluate the effect of age on the performance of the ET-B model, the controller 1400 divided the dataset into three subgroups: group 1 (18 to 35 years), group 2 (35 to 50 years), and group 3 (50 to 70 years). The controller 1400 then performed three different training/testing combinations: (i) train on group 1, test on groups 2 and 3; (ii) train on group 2, test on groups 1 and 3; and (iii) train on group 3, test on groups 1 and 2. The following table provides a summary of the AUCs (mean AUC (95% CI)) obtained for each model when trained and tested across different age groups.
The performance of the model was similar when trained and tested within the same age group; however, it dropped significantly (approximately a 10% reduction in overall AUC) during cross training and testing (trained and tested on different groups).
Effect of Sex
To evaluate the influence of sex, the controller 1400 performed cross training and testing, i.e., the controller 1400 trained the ET-B model on males and tested it on females, and vice versa. When trained and tested within the same sex, the prediction performance of the ensemble model was similar: AUC=0.88 (0.82-0.92) and 0.90 (0.85-0.94) for males and females, respectively. However, the overall performance dropped by 9% (0.79 (0.75-0.85)) and 8% (0.82 (0.77-0.88)) for males and females, respectively, during cross training and testing.
Model
In recent years, there has been growing interest in developing EEG-based level-of-sedation monitors. However, among several unresolved important questions, it was not clear why these monitors failed to perform across different anesthetic drugs and patient groups. Using a large set of 44 QEEG features, the ensemble tree with bagging (ET-B) machine learning model achieved the best prediction performance, with AUC>0.85 to discriminate between awake and sedated states. Thus, in some instances, this model can be used for a drug-independent, nonlinear machine learning based sedation level prediction system. In some instances, individual features and/or features derived from the spectral domain are not sufficient for real-time sedation level prediction at the population level. Further, in some instances, the addition of remifentanil affects the prediction performance of different features. Moreover, in some aspects, it is important to include all age groups and sexes to develop a robust patient-independent sedation level monitoring system.
The EEG is the only technique available to accurately monitor sedation levels in real-time. One of the issues in developing EEG-based sedation level monitors is "feature engineering": which features should be used to accurately predict sedation states? Current EEG-based sedation level monitors use either a single feature or a few expert-defined spectral features to predict sedation levels. Additionally, the addition of remifentanil significantly decreased the predictive ability of all features as shown in
Except for the tree-based methods, the performance of all other machine learning models was significantly influenced by the addition of remifentanil. ET-B is an ensemble algorithm that develops a predictive model by combining multiple decisions to decrease bias/variance via bagging, or bootstrap aggregation. A highly robust predictive decision is obtained by majority voting of the decisions from the individual classifiers in the ensemble. The ET-B algorithm selected a different combination of features to differentiate between awake and sedated states. Only four features (BSR, standard deviation of FM, SVDE and FD) were commonly selected in all conditions, making them important features for predicting sedation levels. It should be noted that only two features from the spectral domain (power in the alpha band and power in the beta band) were selected by the ET-B algorithm, suggesting that features derived from traditional spectral analysis alone are not sufficient to track sedation levels.
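The bagging scheme described here can be illustrated with scikit-learn's BaggingClassifier, whose default base learner is a decision tree: each tree is fit on a bootstrap resample, and predictions are aggregated across trees. The features below are synthetic stand-ins, not the study's QEEG set:

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 8))                        # stand-in feature matrix
y = (X[:, 0] - X[:, 1] + rng.normal(0, 0.3, 300) > 0).astype(int)

# Bagging (bootstrap aggregation): each of the 100 trees is trained on a
# bootstrap resample of the training data; the ensemble output is the
# aggregated vote / averaged probability across trees.
etb = BaggingClassifier(n_estimators=100, bootstrap=True, random_state=0)
etb.fit(X, y)

p_sedated = etb.predict_proba(X)[:, 1]               # per-epoch probability
```

Averaging over resampled trees reduces the variance of any single tree, which is consistent with the stability the text reports for ET-B when remifentanil data is added.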
Accordingly, by pooling data from different drug, age and sex groups, it is possible to develop a robust real-time sedation level prediction system using advanced nonlinear machine learning algorithms. Features derived from the traditional spectrogram alone may not be sufficient to accurately predict levels of sedation.
The EEG hardware system 1200 can include an EEG-adaptor cable 1220 for carrying the electrical signals from the EEG sensor 1210 to an adaptor 1230. The EEG adaptor cable 1220 can include an interface 1302 as shown in
The EEG hardware system 1200 can include an adaptor 1230 for interfacing with both the EEG sensor 1210 and a patient monitor 1250. The adaptor 1230 can be a hardware module including circuits and other hardware components for processing EEG signals. In an embodiment, the adaptor 1230 can include one or more hardware processors 1232, a memory 1234, and power electronics 1236. The hardware processor 1232 can be programmed to implement the processes described herein for analyzing EEG signals. The memory 1234 can store instructions that can be executed by the hardware processor 1232. The memory 1234 can also store system parameters, including predetermined thresholds and conditions. The power electronics 1236 can include circuits for analog to digital conversion. The power electronics 1236 can also include filter circuitry for processing EEG signals. Some of the filters are stored as executable instructions and can be executed by the hardware processor 1232. The adaptor 1230 can generate outputs based on the received EEG signals and transmit the generated output to the patient monitor 1250. In some embodiments, the hardware processor 1252 of the patient monitor 1250 does not need to process any of the EEG signals. The patient monitor 1250 can receive the generated output for display or calculation of other health parameters. The adaptor 1230 and the patient monitor 1250 can be coupled with the adaptor-monitor cable 1240. The adaptor-monitor cable 1240 can include an interface 1304 as shown in
The patient monitor 1250 can be a multi-parameter patient monitor for processing and analyzing sensor signals. The patient monitor 1250 includes one or more hardware processors 1252, a memory 1254, a display 1256, and power electronics 1258. The hardware processors 1252 of the patient monitor can be programmed to execute instructions stored in either the onboard memory 1234 of the adaptor 1230 or the memory 1254 of the patient monitor. The patient monitor 1250 can also include a display 1256 that can display health parameters and graphs generated from the analysis of the received raw EEG signals or signals processed by the adaptor 1230.
The controller 1400 can include a signal collection engine 1410 for collecting and storing EEG signals in a memory. In an embodiment, the signal collection engine 1410 can store a circular buffer of EEG signals in a memory, which can refer to the memory 1234 or 1254 or a combination. The circular buffer can be 1.2 seconds. In other embodiments, the circular buffer can be more than 1.2 seconds, such as 2.4 seconds, 5 seconds, 10 seconds or more. Yet, in other embodiments, the circular buffer can be less than 1.2 seconds.
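A deque with a maximum length is one simple way to sketch such a circular buffer of streamed EEG samples: once full, appending a new sample silently evicts the oldest one. The sampling rate here is an assumption for illustration:

```python
from collections import deque

FS = 250              # assumed sampling rate (Hz) for this sketch
BUFFER_SECONDS = 1.2  # buffer length from the embodiment above

# A deque with maxlen behaves as a circular buffer: when full, each
# append discards the oldest sample automatically.
buffer = deque(maxlen=int(FS * BUFFER_SECONDS))

for sample in range(1000):   # stand-in for streamed EEG samples
    buffer.append(sample)

# The buffer always holds the most recent 1.2 s of signal.
len(buffer)   # -> 300
buffer[0]     # oldest retained sample -> 700
```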
The controller 1400 can also include a display engine 1404. The display engine 1404 can generate a user interface for displaying the DSA on a display 1256 of the patient monitor 1250. In an embodiment, the display engine displays a state of sedation for the patient as determined above by the machine learning models. The display engine 1404 can also generate graphs and health parameters for display based on the determined state of sedation.
In some instances, the controller 1400 can include a Feature Extraction Engine 1412 to extract the QEEG features (as shown in Table 1 above) from the EEG signals.
As discussed herein, the terms "sedation level", "patient state indicator", and "sedation index" and the like are used interchangeably and refer to an indicia of measurement that is measured and tracked internally by the controller 1400. The patient state indicator can be a numerical value, a textual description, a color indicator, or the like. In an embodiment, the patient state indicator is based on a numerical scale from 0 to 100, where 100 would represent an awake state and a lower number would indicate that a patient is likely in one of the sedated states. The patient state indicator may also be a textual description indicating that the patient is awake, in light sedation, moderate sedation, deep sedation, or the like. One of ordinary skill in the art will understand that in some embodiments, the patient state indicator may be expressed as both a numerical indicator and as a textual description and/or the like at the same time. One of ordinary skill in the art will also understand that a patient state indicator or sedation index that is expressed as a numerical value can be converted to a textual description, color indicator, or the like. The patient state indicator or sedation index can provide a measure of the state of consciousness of a patient. The models described above provide rules for improving the automatic determination of the patient state indicator. For example, in some instances, certain machine learning models were found to provide a better estimate (as discussed above), and certain features were also found to provide a better estimate across a diverse cross-section. The rules improve the correlation between the patient state indicator and the physiological state of the patient. Accordingly, caregivers can use the patient state indicator to guide them while treating a patient undergoing anesthesia. For example, if the indicator shows that the patient is coming out of anesthesia prematurely, the anesthesiologist can increase the dosage.
Furthermore, the anesthesiologist can also use the index to monitor and deliver the sedative drug. Accordingly, the models described above improve the determination of sedation index and correspondingly improve the field of medicine and the treatment provided to a patient.
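A conversion from a numerical patient state index to a textual description could look like the following sketch; the bin edges are illustrative assumptions, not thresholds from this disclosure:

```python
# Hypothetical mapping from a 0-100 patient state index to a textual
# description; the cut points below are illustrative only.

def describe_state(index):
    """Return a textual sedation description for a 0-100 index."""
    if index >= 80:
        return "awake"
    if index >= 60:
        return "light sedation"
    if index >= 40:
        return "moderate sedation"
    return "deep sedation"

describe_state(100)   # -> "awake"
describe_state(45)    # -> "moderate sedation"
```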
Embodiments have been described in connection with the accompanying drawings. However, it should be understood that the figures are not drawn to scale. Distances, angles, etc. are merely illustrative and do not necessarily bear an exact relationship to actual dimensions and layout of the devices illustrated. In addition, the foregoing embodiments have been described at a level of detail to allow one of ordinary skill in the art to make and use the devices, systems, etc. described herein. A wide variety of variation is possible. Components, elements, and/or steps can be altered, added, removed, or rearranged. While certain embodiments have been explicitly described, other embodiments will become apparent to those of ordinary skill in the art based on this disclosure.
Conditional language used herein, such as, among others, “can,” “could,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or states. Thus, such conditional language is not generally intended to imply that features, elements and/or states are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements and/or states are included or are to be performed in any particular embodiment.
Depending on the embodiment, certain acts, events, or functions of any of the methods described herein can be performed in a different sequence, can be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the method). Moreover, in certain embodiments, acts or events can be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors or processor cores, rather than sequentially.
The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations of both as discussed above.
While the above detailed description has shown, described, and pointed out novel features as applied to various embodiments, it will be understood that various omissions, substitutions, and changes in the form and details of the devices or algorithms illustrated can be made without departing from the spirit of the disclosure. As will be recognized, certain embodiments of the inventions described herein can be embodied within a form that does not provide all of the features and benefits set forth herein, as some features can be used or practiced separately from others. The scope of certain inventions disclosed herein is indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
6411373 | Garside et al. | Jun 2002 | B1 |
6415167 | Blank et al. | Jul 2002 | B1 |
6430437 | Marro | Aug 2002 | B1 |
6430525 | Weber et al. | Aug 2002 | B1 |
6463311 | Diab | Oct 2002 | B1 |
6470199 | Kopotic et al. | Oct 2002 | B1 |
6487429 | Hockersmith et al. | Nov 2002 | B2 |
6505059 | Kollias et al. | Jan 2003 | B1 |
6525386 | Mills et al. | Feb 2003 | B1 |
6526300 | Kiani et al. | Feb 2003 | B1 |
6534012 | Hazen et al. | Mar 2003 | B1 |
6542764 | Al-Ali et al. | Apr 2003 | B1 |
6580086 | Schulz et al. | Jun 2003 | B1 |
6584336 | Ali et al. | Jun 2003 | B1 |
6587196 | Stippick et al. | Jul 2003 | B1 |
6587199 | Luu | Jul 2003 | B1 |
6595316 | Cybulski et al. | Jul 2003 | B2 |
6597932 | Tian et al. | Jul 2003 | B2 |
6606511 | Ali et al. | Aug 2003 | B1 |
6635559 | Greenwald et al. | Oct 2003 | B2 |
6639668 | Trepagnier | Oct 2003 | B1 |
6640116 | Diab | Oct 2003 | B2 |
6640117 | Makarewicz et al. | Oct 2003 | B2 |
6658276 | Kiani et al. | Dec 2003 | B2 |
6661161 | Lanzo et al. | Dec 2003 | B1 |
6697656 | Al-Ali | Feb 2004 | B1 |
6697658 | Al-Ali | Feb 2004 | B2 |
RE38476 | Diab et al. | Mar 2004 | E |
RE38492 | Diab et al. | Apr 2004 | E |
6738652 | Mattu et al. | May 2004 | B2 |
6760607 | Al-Ali | Jul 2004 | B2 |
6788965 | Ruchti et al. | Sep 2004 | B2 |
6816241 | Grubisic | Nov 2004 | B2 |
6822564 | Al-Ali | Nov 2004 | B2 |
6850787 | Weber et al. | Feb 2005 | B2 |
6850788 | Al-Ali | Feb 2005 | B2 |
6876931 | Lorenz et al. | Apr 2005 | B2 |
6920345 | Al-Ali et al. | Jul 2005 | B2 |
6934570 | Kiani et al. | Aug 2005 | B2 |
6943348 | Coffin IV | Sep 2005 | B1 |
6956649 | Acosta et al. | Oct 2005 | B2 |
6961598 | Diab | Nov 2005 | B2 |
6970792 | Diab | Nov 2005 | B1 |
6985764 | Mason et al. | Jan 2006 | B2 |
6990364 | Ruchti et al. | Jan 2006 | B2 |
6998247 | Monfre et al. | Feb 2006 | B2 |
7003338 | Weber et al. | Feb 2006 | B2 |
7015451 | Dalke et al. | Mar 2006 | B2 |
7027849 | Al-Ali | Apr 2006 | B2 |
D526719 | Richie, Jr. et al. | Aug 2006 | S |
7096052 | Mason et al. | Aug 2006 | B2 |
7096054 | Abdul-Hafiz et al. | Aug 2006 | B2 |
D529616 | Deros et al. | Oct 2006 | S |
7133710 | Acosta et al. | Nov 2006 | B2 |
7142901 | Kiani et al. | Nov 2006 | B2 |
7225006 | Al-Ali et al. | May 2007 | B2 |
RE39672 | Shehada et al. | Jun 2007 | E |
7254429 | Schurman et al. | Aug 2007 | B2 |
7254431 | Al-Ali et al. | Aug 2007 | B2 |
7254434 | Schulz et al. | Aug 2007 | B2 |
7274955 | Kiani et al. | Sep 2007 | B2 |
D554263 | Al-Ali et al. | Oct 2007 | S |
7280858 | Al-Ali et al. | Oct 2007 | B2 |
7289835 | Mansfield et al. | Oct 2007 | B2 |
7292883 | De Felice et al. | Nov 2007 | B2 |
7341559 | Schulz et al. | Mar 2008 | B2 |
7343186 | Lamego et al. | Mar 2008 | B2 |
D566282 | Al-Ali et al. | Apr 2008 | S |
7356365 | Schurman | Apr 2008 | B2 |
7371981 | Abdul-Hafiz | May 2008 | B2 |
7373193 | Al-Ali et al. | May 2008 | B2 |
7377794 | Al-Ali et al. | May 2008 | B2 |
7395158 | Monfre et al. | Jul 2008 | B2 |
7415297 | Al-Ali et al. | Aug 2008 | B2 |
7438683 | Al-Ali et al. | Oct 2008 | B2 |
7483729 | Al-Ali et al. | Jan 2009 | B2 |
D587657 | Al-Ali et al. | Mar 2009 | S |
7500950 | Al-Ali et al. | Mar 2009 | B2 |
7509494 | Al-Ali | Mar 2009 | B2 |
7510849 | Schurman et al. | Mar 2009 | B2 |
7514725 | Wojtczuk et al. | Apr 2009 | B2 |
7519406 | Blank et al. | Apr 2009 | B2 |
D592507 | Wachman et al. | May 2009 | S |
7530942 | Diab | May 2009 | B1 |
7593230 | Abul-Haj et al. | Sep 2009 | B2 |
7596398 | Al-Ali et al. | Sep 2009 | B2 |
7606608 | Blank et al. | Oct 2009 | B2 |
7620674 | Ruchti et al. | Nov 2009 | B2 |
D606659 | Kiani et al. | Dec 2009 | S |
7629039 | Eckerbom et al. | Dec 2009 | B2 |
7640140 | Ruchti et al. | Dec 2009 | B2 |
7647083 | Al-Ali et al. | Jan 2010 | B2 |
D609193 | Al-Ali et al. | Feb 2010 | S |
D614305 | Al-Ali et al. | Apr 2010 | S |
7697966 | Monfre et al. | Apr 2010 | B2 |
7698105 | Ruchti et al. | Apr 2010 | B2 |
RE41317 | Parker | May 2010 | E |
RE41333 | Blank et al. | May 2010 | E |
7729733 | Al-Ali et al. | Jun 2010 | B2 |
7761127 | Al-Ali et al. | Jul 2010 | B2 |
7764982 | Dalke et al. | Jul 2010 | B2 |
D621516 | Kiani et al. | Aug 2010 | S |
7791155 | Diab | Sep 2010 | B2 |
RE41912 | Parker | Nov 2010 | E |
7880626 | Al-Ali et al. | Feb 2011 | B2 |
7909772 | Popov et al. | Mar 2011 | B2 |
7919713 | Al-Ali et al. | Apr 2011 | B2 |
7937128 | Al-Ali | May 2011 | B2 |
7937129 | Mason et al. | May 2011 | B2 |
7941199 | Kiani | May 2011 | B2 |
7957780 | Lamego et al. | Jun 2011 | B2 |
7962188 | Kiani et al. | Jun 2011 | B2 |
7976472 | Kiani | Jul 2011 | B2 |
7990382 | Kiani | Aug 2011 | B2 |
8008088 | Bellott et al. | Aug 2011 | B2 |
RE42753 | Kiani-Azarbayjany et al. | Sep 2011 | E |
8028701 | Al-Ali et al. | Oct 2011 | B2 |
8048040 | Kiani | Nov 2011 | B2 |
8050728 | Al-Ali et al. | Nov 2011 | B2 |
RE43169 | Parker | Feb 2012 | E |
8118620 | Al-Ali et al. | Feb 2012 | B2 |
8130105 | Al-Ali et al. | Mar 2012 | B2 |
8182443 | Kiani | May 2012 | B1 |
8190223 | Al-Ali et al. | May 2012 | B2 |
8203438 | Kiani et al. | Jun 2012 | B2 |
8203704 | Merritt et al. | Jun 2012 | B2 |
8219172 | Schurman et al. | Jul 2012 | B2 |
8224411 | Al-Ali et al. | Jul 2012 | B2 |
8229532 | Davis | Jul 2012 | B2 |
8233955 | Al-Ali et al. | Jul 2012 | B2 |
8255026 | Al-Ali | Aug 2012 | B1 |
8265723 | McHale et al. | Sep 2012 | B1 |
8274360 | Sampath et al. | Sep 2012 | B2 |
8280473 | Al-Ali | Oct 2012 | B2 |
8315683 | Al-Ali et al. | Nov 2012 | B2 |
RE43860 | Parker | Dec 2012 | E |
8346330 | Lamego | Jan 2013 | B2 |
8353842 | Al-Ali et al. | Jan 2013 | B2 |
8355766 | MacNeish, III et al. | Jan 2013 | B2 |
8374665 | Lamego | Feb 2013 | B2 |
8388353 | Kiani et al. | Mar 2013 | B2 |
8401602 | Kiani | Mar 2013 | B2 |
8414499 | Al-Ali et al. | Apr 2013 | B2 |
8418524 | Al-Ali | Apr 2013 | B2 |
8428967 | Olsen et al. | Apr 2013 | B2 |
8430817 | Al-Ali et al. | Apr 2013 | B1 |
8437825 | Dalvi et al. | May 2013 | B2 |
8455290 | Siskavich | Jun 2013 | B2 |
8457707 | Kiani | Jun 2013 | B2 |
8471713 | Poeze et al. | Jun 2013 | B2 |
8473020 | Kiani et al. | Jun 2013 | B2 |
8509867 | Workman et al. | Aug 2013 | B2 |
8515509 | Bruinsma et al. | Aug 2013 | B2 |
8523781 | Al-Ali | Sep 2013 | B2 |
D692145 | Al-Ali et al. | Oct 2013 | S |
8571617 | Reichgott et al. | Oct 2013 | B2 |
8571618 | Lamego et al. | Oct 2013 | B1 |
8571619 | Al-Ali et al. | Oct 2013 | B2 |
8577431 | Lamego et al. | Nov 2013 | B2 |
8584345 | Al-Ali et al. | Nov 2013 | B2 |
8588880 | Abdul-Hafiz et al. | Nov 2013 | B2 |
8630691 | Lamego et al. | Jan 2014 | B2 |
8641631 | Sierra et al. | Feb 2014 | B2 |
8652060 | Al-Ali | Feb 2014 | B2 |
8666468 | Al-Ali | Mar 2014 | B1 |
8670811 | O'Reilly | Mar 2014 | B2 |
RE44823 | Parker | Apr 2014 | E |
RE44875 | Kiani et al. | Apr 2014 | E |
8688183 | Bruinsma et al. | Apr 2014 | B2 |
8690799 | Telfort et al. | Apr 2014 | B2 |
8702627 | Telfort et al. | Apr 2014 | B2 |
8712494 | MacNeish, III et al. | Apr 2014 | B1 |
8715206 | Telfort et al. | May 2014 | B2 |
8723677 | Kiani | May 2014 | B1 |
8740792 | Kiani et al. | Jun 2014 | B1 |
8755535 | Telfort et al. | Jun 2014 | B2 |
8755872 | Marinow | Jun 2014 | B1 |
8764671 | Kiani | Jul 2014 | B2 |
8768423 | Shakespeare et al. | Jul 2014 | B2 |
8771204 | Telfort et al. | Jul 2014 | B2 |
8781544 | Al-Ali et al. | Jul 2014 | B2 |
8790268 | Al-Ali | Jul 2014 | B2 |
8801613 | Al-Ali et al. | Aug 2014 | B2 |
8821397 | Al-Ali et al. | Sep 2014 | B2 |
8821415 | Al-Ali et al. | Sep 2014 | B2 |
8830449 | Lamego et al. | Sep 2014 | B1 |
8840549 | Al-Ali et al. | Sep 2014 | B2 |
8852094 | Al-Ali et al. | Oct 2014 | B2 |
8852994 | Wojtczuk et al. | Oct 2014 | B2 |
8897847 | Al-Ali | Nov 2014 | B2 |
8911377 | Al-Ali | Dec 2014 | B2 |
8989831 | Al-Ali et al. | Mar 2015 | B2 |
8998809 | Kiani | Apr 2015 | B2 |
9066666 | Kiani | Jun 2015 | B2 |
9066680 | Al-Ali et al. | Jun 2015 | B1 |
9095316 | Welch et al. | Aug 2015 | B2 |
9106038 | Telfort et al. | Aug 2015 | B2 |
9107625 | Telfort et al. | Aug 2015 | B2 |
9131881 | Diab et al. | Sep 2015 | B2 |
9138180 | Coverston et al. | Sep 2015 | B1 |
9153112 | Kiani et al. | Oct 2015 | B1 |
9192329 | Al-Ali | Nov 2015 | B2 |
9192351 | Telfort et al. | Nov 2015 | B1 |
9195385 | Al-Ali et al. | Nov 2015 | B2 |
9211095 | Al-Ali | Dec 2015 | B1 |
9218454 | Kiani et al. | Dec 2015 | B2 |
9245668 | Vo et al. | Jan 2016 | B1 |
9267572 | Barker et al. | Feb 2016 | B2 |
9277880 | Poeze et al. | Mar 2016 | B2 |
9307928 | Al-Ali et al. | Apr 2016 | B1 |
9323894 | Kiani | Apr 2016 | B2 |
D755392 | Hwang et al. | May 2016 | S |
9326712 | Kiani | May 2016 | B1 |
9392945 | Al-Ali et al. | Jul 2016 | B2 |
9408542 | Kinast et al. | Aug 2016 | B1 |
9436645 | Al-Ali et al. | Sep 2016 | B2 |
9445759 | Lamego et al. | Sep 2016 | B1 |
9474474 | Lamego et al. | Oct 2016 | B2 |
9480435 | Olsen | Nov 2016 | B2 |
9510779 | Poeze et al. | Dec 2016 | B2 |
9517024 | Kiani et al. | Dec 2016 | B2 |
9532722 | Lamego et al. | Jan 2017 | B2 |
9560996 | Kiani | Feb 2017 | B2 |
9579039 | Jansen et al. | Feb 2017 | B2 |
9622692 | Lamego et al. | Apr 2017 | B2 |
D788312 | Al-Ali et al. | May 2017 | S |
9649054 | Lamego et al. | May 2017 | B2 |
9697928 | Al-Ali et al. | Jul 2017 | B2 |
9717458 | Lamego et al. | Aug 2017 | B2 |
9724016 | Al-Ali et al. | Aug 2017 | B1 |
9724024 | Al-Ali | Aug 2017 | B2 |
9724025 | Kiani et al. | Aug 2017 | B1 |
9749232 | Sampath et al. | Aug 2017 | B2 |
9750442 | Olsen | Sep 2017 | B2 |
9750461 | Telfort | Sep 2017 | B1 |
9775545 | Al-Ali et al. | Oct 2017 | B2 |
9778079 | Al-Ali et al. | Oct 2017 | B1 |
9782077 | Lamego et al. | Oct 2017 | B2 |
9787568 | Lamego et al. | Oct 2017 | B2 |
9808188 | Perea et al. | Nov 2017 | B1 |
9839379 | Al-Ali et al. | Dec 2017 | B2 |
9839381 | Weber et al. | Dec 2017 | B1 |
9847749 | Kiani et al. | Dec 2017 | B2 |
9848800 | Lee et al. | Dec 2017 | B1 |
9861298 | Eckerbom et al. | Jan 2018 | B2 |
9861305 | Weber et al. | Jan 2018 | B1 |
9877650 | Muhsin et al. | Jan 2018 | B2 |
9891079 | Dalvi | Feb 2018 | B2 |
9924897 | Abdul-Hafiz | Mar 2018 | B1 |
9936917 | Poeze et al. | Apr 2018 | B2 |
9955937 | Telfort | May 2018 | B2 |
9965946 | Al-Ali et al. | May 2018 | B2 |
D820865 | Muhsin et al. | Jun 2018 | S |
9986952 | Dalvi et al. | Jun 2018 | B2 |
D822215 | Al-Ali et al. | Jul 2018 | S |
D822216 | Barker et al. | Jul 2018 | S |
10010276 | Al-Ali et al. | Jul 2018 | B2 |
10086138 | Novak, Jr. | Oct 2018 | B1 |
10111591 | Dyell et al. | Oct 2018 | B2 |
D833624 | DeJong et al. | Nov 2018 | S |
10123729 | Dyell et al. | Nov 2018 | B2 |
D835282 | Barker et al. | Dec 2018 | S |
D835283 | Barker et al. | Dec 2018 | S |
D835284 | Barker et al. | Dec 2018 | S |
D835285 | Barker et al. | Dec 2018 | S |
10149616 | Al-Ali et al. | Dec 2018 | B2 |
10154815 | Al-Ali et al. | Dec 2018 | B2 |
10159412 | Lamego et al. | Dec 2018 | B2 |
10188348 | Al-Ali et al. | Jan 2019 | B2 |
RE47218 | Al-Ali | Feb 2019 | E |
RE47244 | Kiani et al. | Feb 2019 | E |
RE47249 | Kiani et al. | Feb 2019 | E |
10205291 | Scruggs et al. | Feb 2019 | B2 |
10226187 | Al-Ali et al. | Mar 2019 | B2 |
10231657 | Al-Ali et al. | Mar 2019 | B2 |
10231670 | Blank et al. | Mar 2019 | B2 |
RE47353 | Kiani et al. | Apr 2019 | E |
10279247 | Kiani | May 2019 | B2 |
10292664 | Al-Ali | May 2019 | B2 |
10299720 | Brown et al. | May 2019 | B2 |
10327337 | Schmidt et al. | Jun 2019 | B2 |
10327713 | Barker et al. | Jun 2019 | B2 |
10332630 | Al-Ali | Jun 2019 | B2 |
10383520 | Wojtczuk et al. | Aug 2019 | B2 |
10383527 | Al-Ali | Aug 2019 | B2 |
10388120 | Muhsin et al. | Aug 2019 | B2 |
D864120 | Forrest et al. | Oct 2019 | S |
10441181 | Telfort et al. | Oct 2019 | B1 |
10441196 | Eckerbom et al. | Oct 2019 | B2 |
10448844 | Al-Ali et al. | Oct 2019 | B2 |
10448871 | Al-Ali et al. | Oct 2019 | B2 |
10456038 | Lamego et al. | Oct 2019 | B2 |
10463340 | Telfort et al. | Nov 2019 | B2 |
10471159 | Lapotko et al. | Nov 2019 | B1 |
10505311 | Al-Ali et al. | Dec 2019 | B2 |
10524738 | Olsen | Jan 2020 | B2 |
10532174 | Al-Ali | Jan 2020 | B2 |
10537285 | Shreim et al. | Jan 2020 | B2 |
10542903 | Al-Ali et al. | Jan 2020 | B2 |
10555678 | Dalvi et al. | Feb 2020 | B2 |
10568553 | O'Neil et al. | Feb 2020 | B2 |
RE47882 | Al-Ali | Mar 2020 | E |
10608817 | Haider et al. | Mar 2020 | B2 |
D880477 | Forrest et al. | Apr 2020 | S |
10617302 | Al-Ali et al. | Apr 2020 | B2 |
10617335 | Al-Ali et al. | Apr 2020 | B2 |
10637181 | Al-Ali et al. | Apr 2020 | B2 |
D886849 | Muhsin et al. | Jun 2020 | S |
D887548 | Abdul-Hafiz et al. | Jun 2020 | S |
D887549 | Abdul-Hafiz et al. | Jun 2020 | S |
10667764 | Ahmed et al. | Jun 2020 | B2 |
D890708 | Forrest et al. | Jul 2020 | S |
10721785 | Al-Ali | Jul 2020 | B2 |
10736518 | Al-Ali et al. | Aug 2020 | B2 |
10750984 | Pauley et al. | Aug 2020 | B2 |
D897098 | Al-Ali | Sep 2020 | S |
10779098 | Iswanto et al. | Sep 2020 | B2 |
10827961 | Iyengar et al. | Nov 2020 | B1 |
10828007 | Telfort et al. | Nov 2020 | B1 |
10832818 | Muhsin et al. | Nov 2020 | B2 |
10849554 | Shreim et al. | Dec 2020 | B2 |
10856750 | Indorf et al. | Dec 2020 | B2 |
D906970 | Forrest et al. | Jan 2021 | S |
D908213 | Abdul-Hafiz et al. | Jan 2021 | S |
10918281 | Al-Ali et al. | Feb 2021 | B2 |
10932705 | Muhsin et al. | Mar 2021 | B2 |
10932729 | Kiani et al. | Mar 2021 | B2 |
10939878 | Kiani et al. | Mar 2021 | B2 |
10956950 | Al-Ali et al. | Mar 2021 | B2 |
D916135 | Indorf et al. | Apr 2021 | S |
D917046 | Abdul-Hafiz et al. | Apr 2021 | S |
D917550 | Indorf et al. | Apr 2021 | S |
D917564 | Indorf et al. | Apr 2021 | S |
D917704 | Al-Ali et al. | Apr 2021 | S |
10987066 | Chandran et al. | Apr 2021 | B2 |
10991135 | Al-Ali et al. | Apr 2021 | B2 |
D919094 | Al-Ali et al. | May 2021 | S |
D919100 | Al-Ali et al. | May 2021 | S |
11006867 | Al-Ali | May 2021 | B2 |
D921202 | Al-Ali et al. | Jun 2021 | S |
11024064 | Muhsin et al. | Jun 2021 | B2 |
11026604 | Chen et al. | Jun 2021 | B2 |
D925597 | Chandran et al. | Jul 2021 | S |
D927699 | Al-Ali et al. | Aug 2021 | S |
11076777 | Lee et al. | Aug 2021 | B2 |
11114188 | Poeze et al. | Sep 2021 | B2 |
D933232 | Al-Ali et al. | Oct 2021 | S |
D933233 | Al-Ali et al. | Oct 2021 | S |
D933234 | Al-Ali et al. | Oct 2021 | S |
11145408 | Sampath et al. | Oct 2021 | B2 |
11147518 | Al-Ali et al. | Oct 2021 | B1 |
11185262 | Al-Ali et al. | Nov 2021 | B2 |
11191484 | Kiani et al. | Dec 2021 | B2 |
D946596 | Ahmed | Mar 2022 | S |
D946597 | Ahmed | Mar 2022 | S |
D946598 | Ahmed | Mar 2022 | S |
D946617 | Ahmed | Mar 2022 | S |
11272839 | Al-Ali et al. | Mar 2022 | B2 |
11289199 | Al-Ali | Mar 2022 | B2 |
RE49034 | Al-Ali | Apr 2022 | E |
11298021 | Muhsin et al. | Apr 2022 | B2 |
D950580 | Ahmed | May 2022 | S |
D950599 | Ahmed | May 2022 | S |
D950738 | Al-Ali et al. | May 2022 | S |
D957648 | Al-Ali | Jul 2022 | S |
11382567 | O'Brien et al. | Jul 2022 | B2 |
11389093 | Triman et al. | Jul 2022 | B2 |
11406286 | Al-Ali et al. | Aug 2022 | B2 |
11417426 | Muhsin et al. | Aug 2022 | B2 |
11439329 | Lamego | Sep 2022 | B2 |
11445948 | Scruggs et al. | Sep 2022 | B2 |
D965789 | Al-Ali et al. | Oct 2022 | S |
D967433 | Al-Ali et al. | Oct 2022 | S |
11464410 | Muhsin | Oct 2022 | B2 |
11504058 | Sharma et al. | Nov 2022 | B1 |
11504066 | Dalvi et al. | Nov 2022 | B1 |
D971933 | Ahmed | Dec 2022 | S |
D973072 | Ahmed | Dec 2022 | S |
D973685 | Ahmed | Dec 2022 | S |
D973686 | Ahmed | Dec 2022 | S |
D974193 | Forrest et al. | Jan 2023 | S |
D979516 | Al-Ali et al. | Feb 2023 | S |
D980091 | Forrest et al. | Mar 2023 | S |
11596363 | Lamego | Mar 2023 | B2 |
11627919 | Kiani et al. | Apr 2023 | B2 |
11637437 | Al-Ali et al. | Apr 2023 | B2 |
D985498 | Al-Ali et al. | May 2023 | S |
11653862 | Dalvi et al. | May 2023 | B2 |
D989112 | Muhsin et al. | Jun 2023 | S |
D989327 | Al-Ali et al. | Jun 2023 | S |
11678829 | Al-Ali et al. | Jun 2023 | B2 |
11679579 | Al-Ali | Jun 2023 | B2 |
11684296 | Vo et al. | Jun 2023 | B2 |
11692934 | Normand et al. | Jul 2023 | B2 |
11701043 | Al-Ali et al. | Jul 2023 | B2 |
D997365 | Hwang | Aug 2023 | S |
11721105 | Ranasinghe et al. | Aug 2023 | B2 |
11730379 | Ahmed et al. | Aug 2023 | B2 |
D998625 | Indorf et al. | Sep 2023 | S |
D998630 | Indorf et al. | Sep 2023 | S |
D998631 | Indorf et al. | Sep 2023 | S |
D999244 | Indorf et al. | Sep 2023 | S |
D999245 | Indorf et al. | Sep 2023 | S |
D999246 | Indorf et al. | Sep 2023 | S |
11766198 | Pauley et al. | Sep 2023 | B2 |
D1000975 | Al-Ali et al. | Oct 2023 | S |
11803623 | Kiani et al. | Oct 2023 | B2 |
11832940 | Diab et al. | Dec 2023 | B2 |
11872156 | Telfort et al. | Jan 2024 | B2 |
11879960 | Ranasinghe et al. | Jan 2024 | B2 |
11883129 | Olsen | Jan 2024 | B2 |
20010034477 | Mansfield et al. | Oct 2001 | A1 |
20010039483 | Brand et al. | Nov 2001 | A1 |
20020010401 | Bushmakin et al. | Jan 2002 | A1 |
20020058864 | Mansfield et al. | May 2002 | A1 |
20020133080 | Apruzzese et al. | Sep 2002 | A1 |
20030013975 | Kiani | Jan 2003 | A1 |
20030018243 | Gerhardt et al. | Jan 2003 | A1 |
20030144582 | Cohen et al. | Jul 2003 | A1 |
20030156288 | Barnum et al. | Aug 2003 | A1 |
20030212312 | Coffin, IV et al. | Nov 2003 | A1 |
20040106163 | Workman, Jr. et al. | Jun 2004 | A1 |
20050055276 | Kiani et al. | Mar 2005 | A1 |
20050234317 | Kiani | Oct 2005 | A1 |
20060073719 | Kiani | Apr 2006 | A1 |
20060189871 | Al-Ali et al. | Aug 2006 | A1 |
20070073116 | Kiani et al. | Mar 2007 | A1 |
20070180140 | Welch et al. | Aug 2007 | A1 |
20070244377 | Cozad et al. | Oct 2007 | A1 |
20080064965 | Jay et al. | Mar 2008 | A1 |
20080094228 | Welch et al. | Apr 2008 | A1 |
20080103375 | Kiani | May 2008 | A1 |
20080221418 | Al-Ali et al. | Sep 2008 | A1 |
20090036759 | Ault et al. | Feb 2009 | A1 |
20090093687 | Telfort et al. | Apr 2009 | A1 |
20090095926 | MacNeish, III | Apr 2009 | A1 |
20090247984 | Lamego et al. | Oct 2009 | A1 |
20090275813 | Davis | Nov 2009 | A1 |
20090275844 | Al-Ali | Nov 2009 | A1 |
20100004518 | Vo et al. | Jan 2010 | A1 |
20100030040 | Poeze et al. | Feb 2010 | A1 |
20100099964 | O'Reilly et al. | Apr 2010 | A1 |
20100234718 | Sampath et al. | Sep 2010 | A1 |
20100270257 | Wachman et al. | Oct 2010 | A1 |
20110028806 | Merritt et al. | Feb 2011 | A1 |
20110028809 | Goodman | Feb 2011 | A1 |
20110040197 | Welch et al. | Feb 2011 | A1 |
20110082711 | Poeze et al. | Apr 2011 | A1 |
20110087081 | Kiani et al. | Apr 2011 | A1 |
20110118561 | Tari et al. | May 2011 | A1 |
20110137297 | Kiani et al. | Jun 2011 | A1 |
20110172498 | Olsen et al. | Jul 2011 | A1 |
20110230733 | Al-Ali | Sep 2011 | A1 |
20120123231 | O'Reilly | May 2012 | A1 |
20120165629 | Merritt et al. | Jun 2012 | A1 |
20120209084 | Olsen et al. | Aug 2012 | A1 |
20120226117 | Lamego et al. | Sep 2012 | A1 |
20120283524 | Kiani et al. | Nov 2012 | A1 |
20130023775 | Lamego et al. | Jan 2013 | A1 |
20130041591 | Lamego | Feb 2013 | A1 |
20130060147 | Welch et al. | Mar 2013 | A1 |
20130096405 | Garfio | Apr 2013 | A1 |
20130296672 | O'Neil et al. | Nov 2013 | A1 |
20130345921 | Al-Ali et al. | Dec 2013 | A1 |
20140166076 | Kiani et al. | Jun 2014 | A1 |
20140180160 | Brown et al. | Jun 2014 | A1 |
20140187973 | Brown et al. | Jul 2014 | A1 |
20140275871 | Lamego et al. | Sep 2014 | A1 |
20140275872 | Merritt et al. | Sep 2014 | A1 |
20140316217 | Purdon et al. | Oct 2014 | A1 |
20140316218 | Purdon et al. | Oct 2014 | A1 |
20140323897 | Brown et al. | Oct 2014 | A1 |
20140323898 | Purdon et al. | Oct 2014 | A1 |
20150005600 | Blank et al. | Jan 2015 | A1 |
20150011907 | Purdon et al. | Jan 2015 | A1 |
20150073241 | Lamego | Mar 2015 | A1 |
20150080754 | Purdon et al. | Mar 2015 | A1 |
20150099950 | Al-Ali et al. | Apr 2015 | A1 |
20150106121 | Muhsin et al. | Apr 2015 | A1 |
20160196388 | Lamego | Jul 2016 | A1 |
20160367173 | Dalvi et al. | Dec 2016 | A1 |
20170024748 | Haider | Jan 2017 | A1 |
20170042488 | Muhsin | Feb 2017 | A1 |
20170173632 | Al-Ali | Jun 2017 | A1 |
20170251974 | Shreim et al. | Sep 2017 | A1 |
20170311891 | Kiani et al. | Nov 2017 | A1 |
20180103874 | Lee et al. | Apr 2018 | A1 |
20180242926 | Muhsin et al. | Aug 2018 | A1 |
20180247353 | Al-Ali et al. | Aug 2018 | A1 |
20180247712 | Muhsin et al. | Aug 2018 | A1 |
20180256087 | Al-Ali et al. | Sep 2018 | A1 |
20180296161 | Shreim et al. | Oct 2018 | A1 |
20180300919 | Muhsin et al. | Oct 2018 | A1 |
20180310822 | Indorf et al. | Nov 2018 | A1 |
20180310823 | Al-Ali et al. | Nov 2018 | A1 |
20180317826 | Muhsin et al. | Nov 2018 | A1 |
20180353084 | Wainright | Dec 2018 | A1 |
20190015023 | Monfre | Jan 2019 | A1 |
20190117070 | Muhsin et al. | Apr 2019 | A1 |
20190200941 | Chandran et al. | Jul 2019 | A1 |
20190239787 | Pauley et al. | Aug 2019 | A1 |
20190320906 | Olsen | Oct 2019 | A1 |
20190374139 | Kiani et al. | Dec 2019 | A1 |
20190374173 | Kiani et al. | Dec 2019 | A1 |
20190374713 | Kiani et al. | Dec 2019 | A1 |
20200060869 | Telfort et al. | Feb 2020 | A1 |
20200111552 | Ahmed | Apr 2020 | A1 |
20200113435 | Muhsin | Apr 2020 | A1 |
20200113488 | Al-Ali et al. | Apr 2020 | A1 |
20200113496 | Scruggs et al. | Apr 2020 | A1 |
20200113497 | Triman et al. | Apr 2020 | A1 |
20200113520 | Abdul-Hafiz et al. | Apr 2020 | A1 |
20200138288 | Al-Ali et al. | May 2020 | A1 |
20200138368 | Kiani et al. | May 2020 | A1 |
20200163597 | Dalvi et al. | May 2020 | A1 |
20200196877 | Vo et al. | Jun 2020 | A1 |
20200253474 | Muhsin et al. | Aug 2020 | A1 |
20200275841 | Telfort et al. | Sep 2020 | A1 |
20200288983 | Telfort et al. | Sep 2020 | A1 |
20200321793 | Al-Ali et al. | Oct 2020 | A1 |
20200329983 | Al-Ali et al. | Oct 2020 | A1 |
20200329984 | Al-Ali et al. | Oct 2020 | A1 |
20200329993 | Al-Ali et al. | Oct 2020 | A1 |
20200330037 | Al-Ali et al. | Oct 2020 | A1 |
20210022628 | Telfort et al. | Jan 2021 | A1 |
20210104173 | Pauley et al. | Apr 2021 | A1 |
20210113121 | Diab et al. | Apr 2021 | A1 |
20210117525 | Kiani et al. | Apr 2021 | A1 |
20210118581 | Kiani et al. | Apr 2021 | A1 |
20210121582 | Krishnamani et al. | Apr 2021 | A1 |
20210161465 | Barker et al. | Jun 2021 | A1 |
20210236729 | Kiani et al. | Aug 2021 | A1 |
20210256267 | Ranasinghe et al. | Aug 2021 | A1 |
20210256835 | Ranasinghe et al. | Aug 2021 | A1 |
20210275101 | Vo et al. | Sep 2021 | A1 |
20210290060 | Ahmed | Sep 2021 | A1 |
20210290072 | Forrest | Sep 2021 | A1 |
20210290080 | Ahmed | Sep 2021 | A1 |
20210290120 | Al-Ali | Sep 2021 | A1 |
20210290177 | Novak, Jr. | Sep 2021 | A1 |
20210290184 | Ahmed | Sep 2021 | A1 |
20210296008 | Novak, Jr. | Sep 2021 | A1 |
20210330228 | Olsen et al. | Oct 2021 | A1 |
20210386382 | Olsen et al. | Dec 2021 | A1 |
20210402110 | Pauley et al. | Dec 2021 | A1 |
20220026355 | Normand et al. | Jan 2022 | A1 |
20220039707 | Sharma et al. | Feb 2022 | A1 |
20220053892 | Al-Ali et al. | Feb 2022 | A1 |
20220071562 | Kiani | Mar 2022 | A1 |
20220096603 | Kiani et al. | Mar 2022 | A1 |
20220151521 | Krishnamani et al. | May 2022 | A1 |
20220218244 | Kiani et al. | Jul 2022 | A1 |
20220287574 | Telfort et al. | Sep 2022 | A1 |
20220296161 | Al-Ali et al. | Sep 2022 | A1 |
20220361819 | Al-Ali et al. | Nov 2022 | A1 |
20220379059 | Yu et al. | Dec 2022 | A1 |
20220392610 | Kiani et al. | Dec 2022 | A1 |
20230028745 | Al-Ali | Jan 2023 | A1 |
20230038389 | Vo | Feb 2023 | A1 |
20230045647 | Vo | Feb 2023 | A1 |
20230058052 | Al-Ali | Feb 2023 | A1 |
20230058342 | Kiani | Feb 2023 | A1 |
20230069789 | Koo et al. | Mar 2023 | A1 |
20230087671 | Telfort et al. | Mar 2023 | A1 |
20230110152 | Forrest et al. | Apr 2023 | A1 |
20230111198 | Yu et al. | Apr 2023 | A1 |
20230115397 | Vo et al. | Apr 2023 | A1 |
20230116371 | Mills et al. | Apr 2023 | A1 |
20230135297 | Kiani et al. | May 2023 | A1 |
20230138098 | Telfort et al. | May 2023 | A1 |
20230145155 | Krishnamani et al. | May 2023 | A1 |
20230147750 | Barker et al. | May 2023 | A1 |
20230210417 | Al-Ali et al. | Jul 2023 | A1 |
20230222805 | Muhsin et al. | Jul 2023 | A1 |
20230222887 | Muhsin et al. | Jul 2023 | A1 |
20230226331 | Kiani et al. | Jul 2023 | A1 |
20230284916 | Telfort | Sep 2023 | A1 |
20230284943 | Scruggs et al. | Sep 2023 | A1 |
20230301562 | Scruggs et al. | Sep 2023 | A1 |
20230346993 | Kiani et al. | Nov 2023 | A1 |
20230368221 | Haider | Nov 2023 | A1 |
20230371893 | Al-Ali et al. | Nov 2023 | A1 |
20230389837 | Krishnamani et al. | Dec 2023 | A1 |
20240016418 | Devadoss et al. | Jan 2024 | A1 |
20240016419 | Devadoss et al. | Jan 2024 | A1 |
20240047061 | Al-Ali et al. | Feb 2024 | A1 |
20240049310 | Al-Ali et al. | Feb 2024 | A1 |
20240049986 | Al-Ali et al. | Feb 2024 | A1 |
Number | Date | Country |
---|---|---|
109222950 | Jan 2019 | CN |
WO 2012154701 | Nov 2012 | WO |
WO 2020163640 | Aug 2020 | WO |
Entry |
---|
US 2022/0192529 A1, 06/2022, Al-Ali et al. (withdrawn) |
US 2024/0016391 A1, 01/2024, Lapotko et al. (withdrawn) |
Levy WJ. Effect of epoch length on power spectrum analysis of the EEG. Anesthesiology. Apr. 1987;66(4):489-95. doi: 10.1097/00000542-198704000-00007. PMID: 3565814. (Year: 1987). |
Machine Translation of CN109222950 (Year: 2019). |
Rampil IJ. A primer for EEG signal processing in anesthesia. Anesthesiology. Oct. 1998;89(4):980-1002. doi: 10.1097/00000542-199810000-00023. PMID: 9778016. (Year: 1998). |
Drover et al., “Patient State Index”, Best Practice & Research Clinical Anaesthesiology, 2006, vol. 20, No. 1, pp. 121-128. |
Greene et al., “Automated Estimation of Sedation Depth from the EEG”, 2007 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Lyon, France, Aug. 22-26, 2007, pp. 3188-3191. |
International Search Report and Written Opinion received in International Application No. PCT/US2020/017074, dated Jun. 4, 2020 in 11 pages. |
Schmidt et al., “Measurement of the Depth of Anesthesia”, Der Anaesthesist, 2008, vol. 57, pp. 9-36. |
Letter from Tara A. Ryan to Masimo Corporation re 510(k) No. K172890, U.S. Food & Drug Administration, dated Jan. 26, 2018. |
Letter from Todd D. Courtney to Masimo Corporation re 510(k) No. K203133, U.S. Food & Drug Administration, dated Feb. 25, 2022. |
Nagaraj et al., “Electroencephalogram Based Detection of Deep Sedation in ICU Patients Using Atomic Decomposition”, IEEE Transactions on Biomedical Engineering, Dec. 2018, vol. 65, No. 12, pp. 2684-2691. |
Ramaswamy et al., “A Novel Machine Learning based Drug-Independent Sedation Level Estimation using Quantitative Features from the Frontal Electroencephalogram”, Manuscript, Clinical trial registration: NCT 02043938, 2019, 36 pages. |
Number | Date | Country | |
---|---|---|---|
20200253544 A1 | Aug 2020 | US |
Number | Date | Country | |
---|---|---|---|
62847824 | May 2019 | US | |
62802575 | Feb 2019 | US |