Neuromodulation refers to altering neural activity through the targeted delivery of a stimulus (e.g., electrical, chemical, or ultrasound), and is one of the fastest-growing areas of medicine, impacting millions of patients. Many neurological disorders and diseases result from atypical neural activity in the brain and can be treated by providing appropriate stimuli which can “correct” the atypical neural activity.
In experiments, controlling neural activity through stimuli has shown promise in treating Parkinsonian symptoms, aiding stroke rehabilitation, regulating depression, etc. The ability to systematically design stimuli that produce a desired neural response in the brain, thereby correcting the atypical activity, is key to treating several neurological disorders.
Typically, a stimulus is characterized by a set of parameters. For example, electrical currents (stimuli) can be injected into the brain to selectively stimulate a particular type of neuron (controlling neural activity) for treating Parkinsonian symptoms. This method has been tried in mouse subjects, in which the stimulus is characterized by three parameters, namely, the amplitude, frequency, and duration of the injected electrical currents.
A common way to mathematically model the relation between the parameters of stimuli and the neural activity/response is through a “forward mapping/function” that takes the parameters as input and outputs the neural response. The problem of designing a stimulus that produces the desired neural response is then reduced to inverting the forward mapping. Correspondingly, the set of stimulus parameters can be obtained by plugging the desired neural response into the inverse.
Broadly, there are three major challenges in estimating the inverse of the forward mapping. First, as the forward mapping depends upon the parameters being explored, for novel parameters, the forward mapping is generally unknown and needs to be estimated from the data. Second, in most cases of interest, multiple sets of parameter values characterizing a stimulus lead to the same neural response. For example, in electrical stimulation of the brain, many stimuli produce the same neural firing rate and consequently, the same neural response. This implies that the forward mapping in many cases is many-to-one, and hence, non-invertible. Therefore, instead of estimating an inverse, a pseudoinverse of the forward mapping is estimated. Third, the amount of data available in such healthcare settings is limited, and in general, the data collection process is quite expensive.
It would be desirable to be able to estimate the pseudoinverse in a data-efficient manner. There has been significant interest in designing stimuli using data-driven methods through some form of pseudoinverse estimation. A main advantage of data-driven approaches over traditional stimulus design approaches is that they allow end users (e.g., clinicians or neuroscientists) to explore a larger number of parameters. The ability to explore a large number of parameters opens the possibility of discovering novel stimuli that can produce neural responses which have clinical relevance.
Most current approaches to waveform design for selective neurostimulation are either model-based, thereby limiting their practical applicability, and/or use trial and error, which allows only a limited exploration of the waveform space. Therefore, it would be desirable to be able to design stimuli that selectively stimulate different neuron types using a data-based approach which relies on neither a model nor trial and error.
Disclosed herein, in a first embodiment of the invention, is a novel pseudoinverse estimation system and method that addresses the shortcomings of the two methodologies mentioned above. Specifically, the disclosed method adapts regression techniques to, in one embodiment, directly estimate a pseudoinverse, thereby circumventing the need to invert an estimated forward model, while still requiring less data than conditional density estimation (CDE) methods (because regression techniques typically require less data than their equivalent CDE counterparts). In a second aspect of the first embodiment, an ensemble of pseudoinverses is estimated.
The key insight of the disclosed method is that a non-invertible function can still be inverted over a restricted domain. If such a restricted domain is known a priori, the inverse mapping can be estimated using regression methods. The disclosed method jointly learns a restricted domain and the inverse mapping over it. A weighted L2 loss is utilized, where the weights are also learned from data. On convergence, the weights approximate the indicator function over the restricted domain, effectively learning it.
When compared with other methods, for example, Masked Autoregressive Flows (MAF) and Mixture Density Networks (MDN), the disclosed method outperforms all the other methods when the size of the dataset is small.
In a second embodiment of the invention, a novel method is disclosed for designing electrical waveforms that achieve selective stimulation among different neuron types; the method relies directly on data, without assuming a computational model. A major hurdle for data-driven approaches is that only a small amount of data can be sampled to design waveforms (e.g., ~250 datapoints). Hence, any viable data-driven technique must necessarily be data-efficient.
The second embodiment of the invention discloses a method to adaptively design a selective waveform in a data-efficient manner. The disclosed method of the second embodiment is a data-driven adaptive algorithm, in which all previously-collected data informs the estimation of a more selective waveform. This allows the disclosed method to find waveforms exhibiting higher selectivity while exploring only a few datapoints, thereby making it data-efficient.
A figure referenced herein compares the disclosed method versus various prior art data-driven methods discussed herein on toy mappings.
Disclosed herein as a first embodiment is a method for directly estimating a pseudoinverse neural mapping to effect a desired neural response by restricting the domain over which the forward mapping from the stimulus to the neural response is inverted.
Assume that each stimulus is characterized by n different parameters, denoted θ1, . . . , θn, where each θi ∈ ℝ. Define θ = [θ1, θ2, . . . , θn]T ∈ ℝ^n as the collection of all n parameters, and let Θ ⊂ ℝ^n denote the collection of all allowed stimuli. Let the number of neural responses of interest be m (e.g., for selective stimulation between two neuron types, m = 2), and define r = [r1, r2, . . . , rm]T ∈ ℝ^m. Let RΘ be the collection of all distinct neural responses produced by the stimuli θ ∈ Θ.
To illustrate the notation further, consider an example in which the goal is to selectively stimulate a single neuron type, without stimulating another neuron type, for treating Parkinsonian symptoms. The electrical-current stimulus is characterized by 3 parameters: the amplitude, frequency and duration of the currents. So, in this example the parameters would be θ1, θ2, θ3, respectively denoting the amplitude, frequency and duration of the electrical currents, and θ = [θ1, θ2, θ3]T ∈ ℝ^3. The relevant neural responses in the same example are the firing rates of the different neuron types, so r = [r1, r2]T, where r1 and r2 are the firing rates of the two neuron types. For finding a stimulus that attains a desired neural response, assume access only to a dataset 𝒟 = {(θi, ri)}, i = 1, . . . , N, formed by N stimulus-response pairs.
Problem Statement—Given a dataset 𝒟 where the {θi} are independent and identically distributed samples from a distribution p(θ) on Θ, and ri is the neural response generated by the stimulus characterized by θi, the goal is to design/find a parameter θdes ∈ Θ such that the stimulus corresponding to θdes can elicit (close to) a desired (user-specified) neural response rdes ∈ RΘ.
Note that access is only allowed to the fixed dataset 𝒟. An important design choice is the set of allowed stimuli (i.e., Θ). This is decided a priori by the user, typically based on domain knowledge (e.g., choosing the amplitude, frequency, and duration of the electrical waveforms as the parameters of stimuli). Herein, it is assumed that an appropriate Θ has already been chosen.
Denote the forward mapping from the stimulus parameter space Θ to the neural response space RΘ as g: Θ → RΘ. For many-to-one functions such as g, a natural definition of a pseudoinverse is a mapping g−1: RΘ → Θinv, where Θinv ⊆ Θ is a restricted domain such that g(g−1(r)) = r for all r ∈ RΘ. For example, cos: ℝ → [−1, 1] is many-to-one and not invertible, but restricting the domain of cos to [0, π] helps define the familiar pseudoinverse cos−1: [−1, 1] → [0, π]. Note that, for a pseudoinverse, equality in the other direction, g−1(g(θ)) = θ for all θ ∈ Θ, is not necessarily true, because the range of g−1 is Θinv (a subset of Θ). A function can have multiple pseudoinverses (e.g., cos(⋅), with Θinv = [0 + 2nπ, π + 2nπ], has infinitely many pseudoinverses, one for each n ∈ ℕ). For the goal of the method disclosed herein, estimating any one pseudoinverse suffices. The estimated pseudoinverse of g is denoted herein as ĝ−1.
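To make the restricted-domain idea concrete, the following minimal numpy sketch (illustrative only, not part of the disclosed method) checks both pseudoinverse properties for the cos example above:

```python
import numpy as np

# g = cos is many-to-one on all of R, but one-to-one on Theta_inv = [0, pi],
# which defines the familiar pseudoinverse arccos.
r = np.array([-0.5, 0.0, 0.7])
theta_hat = np.arccos(r)                        # g^-1: [-1, 1] -> [0, pi]
assert np.allclose(np.cos(theta_hat), r)        # g(g^-1(r)) = r for all r
# Equality in the other direction fails for theta outside the restricted domain:
assert not np.isclose(np.arccos(np.cos(-0.5)), -0.5)   # -0.5 is outside [0, pi]
```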
The Method—The disclosed method estimates a pseudoinverse by exploiting the insight that many-to-one functions can be inverted over an appropriately restricted domain. If such a restricted domain Θinv were known a priori, then a restricted dataset could be created by excluding from the dataset 𝒟 all data points (θi, ri) where θi ∉ Θinv. Because g is invertible over Θinv, any traditional regression technique applied to this restricted dataset would yield a pseudoinverse corresponding to Θinv. Formally, a pseudoinverse on Θinv can be estimated as:

ĝ−1 = argminf Σi=1,…,N 𝟙[θi ∈ Θinv] ∥f(ri) − θi∥²   (1)
The challenge, as outlined above, is that only the dataset 𝒟 (and not the forward mapping) can be accessed, so Θinv is not known a priori. To address this, the disclosed method jointly estimates both a restricted domain and the corresponding pseudoinverse as follows:
The first sum in Eq. (2) represents the loss, and the second sum in Eq. (2) is a regularizer. This optimization formulation follows the philosophy of Eq. (1), approximating the indicator 𝟙[θi ∈ Θinv] in Eq. (1) by weights ŵ(θi), which are learned jointly with ĝ−1.
How does the optimization of Eq. (2) incentivize learning of a restricted domain? If only parameters belonging to a restricted domain have non-zero weights (ŵ(θi)), the loss term would be low because the corresponding inverse mapping can be estimated accurately. Hence, the loss term encourages the learning of weights that are non-zero only for some restricted domain over which g is invertible.
It is desirable for an estimate of the pseudoinverse to have as its domain the entire RΘ, or at least as large a subset as possible, so that it can provide a stimulus parameter for as many neural responses as possible. This implies that, for the disclosed method to estimate a useful pseudoinverse, the image of the restricted domain learned by it should be as large as possible. This condition is not ensured by the loss term in Eq. (2), since the loss is small for any restricted domain (e.g., consider two restricted domains of cos(⋅), [0, 1] and [0, π]; [0, π] is more desirable here, but the loss term in Eq. (2) is small for both domains and is unable to discriminate between them).
To encourage the disclosed method to learn a restricted domain corresponding to a large response range/space, the following observation is used: if Θmax is the restricted domain having the largest size (measured by its total probability under p(θ)), then its corresponding image/response space also has the largest size. Hence, by learning the largest restricted domain, the largest response space is ensured. This observation is incorporated in the regularizer and the constraint in Eq. (2) to encourage the method to learn as large a restricted domain as possible. To see this, analyze the following optimization, which distills the effect of the regularizer for distinguishing among restricted domains over which g is invertible (the first term in Eq. (2) is low for such domains):
This optimization explores the behavior of the regularizer when only K out of the N total weights are non-zero, and has the solution w*i = N/K for any K out of the N weights, with the rest being 0. The regularizer term in Eq. (2) then scales approximately as ~1/K² for K non-zero weights, which incentivizes making a larger number of the ŵ(θi) non-zero, encouraging the consideration of as large a restricted domain as possible.
Thus, there is a careful interplay between the loss, the regularizer and the constraint in Eq. (2). The loss encourages learning non-zero weights ŵ(θi) only over a restricted domain; the regularizer and the constraint try to make the restricted domain as large as possible. By carefully choosing the value of the hyper-parameter β, a desirable pseudoinverse can be learned.
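Because Eq. (2) itself is not reproduced in this text, the following PyTorch-style sketch of the joint estimation is a minimal illustration under stated assumptions: it assumes the weighted-L2 loss Σi ŵ(θi)∥f(ri) − θi∥² described above, a quadratic penalty on the weights as a stand-in for the regularizer, and a softmax surrogate for the simplex constraint (the document itself applies an exact simplex projection, sketched later in the Implementation discussion). All function and variable names are hypothetical.

```python
import torch
import torch.nn as nn

def mlp(d_in, d_out, hidden=10, depth=3):
    layers, d = [], d_in
    for _ in range(depth):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    layers.append(nn.Linear(d, d_out))          # linear output activation
    return nn.Sequential(*layers)

f_inv = mlp(d_in=2, d_out=3)   # pseudoinverse estimate: responses -> parameters
w_net = mlp(d_in=3, d_out=1)   # weight mapping w(theta)
opt = torch.optim.Adam(list(f_inv.parameters()) + list(w_net.parameters()), lr=1e-3)

def train_step(theta, r, beta):
    """One step on a batch: theta is N x n, r is N x m."""
    opt.zero_grad()
    n_pts = theta.shape[0]
    # Nonnegative weights summing to N (soft surrogate of the constraint);
    # on convergence they should approximate the indicator of the restricted domain.
    w = n_pts * torch.softmax(w_net(theta).squeeze(-1), dim=0)
    loss = (w * ((f_inv(r) - theta) ** 2).sum(dim=-1)).mean()   # weighted L2 loss
    reg = beta * (w ** 2).mean()   # penalizes concentrating weight on few points
    (loss + reg).backward()
    opt.step()

# Example with random stand-in data (3 stimulus parameters, 2 responses):
theta_b, r_b = torch.rand(256, 3), torch.rand(256, 2)
train_step(theta_b, r_b, beta=1e-3)
```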
In a second aspect of the first embodiment of the invention, an extension of the disclosed method estimates an ensemble of pseudoinverses, instead of just one, thereby improving the performance. Process 500 of the second aspect of the first embodiment is shown in flowchart form in the accompanying figures.
Simulations—Two scenarios were simulated: toy examples and bio-physical neuron models, in which the performance of the disclosed method as well as three prior art data-driven methods (MAF, MDN and Naïve Inversion (NI)) was compared for estimating pseudoinverses (i.e., ĝ−1(⋅)). In addition, a baseline approach was implemented.
To characterize the data-efficiency of each technique (i.e., the four variants of the disclosed method and MDN, MAF, NI) in estimating the pseudoinverses of the toy mappings, the following study was performed: for each toy forward mapping, the normalized mean absolute error (NMAE, defined below) was evaluated at 4 different training dataset sizes. The resultant NMAE for all of the techniques and the three different toy mappings is shown in the accompanying figures.
It can be seen that the variants of the disclosed method (denoted as “PF-NET”, “PF-NET-E”, “PF-GP” and “PF-GP-E” in the figures) outperform the other techniques when the dataset size is small.
The disclosed method and the prior art models (MDN, MAF and NI) were applied to estimate the parameters of a stimulus (an electrical waveform in this particular case) for desired neural responses (the firing rates of the two neuron models).
The main goal of this simulation is to show that the performance of the disclosed method is better at “small” dataset size and is comparable to other methodologies at “large” dataset size. That is, qualitatively the results obtained for toy examples continue to hold in a more realistic scenario.
The simulation evaluates the ability of electrical waveforms to stimulate bio-physically detailed cortical neuron models of an excitatory pyramidal cell (Pyr neuron) and an inhibitory parvalbumin-expressing neuron (PV neuron). The ability to modulate the activity/firing rates of excitatory and inhibitory neurons using electrical currents has clinical relevance.
The NMAE was calculated for all the data-driven approaches, as well as the baseline approach, and the results are shown in the accompanying figures.
As with the toy examples, it was observed that the disclosed method significantly outperforms the other techniques for low sample sizes, as shown in the figures.
To create a dataset of size N for a toy mapping (i.e., 𝒟toy = {(θi, ri)}, i = 1, . . . , N), N samples of θ were uniformly randomly sampled from each toy forward mapping's respective domain provided in Eqs. (4-6). The corresponding {ri} for each toy mapping were then generated according to the respective models provided in Eqs. (4-6). Different values of N were considered for different toy mappings to compare the performance of the disclosed method and the three prior art methods across different dataset sizes.
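As an illustration, the following sketch generates such a dataset for the cos 2πθ toy mapping named in the Implementation discussion; the domain [0, 3] is an assumption inferred from the six restricted intervals [0, 0.5], . . . , [2.5, 3] discussed later, and the other two mappings of Eqs. (4-6) are not reproduced here:

```python
import numpy as np

def make_toy_dataset(n, rng):
    # Uniform samples from the (assumed) domain [0, 3] of the cos(2*pi*theta)
    # toy mapping; each half-period [k/2, (k+1)/2] is a monotone, invertible piece.
    theta = rng.uniform(0.0, 3.0, size=n)
    r = np.cos(2.0 * np.pi * theta)          # forward mapping
    return theta, r

theta, r = make_toy_dataset(n=250, rng=np.random.default_rng(0))
```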
Bio-physical Neuron Models: It is well known that the cortex consists of excitatory and inhibitory neurons, and it is hypothesized that a careful interplay between the activation of excitatory and inhibitory neurons drives the neural activity in the cortex. Imbalance in the activation of excitatory and inhibitory neurons is linked to neurological diseases such as seizures, hence being able to modulate the activity of excitatory and inhibitory neurons is of clinical relevance. In recent years, there has been a growing interest in exploring electrical stimulation as a means to stimulate excitatory and inhibitory neurons at desired firing rates. Controlling the firing rates of excitatory and inhibitory neurons (the neural response) with electrical stimulation (the stimulus) therefore provides a good simulation setup to test the performance of the disclosed method and the three prior art methods in a more realistic neuromodulation context.
Morphologically-realistic and bio-physically detailed multicompartment neuron models taken from the Allen Cell Type Database were used in the simulations. One neuron model is of a cortical pyramidal (Pyr; excitatory) neuron and the other is of a cortical parvalbumin-expressing (PV; inhibitory) neuron. Both neurons were simulated using the NEURON software, with the allensdk package in python.
A parametric waveform family using 5 parameters is constructed: Q, An/Ap, T, Tzero/T, Tneg/(T−Tzero), where Ap, An, T, Tzero and Tneg are as depicted in the accompanying figures.
The electrical waveform (generated from the parameter vector θ) is injected intracellularly into the soma of each neuron model. Synaptic noise is also added to the models by adding additive white Gaussian noise with a mean of zero and a variance of 2.25 (nA)² to the input waveform. A peak-detection algorithm (present in the scipy package in python) is run on the time-trace of the resulting neuron's membrane potential to calculate the number of neural spikes. A simplistic definition of the firing rate is used for this simulation: the ratio of the total number of detected spikes to the duration for which the waveform is supplied.
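A minimal sketch of this spike-counting step, assuming scipy's standard find_peaks detector and a hypothetical 0 mV spike threshold (the threshold actually used is not specified in this text):

```python
import numpy as np
from scipy.signal import find_peaks

def firing_rate_hz(v_mv, dt_ms, spike_height_mv=0.0):
    # Count peaks of the membrane-potential time-trace above the spike
    # threshold, then divide by the duration the waveform was supplied.
    peaks, _ = find_peaks(v_mv, height=spike_height_mv)
    duration_s = len(v_mv) * dt_ms / 1000.0
    return len(peaks) / duration_s

# Example on a synthetic trace sampled at dt = 0.1 ms:
t = np.arange(0, 1000.0, 0.1)                               # 1 s of samples, in ms
v = -65.0 + 80.0 * (np.sin(2 * np.pi * t / 50.0) > 0.999)   # crude 20 Hz spike train
print(firing_rate_hz(v, dt_ms=0.1))                         # -> 20.0
```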
The firing rates for both neuron models were collected to construct the neural response ri=[ri1, ri2]T (ri1 for the Pyr neuron and ri2 for the PV neuron). The range of firing rates in the dataset for the Pyr neuron is [0 Hz, 40 Hz] and for the PV neuron is [0 Hz, 100 Hz].
Because the input to the model is the neural response ri and not the stimulus parameter θi, the data is split into training, test and validation sets in the following manner: split all possible neural responses present in 𝒟 into training, validation and test firing rates. Let RV, RTr, RTe be the sets containing the validation, training and test neural responses, respectively. Remove all the (θi, ri) pairs from the original dataset where ri is present in the test or validation sets, to construct the training dataset 𝒟Tr. Note that, for any ri present in the test set or the validation set, there may be multiple stimulus parameter vectors θ in the original dataset which produce ri, and all of them are removed. The validation dataset 𝒟V is constructed similarly by removing (θi, ri) pairs from the original dataset where ri is present in the training set or the test set. For the test set, only the neural responses are stored (i.e., RTe), and the goal is to generate stimuli which produce those neural responses.
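A sketch of this response-based split, with hypothetical names; unique responses are partitioned first, and every (θ, r) pair whose response falls outside a split's partition is removed from that split:

```python
import random

def split_by_response(dataset, frac=(0.8, 0.1, 0.1), seed=0):
    # dataset: list of (theta, r) pairs; responses are made hashable as tuples.
    responses = sorted({tuple(r) for _, r in dataset})
    random.Random(seed).shuffle(responses)
    n_tr = int(frac[0] * len(responses))
    n_va = int(frac[1] * len(responses))
    r_tr = set(responses[:n_tr])
    r_va = set(responses[n_tr:n_tr + n_va])
    r_te = responses[n_tr + n_va:]
    d_tr = [(t, r) for t, r in dataset if tuple(r) in r_tr]
    d_va = [(t, r) for t, r in dataset if tuple(r) in r_va]
    return d_tr, d_va, r_te   # the test set keeps only the responses
```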
After training, each approach outputs a parameter θ̂ corresponding to every r. For the first embodiment, θ̂ is obtained as the output of the estimated pseudoinverse ĝ−1 when provided with the input r. For the embodiment producing the ensemble of pseudoinverse functions, the prediction scheme described above is used.
A figure of merit is used to evaluate the validation/test loss. The figure of merit is explained by taking the example of calculating it over the validation set. For every neural response r present in RV, obtain the corresponding θ̂, which is then fed to the neuron model to obtain the actual firing rate ract; that is, ract = g(ĝ−1(r)).
Because there are m different neural responses, an NMAE is calculated for each individual neural response. Let the maximum and minimum values that can be achieved for the jth neural response be denoted rjmax and rjmin; the NMAE for the jth neural response is then defined as:

NMAEj = 100 × (1/|RV|) Σr∈RV |ract,j − rj| / (rjmax − rjmin)
Note that, for calculating the NMAE, it is not necessary to explicitly know the restricted domain over which the pseudoinverse has been estimated. The NMAE quantifies how close the neural response produced by the predicted stimulus is to the desired neural response on a scale of 0 to 100. The test NMAE can be calculated in a similar manner, replacing the validation set with the test set. Notice that, for calculating the NMAE, access to the neuron is required (hence the need to access the neuron during hyper-parameter tuning).
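A sketch of this figure of merit; the exact expression is not reproduced in this text, so the normalization below (mean absolute error scaled by the response range, times 100) is a reconstruction from the description above:

```python
import numpy as np

def nmae(r_desired, r_actual, r_min, r_max):
    # r_desired, r_actual: (num_points, m); r_min, r_max: (m,) response ranges.
    err = np.abs(np.asarray(r_actual) - np.asarray(r_desired))
    return 100.0 * err.mean(axis=0) / (np.asarray(r_max) - np.asarray(r_min))

# Example with the firing-rate ranges reported for the neuron models:
scores = nmae(r_desired=[[10.0, 40.0]], r_actual=[[12.0, 35.0]],
              r_min=[0.0, 0.0], r_max=[40.0, 100.0])
print(scores)   # NMAE per response dimension (Pyr, PV)
```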
Implementation—The details of one possible implementation of the disclosed method will now be discussed. To choose the hyper-parameters for all the data-driven approaches, the 10-fold cross-validation NMAE is used. Specifically, the hyper-parameter with the lowest cross-validation NMAE is picked. To calculate the 10-fold cross-validation NMAE, a 90%-10% split of the dataset is created, where 90% of the datapoints are in the training data and 10% of the datapoints are in the validation data, and the NMAE is calculated on the validation data. This process is repeated 10 times, and the average validation NMAE is used as the cross-validation NMAE. An upper and a lower bound of the range for each hyper-parameter's search is obtained through a random search, followed by a grid search within these bounds.
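A sketch of this selection loop, assuming a hypothetical train_and_nmae(train, val, hp) helper that trains a model with hyper-parameters hp and returns its validation NMAE:

```python
import numpy as np
from sklearn.model_selection import KFold

def cv_nmae(thetas, rs, hp, train_and_nmae, seed=0):
    # thetas, rs: numpy arrays of stimulus parameters and responses.
    scores = []
    for tr, va in KFold(n_splits=10, shuffle=True, random_state=seed).split(thetas):
        scores.append(train_and_nmae((thetas[tr], rs[tr]), (thetas[va], rs[va]), hp))
    return float(np.mean(scores))

# The hyper-parameter with the lowest cross-validation NMAE is picked:
# best_hp = min(candidates, key=lambda hp: cv_nmae(thetas, rs, hp, train_and_nmae))
```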
Four different variants of the disclosed method were used, referred to herein as PATHFINDER-NET, PATHFINDER-NET-E, PATHFINDER-GP, and PATHFINDER-GP-E. In PATHFINDER-NET and PATHFINDER-GP, the inverse mapping (ĝ−1) is estimated using a multi-layer perceptron (MLP) and Gaussian Process Regression, respectively. PATHFINDER-NET-E and PATHFINDER-GP-E are, respectively, the ensemble versions of PATHFINDER-NET and PATHFINDER-GP. To estimate the weight mapping (ŵ) for all 4 variants, an MLP is used. To implement the Gaussian Process Regression (GP) in PATHFINDER-GP and PATHFINDER-GP-E, an existing implementation was used with default parameters. In the toy examples, all of the MLPs used consist of 1-20 hidden layers, each with 10 hidden units, and the GP's covariance kernel was chosen as a weighted summation of the Radial Basis Kernel, White Kernel and Dot Product Kernel. The weights for each kernel are automatically learned in the GP by using the cross-validation loss. For the neuron model, MLPs with 5 to 8 hidden layers were used, with each layer having 100 hidden units (although the hidden layers could have any number of hidden units), for the regressor (in PATHFINDER-NET and PATHFINDER-NET-E) and the weight network for all 4 variants. The kernel for the neuron model case was the summation of the Radial Basis Kernel, White Kernel, Rational Quadratic Kernel and Dot Product Kernel. The output layer of the MLP for the regressor and the weight network has linear activation. For the toy examples, β was chosen as 0.001 for the cos 2πθ mapping, 0.001 for the second toy mapping, and 0.00001 for the (θ²−4)² mapping. For the neuron model, the following values of β were searched: 0.5, 0.1, and 0.05.
In addition to the data-driven methods, a brute force approach, akin to the 1-Nearest Neighbor Method, is used as a baseline method for the simulation study. In the baseline method, the training data is used as a look-up table, where the parameter vector present in the training dataset whose response vector is closest to the desired response vector under the L1 distance metric is output.
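A sketch of this baseline look-up, with hypothetical names:

```python
import numpy as np

def baseline_lookup(train_thetas, train_rs, r_desired):
    # Return the stored parameter vector whose response is closest to the
    # desired response under the L1 distance.
    d = np.abs(np.asarray(train_rs) - np.asarray(r_desired)).sum(axis=1)
    return np.asarray(train_thetas)[np.argmin(d)]
```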
In one embodiment, Rectified Linear Unit (ReLU) activation is used for the hidden layers of all approaches. MAF and MDN were trained by minimizing the negative log-likelihood (NLL). Alternatively, other activations, such as tanh, sigmoidal, exponential linear unit and linear activation, could be used.
Both the weight network and the regressor were trained simultaneously by minimizing the loss in Eq. (2). During training, a simplex projection was performed on the batch output of the weight network to ensure the constraints specified in Eq. (2) are met. An Adam optimizer with a learning rate of 0.001 was used for minimizing the objectives for all techniques. Batch normalization was applied wherever applicable. Because all the methods were trained for multiple different sizes of the training datasets, the batch size for Adam was chosen according to the training dataset size. Each model was trained until the validation loss converged (note: the loss, not the NMAE), defined as a relative change of less than 0.1% in the validation loss between successive steps. The validation loss here was calculated in the more traditional manner.
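The exact projection used is not reproduced in this text; the following sketch assumes the standard Euclidean projection onto the scaled simplex {w : w ≥ 0, Σw = z} (Duchi et al., 2008):

```python
import numpy as np

def project_simplex(v, z=1.0):
    # Euclidean projection of v onto {w : w >= 0, sum(w) = z}.
    u = np.sort(v)[::-1]                        # sort descending
    css = np.cumsum(u) - z
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - css / idx > 0)[0][-1]  # last index meeting the condition
    tau = css[rho] / (rho + 1.0)
    return np.maximum(v - tau, 0.0)

w = project_simplex(np.array([0.2, -0.1, 1.5]), z=1.0)
assert np.isclose(w.sum(), 1.0) and (w >= 0).all()
```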
A potential reason for the data-efficiency of the disclosed method is maximization bias. Maximization bias here refers to the phenomenon that the method tends to estimate the pseudoinverse with the largest number of datapoints. For example, in the cos(⋅) mapping, assume that the method only estimates 1 of the 6 pseudoinverses corresponding to the restricted domains [0, 0.5], [0.5, 1], [1, 1.5], [1.5, 2], [2, 2.5] and [2.5, 3]. Let ni be the number of datapoints that lie in the restricted domain of the ith pseudoinverse. Recall that the regularizer tries to give non-zero weights to as many datapoints as possible. Consequently, the regularizer encourages choosing the restricted domain with the largest number of datapoints (i.e., maxi ni). For a size-n dataset sampled using a uniform distribution, 𝔼[maxi ni] ≈ n/6 + c√n, where 𝔼(⋅) is the expectation and c is a constant. Note that, while the expected number of datapoints in any one restricted domain is n/6, the largest restricted domain has an extra ~c√n datapoints. The method's loss encourages inversion in just this restricted domain, and thus, is able to harness these extra datapoints, lowering the required overall sample size. The prior art methods, on the other hand, try in one way or another to estimate the entire forward mapping, and are not designed to harness maximization bias.
The first embodiment is based on data-driven techniques for designing stimuli that produce desired neural responses. Traditionally, such stimuli are designed by choosing 1 or 2 “important” parameters (based on the intuition from the bio-physics of the system) and then brute-force searching across the 1 or 2 parameters to characterize the neural response. For example, one method considers the frequency of the sinusoidal stimulation as a parameter and brute-force searches across the frequency to characterize the neural response. A limitation of such brute-force search approaches is that they do not scale well as the number of parameters increases.
The most important advantage of the data-driven techniques is that they allow exploration of a larger number of parameters. The brute-force techniques require 250 datapoints to achieve the same performance that the disclosed method achieves at 50 datapoints. Hence, the disclosed method allows exploration of a larger number of parameters due to its data-efficiency. In practice, to an end-user, this implies that instead of choosing the 1 or 2 most relevant parameters, they can now choose the 5 or 10 most relevant parameters to search across. The advantage of exploring a larger number of parameters is that novel stimuli can be discovered which produce neural responses that would be impossible to produce by exploring just 1 parameter. Intuitively, data-driven techniques are able to scale well to a higher number of parameters because they exploit the inherent smoothness of the neural response, in one form or another, and provide a more principled way of searching the parameter space.
The data requirement for these data-driven techniques can be reduced even further by using adaptive sampling techniques. For example, the weight mapping of the disclosed method can be used to sample only from the restricted domain, thereby reducing the data requirements, or the estimated p(θ|r) in the conditional density methods can be used to adaptively sample around a desired response rdes. The characterization of the data-efficiency of adaptive-sampling techniques is not addressed herein.
Also disclosed as a second embodiment of the invention is an adaptive data-driven method and model for designing an electrical waveform to produce a desired neural response. Data-driven techniques allow end-users, such as clinicians and neuroscientists, to explore a larger number of data points (i.e., parameters of an electrical waveform) as compared to brute-force search methods, which allow only a 1- or 2-data-point search space. Exploring a larger number of data points helps in designing novel stimuli that produce neural responses which are clinically relevant.
It is desirable to make data-driven techniques as data-efficient as possible to increase their applicability in different neuromodulation contexts, as collecting data in the neuromodulation domain is typically expensive. Towards increasing the data-efficiency of data-driven techniques, the disclosed method is capable of designing stimuli for desired neural responses by iteratively estimating a pseudo inverse function. Data-driven techniques provide an alternative to the traditional brute-force search of parameters and can help in designing novel stimuli.
The second embodiment of the invention directly estimates a waveform that will provide selective stimulation of certain types of neurons, while minimizing the stimulation of all other neuron types. Assume that the jth neuron type (j = 1, . . . , M) is a black box which, when provided with an input waveform u(⋅), a function of time, outputs a firing rate rj ∈ ℝ. The search for input waveforms is restricted to a parametric family denoted by UΘ = {uθ}θ∈Θ, where Θ ⊆ ℝ^K. Let rj denote the firing rate (i.e., the ratio of the total number of spikes to the time for which the waveform is supplied) of the jth neuron type. The vector r = [r1, . . . , rM]T is the “firing rate vector”, and the collection of all r's produced by the waveforms uθ(t) ∈ UΘ is RΘ. For designing a selectively stimulating waveform, access is only available to samples of the form {θ, r}.
Therefore, the goal of the second embodiment is to design a jth-neuron-selective waveform ujsel ∈ UΘ that maximizes the firing rate rj of neuron type j while keeping the firing rates of all other neuron types below a pre-specified threshold rlow (i.e., ri ≤ rlow for all i ∈ [1, . . . , M]\j, i.e., for all i ≠ j), using the fewest samples of the form {θ, r}.
Note that the differences in the dynamics of the neuron types may limit the possible firing rate vectors (e.g., if all neurons have similar dynamics, selective stimulation at a high rate for only the desired neuron type may not be possible). Another limitation comes from how rich UΘ, and hence RΘ, is; thus, in practice, the choice of Θ needs to be made carefully. In some embodiments of the disclosed method, only two neuron types (i.e., M = 2) are considered.
The second embodiment is an improvement to the method of the first embodiment. The method of the first embodiment is a non-adaptive, data-driven algorithm for determining a pseudo inverse function that aids in the design of waveforms that elicit desired firing rates in different neuron types. The first embodiment appropriately inverts, within a restricted domain, the forward mapping/relation from waveform parameters to the firing rates of the neurons to obtain a pseudo inverse function, which can then be used to obtain an estimate of a waveform parameter that elicits a firing rate in each neuron type which is close to some desired firing rate. The pseudo inverse function is determined over a restricted domain, which is also estimated by the first embodiment. A limitation of the method of the first embodiment is that the dataset used to estimate the pseudo inverse function and the restricted domain is fixed and known a priori, and is not changed by the method of the first embodiment.
The method of the second embodiment extends the first embodiment to an adaptive, data-driven method for designing selectively stimulating waveforms. To collect new datapoints, the second embodiment uses the first embodiment to extrapolate from the existing data points and estimate a waveform corresponding to a higher degree of selectivity. Then, the second embodiment uses an adaptive sampling scheme to sample densely in a concentrated region around the estimate of the more selective waveform. This adaptive sampling scheme enables the second embodiment to design waveforms in a data-efficient manner.
The method of the second embodiment, which designs selective waveforms in a data-efficient manner, will now be disclosed. The method iteratively collects data in rounds, where, in each round, N datapoints are collected (except for the first round, wherein data points are uniformly sampled from an unrestricted domain). A data point comprises a set of parameters defining an electrical waveform and the observed neural response. At each round, each data point is used to generate an electrical waveform using an arbitrary waveform generator. The generated waveform is then used to stimulate a brain region from which the corresponding neural responses are recorded.
The novelty of the second embodiment is the exploitation of the data collected in previous iterations to estimate data points exhibiting higher selectivity, and then to sample densely around the data point exhibiting the most selective neural response. By sampling in a region concentrated around more selective data points, the method finds waveforms with a high degree of selectivity while sampling only a few data points.
Denote all of the data collected until iteration t by 𝒟t. Let ĝt−1 and ŵt(θ) be the estimated pseudo inverse and the corresponding weight mapping for the restricted domain, respectively, estimated by the method of the first embodiment using 𝒟t. Define rt,jmax as the most selective firing rate vector for neuron type j observed until round t (i.e., among the observed responses whose off-target firing rates satisfy the threshold, the one with the highest rate for neuron type j).
At iteration t, the method performs two operations: (i) find an estimate θ̂ts of a waveform parameter that elicits a more selective firing rate rts than rt,jmax (note that, being an estimate, θ̂ts is not guaranteed to elicit the firing rate rts, but rather something close to it); and (ii) sample waveform parameters (i.e., data points) from a region concentrated around θ̂ts (the more concentrated the better). rts is defined by increasing the firing rate of neuron type j in rt,jmax by some positive number δ. In effect, the restricted domain is narrowed to an area around the data point that elicits the most selective neural response (i.e., a high firing rate for the selected neuron type and low firing rates for the other neuron types). Two strategies are disclosed herein for this narrowing of the restricted domain.
For estimating the waveform parameter, the second embodiment uses the first embodiment to obtain ĝt−1, which can then be used to obtain the estimate θ̂ts by simply plugging rts into ĝt−1 (i.e., θ̂ts = ĝt−1(rts)). In various embodiments, the method uses one of two possible strategies (or a combination of the strategies), described below, to reduce the waveform search space for sampling in a concentrated region around θ̂ts.
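A high-level sketch of one round, with hypothetical names: fit_pathfinder stands in for the first-embodiment estimator, collect for the stimulate-and-record step, and sample_region for the concentrated sampling of the two strategies described below:

```python
import numpy as np

def most_selective(data, j, r_low):
    # r_{t,j}^max: among observed responses whose off-target rates are below
    # r_low, the one with the highest rate for neuron type j.
    rs = np.array([r for _, r in data])
    others = np.delete(rs, j, axis=1)
    ok = (others <= r_low).all(axis=1)
    candidates = rs[ok] if ok.any() else rs
    return candidates[np.argmax(candidates[:, j])]

def adaptive_round(data, j, delta, r_low, fit_pathfinder, sample_region, collect):
    g_inv, w_hat = fit_pathfinder(data)        # pseudoinverse and weight mapping
    r_target = most_selective(data, j, r_low).copy()
    r_target[j] += delta                       # ask for a more selective rate
    theta_s = g_inv(r_target)                  # estimated parameter
    new_thetas = sample_region(theta_s, w_hat) # strategies 1 and/or 2 below
    return data + [(th, collect(th)) for th in new_thetas]
```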
In a first strategy of the second embodiment, the standard deviation σ̂t of the estimate θ̂ts is estimated. The standard deviation is an indicator of how confident the method is in its estimate θ̂ts. Intuitively, the samples should be selected densely around θ̂ts if σ̂t is small and coarsely if σ̂t is large. Hence, instead of sampling the whole domain Θ, the smaller region Bt (i.e., the narrowed restricted domain), consisting of the parameters within σ̂t of θ̂ts in each coordinate, is sampled.
For each single parameter θi, sampling uniformly from Bt is equivalent to uniformly sampling the interval [θ̂i,ts − σ̂t,i, θ̂i,ts + σ̂t,i], where θ̂i,ts and σ̂t,i are the ith components of θ̂ts and σ̂t, respectively.
In a second strategy of the second embodiment, only θ's that lie in the restricted domain estimated by the first embodiment of the invention at round t are selected (i.e., sampled from Bt). The restricted domain is chosen so as to include as many firing rate vectors as possible. Hence, by only considering θ lying in the restricted domain, a significant reduction in the parameter search space can be achieved while ensuring that a more selective parameter can still be sampled. To select data points belonging to the restricted domain, realize that ŵt(θ) is approximately 0 for data points not in the restricted domain. A rejection sampling procedure is used, where a θ (sampled from Bt) is accepted with probability pt(θ) = (ŵt(θ) − wtmin)/(wtmax − wtmin), where wtmax = maxθ∈Bt ŵt(θ) and wtmin = minθ∈Bt ŵt(θ).
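A sketch of this rejection sampler combined with the first strategy's box Bt (the max and min of the weight mapping are approximated over the candidate pool; names are hypothetical):

```python
import numpy as np

def sample_concentrated(w_hat, theta_s, sigma, n, rng, pool=20):
    lo, hi = theta_s - sigma, theta_s + sigma          # box B_t (strategy 1)
    cand = rng.uniform(lo, hi, size=(pool * n, len(theta_s)))
    w = np.array([w_hat(c) for c in cand])
    p = (w - w.min()) / (w.max() - w.min() + 1e-12)    # acceptance probability
    keep = rng.uniform(size=len(cand)) < p             # rejection step (strategy 2)
    return cand[keep][:n]
```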
The two strategies may be used alone; however, applying both strategies together leads to a significant reduction in the region of parameter space being sampled, thereby allowing the method to quickly find selectively stimulating waveform parameters. The method of the second embodiment terminates when either a desired level of selectivity is achieved or the maximum budget for collecting the dataset is reached.
A process 700 implementing the second embodiment is shown in flow chart form in the accompanying figures.
Both embodiments of the invention are agnostic to the data collection process, but typically, the neural responses are collected through some form of electro-physiological recording. For example, the neural responses can be collected from patch-clamp recordings, EMG responses, EEG, sEEG, or standard Local Field Potentials.
As would be realized by those of skill in the art, the specific examples discussed herein have been provided only as exemplary embodiments and the invention is not meant to be limited thereby. Modifications and variations are intended to be within the scope of the invention, which is given by the following claims:
This application claims the benefit of U.S. Provisional Patent Application No. 63/460,962, filed Apr. 21, 2023. This application is also a continuation-in-part filing under 35 U.S.C. § 365 (c) of PCT application PCT/US2023/078171, filed in the United States Receiving Office on Oct. 30, 2023. The contents of both applications are incorporated herein in their entireties.
This invention was made with U.S. Government support under contract N65236-19-C-8017, awarded by the U.S. Department of the Navy. The U.S. government has certain rights in this invention.