Bioelectronic therapy uses implantable medical devices to deliver electrical stimulation to patients to treat a variety of conditions such as heart failure, depression, epilepsy, movement disorders, and chronic pain. Stimulation is delivered via one or more leads with electrode contacts located in proximity to some targeted neural tissue, e.g., the vagus nerve, pelvic nerves, spinal cord, or brain.
The continued advance of bioelectronic therapies is limited by inadequate activation of targeted nerve fibers and by co-activation of non-targeted nerve fibers. More fundamentally, the relationship between applied stimuli and the complement of nerve fibers that are activated, blocked, or driven to a specific subthreshold response, how this relationship varies across individuals and species, and how this relationship can be controlled remain largely unknown.
One of the substantial challenges to addressing these problems is the exceedingly large computational costs of calculating the response of populations of highly nonlinear neurons to different parameters of stimulation. This challenge is compounded when considering the very large number of iterations required for model-based optimization. Hence there is an ongoing need for improved modeling and design of neural stimulation.
This Summary is provided to introduce a selection of concepts that are further described below in the Detailed Description. This Summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used as an aid in limiting the scope of the claimed subject matter.
Embodiments of the present disclosure include a method for estimation of a neuronal response. The method comprises inputting a sequence of numerical input vectors corresponding to an electric field over a period of neuronal stimulation into a surrogate model of neuronal stimulation. The surrogate model of neuronal stimulation outputs a sequence of numerical output vectors corresponding to one or more changes in neuronal response during the stimulation period.
In some embodiments, the sequence of numerical input vectors comprises a spatiotemporal distribution of extracellular potentials measured, calculated, or estimated for the period of neuronal stimulation.
In some embodiments, the one or more changes in neuronal response corresponding to the sequence of numerical output vectors comprise at least one of: transmembrane voltage, total transmembrane current, transmembrane current for one or more ions, conduction state of an ion channel, detection of one or more action potentials, a probability of a dynamic event, gating variable values, or any combination thereof.
In some embodiments, the surrogate model comprises a machine learning component and/or an artificial intelligence component comprising at least one of: recurrent neural network, convolutional neural network, fully connected neural network, decision tree, random forest, support vector machine, linear regression, logistic regression, nearest neighbors, or any combination thereof.
In some embodiments, the method further includes training the surrogate model of neuronal stimulation with data from simulation of a biophysical model or a simplified model.
In some embodiments, the sequence of numerical input vectors is from a model of an anatomically realistic nerve morphology with an electrode.
In some embodiments, the surrogate model is a classification surrogate that determines a probability that an action potential has occurred.
In some embodiments, the surrogate model determines a strength-duration relationship, determines a relationship between activation thresholds and neuron size, determines a relationship between block thresholds and neuron size, determines subthreshold response, identifies unidirectional propagation, identifies bidirectional propagation, determines propagation speed, determines the number of evoked action potentials, determines the time of action potential initiation, determines the location of action potential initiation, determines a gating variable, captures interaction between propagating action potentials, captures interaction between a subthreshold change in membrane state and a propagating action potential, captures interaction between changes in membrane state and responses to stimulation, predicts a state of conduction block, or any combination thereof.
In some embodiments, the surrogate model is a cable surrogate that includes nodal compartments of a complete neuron model.
In some embodiments, the surrogate model is fully differentiable with respect to all parameters.
Embodiments of the present disclosure include a method of administering neuromodulation therapy to a subject. The method includes programming a pulse generator to deliver a pattern of electrical stimulation identified using the method for estimation of a neuronal response disclosed herein; and delivering the pattern of electrical stimulation to the subject to generate a response in at least one target neuron.
Embodiments of the present disclosure include a method for optimizing neuronal response. The method comprises inputting a set of criteria corresponding to a desired neuronal response into an inverted model of neuronal response. The inverted model of neuronal response outputs one or more stimulation criteria capable of producing the desired neuronal response.
In some embodiments, the set of criteria corresponding to the desired neuronal response comprises identifying one or more target neuron(s) for activation or conduction block. In some embodiments, the set of criteria corresponding to the desired neuronal response comprises identifying one or more target neuron(s) having a specific subthreshold response.
In some embodiments, the one or more stimulation criteria capable of producing the desired neuronal response are optimized with a gradient-based optimization.
In some embodiments, the one or more stimulation criteria capable of producing the desired neuronal response is optimized with a global optimization algorithm comprising at least one of: a differential evolution algorithm, a genetic algorithm, a particle swarm algorithm, a simulated annealing algorithm, an ant colony algorithm, an estimation of distribution algorithm, a simplex search algorithm, a coordinate descent algorithm, a random search algorithm, a grid search algorithm, a spiral optimization algorithm, a stochastic diffusion search algorithm, or any combination thereof; or optimized with a recommender system with a machine learning component and/or an artificial intelligence component comprising at least one of: recurrent neural network, convolutional neural network, fully connected neural network, decision tree, random forest, support vector machine, linear regression, logistic regression, nearest neighbors, or any combination thereof.
In some embodiments, the one or more stimulation criteria capable of producing the desired neuronal response is optimized for at least one of: minimizing energy required for activation or block or a subthreshold response, minimizing power required for activation or block or a subthreshold response, minimizing charge imbalance in the optimized waveform(s), minimizing onset response produced when the optimized waveform(s) are turned on, maximizing degree of conduction block, minimizing voltage required for activation or block or a subthreshold response with the optimized waveform(s), minimizing current required for activation or block or a subthreshold response with the optimized waveform(s), minimizing charge required for activation or block with the optimized waveform(s), maximizing therapeutic benefit produced by application of the optimized waveform(s), minimizing adverse effects produced by application of the optimized waveform(s), maximizing selectivity of activation or block between neuron types by application of the optimized waveform(s), maximizing selectivity of activation or block between neuron diameters by application of the optimized waveform(s), maximizing selectivity of activation or block between neuron locations by application of the optimized waveform(s), measures related to the complexity of the electrode geometry or stimulation waveform shape, and any combinations thereof.
In some embodiments, the one or more stimulation criteria comprise an electrode geometric parameter comprising at least one of: inter-contact spacing of contacts, number of contacts, shape of contacts, size of contacts, shape of insulation, size of insulation, placement of contacts relative to the insulation, and any combination thereof.
In some embodiments, the one or more stimulation criteria comprise an electrode placement parameter comprising at least one of: electrode implantation depth, electrode displacement from an anatomical landmark, rotation of electrode, and any combination thereof.
In some embodiments, the one or more stimulation criteria comprise a stimulation parameter comprising at least one of: amplitude, pulse width, frequency, waveform shape, delays, and any combination thereof.
Embodiments of the present disclosure include a method of administering neuromodulation therapy to a subject. The method comprises programming a pulse generator to deliver a pattern of electrical stimulation identified using the method for optimizing neuronal response disclosed herein; and delivering the pattern of electrical stimulation to the subject to generate a response in at least one target neuron.
Embodiments of the present disclosure also include a method for estimation of one or more stimulation criteria. In accordance with these embodiments, the method includes inputting a sequence of numerical input vectors corresponding to a desired neuronal response into a surrogate model of neuronal stimulation. In some embodiments, the surrogate model outputs a sequence of numerical output vectors corresponding to one or more stimulation criteria capable of producing the desired neuronal response.
In some embodiments, the set of criteria corresponding to the desired neuronal response comprises identifying one or more target neuron(s) for activation or conduction block. In some embodiments, the set of criteria corresponding to the desired neuronal response comprises identifying one or more target neuron(s) having a specific subthreshold response.
In some embodiments, the one or more stimulation criteria capable of producing the desired neuronal response are optimized with a gradient-based optimization.
In some embodiments, the one or more stimulation criteria capable of producing the desired neuronal response is optimized with a global optimization algorithm comprising at least one of: a differential evolution algorithm, a genetic algorithm, a particle swarm algorithm, a simulated annealing algorithm, an ant colony algorithm, an estimation of distribution algorithm, a simplex search algorithm, a coordinate descent algorithm, a random search algorithm, a grid search algorithm, a spiral optimization algorithm, a stochastic diffusion search algorithm, or any combination thereof; or optimized with a recommender system with a machine learning component and/or an artificial intelligence component comprising at least one of: recurrent neural network, convolutional neural network, fully connected neural network, decision tree, random forest, support vector machine, linear regression, logistic regression, nearest neighbors, or any combination thereof.
In some embodiments, the one or more stimulation criteria capable of producing the desired neuronal response is optimized for at least one of: minimizing energy required for activation or block or a subthreshold response, minimizing power required for activation or block or a subthreshold response, minimizing charge imbalance in the optimized waveform(s), minimizing onset response produced when the optimized waveform(s) are turned on, maximizing degree of conduction block, minimizing voltage required for activation or block or a subthreshold response with the optimized waveform(s), minimizing current required for activation or block or a subthreshold response with the optimized waveform(s), minimizing charge required for activation or block with the optimized waveform(s), maximizing therapeutic benefit produced by application of the optimized waveform(s), minimizing adverse effects produced by application of the optimized waveform(s), maximizing selectivity of activation or block between neuron types by application of the optimized waveform(s), maximizing selectivity of activation or block between neuron diameters by application of the optimized waveform(s), maximizing selectivity of activation or block between neuron locations by application of the optimized waveform(s), measures related to the complexity of the electrode geometry or stimulation waveform shape, and any combinations thereof.
In some embodiments, the one or more stimulation criteria comprise an electrode geometric parameter comprising at least one of: inter-contact spacing of contacts, number of contacts, shape of contacts, size of contacts, shape of insulation, size of insulation, placement of contacts relative to the insulation, and any combination thereof.
In some embodiments, the one or more stimulation criteria comprise an electrode placement parameter comprising at least one of: electrode implantation depth, electrode displacement from an anatomical landmark, rotation of electrode, and any combination thereof.
In some embodiments, the one or more stimulation criteria comprise a stimulation parameter comprising at least one of: amplitude, pulse width, frequency, waveform shape, delays, and any combination thereof.
The accompanying figures are provided by way of illustration and not by way of limitation.
Section headings as used in this section and the entire disclosure herein are merely for organizational purposes and are not intended to be limiting.
All publications, patent applications, patents and other references mentioned herein are incorporated by reference in their entirety.
The terms “comprise(s),” “include(s),” “having,” “has,” “can,” “contain(s),” and variants thereof, as used herein, are intended to be open-ended transitional phrases, terms, or words that do not preclude the possibility of additional acts or structures. The singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. By way of example, “an element” means at least one element and can include more than one element. The present disclosure also contemplates other embodiments “comprising,” “consisting of” and “consisting essentially of,” the embodiments or elements presented herein, whether explicitly set forth or not. As used herein, “and/or” refers to and encompasses any and all possible combinations of one or more of the associated listed items, as well as the lack of combinations where interpreted in the alternative (“or”).
For the recitation of numeric ranges herein, each intervening number therebetween with the same degree of precision is explicitly contemplated. For example, for the range of 6-9, the numbers 7 and 8 are contemplated in addition to 6 and 9, and for the range 6.0-7.0, the numbers 6.0, 6.1, 6.2, 6.3, 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, and 7.0 are explicitly contemplated. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. For example, if a concentration range is stated as 1% to 50%, it is intended that values such as 2% to 40%, 10% to 30%, or 1% to 3%, etc., are expressly enumerated in this specification. These are only examples of what is specifically intended, and all possible combinations of numerical values between and including the lowest value and the highest value enumerated are to be considered to be expressly stated in this disclosure. “About” or “approximately” is used to provide flexibility to a numerical range endpoint by providing that a given value may be “slightly above” or “slightly below” the endpoint without affecting the desired result.
“Subject” and “patient” are used interchangeably herein to refer to any vertebrate, including, but not limited to, a mammal (e.g., cow, pig, camel, llama, horse, goat, rabbit, sheep, hamster, guinea pig, cat, dog, rat, and mouse), a non-human primate (e.g., a monkey, such as a cynomolgus or rhesus monkey, or a chimpanzee), and a human. In some embodiments, the subject may be a human or a non-human. In one embodiment, the subject is a human. The subject or patient may be undergoing various forms of treatment.
“Treat,” “treating” or “treatment” are each used interchangeably herein to describe reversing, alleviating, or inhibiting the progress of a disease and/or injury, or one or more symptoms of such disease, to which such term applies. Depending on the condition of the subject, the term also refers to preventing a disease, and includes preventing the onset of a disease, or preventing the symptoms associated with a disease. A treatment may be either performed in an acute or chronic way. The term also refers to reducing the severity of a disease or symptoms associated with such disease prior to affliction with the disease. Such prevention or reduction of the severity of a disease prior to affliction refers to administration of a treatment to a subject that is not at the time of administration afflicted with the disease. “Preventing” also refers to preventing the recurrence of a disease or of one or more symptoms associated with such disease.
“Therapy” and/or “therapy regimen” generally refer to the clinical intervention made in response to a disease, disorder or physiological condition manifested by a patient or to which a patient may be susceptible. The aim of treatment includes the alleviation or prevention of symptoms, slowing or stopping the progression or worsening of a disease, disorder, or condition and/or the remission of the disease, disorder or condition.
The term “effective amount” or “therapeutically effective amount” refers to an amount sufficient to effect beneficial or desirable biological and/or clinical results.
Unless otherwise defined herein, scientific and technical terms used in connection with the present disclosure shall have the meanings that are commonly understood by those of ordinary skill in the art. For example, any nomenclatures used in connection with, and techniques of, cell and tissue culture, molecular biology, neurobiology, microbiology, genetics, electrical stimulation, neural stimulation, neural modulation, and neural prosthesis described herein are those that are well known and commonly used in the art. The meaning and scope of the terms should be clear; in the event, however, of any latent ambiguity, definitions provided herein take precedence over any dictionary or extrinsic definition. Further, unless otherwise required by context, singular terms shall include pluralities and plural terms shall include the singular.
One aspect of the present disclosure provides novel systems and methods to estimate the response of neurons to electrical stimulation and to determine the optimal electrode geometry and parameters of stimulation to activate or block specific targeted groups of neurons without activation or block of non-targeted groups of neurons. These methods provide essential tools for the design of experimental and therapeutic interventions. The disclosed system enables a user to specify the neurons targeted for activation or block (e.g., types, locations), and the method identifies the electrode geometry and/or stimulation parameters to achieve neuron-selective activation or block.
The development of bioelectronic therapies is limited by inadequate activation of targeted neurons and spillover of activation to non-targeted neurons. More fundamentally, the relationship between applied stimuli and the complement of neurons that are activated or blocked, how this relationship varies across individuals and between species, and how these relationships can be controlled remain largely unknown.
Development of effective, efficient, and selective neural activation and block requires exploration and understanding of a large multi-dimensional parameter space, including stimulation waveform, amplitude, frequency, and electrode design. Further, the optimal or even appropriate solutions are likely to vary across nerves, species, and individuals. Computational models, especially those that represent the anatomical, morphological, and biophysical properties of the nerves of interest, enable efficient exploration of this parameter space and rigorous optimization of application-specific parameters. Model-based approaches are effective, for example, for analysis and design of electrodes for selective stimulation of peripheral somatic nerves. Further, such models enable investigation of the mechanisms underlying complex non-linear phenomena, such as conduction block.
Disclosed herein are novel methods for estimating electrical activation and block of neurons (hereafter referred to as ‘surrogate’ models, or surrogate network). The method enables determination of the distribution of activated and blocked neurons, including neurons with a specific subthreshold response, with orders of magnitude less computational time compared to standard biophysical models. The method serves as an essential engineering design tool where iterative parametric in vivo studies are largely impractical.
In contrast to conventional threshold-based estimators of the neural response to applied signals, in which the estimate consists only of whether a neuron has crossed a pre-computed ‘firing threshold’ for a specific stimulus waveform, the method retains generality in its estimates and predicts a broad range of complex non-linear phenomena exhibited by neurons in response to arbitrary waveforms in arbitrary electric fields. Examples include, but are not limited to, interactions between propagating action potentials (where action potentials meet and mutually annihilate) and the onset response and conduction block resulting from a high-frequency (kHz) signal (where a neuron stimulated extracellularly with a kHz-frequency signal initially fires action potentials and then goes silent, preventing propagation of action potentials across the region beneath the stimulating electrode).
Models of neural stimulation are traditionally used in the forward direction, where the user specifies the inputs (e.g., electrode geometry, stimulation waveform, stimulation intensity), and the model calculates the response of neurons (e.g., activation, block). As detailed herein, an inverted model is where the user specifies the neurons targeted for activation or block, and the model calculates the appropriate electrode geometry and/or stimulation parameters. However, because the response of neurons to stimulation is highly non-linear, e.g., doubling the intensity of stimulation does not lead to a doubling of neuron activation or block, this cannot be accomplished by simply inverting the forward model. Rather, disclosed herein are gradient-free global optimization and gradient-based optimization methods to determine input parameters that produce target outcomes in the models of stimulation. In combination with the novel surrogate methods to estimate neural responses, these approaches enable solving higher-dimensional problems than previously achieved, and for analogous problems, they are solved with reduced computational cost and time. Additionally, disclosed herein is the application of machine learning methods alongside the surrogate models disclosed herein to learn neural networks (hereafter referred to as ‘recommender’ networks) to recommend directly model-specific values of programmable parameters to achieve selective activation and block under a variety of constraints.
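The gradient-free inversion described above can be sketched as follows. This is a minimal illustration in which `surrogate_activation` is a hypothetical stand-in recruitment curve (not the disclosed surrogate network), and plain random search stands in for the global optimization algorithms enumerated elsewhere herein; the parameter intervals match those used for data collection below.

```python
import numpy as np

# Hypothetical stand-in recruitment curve: maps (amplitude, pulse width) to a
# predicted fraction of activated fibers. Illustrative only -- not the
# disclosed surrogate network.
def surrogate_activation(amplitude_mA, pulse_width_ms):
    charge = amplitude_mA * pulse_width_ms              # crude dose proxy
    return 1.0 / (1.0 + np.exp(-(charge - 2.0) * 3.0))  # sigmoid recruitment

# Gradient-free inversion by random search: find stimulation parameters whose
# predicted activation matches a target fraction (the "inverted" direction).
def invert_by_random_search(target, n_iter=5000, seed=0):
    rng = np.random.default_rng(seed)
    best_params, best_err = None, np.inf
    for _ in range(n_iter):
        amp = rng.uniform(0.0, 20.0)   # mA, within the sampled interval
        pw = rng.uniform(0.0, 2.0)     # ms
        err = abs(surrogate_activation(amp, pw) - target)
        if err < best_err:
            best_params, best_err = (amp, pw), err
    return best_params, best_err

params, err = invert_by_random_search(0.5)
```

Because the neuron response is highly non-linear, the search evaluates the forward (surrogate) model many times rather than inverting it algebraically; the fast surrogates make that many-query pattern tractable.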
With reference to
The inventions described herein are used to support real-time prediction of neural responses to stimulation in clinical settings. They are used to implement systems and software for designing and programming neuromodulation devices, with or without patient-specific models. They are used to implement modeling platforms for research and development to analyze existing therapies and design new therapies, including the geometry of the neurostimulators themselves, the positioning of the neurostimulator with respect to the target excitable tissue, and the waveforms and patterns of stimulation used when delivering stimulation. The recovery of complex non-linear phenomena by the surrogate models means that the surrogate models can be used to investigate interactions between non-local effects to achieve therapeutic outcomes (e.g., downstream filtering of action potentials by high frequency stimulation).
A surrogate model for stimulation of a neuron takes as input a sequence of numerical vectors representing the extracellular electric field as it varies over the course of stimulation and returns a sequence of numerical vectors representing the evolving state of the neuron over the same period (e.g., transmembrane voltage(s) or current(s), or variables representing the state of conduction of individual ion channels, or a metric relating the probability of some dynamic event (e.g., an action potential) occurring within the modeled time window, or any combination and/or function thereof). In the surrogate model, this mapping may be performed and/or augmented by a neural network or other machine learning or artificial intelligence construct (e.g., kernel methods such as support vector machines, Kalman filters, etc.).
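The input/output contract of such a surrogate can be sketched as follows; the leaky integration below is a purely illustrative placeholder for the learned network, showing only the shapes involved: a (T, N) sequence of extracellular potentials in, a same-shaped sequence of predicted states out.

```python
import numpy as np

# Placeholder surrogate illustrating the input/output contract only:
# input is a (T, N) sequence of extracellular potentials (T timesteps,
# N compartments); output is a same-shaped sequence of predicted states.
# The leaky integration stands in for the learned network.
def surrogate_stub(extracellular_mV, leak=0.9):
    T, N = extracellular_mV.shape
    state = np.zeros(N)
    out = np.empty((T, N))
    for t in range(T):
        state = leak * state + (1.0 - leak) * extracellular_mV[t]
        out[t] = state
    return out

fields = np.zeros((100, 21))      # 100 timesteps, 21 compartments
fields[10:30, 10] = -5.0          # brief cathodic pulse at the center node
states = surrogate_stub(fields)   # (100, 21) sequence of predicted states
```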
The process for learning a surrogate neuron model consists of training a neural network to reproduce the responses of biophysical neuron models to extracellular electrical stimulation. In some embodiments, biophysical models are implemented and solved using the NEURON simulation environment, although the methods apply to other frameworks, software packages, or programming languages for simulating biophysical models, e.g., Arbor, Brian 2, or MATLAB. In some embodiments, other simplified models of neurons are implemented and solved.
More specifically, in some embodiments, the sequence of numerical vectors supplied as inputs representing the extracellular fields are the extracellular potentials sampled along the compartments of each modeled neuron. In some embodiments, these potentials are extracted from a finite element model of anatomically realistic nerve morphologies instrumented with an electrode or may be computed using simpler models such as point and line source idealizations, or other estimates of the electric field. The state(s) returned by the surrogate model can represent spatiotemporal dynamics (e.g., gating variable values across time and space) and more abstract features (e.g., to predict the probability of generating an action potential).
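As one instance of the simpler field models mentioned above, a point-source idealization gives the extracellular potential at distance r from a monopolar current source I in an infinite homogeneous medium of conductivity σ as V_e = I / (4πσr). A minimal sketch of sampling such a field along the nodes of a fiber (the 0.5 mm node spacing approximates the internodal length of a 5.7 µm MRG fiber, and the 1 mm electrode-to-fiber distance is an assumed value for illustration):

```python
import numpy as np

# Point-source idealization: potential at distance r from a monopolar current
# source I in an infinite homogeneous medium of conductivity sigma is
# V_e = I / (4 * pi * sigma * r).
def point_source_potentials(node_z_mm, electrode_z_mm, electrode_dist_mm,
                            I_mA=1.0, sigma_S_per_m=0.3):
    dz = (node_z_mm - electrode_z_mm) * 1e-3              # axial distance, m
    r = np.sqrt(dz**2 + (electrode_dist_mm * 1e-3)**2)    # total distance, m
    return (I_mA * 1e-3) / (4.0 * np.pi * sigma_S_per_m * r) * 1e3  # mV

# Potentials sampled at 21 nodes of Ranvier with the electrode over node 10.
z = np.arange(21) * 0.5
Ve = point_source_potentials(z, electrode_z_mm=z[10], electrode_dist_mm=1.0)
```

The resulting vector of potentials at each timestep is exactly the kind of input sequence supplied to the surrogate.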
For some embodiments described herein, the surrogate model is of the 5.7 µm diameter MRG axon model, a well-established biophysical multicompartment myelinated axon model. In these implementations, the input extracellular electric field is sampled along contiguous sequences of nodes of Ranvier, and the outputs consist of either the probability of an action potential occurring within the modeled time window or ion channel gating variable values at the same nodes at which the field was sampled for every modeled timestep. The methods outlined apply more generally to data sampled at any set of discrete compartments in any multicompartment neuron model, myelinated or unmyelinated, of any diameter and arbitrary length, and may be used to predict any set of dynamic variables (including the transmembrane potential, transmembrane current, and gating variables) for any set of compartments (not necessarily the same compartments at which the extracellular field was sampled), as well as at different timestep resolutions than was used for solving the biophysical model.
Classification Surrogate: With reference to
Gating Variable Surrogate: With reference to
For the gating variable surrogate, let N be the number of nodal compartments being modeled, and t the timestep. Then xt ∈ ℝN is the distribution of extracellular potentials sampled at those N nodes at time t. A single neural network (NN) within the stack illustrated in
where
and
[p, q] indicates the concatenation of tensors p and q along the feature axis; the input xt, representing the forcing term (i.e., the imposed extracellular potentials) at time t, has only one feature (the potential), while the hidden and cell states ht and ct may have arbitrarily many features F (i.e., the number of features is a model hyperparameter chosen to tune the capacity of the model). The vector r ∈ ℝF is learned during training and is a parameter that enables the network to globally adjust the rate at which each feature of the internal states changes every timestep. The hyperparameter L defines how many such recursive mappings between vector spaces can be applied sequentially to learn more sophisticated relationships between input and output. Only the hidden state ht is propagated vertically between LSTMs (NNs) in the stack (whereas both ht and ct are propagated forward in time). Empirically, a 3 layer LSTM network (i.e., L = 3 in
W ∗ p indicates discrete 1D convolution between weights W and tensor p. W and b in the equations above are parameters that are learned during training. In contrast to a standard LSTM network, in which state updates are mediated by fully connected networks, an LSTM architecture consisting of 1D convolutional primitives is used. This confers on the network various structural properties that are important to preserve from the biophysical multicompartment neuron model. Specifically, one such structural property is that the network can handle variable input sizes, i.e., arbitrary numbers of compartments of the neuron can be modeled, and once the model is trained, it can predict responses from different numbers of compartments. Furthermore, using convolutions means that the models disclosed herein have translational symmetry (e.g., the field can be translated along the length of the modeled neuron and the same dynamical behavior reproduced at those different locations). Additionally, unlike in the fully connected network case, where each modeled compartment contributes to the immediate future state of every other compartment, convolutions integrate information locally: the width of the 1D convolutional kernel determines from how many adjacent compartments the future state will be calculated. This is key for achieving node-level resolution of dynamics (i.e., transmembrane voltage(s) or current(s), or variables representing the state of conduction of individual ion channels) that generalizes well. In some implementations, a kernel of size 3 is used, integrating information across three adjacent nodes, where the future state(s) of node i will depend on the current state(s) of nodes i - 1, i, and i + 1.
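The locality and variable-input-size properties conferred by the width-3 kernel can be illustrated with a single convolutional update (a stub that omits the LSTM gating; the kernel weights are arbitrary illustrative values):

```python
import numpy as np

# Single width-3 convolutional update (LSTM gating omitted): node i's next
# state depends only on nodes i-1, i, i+1, and the same kernel applies at
# every node, giving translational symmetry along the fiber.
def conv1d_same(state, kernel):
    pad = np.pad(state, 1, mode="edge")   # length-preserving -> any N works
    return np.array([pad[i:i + 3] @ kernel for i in range(len(state))])

kernel = np.array([0.2, 0.6, 0.2])        # symmetric along the node axis
state = np.zeros(9)
state[4] = 1.0                            # perturb the center node only
nxt = conv1d_same(state, kernel)          # spreads only to nodes 3, 4, 5
```

Because the kernel is shared across nodes, the same function accepts a 9-node or a 5-node state vector, mirroring the trained surrogate's ability to predict responses for different numbers of compartments.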
Finally, in some implementations, symmetry is enforced in the convolutional kernels along their width (i.e., the node / compartment axis); this is critical for avoiding the accumulation of noise in the system, recovering the fast rates of change in the system states related to phenomena such as action potentials, and improving stability.
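One straightforward way to enforce this width-wise symmetry (an illustrative construction; the disclosed implementation may parameterize the constraint differently) is to average each kernel with its mirror image along the node/compartment axis:

```python
import numpy as np

# Symmetrize a convolutional kernel along its width: average it with its
# mirror image so the result is symmetric about the center tap.
def symmetrize(kernel):
    return 0.5 * (kernel + kernel[::-1])

w = np.array([0.1, 0.7, 0.3])
w_sym = symmetrize(w)                     # -> [0.2, 0.7, 0.2]
```

A symmetric kernel treats the two neighbors of a node identically, which is consistent with the spatial symmetry of the underlying cable dynamics.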
The neural networks are trained on data generated by solving biophysical neuron models using standard numerical integration methods or by simulating other types of models of neuronal responses. The extracellular electric field generated by stimulation that is applied to the biophysical neuron models may be solved in biophysically realistic field models, for example by solving a finite element model of a tissue morphology derived from histology. The electric fields may also be solved in idealized field models, using for example point- or line-source field models. Alternatively, the electric fields may be generated entirely from a synthetic probability distribution specifying properties such as the peak amplitude, steepness, and symmetry. The generated dataset consists of field-response pairs, where the field is a sequence of vectors representing the electric field at each compartment of the model neuron, and the response may be, e.g., a sequence of vectors representing the states of the neuron to be predicted, for example the transmembrane voltage(s) or current(s), or gating variables representing the state of conduction of individual ion channels, or a value 0 or 1 corresponding to the absence or presence of an action potential, or combinations and/or functions thereof.
In some gating variable surrogate implementations, training data samples (i.e., individual field-response pairs) are generated for axon fibers positioned at the centroids of fascicles within a morphologically realistic pig vagus nerve model consisting of 31 fascicles, while in the classification surrogate implementation, training data samples are generated for axon fibers in a homogeneous isotropic nerve model. Data collection consists of randomly generating parameters for monophasic rectangular pulses for each of the electrode contacts arranged circumferentially around the nerve. In some embodiments, Latin hypercube sampling is used with a sample size of 32 over the intervals [-20, 20] mA for amplitude, [0, 2] ms for pulse width, and [0, 2] ms for delay. However, differently parameterized waveforms (e.g., biphasic pulses, sums of sinusoids, spline-interpolated functions), other parameterization values, and any combination thereof, as well as different sampling methods (e.g., normal random sampling, uniform random sampling, Sobol sampling), may be used. The data collected could be for one or multiple sizes of neurons. The axonal responses to each sampled parameter set are simulated over 10 ms (though other time windows may be simulated) and the presence of an action potential is determined by checking whether the transmembrane potential crossed 0 mV at a nodal compartment 1 mm from the end of the simulated fiber. For generating training data for the gating variable surrogate, the m, p, h, and s gating variables are recorded at 87 nodal compartments for each axon. This is repeated 128 times for a total of 126,976 axon simulations constituting 126,976 training samples.
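The sampling step above can be sketched as follows, assuming SciPy's quasi-Monte Carlo module; the variable names and the helper `rect_pulse` are illustrative, not from the original implementation.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample of monophasic rectangular pulse parameters
# (amplitude in mA, pulse width in ms, delay in ms) over the intervals
# stated above: [-20, 20] mA, [0, 2] ms, [0, 2] ms.
sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=32)                        # 32 samples in [0, 1)^3
params = qmc.scale(unit, [-20.0, 0.0, 0.0], [20.0, 2.0, 2.0])

def rect_pulse(amp_mA, pw_ms, delay_ms, t_ms):
    """Monophasic rectangular pulse evaluated on a time grid."""
    return np.where((t_ms >= delay_ms) & (t_ms < delay_ms + pw_ms),
                    amp_mA, 0.0)

t = np.arange(0.0, 10.0, 0.005)                    # 10 ms window, 5 µs step
waveforms = np.stack([rect_pulse(a, pw, d, t) for a, pw, d in params])
assert waveforms.shape == (32, len(t))
```

The same pattern extends to other waveform parameterizations or sampling methods (e.g., Sobol sampling via `qmc.Sobol`) by swapping the sampler or the pulse function.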
The networks are trained by gradient descent using batches of data (i.e., a subset of field-response pairs from the entire dataset) randomly sampled from this dataset. The loss function for the gating variable surrogate is defined as the root mean squared error between the network prediction and the true gating variable values, although other loss functions such as the mean absolute error or mean squared error may be used. The loss function for the classification surrogate is the binary cross entropy loss, although other functions such as the hinge loss or squared hinge loss may be used. A variety of algorithms exist for learning neural networks by gradient descent; in the implementation, the RMSprop optimizer with a learning rate of 1e-4 on batches of size 192 produced good results. However, other optimization algorithms such as (but not limited to) Adam (Kingma and Ba, 2014), Adagrad (Duchi et al., 2011) and RAdam (Liu et al., 2021) may be used; the appropriate learning rate will depend on the choice of optimizer, and the appropriate batch size will depend on the specific computing hardware being used to train the model, the specific learning rate of the optimization algorithm, and the specific dataset used for training. The surrogates may be learned and deployed on massively multithreaded hardware, e.g., the implementations run on Nvidia RTX A5000 GPUs, although the networks may be deployed on, e.g., other CUDA GPUs with minimal additional implementation cost. When deployed on such hardware, the surrogate models generate predictions of axonal responses in novel field models at up to 500x increased speed for the gating variable surrogate implementation and up to 2000x increased speed for the classification surrogate implementation compared to a single-threaded biophysical axon implementation.
Cable Surrogate: In one surrogate model implementation, a simplified cable model is designed, implemented, trained, and tested to predict the full system state (gating variables and transmembrane potential) as a function of time at each node of Ranvier for 5.7 to 14 µm diameter MRG axons (hereafter referred to as ‘cable surrogate’).
In some embodiments, only the nodal compartments of the complete MRG fiber model are retained and the myelin is assumed to be perfectly insulating. Other geometric abstractions (e.g., explicitly modeling the internode, adding additional layers of extracellular field, and any combinations and variations thereof) may be adopted. In some embodiments, a forward Euler discretization is applied with a staggered timestep to solve the resulting system of differential equations. Shown here for node n, advancing from time t to t + dt, the gating parameter values are first updated as in EQN. 9.
Likewise for h, p, and s, where αx and βx are independent rate constant functions for gating variable ‘x’. Using m to indicate m(n, t + 0.5dt) (likewise for all other gating variables h, p, and s) and Vm to indicate Vm(n, t), gives EQNS. 10 and 11.
Where D2([Vm(n,t),Ve(n,t)]) is the sum of second spatial differences of Vm and Ve centered on node n, implemented here as a linear convolution over concatenated Vm and Ve vectors with kernel .
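The staggered update described by EQNS. 9-11 can be sketched as follows. This is a minimal illustration only: the rate-constant functions, conductance values, and the [1, -2, 1] form of the D2 kernel are assumed placeholders, not the trained or MRG values.

```python
import numpy as np

def d2(v):
    """Second spatial difference via convolution with a [1, -2, 1]
    kernel (an assumed form of the D2 operator; ends are zero-padded)."""
    return np.convolve(v, [1.0, -2.0, 1.0], mode="same")

def step(vm, ve, m, dt=0.001, cm=2.0, g_axial=1.0, g_na=30.0, e_na=50.0):
    # Gating variables advance on the half-step (t -> t + dt/2)
    # using placeholder alpha/beta rate-constant functions ...
    alpha = 0.1 * np.exp(vm / 25.0)
    beta = 0.1 * np.exp(-vm / 25.0)
    m = m + dt * (alpha * (1.0 - m) - beta * m)
    # ... then Vm advances on the full step using the updated gates and
    # the sum of second spatial differences of Vm and Ve.
    i_ion = g_na * m**3 * (vm - e_na)     # single placeholder ionic current
    vm = vm + (dt / cm) * (g_axial * (d2(vm) + d2(ve)) - i_ion)
    return vm, m

n_nodes = 53
vm = np.full(n_nodes, -80.0)              # resting transmembrane potential
ve = np.zeros(n_nodes)
ve[26] = -10.0                            # extracellular field at one node
m = np.full(n_nodes, 0.1)
vm, m = step(vm, ve, m)
```

The node under the extracellular source depolarizes relative to its neighbors after one step, reflecting the driving role of the second spatial difference of Ve.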
Other explicit (e.g., higher order Runge-Kutta) and / or implicit (e.g., implicit Euler) numerical integration methods may be used, and any combinations or variations thereof. Other neuronal models with different biophysics or different simplified representations may likewise be adapted, with appropriate adjustment and / or addition / subtraction of the relevant set of ionic conductances and associated rate constant functions.
In some embodiments, backpropagation and gradient descent are used to apply the appropriate parametric adjustments to achieve better agreement between the simplified cable model and the complete (NEURON implemented) biophysical model. In some embodiments, the models are implemented in PyTorch v1.9 though other deep learning frameworks, such as TensorFlow, and automatic differentiation packages more generally, may be used. In some embodiments, the peak ionic conductance values (g̅Naf, etc.), axial resistivity, membrane capacitance, and the values of the convolutional kernel used to compute D2 were assigned as optimizable parameters. The method freely permits other parameters (such as the coefficients of the rate constant equations, equilibrium potentials, etc.) and all subsets and combinations thereof to also be selected for optimization. Furthermore, any functions constituting the set of equations describing the cable model (for example, the rate constant functions) may in part or in whole be substituted with generic function approximators implemented as a neural network or other machine learning construct and trained alongside the other model components. Likewise, the biophysical parameters themselves (for example, though not limited to, the axial resistivity) may be implemented as functions of inputs to the model (for example, though not limited to, the fiber diameter), either explicitly parameterized or approximated with a neural network or other machine learning construct, with the resulting functions also optionally trained alongside the rest of the model.
In some embodiments, a scaling correction is implemented for Ve as a 3rd order polynomial function of the fiber diameter which was fit using the least squares method after the model had been trained. The scaling correction may be implemented as any parametric function or neural network or other machine learning construct and may also be trained alongside the rest of the model parameters or omitted altogether.
As with the other surrogate models described herein, the cable surrogate is trained on data generated by solving biophysical neuron models using standard numerical integration methods. The extracellular electric field generated by stimulation that is applied to the biophysical neuron models may be solved in biophysically realistic field models, for example by solving a finite element model of a tissue morphology derived from histology. The electric fields may also be solved in idealized field models, using for example point- or line-source field models. Alternatively, the electric fields may be generated entirely from a synthetic probability distribution specifying properties such as the peak amplitude, steepness, and symmetry. The generated dataset consists of field-response pairs, where the field is a sequence of vectors representing the electric field at each compartment of the model neuron, and the response is a sequence of vectors representing the states of the neuron to be predicted, for example the transmembrane voltages and gating parameter values, or combinations and / or functions thereof.
In some embodiments of the cable surrogate, training data samples (i.e., individual field-response pairs) are generated for axon fibers with diameters randomly sampled from a uniform distribution over [5.7 µm, 14 µm]. Axons are positioned at the centroids of fascicles within a morphologically realistic pig vagus nerve model consisting of 53 fascicles. Data collection consists of randomly generating parameters for monophasic rectangular pulses for each of the electrode contacts arranged circumferentially around the nerve. In some embodiments, Latin hypercube sampling is used with a sample size of 32 over the intervals [-0.1, 0.1] mA for amplitude, [0, 2] ms for pulse width, and [0, 2] ms for delay. However, differently parameterized waveforms (e.g., biphasic pulses, sums of sinusoids, spline-interpolated functions) and any combination thereof, as well as different sampling methods (e.g., normal random sampling, uniform random sampling, Sobol sampling), may be used. The axonal responses to each sampled parameter set are simulated over 7.5 ms (though other time windows may be simulated). For generating training data for the cable surrogate, the m, p, h, and s gating variables as well as the membrane potential are recorded at 53 nodal compartments for each axon. This is repeated 32 times for a total of 54,272 axon simulations constituting 54,272 training samples.
The model is trained by gradient descent using batches of data (i.e., a subset of field-response pairs from the entire dataset) randomly sampled from this dataset. The loss function for the gating cable surrogate was defined as the mean squared error between the cable surrogate prediction and the true transmembrane voltage and gating variable values, although other loss functions such as the mean absolute error or root mean squared error may be used.
A variety of algorithms exist for learning differentiable functions by gradient descent. In some implementations, truncated backpropagation through 50 timesteps and gradient descent with the RMSprop update rule with a learning rate of 1e-6 on batches of size 53 produced good results. However, other optimization algorithms such as (but not limited to) Adam (Kingma and Ba, 2014), Adagrad (Duchi et al., 2011) and RAdam (Liu et al., 2021) may be used; the appropriate learning rate will depend on the choice of optimizer, and the appropriate batch size will depend on the specific computing hardware being used to train the model, the specific learning rate of the optimization algorithm, and the specific dataset used for training. The surrogates may be learned and deployed on massively multithreaded hardware, e.g., the implementations run on Nvidia RTX A5000 GPUs, although the networks may be deployed on, e.g., other CUDA GPUs with minimal additional implementation cost. When deployed on such hardware, the cable surrogate generated predictions of axonal responses in novel field models at up to 10,000x increased throughput compared to a single-threaded biophysical axon implementation.
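The truncated-backpropagation-through-time scheme can be sketched as below, assuming PyTorch. The recurrent cell and random data are stand-ins for the cable surrogate and its training set; the essential pattern is that gradients flow within each 50-step segment and the state is detached between segments.

```python
import torch

torch.manual_seed(0)
cell = torch.nn.GRUCell(4, 4)                 # stand-in recurrent dynamics
opt = torch.optim.RMSprop(cell.parameters(), lr=1e-6)

T, k = 200, 50                                # total steps, truncation length
inputs = torch.randn(T, 53, 4)                # (time, batch, features)
targets = torch.randn(T, 53, 4)

h = torch.zeros(53, 4)
for start in range(0, T, k):
    h = h.detach()                            # cut the graph at segment edges
    loss = 0.0
    for t in range(start, min(start + k, T)):
        h = cell(inputs[t], h)
        loss = loss + torch.mean((h - targets[t]) ** 2)
    opt.zero_grad()
    loss.backward()                           # backprop through <= 50 steps
    opt.step()
```

Detaching the hidden state bounds the memory and compute of each backward pass while still carrying the dynamical state forward across the full simulated period.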
The cable surrogate is fully differentiable with respect to all parameters, and therefore fully compatible with all optimization methods described previously (gradient based and gradient free).
Embodiments of the present disclosure include a method for estimation of a neuronal response, the method comprising: inputting a sequence of numerical input vectors corresponding to an electric field over a period of neuronal stimulation into a surrogate model of neuronal stimulation. The surrogate model of neuronal stimulation outputs a sequence of numerical output vectors corresponding to one or more changes in neuronal response during the stimulation period.
As described further herein, in some embodiments, the sequence of numerical input vectors comprises a spatiotemporal distribution of extracellular potentials measured, calculated, or estimated for the period of neuronal stimulation.
As described further herein, in some embodiments, the one or more changes in neuronal response corresponding to the sequence of numerical output vectors comprise at least one of: transmembrane voltage, total transmembrane current, transmembrane current for one or more ions, conduction state of an ion channel, detection of one or more action potentials, a probability of a dynamic event, gating variable values, or any combination thereof.
As described further herein, in some embodiments, the surrogate model comprises a machine learning component and/or an artificial intelligence component comprising at least one of: recurrent neural network, convolutional neural network, fully connected neural network, decision tree, random forest, support vector machine, linear regression, logistic regression, nearest neighbors, or any combination thereof.
As described further herein, in some embodiments, the method further includes training the surrogate model of neuronal stimulation with data from simulation of a biophysical model or a simplified model.
As described further herein, in some embodiments, the sequence of numerical input vectors is from a model of an anatomically realistic nerve morphology with an electrode.
As described further herein, in some embodiments, the surrogate model is a classification surrogate that determines a probability that an action potential has occurred.
As described further herein, in some embodiments, the surrogate model determines a strength-duration relationship, determines a relationship between activation thresholds and neuron size, determines a relationship between block thresholds and neuron size, determines subthreshold response, identifies unidirectional propagation, identifies bidirectional propagation, determines propagation speed, determines the number of evoked action potentials, determines the time of action potential initiation, determines the location of action potential initiation, determines a gating variable, captures interaction between propagating action potentials, captures interaction between a subthreshold change in membrane state and a propagating action potential, captures interaction between changes in membrane state and responses to stimulation, predicts a state of conduction block, or any combination thereof.
As described further herein, in some embodiments, the surrogate model is a cable surrogate that includes nodal compartments of a complete neuron model.
As described further herein, in some embodiments, the surrogate model is fully differentiable with respect to all parameters.
As described further herein, in some embodiments, the sequence of numerical input vectors corresponds to an arbitrary spatial distribution of extracellular potentials.
As described further herein, in some embodiments, the sequence of numerical input vectors corresponds to an arbitrary temporal shape of extracellular potentials.
As described further herein, in some embodiments, the surrogate model is designed to predict the response of one or multiple sizes of neurons.
As described further herein, in some embodiments, the surrogate model is trained to predict the response of one or multiple sizes of neurons.
As described further herein, in some embodiments, the sequence of numerical input vectors is from a model of a simplified nerve morphology with an electrode.
As described further herein, in some embodiments, the surrogate model combines a cable surrogate that includes nodal compartments of a complete neuron model with a machine learning component and/or an artificial intelligence component.
Embodiments of the present disclosure include a method of administering neuromodulation therapy to a subject, the method comprising: programming a pulse generator to deliver a pattern of electrical stimulation identified using the method for estimation of a neuronal response; and delivering the pattern of electrical stimulation to the subject to generate a response in at least one target neuron.
An optimization problem is solved to determine electrode geometry and/or stimulation parameters that achieve selective fiber activation in a model of neural stimulation. An optimization problem in this context is defined by a tissue morphology, consisting of a tissue geometry and tissue electrical conductivities, and the locations of neurons, with their own geometries, within that morphology. Along with these physical characteristics, the optimization problem is defined by labels that identify which neurons are targeted for stimulation (activation) or conduction block, and a loss function that grades the performance of a set of parameters with respect to achieving the desired selectivity, although the loss function may also incorporate other performance criteria as described below. The task is to then determine stimulation and/or electrode geometric parameters that selectively activate or block the targeted neurons by minimizing the associated loss function. Stimulation parameters may for example include the amplitudes, pulse widths, waveform shapes, and/or delays of current waveforms being delivered from each of several electrode contacts. Geometric parameters may for example include the inter-contact spacing for circumneural contacts in a cuff electrode, the number of such contacts, and/or the shape of such contacts. 
The loss function may be single or multi-objective, wherein the loss criteria may incorporate: minimizing energy required for selective activation or block, minimizing power required for selective activation or block, minimizing charge imbalance in the optimized waveform(s), minimizing onset response produced when the optimized waveform(s) are turned on, maximizing degree of conduction block, minimizing voltage required for activation or block with the optimized waveform(s), minimizing current required for activation or block with the optimized waveform(s), minimizing charge required for activation or block with the optimized waveform(s), maximizing therapeutic benefit produced by application of the optimized waveform(s), minimizing adverse effects produced by application of the optimized waveform(s), maximizing selectivity of activation or block between nerve fiber types by application of the optimized waveform(s), maximizing selectivity of activation or block between nerve fiber diameters by application of the optimized waveform(s), maximizing selectivity of activation or block between nerve fiber locations by application of the optimized waveform(s), measures related to the complexity of the electrode geometry or stimulation waveform shape, and any combinations and derivatives thereof.
In the single-objective case, an example loss function for maximizing, e.g., selectivity in activation, is the Jaccard loss:
where A and B are ordered lists of 0's and 1's indicating whether each neuron did not fire or did fire an action potential, the union is calculated with an OR operation, the intersection is calculated with an AND operation, and |X| is the cardinality of set X (i.e., the sum of the set). This loss measures the dissimilarity in overlap between two sets A and B. In the context of selective nerve stimulation, set A constitutes those neurons which are targeted for activation, and set B constitutes those which did in fact produce a propagating action potential (as predicted by a computational model, which may be a biophysical neuron model, or a surrogate model as described herein, or some other approximation of neuronal activity, such as an activation function or linear threshold estimator). The Jaccard loss has a theoretical minimum of 0, where sets A and B fully overlap (i.e., all targeted neurons are activated and no off-target neurons are activated), and a maximum of 1. Other loss functions for binary labeled data, such as the Hamming loss, binary cross entropy loss, hinge loss, etc., may also be used. Additional criteria as outlined above can be integrated in the single-objective context by applying a weighted sum over the various criteria. In the multi-objective case, the loss function returns an n-tuple of n objectives, where each of those objectives computes some loss criterion as outlined above; for example, if integrating energy minimization as an objective, the loss function may return a 2-tuple consisting of the Jaccard loss to assess selectivity and the summed squares of all current waveforms emitted from all electrode contacts to assess electric power expenditure.
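The Jaccard loss over binary activation vectors can be computed as below; the function name and convention that empty union yields zero loss are illustrative choices.

```python
import numpy as np

def jaccard_loss(target, fired):
    """1 - |A AND B| / |A OR B| over binary activation vectors, per the
    definition above (intersection = AND, union = OR)."""
    target = np.asarray(target, dtype=bool)
    fired = np.asarray(fired, dtype=bool)
    union = np.logical_or(target, fired).sum()
    if union == 0:                 # no neurons targeted and none activated
        return 0.0
    inter = np.logical_and(target, fired).sum()
    return 1.0 - inter / union

target = [1, 1, 0, 0, 0]           # neurons targeted for activation
fired  = [1, 0, 1, 0, 0]           # neurons predicted to fire
# intersection = 1, union = 3, so the loss is 1 - 1/3 = 2/3
assert abs(jaccard_loss(target, fired) - 2 / 3) < 1e-12
assert jaccard_loss(target, target) == 0.0   # perfect selectivity
```

One missed target neuron and one off-target activation each grow the union or shrink the intersection, so the loss penalizes both error types symmetrically.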
Methods disclosed herein apply gradient-free and gradient-based approaches to solve optimization problems of this kind. In both cases, neuronal responses to stimulation may variously be computed using biophysical models (e.g., to validate solutions, validate responses at boundaries of regions of activation, block, or subthreshold response, predict all responses for a given set of stimulus parameters, and combinations and derivatives thereof) or surrogate models as described above; however, gradients may only be computed using an appropriate surrogate model. Activity metrics used when computing loss functions may consist of checking for a variable (e.g., transmembrane voltage(s) or current(s), or variables representing the state of conduction of individual ion channels) crossing some predetermined threshold, or may consist of functions of some variables, such as the mean value of a gating variable over some time window for some set of compartments, or any combinations and derivatives thereof.
With reference to
where Xbest,G is the best candidate solution in the current iteration, F is the mutation rate, and Xr0,G and Xr1,G are mutually exclusive, randomly selected candidate solutions from the current population. Most state-of-the-art metaheuristic optimization algorithms have mechanisms to adapt how they search a domain based on the topology of the loss function. The use of a differential mutation operator oriented around a difference vector (Xr0,G - Xr1,G in the ‘best1’ mutation rule illustrated above) achieves this by implicitly adapting the search range and direction to the loss function, a process referred to as ‘contour fitting’. Ui,G is then constructed from Vi,G via crossover: for each j ∈ 0, 1 ... d where d is the dimensionality of the optimization problem,
where rand(0, 1) is a random sample from a uniform distribution in the interval [0, 1) (resampled for every j) and CR is the crossover rate. In some implementations, if the resulting trial vector Ui,G exceeds the problem bounds in any dimension, the mutation and crossover operation is repeated up to a limit of 10 times, after which those dimensions that remain outside the bounds are randomly reinitialized to values within the bounds. This is a feasibility-preserving strategy for boundary constraint handling that has shown advantages over Darwinian (i.e., where boundary violations are discouraged by adding a penalty term to the loss function) and repair (i.e., where dimensions in which the boundary has been violated are placed back within the bounds following some rule, such as reflection against the boundary) methods in recent empirical studies using DE.
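The ‘best1’ mutation, binomial crossover, and resample-then-reinitialize boundary handling described above can be sketched as follows; function and variable names are illustrative, and forcing at least one mutant dimension into the trial vector is a common DE convention assumed here.

```python
import numpy as np

rng = np.random.default_rng(0)

def de_trial(pop, best, i, F, CR, lo, hi, max_retries=10):
    """Build one trial vector U_i via 'best1' mutation and binomial
    crossover, repeating up to max_retries times if bounds are violated,
    then randomly reinitializing any remaining out-of-bounds dimensions."""
    n, d = pop.shape
    for _ in range(max_retries):
        r0, r1 = rng.choice([j for j in range(n) if j != i], 2,
                            replace=False)
        v = best + F * (pop[r0] - pop[r1])          # differential mutation
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True               # keep >= 1 mutant gene
        u = np.where(cross, v, pop[i])              # binomial crossover
        if np.all((u >= lo) & (u <= hi)):
            return u                                 # feasible trial vector
    # After the retry limit, reinitialize only the violating dimensions.
    bad = (u < lo) | (u > hi)
    u[bad] = rng.uniform(lo, hi, d)[bad]
    return u

pop = rng.uniform(-1.0, 1.0, (8, 3))                # toy population
u = de_trial(pop, pop[0], i=2, F=0.8, CR=0.9, lo=-1.0, hi=1.0)
assert u.shape == (3,) and np.all((u >= -1.0) & (u <= 1.0))
```

Because the mutation step size scales with the spread of the population (through Xr0 - Xr1), the search range contracts automatically as the population converges.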
For single-objective problems, selection means Ui,G replaces Xi,G in iteration G + 1 if f(Ui,G) < f(Xi,G), i.e., if the loss of Ui,G is less than the loss of Xi,G, or equivalently, Ui,G has a better performance. In the multi-objective case, Ui,G replaces Xi,G only if f(Ui,G) dominates f(Xi,G), that is, f(Ui,G) ≤ f(Xi,G) in all objectives and f(Ui,G) < f(Xi,G) in at least one objective. Conversely, if f(Xi,G) dominates f(Ui,G), Ui,G is discarded. If f(Ui,G) does not dominate f(Xi,G) or vice versa, Ui,G is appended to the population. After all candidate vectors have been constructed, evaluated, and compared, if the population size exceeds the set population number, the population is sorted into Pareto fronts (groups within which no member dominates any others and which are themselves ranked based on which fronts dominate others), and the population is reduced to the set population size by randomly discarding the required number of members from the lowest ranked fronts.
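The dominance test used in multi-objective selection reduces to a few lines (minimization assumed; the function name is illustrative):

```python
import numpy as np

def dominates(fa, fb):
    """fa dominates fb if it is no worse in every objective and strictly
    better in at least one (minimization convention)."""
    fa, fb = np.asarray(fa), np.asarray(fb)
    return bool(np.all(fa <= fb) and np.any(fa < fb))

# Selection for a 2-tuple of objectives (e.g., Jaccard loss, energy):
assert dominates([0.2, 1.0], [0.3, 1.5])        # better in both objectives
assert not dominates([0.2, 2.0], [0.3, 1.5])    # trade-off: neither
assert not dominates([0.3, 1.5], [0.2, 2.0])    #   solution dominates
```

The mutually non-dominating trade-off case is exactly the situation in which the trial vector is appended to the population rather than replacing its target.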
The values F and CR, the mutation and crossover rates, respectively, are hyperparameters that are supplied to the optimizer upon initiation. Additionally, the implementation employs parameter adaptation, allowing the optimizer to learn better F and CR values during optimization. Under parameter adaptation, at each iteration, instead of using the single value CR for each crossover operation when generating each trial vector, a crossover rate CRi is generated independently for each member of the population by sampling from a normal distribution with mean CR and standard deviation 0.1, and subsequently clipped to [0, 1]. Let SCR be the set of all such crossover rates in the current iteration for which the resulting trial vector Ui,G successfully replaced the corresponding target vector Xi,G (i.e., had a lower loss / dominated the target vector). Then, in the next iteration, CR is updated such that CR = (1 - c) ∗ CR + c ∗ meanA(SCR), where meanA is the arithmetic mean and c is a positive number between 0 and 1. Likewise, instead of using the single value F for each mutation operation when generating each trial vector, a mutation rate Fi is generated independently for every member of the population by random sampling from a Cauchy distribution with location parameter F and scale parameter 0.1, truncated to 1 if Fi > 1 and regenerated if Fi ≤ 0. Using the same terminology as before, in the subsequent iteration F is updated such that F = (1 - c) ∗ F + c ∗ meanL(SF), where meanL(SF) is the Lehmer mean of SF.
The implementation uses a value of c = 0.1.
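The parameter-adaptation scheme can be sketched as follows, with the Lehmer mean taken in its common contraharmonic form (sum of squares over sum); function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
c = 0.1  # adaptation rate, per the implementation above

def sample_cr(CR, n):
    """Per-member crossover rates ~ N(CR, 0.1), clipped to [0, 1]."""
    return np.clip(rng.normal(CR, 0.1, n), 0.0, 1.0)

def sample_f(F):
    """Per-member mutation rate ~ Cauchy(F, 0.1), truncated to 1 if
    above 1 and regenerated if non-positive."""
    while True:
        f = F + 0.1 * rng.standard_cauchy()
        if f > 0:
            return min(f, 1.0)

def adapt(CR, F, S_cr, S_f):
    """Update CR with the arithmetic mean and F with the Lehmer mean
    of the successful rates from the last iteration."""
    if len(S_cr):
        CR = (1 - c) * CR + c * np.mean(S_cr)
    if len(S_f):
        F = (1 - c) * F + c * (np.sum(np.square(S_f)) / np.sum(S_f))
    return CR, F

crs = sample_cr(0.9, 8)
fs = [sample_f(0.5) for _ in range(8)]
CR, F = adapt(0.9, 0.5, S_cr=[0.8, 0.6], S_f=[0.4, 0.8])
# Lehmer mean of [0.4, 0.8] = (0.16 + 0.64) / 1.2 = 2/3
```

The heavy-tailed Cauchy sampling for F occasionally proposes large mutation rates, while the Lehmer mean biases the adapted F toward the larger successful values.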
Gradient-based optimization: With reference to
Gradient descent operates on scalar valued differentiable loss functions. In some implementations, the difference between the mean (calculated over the entire simulated period) m gating variable value in the 10 nodes at each end of each axon in the target and off-target populations is used. Additional criteria as outlined above (e.g., charge balance, energy minimization, power minimization, etc.) may be incorporated into the loss function through (weighted) addition. Let L be the scalar valued differentiable loss function that operates on the output of the differentiable dynamics model. Then, efficiently compute the derivatives of the loss with respect to the input parameters of the stimulation model, denoted ∇pL, where p is the vector of input parameters. p may consist of the parameters of various stimulus waveforms, or in cases where the waveform has not been predefined, it may consist of the numerical value of the stimulus current at every timestep for every electrode contact over the course of the simulation. If the model of the extracellular electric field is itself also differentiable (for example, in the case of point-source models in homogeneous media), p may also contain parameters defining the size and/or positions of extracellular sources of current. One update of the parameters in a single iteration of the standard gradient descent algorithm is then given by p ← p - λ · ∇pL, where λ is the learning rate. In some implementations, an adaptive learning rate is used, beginning at 0.01 and shrinking linearly to 0.001 over 500 iterations of gradient descent. However, variations on standard gradient descent, for example incorporating adaptive momentum with different learning rates and schedules, may be used.
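The update rule and linear learning-rate schedule can be sketched as below, assuming PyTorch autograd; the quadratic loss is a stand-in for the differentiable surrogate pipeline, and the parameter names are illustrative.

```python
import torch

# p could hold, e.g., per-contact stimulus amplitudes; the target and
# quadratic loss stand in for the differentiable dynamics model and L.
p = torch.zeros(4, requires_grad=True)
target = torch.tensor([0.3, -0.2, 0.1, 0.0])

for it in range(500):
    # Learning rate shrinks linearly from 0.01 to 0.001 over 500 steps.
    lr = 0.01 + (0.001 - 0.01) * it / 499
    loss = torch.sum((p - target) ** 2)    # stand-in for L
    loss.backward()                        # computes grad_p(L)
    with torch.no_grad():
        p -= lr * p.grad                   # p <- p - lr * grad_p(L)
    p.grad.zero_()
```

For a real stimulation model, `loss` would instead be computed by running the differentiable surrogate forward from the stimulus parameters, with the same backward pass providing ∇pL.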
Additionally, gradient descent may be terminated when various convergence criteria are met, for example when the loss function has not decreased by a pre-specified amount for a pre-specified number of gradient descent iterations, or when magnitudes of the gradients of the loss function with respect to the optimizable parameters has dropped beneath a pre-specified value.
In both the gradient-free and gradient-based cases, if using a surrogate model to evaluate performance, the results may be validated using biophysical neuron models. The results of these validations constitute additional extracellular electric field-neural response pairs that can be integrated into the training dataset and used to continue learning the models of the dynamics of activation and/or block. Successful parameters may then be used to inform design of electrode geometries and/or stimulation parameters. Otherwise, the results of optimization using the learned surrogate models can be used to initialize the population for a gradient-free optimization using the biophysical neuron models to compute the loss function. This can overcome poor population initialization (whereby the gradient-free optimization method using the biophysical model to evaluate the loss function struggles to minimize the loss function due to low population diversity and no ‘good’ parameters in the population from which to construct superior candidates) and often converges to superior solutions compared to random initialization methods, as illustrated in
Embodiments of the present disclosure include a method for optimizing neuronal response, the method includes inputting a set of criteria corresponding to a desired neuronal response into an inverted model of neuronal response. The inverted model of neuronal response outputs one or more stimulation criteria capable of producing the desired neuronal response.
As detailed herein, in some embodiments, the set of criteria corresponding to the desired neuronal response comprises identifying one or more target neuron(s) for activation or conduction block, and identifying neurons having a specific subthreshold response.
As detailed herein, in some embodiments, the one or more stimulation criteria capable of producing the desired neuronal response is optimized with a gradient based optimization.
As detailed herein, in some embodiments, the one or more stimulation criteria capable of producing the desired neuronal response is optimized with a global optimization algorithm comprising at least one of: a differential evolution algorithm, a genetic algorithm, a particle swarm algorithm, a simulated annealing algorithm, an ant colony algorithm, an estimation of distribution algorithm, a simplex search algorithm, a coordinate descent algorithm, a random search algorithm, a grid search algorithm, a spiral optimization algorithm, a stochastic diffusion search algorithm, or any combination thereof.
As detailed herein, in some embodiments, the one or more stimulation criteria capable of producing the desired neuronal response is optimized for at least one of: minimizing energy required for activation or block or a subthreshold response, minimizing power required for activation or block or a subthreshold response, minimizing charge imbalance in the optimized waveform(s), minimizing onset response produced when the optimized waveform(s) are turned on, maximizing degree of conduction block, minimizing voltage required for activation or block or a subthreshold response with the optimized waveform(s), minimizing current required for activation or block or a subthreshold response with the optimized waveform(s), minimizing charge required for activation or block with the optimized waveform(s), maximizing therapeutic benefit produced by application of the optimized waveform(s), minimizing adverse effects produced by application of the optimized waveform(s), maximizing selectivity of activation or block between neuron types by application of the optimized waveform(s), maximizing selectivity of activation or block between neuron diameters by application of the optimized waveform(s), maximizing selectivity of activation or block between neuron locations by application of the optimized waveform(s), measures related to the complexity of the electrode geometry or stimulation waveform shape, and any combinations thereof.
As detailed herein, in some embodiments, the one or more stimulation criteria comprise an electrode geometric parameter comprising at least one of: inter-contact spacing of contacts, number of contacts, shape of contacts, size of contacts, shape of insulation, size of insulation, placement of contacts relative to the insulation, and any combination thereof.
As detailed herein, in some embodiments, the one or more stimulation criteria comprise an electrode placement parameter comprising at least one of: electrode implantation depth, electrode displacement from an anatomical landmark, rotation of electrode, and any combination thereof.
As detailed herein, in some embodiments, the one or more stimulation criteria comprise a stimulation parameter comprising at least one of: amplitude, pulse width, frequency, waveform shape, delays, and any combination thereof.
As detailed herein, in some embodiments, the method further includes training a recommender system with a gradient-based optimization.
As detailed herein, in some embodiments, the one or more stimulation criteria from prior optimization(s) using a surrogate model are utilized to initialize subsequent optimization(s) using surrogate model(s) or complete biophysical model(s) to evaluate the neural response.
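A minimal sketch of this warm-starting idea, assuming a prior surrogate optimization produced `prior_best` and the subsequent stage is a simple finite-difference gradient descent on a toy stand-in objective (both hypothetical, not the disclosure's actual models):

```python
def local_refine(cost, x0, lr=0.1, steps=200, eps=1e-6):
    # Subsequent optimization stage: finite-difference gradient descent
    # initialized from the result of an earlier surrogate optimization.
    x = list(x0)
    for _ in range(steps):
        for i in range(len(x)):
            xp = list(x); xp[i] += eps
            xm = list(x); xm[i] -= eps
            x[i] -= lr * (cost(xp) - cost(xm)) / (2 * eps)
    return x

def toy_cost(p):
    # Stand-in for a higher-fidelity objective (e.g. a biophysical model
    # evaluation); quadratic bowl with a known minimum for illustration.
    return (p[0] - 0.5) ** 2 + (p[1] - 0.3) ** 2

prior_best = [0.45, 0.35]   # hypothetical result of a surrogate-stage search
refined = local_refine(toy_cost, prior_best)
print([round(v, 3) for v in refined])
```

Because the costly high-fidelity stage starts near the surrogate's optimum, it needs far fewer evaluations than a cold-started search would.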
Embodiments of the present disclosure also include a method for estimation of one or more stimulation criteria. In accordance with these embodiments, the method includes inputting a sequence of numerical input vectors corresponding to a desired neuronal response into a surrogate model of neuronal stimulation. In some embodiments, the surrogate model outputs a sequence of numerical output vectors corresponding to one or more stimulation criteria capable of producing the desired neuronal response.
In some embodiments, the set of criteria corresponding to the desired neuronal response comprises identifying one or more target neuron(s) for activation or conduction block. In some embodiments, the set of criteria corresponding to the desired neuronal response comprises identifying one or more target neuron(s) having a specific subthreshold response.
In some embodiments, the one or more stimulation criteria capable of producing the desired neuronal response are optimized with a gradient-based optimization.
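One way such a gradient-based optimization could look is sketched below. The sigmoidal `surrogate_activation`, its constants, and the energy-plus-target cost are illustrative assumptions standing in for a trained differentiable surrogate.

```python
import math

def surrogate_activation(amplitude, pulse_width):
    # Hypothetical differentiable surrogate: sigmoidal activation
    # probability around a toy strength-duration threshold.
    threshold = 0.5 / pulse_width
    return 1.0 / (1.0 + math.exp(-20.0 * (amplitude - threshold)))

def cost(amplitude, pulse_width, target=0.95):
    # Stimulation energy plus squared deviation from the desired
    # activation probability for the target fiber.
    p = surrogate_activation(amplitude, pulse_width)
    return amplitude ** 2 * pulse_width + 5.0 * (p - target) ** 2

def gradient_descent(x0, pulse_width=0.5, lr=0.02, steps=500, eps=1e-5):
    x = x0
    for _ in range(steps):
        # Central finite differences; a learned surrogate would instead
        # supply exact gradients via automatic differentiation.
        g = (cost(x + eps, pulse_width) - cost(x - eps, pulse_width)) / (2 * eps)
        x -= lr * g
    return x

amp = gradient_descent(2.0)
print(amp, surrogate_activation(amp, 0.5))
```

The descent settles at an amplitude that trades a small loss of activation probability against a large energy saving relative to the starting point.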
In some embodiments, the one or more stimulation criteria capable of producing the desired neuronal response are optimized with a global optimization algorithm comprising at least one of: a differential evolution algorithm, a genetic algorithm, a particle swarm algorithm, a simulated annealing algorithm, an ant colony algorithm, an estimation of distribution algorithm, a simplex search algorithm, a coordinate descent algorithm, a random search algorithm, a grid search algorithm, a spiral optimization algorithm, a stochastic diffusion search algorithm, or any combination thereof; or optimized with a recommender system with a machine learning component and/or an artificial intelligence component comprising at least one of: recurrent neural network, convolutional neural network, fully connected neural network, decision tree, random forest, support vector machine, linear regression, logistic regression, nearest neighbors, or any combination thereof.
In some embodiments, the one or more stimulation criteria capable of producing the desired neuronal response are optimized for at least one of: minimizing energy required for activation or block or a subthreshold response, minimizing power required for activation or block or a subthreshold response, minimizing charge imbalance in the optimized waveform(s), minimizing onset response produced when the optimized waveform(s) are turned on, maximizing degree of conduction block, minimizing voltage required for activation or block or a subthreshold response with the optimized waveform(s), minimizing current required for activation or block or a subthreshold response with the optimized waveform(s), minimizing charge required for activation or block with the optimized waveform(s), maximizing therapeutic benefit produced by application of the optimized waveform(s), minimizing adverse effects produced by application of the optimized waveform(s), maximizing selectivity of activation or block between neuron types by application of the optimized waveform(s), maximizing selectivity of activation or block between neuron diameters by application of the optimized waveform(s), maximizing selectivity of activation or block between neuron locations by application of the optimized waveform(s), measures related to the complexity of the electrode geometry or stimulation waveform shape, or any combination thereof.
In some embodiments, the one or more stimulation criteria comprise an electrode geometric parameter comprising at least one of: inter-contact spacing of contacts, number of contacts, shape of contacts, size of contacts, shape of insulation, size of insulation, placement of contacts relative to the insulation, and any combination thereof.
In some embodiments, the one or more stimulation criteria comprise an electrode placement parameter comprising at least one of: electrode implantation depth, electrode displacement from an anatomical landmark, rotation of electrode, and any combination thereof.
In some embodiments, the one or more stimulation criteria comprise a stimulation parameter comprising at least one of: amplitude, pulse width, frequency, waveform shape, delays, and any combination thereof.
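To make the listed stimulation parameters concrete, the sketch below samples a charge-balanced biphasic pulse train from an amplitude, pulse width, frequency, and delay. The function and its conventions (cathodic phase first, equal-duration anodic recharge phase) are illustrative assumptions, not a specification from the disclosure.

```python
def biphasic_pulse_train(amplitude_ma, pulse_width_ms, frequency_hz,
                         delay_ms=0.0, duration_ms=10.0, dt_ms=0.01):
    # Sample a charge-balanced biphasic train: a cathodic phase followed
    # by an equal-and-opposite anodic phase, repeated at the given rate.
    n = int(round(duration_ms / dt_ms))
    period_ms = 1000.0 / frequency_hz
    samples = []
    for i in range(n):
        t = i * dt_ms - delay_ms
        phase = t % period_ms if t >= 0 else -1.0
        if 0 <= phase < pulse_width_ms:
            samples.append(-amplitude_ma)        # cathodic phase
        elif pulse_width_ms <= phase < 2 * pulse_width_ms:
            samples.append(amplitude_ma)         # anodic (recharge) phase
        else:
            samples.append(0.0)                  # interpulse interval
    return samples

train = biphasic_pulse_train(1.0, 0.2, 200.0)
# Charge balance: the integral of the waveform is ~0.
print(sum(train) * 0.01)
```

Each listed parameter maps to one argument, so a parameter vector produced by any of the optimizers above can be rendered directly into a deliverable waveform.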
Embodiments of the present disclosure include a method of administering neuromodulation therapy to a subject, the method comprising: programming a pulse generator to deliver a pattern of electrical stimulation identified using the method for optimizing neuronal response; and delivering the pattern of electrical stimulation to the subject to generate a response in at least one target neuron.
The systems and methods described herein can be implemented in hardware, software, firmware, or combinations of hardware, software and/or firmware. In some examples, the systems and methods described in this specification may be implemented using a non-transitory computer readable medium storing computer executable instructions that when executed by one or more processors of a computer cause the computer to perform operations. Computer readable media suitable for implementing the systems and methods described in this specification include non-transitory computer-readable media, such as disk memory devices, chip memory devices, programmable logic devices, random access memory (RAM), read only memory (ROM), optical read/write memory, cache memory, magnetic read/write memory, flash memory, and application-specific integrated circuits. In addition, a computer readable medium that implements a system or method described in this specification may be located on a single device or computing platform or may be distributed across multiple devices or computing platforms.
One skilled in the art will readily appreciate that the present disclosure is well adapted to carry out the objects and obtain the ends and advantages mentioned, as well as those inherent therein. The methods and systems described herein are presently representative of preferred embodiments, are exemplary, and are not intended as limitations on the scope of the present disclosure. Changes therein and other uses encompassed within the spirit of the present disclosure, as defined by the scope of the claims, will occur to those skilled in the art.
It will be readily apparent to those skilled in the art that other suitable modifications and adaptations of the methods of the present disclosure described herein are readily applicable and appreciable, and may be made using suitable equivalents without departing from the scope of the present disclosure or the aspects and embodiments disclosed herein. Having now described the present disclosure in detail, the same will be more clearly understood by reference to the following examples, which are intended only to illustrate some aspects and embodiments of the disclosure, and should not be viewed as limiting the scope of the disclosure.
The present disclosure has multiple aspects, illustrated by the following non-limiting examples.
With reference to the figures, threshold errors of approximately 15% were observed in the largest-diameter fibers only. For nearly all diameters and waveforms, absolute threshold error was below 5%. It was found that fascicle position had relatively little effect on threshold error. Sawtooth (
Experiments were also conducted to assess the selectivity performance using the gradient-free approach and the learned surrogate model (3-layer recurrent neural network trained to predict m, h, p, and s gating variables in the MRG axon model) in novel pig and human vagus nerve morphologies (
In general, where the predicted activation differs, the gating variable surrogate predicts a higher activation threshold.
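A structural sketch of such a recurrent surrogate is shown below with untrained, randomly initialized weights; a single recurrent layer is shown for brevity (the surrogate described above stacks three), and all layer sizes and names are illustrative assumptions.

```python
import math
import random

def make_rnn(n_in, n_hidden, n_out, seed=0):
    # Random (untrained) weights; a real surrogate would be trained on
    # biophysical simulations of the axon model.
    rng = random.Random(seed)
    w = lambda r, c: [[rng.gauss(0, 1 / math.sqrt(c)) for _ in range(c)]
                      for _ in range(r)]
    return {"wx": w(n_hidden, n_in), "wh": w(n_hidden, n_hidden),
            "wo": w(n_out, n_hidden)}

def matvec(m, v):
    return [sum(mi * vi for mi, vi in zip(row, v)) for row in m]

def rnn_predict(params, potentials):
    # Elman-style recurrence over the extracellular potential sequence.
    h = [0.0] * len(params["wh"])
    outputs = []
    for v_e in potentials:                     # one potential per time step
        pre = matvec(params["wx"], [v_e])
        rec = matvec(params["wh"], h)
        h = [math.tanh(a + b) for a, b in zip(pre, rec)]
        # A sigmoid keeps each predicted gating variable (e.g. m, h, p, s)
        # in [0, 1], matching the range of Hodgkin-Huxley-style gates.
        out = [1 / (1 + math.exp(-z)) for z in matvec(params["wo"], h)]
        outputs.append(out)
    return outputs

params = make_rnn(n_in=1, n_hidden=16, n_out=4)
gates = rnn_predict(params, potentials=[0.0, -1.2, -3.5, -1.0, 0.0])
print(len(gates), len(gates[0]))
```

The input sequence plays the role of the electric field time course at a node, and the four outputs correspond to the four predicted gating variables per time step.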
Experiments were also conducted to assess gradient descent in a selectivity task where the waveforms have not been specified; the results of these experiments are provided in
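A heavily simplified sketch of this free-waveform setting: gradient descent on the waveform samples themselves, with a hypothetical objective combining stimulation energy, a charge-imbalance penalty, and a toy activation-drive constraint (all stand-ins for a trained surrogate, not the disclosure's actual objective).

```python
import random

def waveform_cost(w, dt=0.05):
    # Hypothetical surrogate objective over free-form waveform samples:
    # stimulation energy, a charge-imbalance penalty, and a penalty for
    # falling short of a toy "activation drive" (peak absolute amplitude).
    energy = sum(x * x for x in w) * dt
    charge = sum(w) * dt
    drive = max(abs(x) for x in w)
    shortfall = max(0.0, 1.0 - drive)
    return energy + 10.0 * charge ** 2 + 10.0 * shortfall

def optimize_waveform(n=20, lr=0.05, steps=400, eps=1e-5, seed=1):
    rng = random.Random(seed)
    w = [0.1 * rng.uniform(-1.0, 1.0) for _ in range(n)]
    for _ in range(steps):
        grad = []
        for i in range(n):
            # Finite differences stand in for automatic differentiation
            # through a trained surrogate model.
            wp = list(w); wp[i] += eps
            wm = list(w); wm[i] -= eps
            grad.append((waveform_cost(wp) - waveform_cost(wm)) / (2 * eps))
        w = [wi - lr * gi for wi, gi in zip(w, grad)]
    return w

w = optimize_waveform()
print(max(abs(x) for x in w), sum(w) * 0.05)
```

Because no waveform shape is imposed in advance, the optimizer is free to discover one: here it concentrates drive in a single sample while the remaining samples absorb charge, keeping net charge near zero.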
It is understood that the foregoing detailed description and accompanying examples are merely illustrative and are not to be taken as limitations upon the scope of the disclosure, which is defined solely by the appended claims and their equivalents. Various changes and modifications to the disclosed embodiments will be apparent to those skilled in the art.
This application claims priority to and the benefit of U.S. Provisional Pat. Application No. 63/318,491 filed Mar. 10, 2022, which is incorporated herein by reference in its entirety for all purposes.
This invention was made with Government support under Federal Grant No. OT2-OD025340 awarded by the National Institutes of Health. The Federal Government has certain rights in the invention.