Ecological Momentary Assessment (EMA) data provide a rich context for diagnosis and for tracking the progression or remediation of disease during health care interventions. EMA involves frequent data collection between clinical appointments and in non-clinical settings and, when implemented on mobile systems, provides a platform for real-time delivery of behavioral therapy and healthcare management. Existing EMA approaches are limited because: (1) They depend on pre-selected tests and conditions that are not optimized to the patient's behavioral or functional level or to changes in the patient's behavior or function during treatment. Consequently, patients perform the same test each time, many test items may be redundant, and the dynamic range of each patient's performance may be sampled inefficiently. (2) The testing schedule is pre-determined and does not update in response to changes or patterns detected in the data. Consequently, conservative testing schedules are selected to sample as much data as the patient can sustain, which may place unnecessary burden on patients and may reduce the quality of the data.
The disclosed methods and systems address the foregoing problems and can have the precision of clinical and research grade apparatuses while, for example, operating on mobile testing equipment. One example embodiment of the invention is a method of determining an optimal schedule for obtaining assessments of a physical subject. According to the example method, a set of assessments of a measurable biological or behavioral quantity of the physical subject is obtained from a data source over a particular time period and stored in memory. A processor in communication with the memory determines a first estimate of values of the measurable quantity of the physical subject over the particular time period based on the set of assessments, and stores the first estimate of values in the memory. For each time point during the particular time period that it is possible to obtain an assessment of the physical subject, the processor determines a maximum likelihood estimate for the measurable quantity at that time point, and stores the set of maximum likelihood estimates in the memory. The processor determines a second estimate of values of the measurable quantity of the physical subject over the particular time period based on the set of assessments and the set of maximum likelihood estimates, and stores the second estimate of values in the memory. The processor compares the first estimate of values with the second estimate of values as a function of time to obtain a set of divergences for the time points during the particular time period that it is possible to obtain an assessment of the physical subject, and stores the set of divergences in the memory. The processor determines at least one next time to obtain an assessment of the physical subject based on a maximum value of the set of divergences. When determining the first estimate of values and the second estimate of values, the processor may calculate a Gaussian Process Regression, and when obtaining the set of divergences, the processor may calculate a Kullback-Leibler Divergence.
The method can further include obtaining a subsequent assessment of the measurable biological or behavioral quantity of the physical subject at the determined at least one next time, adding the subsequent assessment to the set of assessments in the memory, and repeating the determinations of the estimates of values, maximum likelihood estimates, divergences, and at least one next time to obtain an assessment of the physical subject.
In some embodiments, the particular time period is based on knowledge of the physical subject, and the set of assessments of the physical subject are distributed over the particular time period based on knowledge of the physical subject. In some embodiments, the data source is an existing data set including information about the physical subject, and obtaining the set of assessments includes mining the set of assessments from the existing data set.
In many embodiments, the data source can include a sensor in communication with the physical subject and configured to measure the biological or behavioral quantity of the physical subject. The method can also include transmitting the determined next time to testing equipment in communication with the physical subject, or causing testing equipment to obtain a subsequent assessment of the measurable biological or behavioral quantity of the physical subject at the determined at least one next time.
Another example embodiment of the invention is a system for determining an optimal schedule for obtaining assessments of a physical subject. The system includes memory, a data source, a hardware processor in communication with the memory and the data source, and a control module in communication with the processor. The hardware processor performs a predefined set of operations in response to receiving a corresponding instruction selected from a predefined native instruction set of codes. The control module includes (i) a first set of machine codes selected from the native instruction set for causing the hardware processor to obtain, from the data source, over a particular time period and store, in the memory, a set of measurable biological or behavioral quantity assessments of the physical subject, (ii) a second set of machine codes selected from the native instruction set for causing the hardware processor to determine and store, in the memory, a first estimate of values of the measurable quantity of the physical subject over the particular time period based on the set of assessments, (iii) a third set of machine codes selected from the native instruction set for causing the hardware processor, for each time point during the particular time period that it is possible to obtain an assessment of the physical subject, to determine and store, in the memory, a maximum likelihood estimate for the measurable quantity at that time point, resulting in a set of maximum likelihood estimates, (iv) a fourth set of machine codes selected from the native instruction set for causing the hardware processor to determine and store, in the memory, a second estimate of values of the measurable quantity of the physical subject over the particular time period based on the set of assessments and the set of maximum likelihood estimates, (v) a fifth set of machine codes selected from the native instruction set for causing the hardware processor to compare the first estimate of values with the second estimate of values as a function of time to obtain and store, in the memory, a set of divergences for the time points during the particular time period that it is possible to obtain an assessment of the physical subject, and (vi) a sixth set of machine codes selected from the native instruction set for causing the hardware processor to determine and store, in the memory, at least one next time to obtain an assessment of the physical subject based on a maximum value of the set of divergences.
In many embodiments, the control module includes (i) a seventh set of machine codes selected from the native instruction set for causing the hardware processor to obtain a subsequent assessment of the measurable biological or behavioral quantity of the physical subject at the determined at least one next time, (ii) an eighth set of machine codes selected from the native instruction set for causing the hardware processor to add the subsequent assessment to the set of assessments in the memory, and (iii) a ninth set of machine codes selected from the native instruction set for causing the hardware processor to re-execute the second, third, fourth, fifth, and sixth set of machine codes.
The foregoing will be apparent from the following more particular description of example embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments of the present invention.
A description of example embodiments of the invention follows.
The disclosed methods and systems provide adaptive test administration with respect to a physical subject. The disclosed methods and systems can be used to efficiently collect clinical assessment data from a patient, for example, can use machine learning approaches to detect unknown patterns in the assessment data, and can modify a testing schedule in real-time based on the analyses. The disclosed techniques can be used to (1) control the timing of each assessment in order to collect data at times and under conditions that are most informative about, for example, the health status of a patient, and (2) modify the content of a test/intervention to optimally sample data at the most informative levels with respect to, for example, an individual patient. Such optimized testing schedules can be informed by known prior information about the time course of functional variation, which may occur over seconds (e.g., between blinks in the context of dry eye disease), days (e.g., in diabetic retinopathy), weeks (e.g., in optic neuritis in multiple sclerosis), or longer (e.g., in the aging eye). It should be appreciated that while the above provides examples of timing in relation to the testing and treatment of eye diseases, the methods and systems can be applied to a variety of assessment situations, such as, for example, medical insurance, healthcare provision, personalized health and fitness tracking, and pharmaceutical applications, where they can be used for, e.g., individualized/personalized medicine, remote monitoring, and clinical trial management by determining optimal test schedules for detecting presence, progression, or remediation of disease, disease symptoms, and treatment side effects as well as the scheduling of therapeutic interventions and the possible cessation of treatment.
The disclosed adaptive methods and systems significantly reduce the frequency of data collection without loss in accuracy or precision and can even increase test reliability by reducing redundancy and preventing frustration and fatigue in a physical subject (e.g., a medical patient). Additionally, detecting underlying patterns in the data allows estimates of the variability of each sample to be reduced. If these patterns are not accounted for, variability arising from fluctuations in the underlying unknown function is likely to be attributed to measurement error and behavioral noise, which can lead to significant increases in the sample sizes required to power a research study. The disclosed approach, therefore, leads to significant reductions in the size and cost of clinical trials as well as the detection of informative patterns of symptom change.
The disclosed methods and systems include direct applications of an optimal sampling technique to clinical assessment, and allow for accurate estimates of the variance of an unknown function at a single time step (i.e., measurement noise) independently of the variance due to fluctuations in the function over time by, for example, using an information theoretic approach to determining an optimal subsequent time point at which to acquire a next assessment.
Some example advantages of the disclosed methods and systems include the ability to estimate an unknown, underlying function using a small number of free parameters that remain constant regardless of the number of data points being estimated; thus, substantially reducing the error of the function estimate. Because estimates of the measurement error are achieved with a minimum of sampled assessments, and with great accuracy, the statistical power of clinical trials can be greatly increased. The disclosed methods and systems allow for accurate estimates of complex, irregular functions without the need for fitting a large number of local smoothing parameters. Local smoothing can be achieved via Gaussian Process Regression (GPR), for example, using only generic assumptions about the covariance of the underlying function.
The disclosed methods and systems generate fast estimates of an unknown time-series function using a minimum number of observations in the form of, for example, clinical measurements. The unknown function is assumed to be periodic and an estimate of the time period of a single cycle of the function can be derived from prior information about the quantities being measured or deduced from first principles. The estimation process begins by obtaining a small number of clinical measurements at equal time intervals. Nine to eleven measurements, for example, can be adequate for a fast, accurate estimation of 100 time points.
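For illustration, the following is a minimal Python sketch (using NumPy; the grid size and sample count are assumptions matching the example above) of choosing equally spaced initial measurement times:

```python
import numpy as np

def initial_sample_times(n_time_points: int = 101, n_samples: int = 11) -> np.ndarray:
    """Equally spaced indices spanning a grid of n_time_points times."""
    return np.linspace(0, n_time_points - 1, n_samples).astype(int)

print(initial_sample_times())  # [  0  10  20  30  40  50  60  70  80  90 100]
```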
GPR can be performed on the initial measurements, yielding an estimate of the unknown function for all time points in the period, the variance of the function estimate, and the measurement noise. The variance of the estimate can be used to derive the likelihood of the estimate across all time points, i.e., the probability of observing all possible values of the underlying function at a given time.
An optimal subsequent time point can be selected at which to acquire a subsequent measurement by examining each time point to be estimated in turn and making a hypothesis as to the value of a measurement acquired at that point. For each time point, the hypothesized value of an observation at that point can be the current estimate at that point. This is the maximum likelihood solution: the expectation over all possible values that could be observed at that time. GPR can then be performed again using the current measurements plus the additional hypothesized one yielding a new, hypothetical function estimate and its associated likelihood function. An information theoretic technique, the Kullback-Leibler Divergence, can be used to compare the likelihood of the current estimate to that of the hypothetical estimate. The time point at which the hypothetical likelihood differs most from that of the current estimate is chosen as the next time at which to acquire a measurement.
Using this technique, the underlying function can be sampled at time points where change in the function over time is maximal, and which thus give a clinician the most information about the form of the underlying function, while avoiding locations where the function is not changing and which thus offer little information to the clinician.
Embodiments of the present invention utilize an analytical technique applied to time series data. The disclosed methods and systems generate a minimum-error estimate of the unknown, underlying time series function while simultaneously minimizing the number of samples required to optimize the estimate. The methods and systems operate either in real time or by mining an existing data set. Real-time estimates can be generated for functions that are known to be cyclical and for which the number of subsequent cycles available for sampling is unrestricted. Following each sample, the methods/systems generate a function estimate and then choose a next best time at which to acquire a sample. When an existing data set is used, estimates can be generated for any function with data defined over any finite time interval. When using an existing data set, the methods/systems generate an optimized sampling schedule that can then be used to sample from a new data source in real time.
The methods and systems can also generate simultaneous estimates of multiple time series functions defined over the same time span. A single optimized sampling schedule is derived which can be used for simultaneous sampling of each function. The methods and systems can also generate schedules for sampling at multiple successive time points. For example, if there is a limited number of, say, n samples that can be taken, the methods and systems determine the optimal subsequent n time points at which to obtain samples.
General Description
According to example embodiments of the invention, methods and systems take successive samples from an underlying function and, following each sample, generate an estimate of the function using, for example, Gaussian Process Regression (GPR). The optimal time at which to next sample the function is derived by comparing the current estimate of the function to hypothetical estimates made at future times. The maximum likelihood predicted value of a sample taken at a given time is the current GPR estimate for that time. This predicted sample is added to the current set of actual samples, and a new, hypothetical GPR estimate is derived. The hypothetical estimate is compared to the current estimate by, for example, determining their Kullback-Leibler divergence (KLD). The time point at which the KLD is maximized is selected as the next time at which to obtain a sample. The time of maximum divergence is the time that brings about the greatest change in the estimate of the underlying function and thus yields the most information regarding its form.
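This loop can be summarized in code. The following Python sketch assumes helper functions fit_gpr (returning the GPR mean and variance over the full time grid) and kl_divergence (returning the pointwise divergence between two Gaussian estimates); both names are illustrative, not part of the disclosure:

```python
import numpy as np

def adaptive_sampling_loop(sample_fn, t_grid, initial_times, n_iterations,
                           fit_gpr, kl_divergence):
    """Sample -> estimate -> hypothesize -> choose the next sample time."""
    times = list(initial_times)
    values = [sample_fn(t) for t in times]
    for _ in range(n_iterations):
        mean, var = fit_gpr(np.array(times), np.array(values))  # current estimate
        divergences = np.empty(len(t_grid))
        for c, t_c in enumerate(t_grid):
            # Maximum likelihood prediction: a sample at t_c equals the estimate
            hyp_mean, hyp_var = fit_gpr(np.append(times, t_c),
                                        np.append(values, mean[c]))
            divergences[c] = np.sum(kl_divergence(hyp_mean, hyp_var, mean, var))
        t_opt = t_grid[int(np.argmax(divergences))]   # time of maximum KLD
        times.append(t_opt)
        values.append(sample_fn(t_opt))
    return np.array(times), np.array(values)
```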
Let F_i(t) equal one of i unknown time series functions to be estimated, defined over a finite time span, T = {t_0, …, t_N}. Let y_i(t) equal an observation of the function made at time, t, corrupted by normally distributed noise with standard deviation, σ_y,i:

y_i(t) = F_i(t) + ε

ε ~ N(0, σ_y,i)
The example methods and systems return an estimate, G_i(t), of F_i(t) and an estimate, σ̂_y,i, of σ_y,i. They also return a schedule of successive times from T that optimizes the estimates with the minimum number of observations.
The following describes steps involved in calculating the function estimates. These steps are carried out independently for each F_i(t), and so the subscript, i, is omitted for notational simplicity.
Form Prior Distribution Over Noise Estimates
To derive σ̂_y, first form a prior distribution over σ_y. Assume that the maximum possible value of σ_y equals the standard deviation, σ, of F(t) measured over the timespan, T, and make a reasonable estimate, σ̂, of it from existing data. Assume the prior distribution over σ_y to be normal, defined over a grid, x, of possible values. The log prior is given by:

log P(σ_y = x) = −(x − μ_prior)² / (2σ_prior²) + const

where

x = {0, …, σ̂}

μ_prior = σ̂/2

σ_prior = σ̂/12
The prior distribution implicitly assumes that extreme values of σ_y are unlikely. Low values of σ_y imply that most observed variability comes from the underlying function, which has low covariance between the values at different time points. High values of σ_y imply that the function has high covariance, e.g., it is nearly a line with slope of zero, and most observed variability arises from measurement error. The initial prior distribution assumes that the true value lies somewhere between the extremes.
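A minimal Python sketch of this prior (the grid resolution is an assumption; the normal form and its parameters follow the definitions above):

```python
import numpy as np

def log_noise_prior(sigma_hat: float, n_grid: int = 100):
    """Unnormalized log of a normal prior over candidate noise SDs.

    x spans [0, sigma_hat]; the prior is centered at sigma_hat/2 with
    SD sigma_hat/12, making extreme noise values unlikely."""
    x = np.linspace(0.0, sigma_hat, n_grid)
    mu_prior = sigma_hat / 2.0
    sigma_prior = sigma_hat / 12.0
    log_prior = -0.5 * ((x - mu_prior) / sigma_prior) ** 2
    return x, log_prior
```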
Acquire Initial Samples
The example methods and systems begin by obtaining a small number of samples from F(t) at a set of equally-spaced time intervals. Let Y = {y_{t_1}, …, y_{t_{N_s}}} equal the set of samples acquired at the sampled time points, t.
Derive the Covariance Kernel
GPR assumes that F(t) is drawn from a 0-mean, N-dimensional normal distribution of functions such that:
F(t) ~ N(0_N, Σ)

where 0_N is a zero vector of length N, and Σ is the N×N matrix giving the variance and covariance of observations made at each time point. The initial samples are used to derive a model of Σ (defined at the currently sampled time points in t) in the form of the covariance kernel, K. The kernel takes the form of a squared-exponential function augmented by the current estimate of the additive noise, σ̂_y:

K = α · exp(−X(t,t)² / (2β²)) + σ̂_y² · I

where X(t,t) is the matrix of distances between the current sampled time points, N_s is the number of current samples, and I is the N_s×N_s identity matrix.
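A sketch of this kernel computation in Python (NumPy), under the squared-exponential form with additive noise described above:

```python
import numpy as np

def covariance_kernel(t: np.ndarray, alpha: float, beta: float,
                      sigma_y: float) -> np.ndarray:
    """Squared-exponential kernel over the sampled times t, with additive noise.

    alpha is the variance, beta the covariance 'bandwidth', and sigma_y the
    current estimate of the measurement noise SD."""
    X = t[:, None] - t[None, :]                # pairwise distances X(t, t)
    K = alpha * np.exp(-X ** 2 / (2.0 * beta ** 2))
    return K + sigma_y ** 2 * np.eye(len(t))   # noise on the diagonal
```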
Derive the maximum a posteriori estimate of additive noise, σ̂_y, by first iterating over each of its possible values in x (effectively treating it as a constant) and then fitting α (the variance) and β (the covariance 'bandwidth') by maximum likelihood for each iteration. The log-likelihood of the current samples given the kernel parameters is given by:

L(s | α, β, σ_y) = −(1/2)·sᵀK⁻¹s − (1/2)·log|K| − (N_s/2)·log(2π)

where s is an N_s×1 vector containing all current sampled values in Y, and |…| indicates the determinant. Next, calculate the log posterior probability of each noise estimate given the samples and the corresponding best-fitting parameter values, α and β:
P(σ_y = x | s, α, β) = L(s | α, β, σ_y = x) + P(σ_y = x)
The current noise estimate is given by:

σ̂_y = argmax_x P(σ_y = x | s, α, β)
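A sketch of this grid search in Python (SciPy's general-purpose optimizer stands in for the unspecified maximum-likelihood fit; the starting values for α and β are assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def sq_exp_kernel(t, alpha, beta, sigma_y):
    X = t[:, None] - t[None, :]
    return alpha * np.exp(-X**2 / (2 * beta**2)) + sigma_y**2 * np.eye(len(t))

def log_likelihood(s, t, alpha, beta, sigma_y):
    """Log-likelihood of samples s under the zero-mean GP with kernel K."""
    K = sq_exp_kernel(t, alpha, beta, sigma_y)
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (s @ np.linalg.solve(K, s) + logdet + len(s) * np.log(2 * np.pi))

def map_noise_estimate(s, t, x_grid, log_prior):
    """For each candidate noise SD, fit (alpha, beta) by maximum likelihood,
    then pick the SD maximizing log-likelihood + log-prior."""
    log_post = np.full(len(x_grid), -np.inf)
    for i, x in enumerate(x_grid):
        # Optimize in log space to keep alpha and beta positive
        nll = lambda p: -log_likelihood(s, t, np.exp(p[0]), np.exp(p[1]), x)
        res = minimize(nll, x0=np.log([np.var(s) + 1e-6, np.ptp(t) / 4.0]))
        log_post[i] = -res.fun + log_prior[i]
    return x_grid[int(np.argmax(log_post))]
```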
Derive the Regression Estimate
Next, a GPR estimate is determined for F(t). For each time point of interest, t_n (n = {0, …, N}), calculate the distance, x*, of t_n from all currently sampled time points in t:

x* = {t_0 − t_n, …, t_{N_s} − t_n}

and derive the estimated covariance, k*, given the current parameter estimates, α and β:

k* = α · exp(−x*² / (2β²))
The GPR estimate, G(t_n), of F(t_n) is then:

G(t_n) = k* K⁻¹ s

and the variance of the estimate is given by:

v_{t_n} = α − k* K⁻¹ k*ᵀ
The full function estimate, G(t), is derived by iterating over each time point, n, in the range.
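A sketch of the regression step in Python; the predictive variance v_{t_n} = α − k*K⁻¹k*ᵀ is the standard GPR form, assumed here to be consistent with the definitions above:

```python
import numpy as np

def gpr_estimate(t_sampled, s, t_query, alpha, beta, sigma_y):
    """GPR mean G(t_n) = k* K^-1 s and variance v_{t_n} at each query time."""
    X = t_sampled[:, None] - t_sampled[None, :]
    K = alpha * np.exp(-X**2 / (2 * beta**2)) + sigma_y**2 * np.eye(len(t_sampled))
    K_inv_s = np.linalg.solve(K, s)
    mean = np.empty(len(t_query))
    var = np.empty(len(t_query))
    for n, t_n in enumerate(t_query):
        x_star = t_sampled - t_n                       # distances x* to samples
        k_star = alpha * np.exp(-x_star**2 / (2 * beta**2))
        mean[n] = k_star @ K_inv_s
        var[n] = alpha - k_star @ np.linalg.solve(K, k_star)
    return mean, var
```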
Determine the Optimal Next Sample Time
Next, determine the optimal time or set of times, t_opt, at which to get the next samples. Let T* = {t_0, …, t_C} equal the set of all groups of k candidate time points to be considered for sampling. When only a single sample time is needed, k = 1, and T* = [1, 2, …, N]. Under some circumstances, it may be desirable to determine multiple times at which to get samples. When, for example, there is a limit on the number of samples that can be taken, set k equal to the total number of possible samples and then determine the set of k optimal sampling times. Having collected a new sample at the first time in the sequence, a new schedule of size k−1 can be derived, and so forth. When it is desirable to determine the two best times at which to obtain samples during the time span, N, for example, k = 2, and T* = [(1 2), (1 3), …, (1 N), (2 3), …, (N−1 N)].
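Enumerating the candidate sets T* is a straightforward combinations problem; a sketch in Python:

```python
from itertools import combinations

def candidate_sets(n_time_points: int, k: int):
    """All groups of k time points to consider for sampling; k=1 yields
    single time points, k=2 yields pairs (1 2), (1 3), ..., (N-1 N)."""
    return list(combinations(range(1, n_time_points + 1), k))

print(candidate_sets(4, 1))  # [(1,), (2,), (3,), (4,)]
print(candidate_sets(4, 2))  # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```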
Iteratively select each set of time points, t_c (for c = {0, …, C}), in T* and derive the set, ŷ_i(t_c), of maximum likelihood estimates of sample values from F_i(t_c). The maximum likelihood sample values are simply the current GPR estimates at those times:

ŷ_i(t_c) = G_i(t_c)

The values in ŷ_i(t_c) are added to the set of current samples. Let ŝ_i equal the (N_s+k)×1 vector giving the aggregate of the current samples, s_i, from F_i(t), and the hypothetical samples in ŷ_i(t_c). If, in s_i, there is already a sample at any of the time points in t_c, the arithmetic mean of the samples is taken. Then a new, hypothetical GPR estimate, Ĝ_i(t), is derived as above, but replacing s_i with ŝ_i.
Let ξ(y_i | G_i(t), v_{t,i}) equal the likelihood of the current GPR estimate at time t:

ξ(y_i | G_i(t), v_{t,i}) = (1/√(2π·v_{t,i})) · exp(−(y_i − G_i(t))² / (2·v_{t,i}))

where y_i is an m×1 vector with values defined over some finite interval in the range of F_i(t). And let ξ(y_i | Ĝ_i(t), v̂_{t,i}) be the similar likelihood function for Ĝ_i(t).

Calculate the Kullback-Leibler divergence of the current and the hypothetical GPR estimates:

D_{t_c} = Σ_t Σ_y ξ(y_i | Ĝ_i(t), v̂_{t,i}) · log[ ξ(y_i | Ĝ_i(t), v̂_{t,i}) / ξ(y_i | G_i(t), v_{t,i}) ]

The optimal times at which to get the next samples are given by:

t_opt = argmax_{t_c} D_{t_c}
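A sketch of the divergence computation in Python. The likelihoods are evaluated numerically on a grid of y values and normalized per time point; the y grid, the small constant guarding the logarithm, and the divergence direction (hypothetical relative to current) are implementation assumptions consistent with the definitions above:

```python
import numpy as np

def likelihood_grid(y_grid, mean, var):
    """xi(y | G(t), v_t): normal likelihood on a y grid, one column per time."""
    pdf = np.exp(-(y_grid[:, None] - mean[None, :]) ** 2 / (2.0 * var[None, :]))
    return pdf / pdf.sum(axis=0)          # normalize over y at each time point

def next_sample_index(y_grid, mean, var, hyp_means, hyp_vars):
    """KLD of each hypothetical estimate from the current one, summed over
    time; returns the index of the candidate with maximum divergence."""
    p = likelihood_grid(y_grid, mean, var)
    eps = 1e-12
    divergences = np.empty(len(hyp_means))
    for c in range(len(hyp_means)):
        q = likelihood_grid(y_grid, hyp_means[c], hyp_vars[c])
        divergences[c] = np.sum(q * np.log((q + eps) / (p + eps)))
    return int(np.argmax(divergences))
```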
Acquire the Next Sample
The next iteration begins by getting a sample at t_opt, adding the sample to Y, and repeating the process.
Update the Noise Prior
Each time a new sample is acquired and the covariance kernel, K, is recalculated, the noise prior, P(σ_y = x), is updated by replacing it with the posterior, P(σ_y = x | s, α, β).
Getting the Optimal Sampling Schedule Using Existing Data
The above describes the process for sampling and generating function estimates in real time. When existing data sets are used, example methods and systems can calculate the residual error of the function estimate made on each step with regard to the entire data set. The optimal number of samples is the number corresponding to the function estimate at which the error approaches a low asymptote. Sample times may be chosen in staggered order; the optimal schedule consists of the selected times arranged in temporal order.
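One way to pick that asymptote, sketched in Python (the stopping criterion — error improvement falling below a fraction of the initial error — is an assumption, as the passage does not specify one):

```python
import numpy as np

def optimal_schedule(errors_per_step, chosen_times, tol: float = 0.01):
    """Stop at the first step where the residual error stops improving by
    more than tol * initial error; return the chosen times in temporal order."""
    errors = np.asarray(errors_per_step, dtype=float)
    improvements = -np.diff(errors)
    flat = np.where(improvements < tol * errors[0])[0]
    n_opt = int(flat[0]) + 1 if flat.size else len(errors)
    return sorted(chosen_times[:n_opt])
```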
Example Iterations
The following is a step-by-step illustration of an example method of determining an optimal schedule for obtaining assessments of a physical subject, with reference to the accompanying figures.
(1) Measurements from the unknown function are taken at regular intervals over the time span of interest, as shown in the accompanying figure.
(2) A model of the variance and covariance of the measurements is derived in the form of the kernel, K, given by:

K = α · exp(−X(t,t)² / (2β²)) + σ̂_y² · I

where X(t, t) is a matrix containing the distances between the measured time points and α, β, and σ̂_y are parameters fit to the data by maximum likelihood. The resulting matrix, K, is shown in the accompanying figure.
(3) Next, calculate an estimate of the unknown function from the initial measurements, resulting in estimated values for every time point, t_n, for n = {1, 2, …, 100, 101}. To do so, first estimate the covariance of the measurements at each time point, t_n, with respect to the initial measurements using the parameters α and β derived in the previous step. The resulting row vector of estimated covariances is given by:

k* = α · exp(−x*² / (2β²))

where x* is a row vector giving the distance between the time point, t_n, and each of the current measurement times. A subset of the resulting vectors is shown in the accompanying figure.
(4) The estimate of the unknown function for each time point, t_n, is given by:

G(t_n) = k* K⁻¹ s

where s is a column vector containing the current set of measurements. The full estimate across all time points, G(t), derived from the initial measurements, is shown in the accompanying figure.
(5) Next, derive the next optimal time at which to get a measurement, as shown in the accompanying figure. For each candidate time point, t_c, hypothesize that a measurement acquired at that time equals the current estimate, G(t_c), add it to the current set of measurements, and derive a new, hypothetical estimate, Ĝ(t), as in steps 2-4.

Calculate the likelihood of the current estimate, G(t), as a function of t across a range of possible values of y(t):

ξ(y | G(t), v_t) = (1/√(2π·v_t)) · exp(−(y − G(t))² / (2·v_t))

where y is an m×1 vector with values defined over some finite interval in the range of F(t), and v_t is the variance of the estimate at time, t. Similarly derive the likelihood function of the hypothetical estimate: ξ(y | Ĝ(t), v̂_t).
The likelihood distributions are used to calculate the Kullback-Leibler divergence, D_{t_c}, of the hypothetical estimate from the current estimate:

D_{t_c} = Σ_t Σ_y ξ(y | Ĝ(t), v̂_t) · log[ ξ(y | Ĝ(t), v̂_t) / ξ(y | G(t), v_t) ]
This process is illustrated in the accompanying figure.
The process can be repeated, calculating the KLD for all candidate time points, t_c. The optimal time at which to get the next measurement is the time at which the KLD is maximized. In this example, the maximum value is at t_c = 66, as shown in the accompanying figure.
(6) Next, obtain a measurement at t = t_opt = 66 and repeat the process from step 2. The new measurement and updated function estimate (from step 4) are shown in the accompanying figure.
Example Applications of Methods and Systems
The following are three example applications of embodiments of the invention. In each case, estimates of an unknown function and an optimized sampling schedule are determined. The first example applies the method to simulated data from a known cyclical function, the second to real blood pressure measurements collected from a single subject, and the third to an existing clinical data set that is mined to derive a sampling schedule for testing new subjects.
The example application of Example 1 determines estimates of an underlying cyclical time series function of the form:

y(t) = α·sin(2πt/N)

t = {0, 1, …, N}

where N is the length of one cycle of y(t). The underlying function is unknown. Noisy samples, s(t), are obtained from the function, where the samples are normally distributed with standard deviation, σ:

s(t) ~ N(y(t), σ)
Samples are obtained from multiple, successive cycles of the function, but for simplicity, all samples are depicted within a single cycle. Thus, if, for example, a sample at t = 50 is selected followed by a sample at t = 30, it is understood that the latter sample occurs in the subsequent cycle.
For this simulation, α = 3, N = 101, and σ = 0.5. The example application begins by collecting eleven equally spaced samples at t = {1, 11, …, 101}. Optimal samples are then selected for an additional 100 iterations. The results are shown in the accompanying figure.
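A sketch of the Example 1 setup in Python (the random seed is arbitrary; sin is used for the cyclical function, per the form given above):

```python
import numpy as np

rng = np.random.default_rng(0)
a, N, sigma = 3.0, 101, 0.5                  # alpha, cycle length, noise SD
t_grid = np.arange(0, N + 1)                 # t = 0, 1, ..., N
y = a * np.sin(2 * np.pi * t_grid / N)       # underlying cyclical function

def sample(t_idx: int) -> float:
    """Noisy observation s(t) ~ N(y(t), sigma) from the current cycle."""
    return y[t_idx] + rng.normal(0.0, sigma)

initial_times = np.arange(1, N + 1, 10)      # eleven samples: t = 1, 11, ..., 101
initial_samples = np.array([sample(ti) for ti in initial_times])
```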
In Example 2, measurements of systolic and diastolic blood pressure for a single subject were collected every 30 minutes over a 24-hour period. The data were used to simulate the operation of the disclosed methods and systems. As above, samples were selected as if from successive cycles of the function, i.e., BP as a function of time. Only one data point per 30-minute time interval was selected. The example application begins by obtaining nine samples: every 3 hours from 0:00 to 21:00, and then one sample at 23:30. It then selects optimal sample times until all of the data have been sampled. The results are shown in the accompanying figure.
In Example 3, data were collected from twenty subjects who were receiving treatment for macular degeneration in the form of injections of anti-vascular endothelial growth factor (anti-VEGF). Following the injections, periodic tests were carried out (approximately once every two days over a 90-day period) to measure the effectiveness of the drug in improving the subjects' vision. For the purposes of Example 3, one of these tests is shown: the contrast sensitivity function (CSF), which measures the ability of a subject to reliably identify a stimulus (e.g., a letter on a standard vision chart) as a function of its contrast and spatial frequency (i.e., size). Sensitivity is quantified using a standard summary statistic, the area under the log CSF (AULCSF). Each subject's AULCSF is normalized with respect to his or her maximum measured value. All subjects' data are then combined, resulting in a set containing measurements at a rate of approximately one per day, as shown in the accompanying figure.
Example 3 demonstrates the ability of the disclosed methods and systems to mine an existing data set in order to derive an optimal sampling schedule that can be applied in testing new subjects. Similar to the previous examples, the method/system begins by obtaining ten samples at regular intervals (one every ten days). It then selects an additional 80 optimal samples. The results are shown in the accompanying figure.
In one embodiment, the processor routines 92 and data 94 are a computer program product (generally referenced 92), including a non-transitory computer-readable medium (e.g., a removable storage medium such as one or more DVD-ROM's, CD-ROM's, diskettes, tapes, etc.) that provides at least a portion of the software instructions for the invention system. The computer program product 92 can be installed by any suitable software installation procedure, as is well known in the art. In another embodiment, at least a portion of the software instructions may also be downloaded over a cable communication and/or wireless connection.
The teachings of all patents, published applications and references cited herein are incorporated by reference in their entirety.
While this invention has been particularly shown and described with references to example embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.
This application is the U.S. National Stage of International Application No. PCT/US2016/037880, filed Jun. 16, 2016, which designates the U.S., is published in English, and claims the benefit of U.S. Provisional Application No. 62/180,778, filed on Jun. 17, 2015. The entire teachings of the above applications are incorporated herein by reference.