The present disclosure relates generally to probabilistic spatiotemporal forecasting using machine learning techniques.
Spatiotemporal forecasting plays an important role in various real world systems, such as traffic control systems and wireless communication systems. For example, in traffic control systems, spatiotemporal forecasting may be used in intelligent traffic management applications to predict (e.g. forecast) future traffic speeds based on historical traffic speeds obtained by sensors located throughout a road network. An example of a road network 102 with speed sensors 104(1) to 104(N) (where 104(i) denotes a generic speed sensor) disposed at various locations on the roads of the road network 102 is shown in
Combining the spatial dependencies encoded in the structure of a graph with the temporal pattern information of the time series obtained at the nodes of the graph can be problematic. Recent research has resulted in multivariate prediction algorithms which effectively utilize the structure of a graph via various Graph Neural Networks (GNNs) to address this problem. In existing systems that perform spatiotemporal forecasting, graph convolution is combined with recurrent neural networks, temporal convolutions, and attention mechanisms to further encode the temporal correlation between adjacent time points in the time series. Such existing systems can generate fairly accurate point forecasts; however, they have a serious drawback in that they cannot gauge the uncertainty in their predictions (i.e. forecasts). Uncertainty estimation of the prediction (e.g. forecast) generated by systems that perform spatiotemporal forecasting is important because it indicates how confident the system can be about the prediction (forecast). When decisions are made based on forecasts, the availability of a confidence or a prediction interval can be vital. Accordingly, there is a need for a system that can provide forecasts and accurate confidence predictions for such forecasts.
According to a first example aspect of the present disclosure, there is provided a computer implemented method for probabilistic spatiotemporal forecasting. The computer implemented method includes acquiring a time series of observed states from a real-world system, each observed state corresponding to a respective time-step in the time series and including a set of data observations of the real-world system for the respective time-step. For each of a plurality of the time steps in the time series of observed states, the method includes: generating a hidden state for the time-step based on (i) the observed state for a prior time-step and (ii) an approximated posterior distribution generated for a hidden state for the prior time-step, and generating an approximated posterior distribution for the hidden state generated for the time-step based on (i) the observed state for the time-step and (ii) the hidden state generated for the time-step. The computer implemented method further includes generating a future time series of predicted states for the real-world system, each predicted state corresponding to a respective future time-step in the future time series. Generating the future time series of predicted states includes: (A) for a first future time step in the future time series: generating a hidden state for the first future time step based on (i) the observed state for a final time step in the time series of observed states and (ii) the approximated posterior distribution for the hidden state generated for the final time step in the time series of observed states, and generating a predicted state of the real-world system for the first future time step based on the hidden state generated for the first future time step; and (B) for each of a plurality of the future time steps following the first future time step in the future time series: generating a hidden state for the future time step based on (i) the predicted state of the real-world system generated for a prior future time step and (ii) the hidden state generated for the prior future time step, and generating a predicted state of the real-world system for the future time step based on the hidden state generated for the future time step.
In at least some applications, the use of an approximated posterior distribution alternated with hidden state predictions when encoding the time series of observed states can enable improved forecasting in complex, high dimensional settings, and also provide a confidence indication for final predictions.
According to some aspects of the computer implemented method, the computer implemented method includes controlling the real-world system to modify future data observations of the real-world system based on the future time series of predicted states for the real-world system.
According to one or more of the preceding aspects of the computer implemented method, the real-world system includes a road network and the set of data observations include traffic speed observations collected at a plurality of locations of the road network.
According to one or more of the preceding aspects of the computer implemented method, the computer implemented method includes controlling a signaling device in the road network based on the future time series of predicted states for the real-world system.
According to one or more of the preceding aspects of the computer implemented method, the computer implemented method includes forming a Monte Carlo approximation of a posterior distribution of the future time series of predicted states.
According to one or more of the preceding aspects of the computer implemented method, for each of the plurality of the time steps in the time series of observed states, generating the approximated posterior distribution for the hidden state generated for the time-step comprises using a particle flow algorithm to migrate particles of the hidden state to represent the posterior distribution.
According to one or more of the preceding aspects of the computer implemented method, for each of the plurality of the time steps in the time series of observed states and for each of the plurality of the future time steps, generating of the hidden states is performed using a trained recurrent neural network (RNN).
According to one or more of the preceding aspects of the computer implemented method, for each of the plurality of the future time steps, generating the predicted state of the real-world system for the future time step is performed using a trained fully connected neural network (FCNN).
According to one or more of the preceding aspects of the computer implemented method, the predicted state of the real-world system for a future time-step includes a set of predicted observations and a prediction interval for each of the predicted observations.
According to one or more of the preceding aspects of the computer implemented method, the set of data observations of the real-world system are measured using a respective set of observation sensing devices.
According to one or more of the preceding aspects of the computer implemented method, each time series of the observed states from the real-world system is represented as a respective node in a graph and relationships between the respective time series are represented as graph edges that collectively define a graph topology, wherein: for each of the plurality of the time steps in the time series of observed states, generating the hidden state for the time-step is also based on the graph topology; and for each of the plurality of the future time steps including the first future time step in the future time series, generating the hidden state for the future time step is also based on the graph topology.
In some aspects, the present disclosure provides a system for probabilistic spatiotemporal forecasting, the system comprising a processing system configured by instructions to cause the system to perform any of the aspects of the method described above.
In some aspects, the present disclosure provides a computer-readable medium storing instructions for execution by a processing system for probabilistic spatiotemporal forecasting. The instructions when executed cause the system to perform any of the aspects of the method described above.
Reference will now be made, by way of example, to the accompanying drawings which show example embodiments of the present application, and in which:
The same reference numerals may be used in different figures to denote similar components.
The present disclosure provides a method and system for probabilistic spatiotemporal forecasting with uncertainty estimation. The system and method of the present disclosure include a probabilistic method that approximates the posterior distribution of spatiotemporal forecasts. The method and system of the present disclosure provide samples from an approximate posterior distribution of a forecast.
Probabilistic spatiotemporal forecasting can be applied in many practical real world time-series prediction applications, including for example real-world dynamic systems that are related to intelligent traffic management, computational biology, finance, wireless networks and demand forecasting. The probabilistic spatiotemporal forecasting methods and systems described in this disclosure can be applied to different types of real-world dynamic systems. Examples will be illustrated in the context of intelligent traffic management, however the present disclosure is not limited to such systems.
Intelligent traffic management controller 101 includes a machine learning (ML) based forecasting model 110. Forecasting model 110 obtains real-world state-space time-series observations about the dynamic system 100, including for example traffic speed measurements from the set of speed sensors 104(1) to 104(N) included at known locations within the road network 102. The time-series observations from each speed sensor 104(i) can, for example, be received by intelligent traffic management controller 101 over communication network 106. Forecasting model 110 forecasts (i.e. predicts) a future time-series of state-spaces based on the observed time-series data. These predictions can be processed by intelligent traffic management controller 101 to make a traffic management decision. For example, intelligent traffic management controller 101 may make traffic flow controlling and routing decisions that are effected by controlling signaling devices such as traffic flow control lights 108 (e.g., stop lights). In some examples, the predictions can be provided to one or more centralized or distributed vehicle navigation systems 112 and processed to enable real-time routing decisions (or suggestions) for individual vehicles and/or groups of vehicles. Reference will be made throughout the following disclosure to road traffic forecasting in the context of
As explained below, forecasting model 110 can include neural networks that are collectively configured and trained to perform a task of discrete-time multivariate time-series prediction, with the goal of forecasting multiple time-steps ahead. A multivariate time-series consists of more than one time-dependent variable and each variable depends not only on its past values but also has some dependency on other variables. In the road traffic forecasting example of
In the following description, yt ∈ ℝN denotes a multivariate observed state at time step t, and t
In the road traffic forecasting example of
For graph 𝒢=(𝒱, ε), node data corresponding to the set of N nodes for each time step t corresponds to the multivariate observed state yt and the covariate observed state zt. The set of edges ε for the graph, which defines the graph topology, can be represented in an N by N adjacency matrix, A (hereinafter “graph topology A”).
A robust historical dataset can be used for training forecasting model 110, but after training the forecasting model 110 performs its prediction tasks based on a limited window of historical data. As will be explained below, forecasting model 110 is configured to process, for some time offset t0, a multivariate observed state time-series t
In example embodiments, the forecasting model 110 generates prediction results that include a posterior distribution of the time series forecasting, gathered from particle predictions for Np particles for each time-step. The mean of the particle predictions can be used as the final prediction result (e.g., as a point estimate) and the distribution of the particle predictions as an uncertainty characterization for the prediction results and a confidence indicator in the form of a prediction interval. Each predicted state of the real-world system includes, for each respective time-series, a posterior distribution of particles, wherein a mean of the posterior distribution is used as a predicted observation for the time-series for the future time step and the posterior distribution of particles is used to generate a confidence indicator. Thus, in examples, forecasting model 110 outputs: (i) point estimates (also referred to as predicted or forecast samples) (e.g., a predicted traffic speed) for each time step for each time-series i (e.g., for each speed sensor 104(i)), and (ii) corresponding prediction intervals. A prediction interval is an indication of confidence in a prediction and indicates a range that future individual point observations will fall within relative to the predicted point estimate. For example, in a traffic speed forecasting scenario, a 95% prediction interval will include an upper speed value and a lower speed value with respect to a predicted speed sample for speed sensor 104(i) for a future time step, and is an indication, with a 95% probability, that the actual observed speed value for the speed sensor 104(i) for that future time step will fall within the range of the upper speed value and the lower speed value. The narrower the prediction interval range, the greater the prediction confidence.
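To make the summarization of particle predictions concrete, the following is a minimal NumPy sketch (not part of the original disclosure) showing how a point estimate and a 95% prediction interval could be derived from Np particle predictions for a single sensor and a single future time step; the array shapes, the empirical-percentile interval and all names are illustrative assumptions.

```python
import numpy as np

def point_estimate_and_interval(particle_preds, level=0.95):
    """Summarize Np particle predictions for one time-series and one time step.

    particle_preds: 1-D array of Np forecast samples (e.g., predicted speeds).
    Returns the mean forecast and a (lower, upper) empirical prediction interval.
    """
    particle_preds = np.asarray(particle_preds, dtype=float)
    point = particle_preds.mean()                     # point estimate (mean of particles)
    alpha = (1.0 - level) / 2.0
    lower = np.quantile(particle_preds, alpha)        # e.g., 2.5th percentile
    upper = np.quantile(particle_preds, 1.0 - alpha)  # e.g., 97.5th percentile
    return point, (lower, upper)

# Example: 1000 hypothetical speed samples for one sensor at one future time step.
rng = np.random.default_rng(0)
samples = rng.normal(loc=55.0, scale=4.0, size=1000)  # synthetic particle predictions
speed, (lo, hi) = point_estimate_and_interval(samples, level=0.95)
print(f"predicted speed {speed:.1f}, 95% prediction interval [{lo:.1f}, {hi:.1f}]")
```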
An example of forecasting model 110, according to an example aspect of the disclosure, is illustrated in the block diagram of
In example aspects of the disclosure, forecasting model 110 operates based on the postulation that the multivariate observed state yt is an observation from a Markovian state space model with a hidden (i.e., unobserved) state xt. The state space for forecasting model 110 can be represented as:
x1~p(x1|z1, ρ),
xt=g𝒢,ψ(xt−1, yt−1, zt, vt), for t>1,
yt=h𝒢,ϕ(xt, zt, wt), for t≥1 (EQ. 1)
where x1 is an initial hidden state; vt~pv(·|xt−1, σ) is a process noise latent state; wt~pw(·|xt, γ) is a measurement noise latent state; ρ, σ and γ are parameters of the distributions of the initial hidden state x1, the process noise latent state vt and the measurement noise latent state wt, respectively; and g and h denote system dynamics (transition) and measurement (observation) approximating functions with parameters ψ and ϕ respectively. The subscript 𝒢 in functions g and h indicates that the functions are potentially dependent on the graph topology A of graph 𝒢. The measurement function h𝒢,ϕ(xt, zt, wt) is a differentiable function whose first derivative w.r.t. hidden state xt is continuous.
Accordingly, the complete set of learnable parameters for forecasting model 110 is formed as Θ={ρ, ψ, σ, ϕ, γ}.
In example aspects, the forecasting model 110 is configured to approximate the following prediction function:
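The prediction function itself appears as Equation (2) in a figure of the original filing and is not reproduced in the text above. A plausible reconstruction, assembled from the three factor terms identified in the next paragraph and offered here only as an assumption, is:

```latex
p_{\Theta}(\mathbf{y}_{P+1:P+Q}\mid\mathbf{y}_{1:P},\mathbf{z}_{1:P+Q})
  = \int p_{\Theta}(\mathbf{x}_{P}\mid\mathbf{y}_{1:P},\mathbf{z}_{1:P})
    \prod_{t=P+1}^{P+Q} p_{\psi,\sigma}(\mathbf{x}_{t}\mid\mathbf{x}_{t-1},\mathbf{y}_{t-1},\mathbf{z}_{t})\,
    p_{\phi,\gamma}(\mathbf{y}_{t}\mid\mathbf{x}_{t},\mathbf{z}_{t})\,
    d\mathbf{x}_{P:P+Q}
    \qquad \text{(cf. EQ. 2)}
```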
In particular, as explained below, the term pψ,σ(xt|xt−1, yt−1, zt) is approximated by state transition operations 310, 316; the term pΘ(xP|y1:P, z1:P) is approximated by particle flow operation 312; and the term pϕ,γ(yt|xt, zt) is approximated by emission operation 314.
The integral in Equation (2) is analytically intractable for a general non-linear state-space model. Accordingly, forecasting model 110 applies a Monte Carlo approximation of the integral, as will be explained below.
Each of the operations 310, 312, 314, 316 and their respective approximations of the above equation terms will now be described according to example aspects of the disclosure.
RNN model based state transition operations 310, 316, can, in example embodiments, be performed using an Adaptive Graph Convolution Gated Recurrent Unit (AGCGRU) as presented in the published paper “Bai, L., Yao, L., Li, C., Wang, X., and Wang, C. Adaptive graph convolutional recurrent network for traffic forecasting. In Proc. Neural Info. Process. Systems (NeurIPS), 2020” (Reference 1). In such cases, an AGCGRU is used to approximate the function pψ,σ(xt|xt−1, yt−1,zt).
As described in Reference 1, an AGCGRU combines (i) a module that adapts a provided graph based on observed data, (ii) graph convolution to capture spatial relations, and (iii) a gated recurrent unit (GRU) to capture evolution in time. An example RNN model used for state transition operations 310, 316 employs an L-layer AGCGRU with additive Gaussian noise to model the system dynamics function g:
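Equation (3) likewise appears only as an image in the original document. Under the description above (an L-layer AGCGRU transition with additive Gaussian noise), a plausible form, stated as an assumption rather than a verbatim reproduction, is:

```latex
\mathbf{x}_{t} = \mathrm{AGCGRU}^{(L)}_{\mathcal{G},\psi}\!\left(\mathbf{x}_{t-1},\,\mathbf{y}_{t-1},\,\mathbf{z}_{t}\right) + \mathbf{v}_{t},
\qquad \mathbf{v}_{t} \sim \mathcal{N}\!\left(\mathbf{0},\,\sigma^{2}\mathbf{I}\right)
\qquad \text{(cf. EQ. 3)}
```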
In Equation (3), pv(vt)=N(0, σ²I), i.e., the latent variables for the system dynamics function g are independent. The initial state distribution is chosen to be isotropic Gaussian, i.e. p(x1|z1, ρ)=N(0, ρ²I). The parameters ρ and σ are learnable variance parameters.
As indicated in
In the case of decoder 304, the state transition operation 316 for time step t=P+1 receives as inputs: (i) the multivariate observed state yP for time step P; (ii) a covariate state zP+1 for the time step P+1; (iii) graph topology A of graph 𝒢; and (iv) the approximated posterior distribution x̃P of hidden state xP. Based on its respective inputs, the state transition operation 316 for time step t=P+1 computes a predicted future time-step hidden state xP+1.
In the case of the decoder state transition operations 316 for each of the time steps t={P+2, . . . , P+Q}, the respective inputs to the state transition operation 316 for the time step are: (i) the predicted state yt−1 for time step t−1 as generated by an emission operation 314 for the previous time step (explained below); (ii) a covariate state zt for the subject time step t (features for future time steps can be provided at inference time); (iii) graph topology A of graph 𝒢; and (iv) the predicted hidden state xt−1 generated by the previous state transition operation 316. Based on their respective inputs, each respective state transition operation 316 for each of the time steps t={P+1, . . . , P+Q} computes a respective predicted future time-step hidden state xt.
Particle flow operations 312 will now be described in greater detail. Each hidden state xt defines a distribution of Np continuous variable elements, referred to as particles. As noted above, in encoder 302, an approximated posterior distribution x̃t ≈ pΘ(xt|y1:t, z1:t) of hidden state xt is generated by a respective particle flow operation 312 for each time step t. For example, particle flow operation 312 can apply a particle flow algorithm that, for a given time step t, solves differential equations to gradually migrate particles from the predictive distribution (e.g., hidden state xt) so that they represent the posterior distribution for that hidden state after the flow. A particle flow can be modelled by a background stochastic process η, in a pseudo-time interval λ∈[0,1], such that the distribution of η0 is the prior predictive distribution pΘ(xt|y1:t−1) and the distribution of η1 is the posterior distribution pΘ(xt|y1:t). A graphical representation of a particle flow operation is illustrated in
where {xtj}j=1Np denotes the set of Np particles representing hidden state xt.
As indicated in
Emission operations 314 will now be described in greater detail. The FCNN based model that performs emission operation 314 can be represented as:
yt=Wϕxt+wt (EQ. 5)
where Wϕ is a linear projection matrix and the latent variable wt for the emission operation is modelled as Gaussian with variance dependent on the hidden state xt via a learnable softplus function:
pw(wt|xt)=N(0, diag(softplus(γxt))²). (EQ. 6)
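For illustration only, the following is a minimal NumPy sketch of the emission step of EQ. (5) and EQ. (6); the dimensions, the parameter values and the exact parameterization of the state-dependent noise scale are assumptions rather than the disclosed implementation.

```python
import numpy as np

def softplus(a):
    # Numerically stable softplus: log(1 + exp(a)).
    return np.log1p(np.exp(-np.abs(a))) + np.maximum(a, 0.0)

def emit(x_t, W_phi, gamma, rng):
    """Sample y_t = W_phi @ x_t + w_t (EQ. 5), with Gaussian noise w_t whose
    standard deviation depends on the hidden state via a softplus (cf. EQ. 6)."""
    mean = W_phi @ x_t                 # linear projection of the hidden state
    std = softplus(gamma @ x_t)        # state-dependent noise scale
    return mean + rng.normal(0.0, std)

# Example with hypothetical sizes: hidden dimension 8, observation dimension 3.
rng = np.random.default_rng(1)
x_t = rng.normal(size=8)
W_phi = 0.1 * rng.normal(size=(3, 8))
gamma = 0.1 * rng.normal(size=(3, 8))
print(emit(x_t, W_phi, gamma, rng))
```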
As indicated in
An example of a probabilistic spatiotemporal forecasting task in which a sequence of recent historic data is used to predict a sequence of future data for a real-world dynamic system (e.g., the dynamic system 100), performed by forecasting model 110, is illustrated in the pseudocode “Algorithm 1” of
As indicated in line 1 of Algorithm 1, the inputs provided to forecasting model 110 include: a time series sequence of multivariate observed states yt for a set of historic time steps t={1, . . . , P}; a time series sequence of covariate observed states zt for the set of historic time steps t={1, . . . , P} (optional in some examples); a graph adjacency matrix A providing a graph topology of the observed system (optional in some examples); and an initial set of forecasting model parameters Θ={ρ, ψ, σ, ϕ, γ}. As indicated in line 2, the output of forecasting model 110 is a time series sequence of predicted states y for a set of future time steps t={P+1, . . . , P+Q}. The results for each future predicted state y include the posterior distribution of the time series forecasting, gathered from the prediction results for the Np particles. The mean of the particle predictions is used as the final prediction result and the distribution of the particle prediction results as the uncertainty characterization for the prediction results and a confidence indicator in the form of a prediction interval. As indicated in line 3, an initial hidden state x1 and an initial hidden state particle distribution can be randomly sampled from a stochastic distribution.
In Algorithm 1, lines 4 to 10 correspond to a first processing step (Step 1) that includes operations performed by encoder 302 in respect of observed time steps t=1, 2, . . . ,P, and lines 11 to 18 correspond to a second processing step (Step 2) that includes operations performed by decoder 304 in respect of future time steps t=P+1, . . . ,P+Q.
Step 1: For each of the time steps t=1, 2, . . . , P, particle flow operations 312 and state transition operations 310 respectively generate an approximated posterior distribution pΘ(xt|y1:t) and hidden state xt using the methodologies described above. Each hidden state xt includes a distributed set of Np particles, {xtj}. The hidden state xt output by the state transition operation 310 for time step t is used as the input for the particle flow operation 312 for the same time-step t. The approximated posterior distribution pΘ(xt|y1:t) from each particle flow operation 312 for a time-step t is used as the input for the state transition operation 310 for the next time-step t+1. In this manner, the posterior distributions of the hidden states are recursively approximated.
An example of a particle flow process that can be used to implement particle flow operation 312 is illustrated in the pseudocode “Algorithm 2” of
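The figure containing Algorithm 2 is not reproduced here. As one illustrative possibility, the sketch below implements a widely used particle flow variant, the exact Daum-Huang (EDH) flow for a linear-Gaussian measurement model y = Hx + w, in NumPy; the choice of this variant, the uniform pseudo-time grid and all variable names are assumptions and not a reproduction of Algorithm 2.

```python
import numpy as np

def edh_particle_flow(particles, y, H, R, n_steps=29):
    """Migrate prior particles toward the posterior for a linear-Gaussian
    measurement y = H x + w, w ~ N(0, R), by Euler-integrating the exact
    Daum-Huang flow dx/dlambda = A(lambda) x + b(lambda) over lambda in [0, 1].

    particles: (Np, d) array (d >= 2) sampled from the predictive (prior) distribution.
    Returns an (Np, d) array approximately distributed as the posterior.
    """
    particles = np.array(particles, dtype=float)
    d = particles.shape[1]
    lambdas = np.linspace(0.0, 1.0, n_steps + 1)
    for k in range(n_steps):
        lam, d_lam = lambdas[k], lambdas[k + 1] - lambdas[k]
        x_bar = particles.mean(axis=0)               # empirical mean of the flow
        P = np.cov(particles, rowvar=False)          # empirical covariance
        S = lam * H @ P @ H.T + R
        A = -0.5 * P @ H.T @ np.linalg.solve(S, H)   # flow matrix A(lambda)
        b = (np.eye(d) + 2.0 * lam * A) @ (
            (np.eye(d) + lam * A) @ P @ H.T @ np.linalg.solve(R, y) + A @ x_bar
        )                                            # flow offset b(lambda)
        particles = particles + d_lam * (particles @ A.T + b)  # Euler step
    return particles
```

In the context of the forecasting model described above, H and R would play the role of (a linearization of) the measurement function h and the measurement noise covariance, and the flow would be applied once per encoder time step to turn predictive particles into posterior particles.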
Step 2: For each of the time steps t={P+1, . . . ,P+Q}, decoder 304 iterates between the following two operations:
(A) State transition operation 316, which samples hidden state particles xtj (i) in the case of t=P+1, from the hidden state approximated posterior distribution pΘ(xt−1|y1:t−1, z1:t−1), and (ii) in the case of t={P+2, . . . , P+Q}, from the hidden state xt−1 output by the previous time step state transition operation 316 (e.g., from pψ,σ(xt|xt−1, yt−1, zt)), to output a respective hidden state xt. This amounts to a state transition at time t to obtain the current hidden state xt from the previous state xt−1 as per the above noted function pψ,σ(xt|xt−1, yt−1, zt); and
(B) Emission operation 314, which samples a prediction y′t (i.e., a forecast sample) from hidden state xt using the previously described measurement function h (i.e., yt=h𝒢,ϕ(xt, zt, wt)).
As indicated at line 19 of Algorithm 1, once Steps 1 and 2 are complete, a Monte Carlo (MC) approximation of the integral in EQ. (2) is then formed as:
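The Monte Carlo approximation referenced at line 19 appears as an image in the original document; a plausible form, consistent with the particle-based decoding described above and stated here only as an assumption, is:

```latex
p_{\Theta}(\mathbf{y}_{P+1:P+Q}\mid\mathbf{y}_{1:P},\mathbf{z}_{1:P+Q})
  \approx \frac{1}{N_{p}} \sum_{j=1}^{N_{p}}
    \prod_{t=P+1}^{P+Q} p_{\phi,\gamma}\!\left(\mathbf{y}_{t}\mid\mathbf{x}_{t}^{j},\mathbf{z}_{t}\right)
```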
Each prediction sample yjP+1:P+Q is approximately distributed according to the joint posterior distribution of the predicted states yP+1:P+Q.
As noted above, a comprehensive historical dataset can be used for training forecasting model 110. The forecasting model parameters Θ={ρ, ψ, σ, ϕ, γ} can be initialized by random sampling, and updated during training using gradient descent. An example of a training process that can be used to train the forecasting model 110 is illustrated in the pseudocode “Algorithm 3” of
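Algorithm 3 itself is shown in a figure and is not reproduced here. For orientation only, the following PyTorch sketch illustrates the kind of gradient-based training loop described later in this disclosure (fitting observed future windows by backpropagation); it deliberately omits particle flow and graph convolution, uses synthetic data, and all names and hyperparameters are assumptions rather than the disclosed Algorithm 3.

```python
import torch
import torch.nn as nn

class TinyForecaster(nn.Module):
    """Simplified stand-in for forecasting model 110: a GRU encoder over the observed
    window and a fully connected head that emits a Gaussian (mean, variance) for each
    of Q future steps. Illustrates the training loop only, not the full model."""
    def __init__(self, obs_dim=1, hidden_dim=32, horizon=4):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2 * horizon * obs_dim)
        self.horizon, self.obs_dim = horizon, obs_dim

    def forward(self, y_hist):                        # y_hist: (batch, P, obs_dim)
        _, h = self.gru(y_hist)                       # h: (1, batch, hidden_dim)
        mean, raw_var = self.head(h[-1]).chunk(2, dim=-1)
        var = nn.functional.softplus(raw_var) + 1e-6  # positive predictive variance
        shape = (-1, self.horizon, self.obs_dim)
        return mean.reshape(shape), var.reshape(shape)

# Synthetic data: noisy sinusoids standing in for historical speed measurements.
P, Q = 12, 4
t = torch.arange(P + Q, dtype=torch.float32)
series = torch.sin(0.3 * t + 6.28 * torch.rand(256, 1)) + 0.1 * torch.randn(256, P + Q)
y_hist, y_future = series[:, :P, None], series[:, P:, None]

model = TinyForecaster(obs_dim=1, hidden_dim=32, horizon=Q)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
nll = nn.GaussianNLLLoss()                            # Gaussian negative log-likelihood

for epoch in range(200):                              # full-batch gradient descent, for brevity
    optimizer.zero_grad()
    mean, var = model(y_hist)
    loss = nll(mean, y_future, var)                   # fit observed future windows
    loss.backward()                                   # backpropagate through RNN and FCNN
    optimizer.step()
```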
For illustrative purposes,
From the above disclosure, it will be noted that forecasting model 110 considers time-series data from a dynamic system as a random realization from a nonlinear state-space model and targets Bayesian inference of the hidden states for probabilistic forecasting. Particle flow analysis is used as a tool for approximating the posterior distribution of the states. Particle flow analysis may, in some applications, be highly effective in complex, high-dimensional settings. In at least some scenarios, forecasting model 110 may provide better characterization of uncertainty while maintaining comparable accuracy to state-of-the-art point forecasting methods.
The systems and methods of this disclosure include embodiments that model multivariate time-series as random realizations from a nonlinear state-space model, and target Bayesian inference of the hidden states for probabilistic forecasting. The disclosed systems and methods can be applied to univariate or multivariate forecasting problems, can incorporate additional covariates, can process an observed graph, and can be combined with data-adaptive graph learning procedures. In the illustrated example, the dynamics of the state-space model are built using graph convolutional recurrent architectures. An inference procedure employs particle flow, which may in some scenarios, conduct more effective inference for high-dimensional states when compared to particle filters of known forecasting solutions. In the illustrated examples, a graph-aware stochastic recurrent network architecture and inference procedure is disclosed that combines graph convolutional learning, a probabilistic state-space model, and particle flow.
Further details and example aspects of systems and methods for probabilistic spatiotemporal forecasting according to the present disclosure will now be provided. Observations of an observed time series are received from a state-space model. An observation is an observed measurement in the observed time series (e.g. traffic speed) that is influenced by a latent state variable. The observation is a noisy transformation of the latent state variable of a recurrent neural network (RNN). Since the parameters of the RNNs and fully connected networks (FCNNs) of an example system (for example, forecasting model 110) of the present disclosure that performs spatiotemporal forecasting are unknown, the posterior distribution of the forecast generated by the system is maximized during training of the system to learn the parameters of the RNNs and the FCNNs of the system. At each epoch during training of the system, the posterior distribution is computed based on the current values of the parameters of the RNNs and the FCNNs, and a stochastic gradient based backpropagation algorithm is used to update the parameters of the RNNs and the FCNNs. Based on the trained system (e.g. the system with the parameters of the RNNs and FCNNs having been learned), Bayesian inference of the states of the RNNs (“RNN states”) is performed to obtain the approximate posterior distribution of the forecasts. Because Bayesian inference is performed in the high dimensional space of RNN states, many conventional Bayesian inference techniques become inefficient. The method and system of the present disclosure use particle flow for computing the posterior distribution of RNN states, as it has been shown to be highly effective in complex high-dimensional settings.
X(0)~N(0, ρ²I),
X(t)=RNN(Y(t−1), X(t−1)),
Y(t)=X(t)Wproj+v(t), v(t)~N(0, δ²I)
The transition of the latent (i.e. hidden) state X(t) is governed by a recurrent neural network (RNN) and the measurement function is linear. The initial latent (i.e. hidden) state X(0) is assumed to be distributed according to an isotropic Gaussian distribution and the measurement noise v(t) is also Gaussian. The system 400 has access to a graph G, which encodes spatial relationships among different dimensions of Y(t). Any suitable RNN may be used which either exploits the structure of the graph for learning or learns a graph from the observed time series and incorporates it into learning. The system 400 performs spatiotemporal forecasting by accessing the first P steps of the observations Y(t) (i.e. Y(1:P)) and generating predictions (e.g. forecasts) for the next Q steps (i.e. Y(P+1:P+Q)). In a Bayesian setting, this amounts to the system computing the posterior distribution of the forecasts, which is expressed as follows:
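That expression appears as an image in the original filing. A plausible reconstruction, analogous to EQ. (2) above but written in the notation of system 400 and offered only as an assumption, is:

```latex
p_{\Theta}\!\left(Y(P{+}1{:}P{+}Q)\mid Y(1{:}P)\right)
  = \int p_{\Theta}\!\left(X(P)\mid Y(1{:}P)\right)
    \prod_{t=P+1}^{P+Q} p\!\left(X(t)\mid X(t{-}1),\,Y(t{-}1)\right)\,
    p\!\left(Y(t)\mid X(t)\right)\, dX(P{:}P{+}Q)
```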
Θ denotes the parameters of the RNNs and the FCNNs of the system 400 of
The integral above is intractable, and the system of
Step 1: The system 400 shown in
The diagram (a) on the left shows the samples (shown as asterisks) from the prior distribution (contours shown as lines). The diagram (b) in the middle shows the contours of the posterior distribution and the direction of flow for the particles, and the diagram (c) on the right shows the particles after the flow is complete.
Step 2: For t=P+1 to P+Q, the system 400 shown in
Step 3: The system shown in
The electronic storage 220 may include any suitable volatile and/or non-volatile storage and retrieval device(s), including for example flash memory, random access memory (RAM), read only memory (ROM), hard disk, optical disc, subscriber identity module (SIM) card, memory stick, secure digital (SD) memory card, and other state storage devices. In the example of
As used herein, statements that a second item (e.g., a signal, value, scalar, vector, matrix, calculation, or bit sequence) is “based on” a first item can mean that characteristics of the second item are affected or determined at least in part by characteristics of the first item. The first item can be considered an input to an operation or calculation, or a series of operations or calculations that produces the second item as an output that is not independent from the first item. As used herein, the terms “comprising”, “comprises”, “including” and “includes” are inclusive terms and do not exclude other elements or components that are not listed.
Although the present disclosure describes methods and processes with operations in a certain order, one or more operations of the methods and processes may be omitted or altered as appropriate. One or more operations may take place in an order other than that in which they are described, as appropriate.
Although the present disclosure is described, at least in part, in terms of methods, a person of ordinary skill in the art will understand that the present disclosure is also directed to the various components for performing at least some of the aspects and features of the described methods, be it by way of hardware components, software or any combination of the two. Accordingly, the technical solution of the present disclosure may be embodied in the form of a software product. A suitable software product may be stored in a pre-recorded storage device or other similar non-volatile or non-transitory computer readable medium, including DVDs, CD-ROMs, USB flash disk, a removable hard disk, or other storage media, for example. The software product includes instructions tangibly stored thereon that enable a processing device (e.g., a personal computer, a server, or a network device) to execute examples of the methods disclosed herein.
The present disclosure may be embodied in other specific forms without departing from the subject matter of the claims. The described example embodiments are to be considered in all respects as being only illustrative and not restrictive. Selected features from one or more of the above-described embodiments may be combined to create alternative embodiments not explicitly described, features suitable for such combinations being understood within the scope of this disclosure.
All values and sub-ranges within disclosed ranges are also disclosed. Also, although the systems, devices and processes disclosed and shown herein may comprise a specific number of elements/components, the systems, devices and assemblies could be modified to include additional or fewer of such elements/components. For example, although any of the elements/components disclosed may be referenced as being singular, the embodiments disclosed herein could be modified to include a plurality of such elements/components. The subject matter described herein intends to cover and embrace all suitable changes in technology.
The contents of all publications referenced in this disclosure are incorporated by reference.
The present application is a continuation of International Patent Application No. PCT/CA2022/050166, filed Feb. 4, 2022, entitled METHOD, SYSTEM AND COMPUTER READABLE MEDIUM FOR PROBABILISTIC SPATIOTEMPORAL FORECASTING, which claims priority to and the benefit of U.S. Provisional Patent Application Ser. No. 63/145,961 filed Feb. 4, 2021, entitled METHOD AND SYSTEM PROBABILISTIC SPATIOTEMPORAL FORECASTING. The content of the related application documents identified above are incorporated herein by reference as if reproduced in their entirety.
Provisional application: 63/145,961, filed Feb. 2021 (US).
Parent application: PCT/CA22/50166, filed Feb. 2022 (US); child application: 18/365,568 (US).