The technology described in this patent document relates generally to the field of model forecasting and prediction and more specifically to the generation of combining weights for multimodel forecasting and prediction.
Accurate forecasting of future sales and other business variables is of great value to organizations. Accurate forecasts of product demands are needed for optimal inventory management, pricing, and resource planning. Automated computer algorithms that generate statistically based predictions for large numbers of items with little or no human intervention are beneficial for firms that sell many product items through numerous geographically distinct sales outlets, since forecasts are often needed for each of tens of thousands of stock keeping units at each of hundreds of stores. In addition to business forecasting, statistical models for prediction are used in a wide variety of industrial applications.
For each item requiring forecasts or predictions, multiple statistical forecasting models are available. Deciding which of the available models, or combination of models, to use for predicting future values for a given item is a challenging problem. Utilizing multiple models in a prediction may offer improved predictive performance. To capitalize on this improvement, systems and methods are described for generating weights for a weighted average of model outputs, where information criteria indicative of the fit quality of the utilized models are used to determine the model weights.
In accordance with the teachings provided herein, systems and methods are provided for automatically generating a weighted average forecast model. For example, a plurality of forecasting models and time series data are received. At least one parameter of each of the received forecasting models is optimized utilizing the received time series data. A weighting factor is generated for each of the plurality of optimized forecasting models utilizing an information criteria value indicating the fit quality of each of the optimized forecasting models, and the generated weighting factors are stored.
As another illustration, systems and methods may be used for automatically generating a weighted average forecast model that includes a plurality of forecasting models and a file containing time series data indicative of transactional activity. A model fitter receives the plurality of forecasting models and the file of time series data and optimizes at least one parameter of each of the plurality of forecasting models based on the time series data. A forecast calculator is configured to receive the plurality of optimized forecasting models and generates a forecasted output for each of the plurality of optimized forecasting models. A model evaluator is configured to receive the plurality of optimized forecasting models and generate a weighting factor utilizing an information criteria for each of the forecasting models indicating fit quality of each of the optimized forecasting models.
FIGS. 7a-7d are a flow diagram of a multimodel forecast creation process.
The generated weighting factors are used in conjunction with forecasts or predictions from the multiple models in order to generate a composite model that may have higher predictive capabilities than individual models on their own. For example, in predicting future sales for a product, a set of predictive models is chosen. The models are fitted according to historical data. The fitted models are examined for quality, and based on this quality assessment, a combining weight is assigned to the model. Forecasts are made by each of the models in the set of predictive models, and the forecasts are multiplied by the combining weights and summed to generate a weighted average multimodel forecast.
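As a minimal sketch of this workflow, the following Python example fits a set of candidate models to historical data, converts their information criteria values into combining weights, and returns the weighted-average forecast. The Model interface (fit, information_criterion, forecast) is hypothetical and is introduced here only for illustration; it stands in for whatever forecasting library is actually used.

```python
import math

def combined_forecast(models, history, horizon):
    """Fit each candidate model, weight it by its information criterion,
    and return the weighted-average multimodel forecast.

    The fit/information_criterion/forecast methods are assumed, hypothetical
    interfaces used only for illustration."""
    fitted = [m.fit(history) for m in models]           # calibrate each model to past data
    ics = [m.information_criterion() for m in fitted]   # e.g., AIC of each fitted model
    best = min(ics)
    raw = [math.exp(-(ic - best) / 2.0) for ic in ics]  # raw Akaike-style weights
    total = sum(raw)
    weights = [r / total for r in raw]                  # normalize so weights sum to one
    forecasts = [m.forecast(horizon) for m in fitted]   # one forecast sequence per model
    # weighted average across models at each future time step
    return [sum(w * f[t] for w, f in zip(weights, forecasts))
            for t in range(horizon)]
```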
The multimodel combining weights processing system 34 can be an integrated web-based analysis tool that provides users flexibility and functionality for performing model forecasting or prediction or can be a wholly automated system. One or more data stores 40 can store the data to be analyzed by the system 34 as well as any intermediate or final data generated by the system 34. For example, data store(s) 40 can store the plurality of models whose outputs are to be averaged and time series data used to calibrate the models and make predictions. Examples of data store(s) 40 can include relational database management systems (RDBMS), a multi-dimensional database (MDDB), such as an Online Analytical Processing (OLAP) database, etc.
The users 32 can interact with the system 34 in a number of ways, such as over one or more networks 36. One or more servers 38 accessible through the network(s) 36 can host the multimodel combining weights processing system 34. It should be understood that the multimodel combining weights processing system 34 could also be provided on a stand-alone computer for access by a user or in other computing configurations.
A model evaluator 76 receives the optimized forecasting models 74 and generates a weighting factor for each of the plurality of optimized models 74. The generation of the weighting factors utilizes an information criteria value that indicates fit quality and complexity of each of the optimized forecasting models. The generation of weighting factors will be discussed in detail herein. The generated weighting factors are then stored in a data store 78. The data store may be a temporary storage medium such as random access memory, or the data store may be a longer term storage medium such as a hard drive, CD, DVD, as well as many others.
In the example shown, the optimized forecasting models 74 are received by both the model evaluator 76 and the forecast calculator 82. The model evaluator 76 is configured to generate a weighting factor 92 utilizing an information criteria value for each of the optimized forecasting models 74 that indicates the fit quality and complexity of each of the optimized forecasting models 74. The forecast calculator 82 is configured to receive the plurality of optimized forecasting models and to generate a model forecasted output 84 for each of the optimized forecasting models 74. The weighted average calculator 86 weights the model forecasted outputs 84 according to the generated weighting factors 92 and sums the results to generate the weighted average forecasted output 60.
The process of generating a forecast, as shown at 82, is described in further detail below.
For each item, Di, a set of predictive models is chosen to forecast a numerical measure. Each model, mj, contains a vector of variable parameters, θj, which should be calibrated to the data available for a given item before the model can produce forecasts for that item. Thus, in more complete notation, mj=mj(Ys, Xs, θ*j), where θ*j is the estimate of θj produced by fitting the model to past data for y and x (Ys and Xs, respectively). Model parameters may be calibrated to past data using a variety of methods, such as the maximum likelihood method, the least squares method, as well as others.
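For example, the sketch below calibrates one very simple candidate model, a linear trend y_t = θ0 + θ1·t, to past data by least squares using numpy. The model form and the data values are assumptions chosen only to illustrate how θ* is estimated from Ys.

```python
import numpy as np

def fit_linear_trend(y):
    """Return the least-squares estimate theta* = (intercept, slope)
    for the illustrative model y_t = theta0 + theta1 * t."""
    t = np.arange(len(y), dtype=float)
    X = np.column_stack([np.ones_like(t), t])  # design matrix [1, t]
    theta_star, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return theta_star

y_past = [102.0, 108.0, 111.0, 119.0, 123.0]   # hypothetical past values Ys
theta = fit_linear_trend(y_past)
print(theta)  # approximately [102.0, 5.3]
```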
For each item, an array of weights is generated, assigning one weight wj to each model in the set of predictive models. The combined forecast for an item is then defined by the formula:
y*t=Σwj·y*j,t,
which is the weighted average of the forecasts from the fitted models for this item.
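For example, if three fitted models produce forecasts y*1,t=100, y*2,t=110, and y*3,t=90 for some future period t, and the combining weights are w1=0.5, w2=0.3, and w3=0.2, then the combined forecast is y*t=0.5·100+0.3·110+0.2·90=101.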
Combined forecasts using the constant weights wj=1/k (i.e., a simple average of the k model forecasts) may be superior to the forecast of the best individual model. Even better forecasts may be produced by using non-constant weights optimized for each item.
The Akaike Information Criterion (“AIC”) is an indicator of model fit quality and complexity. The AIC is defined as:
AIC=−2·ln(L)+2·P,
where L is the likelihood of the model (taken from a maximum likelihood estimation of the parameters) and P is the number of fitted parameters in the model (that is, the number of elements in the parameter vector θ*). The AIC may also be computed as:
AIC=n·ln(mse)+2·P,
where mse is the mean squared error and n is the number of data points to which the model is calibrated.
The AIC statistic trades off model fit, measured by the log likelihood or log mean squared error, against a penalty for model complexity, measured by the number of free parameters. It has theoretical justification as an approximation to the expected Kullback-Leibler discrepancy (K-L information) between the model and the unknown process that generated the data. This property of AIC is an asymptotic result, and for small sample sizes AIC is a biased estimate of the expected K-L information. An adjustment to AIC that corrects for this finite-sample bias is called AICc, which is defined as:
AICc=AIC+(2·P·(P+1))/(n−P−1).
The best known of the alternative information criteria is the Bayesian Information Criterion, or BIC, which is defined as:
BIC=−2·ln(L)+ln(n)·P.
Note that BIC is like AIC but uses the log of the sample size instead of 2 as the penalty weight for the number of parameters. It should be noted that other information criteria may be used as well as variations of the criteria described above.
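A compact sketch of the criteria as defined above follows; L denotes the maximized likelihood, P the number of fitted parameters, and n the number of calibration data points.

```python
import math

def aic(log_likelihood, p):
    """AIC = -2*ln(L) + 2*P."""
    return -2.0 * log_likelihood + 2.0 * p

def aic_from_mse(mse, n, p):
    """Alternative AIC form based on the mean squared error of the fit."""
    return n * math.log(mse) + 2.0 * p

def aicc(log_likelihood, p, n):
    """Small-sample corrected AIC; requires n > p + 1."""
    return aic(log_likelihood, p) + (2.0 * p * (p + 1)) / (n - p - 1)

def bic(log_likelihood, p, n):
    """BIC penalizes each parameter by ln(n) instead of 2."""
    return -2.0 * log_likelihood + math.log(n) * p
```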
For each item, all models are fit to the available data by maximum likelihood, least squares, or another method. For each fitted model, the information criteria value is calculated. The calculated information criteria may then be utilized to calculate raw weights: for each fitted model, a difference value, Δj, is calculated as the difference between that model's information criteria value and the smallest information criteria value calculated for the set of fitted models. Raw weights may be calculated as
ωj=exp(−Δj/2),
and the calculated raw weights may then be normalized according to
wj=ωj/(Σωj).
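The following sketch carries out the Δj, raw weight, and normalization steps just described for a list of information criteria values; the example values are assumed for illustration.

```python
import math

def combining_weights(ic_values):
    """Convert information criteria values (one per fitted model)
    into normalized combining weights."""
    best = min(ic_values)
    deltas = [ic - best for ic in ic_values]      # delta_j relative to the best model
    raw = [math.exp(-d / 2.0) for d in deltas]    # omega_j = exp(-delta_j / 2)
    total = sum(raw)
    return [r / total for r in raw]               # w_j = omega_j / sum(omega_j)

print(combining_weights([100.0, 102.0, 110.0]))
# roughly [0.73, 0.27, 0.005]: the best-fitting model dominates, and a model
# 10 criterion units worse receives almost no weight
```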
If the AIC or AICc is used as the information criterion, then the combining weight of a model reflects the relative weight of evidence for that model being a good approximation to the true data-generating process. Thus, this method may be expected to produce superior forecasts, because the benefits of model combination are gained while greater weight is given to better models and little weight to poor models.
If information criteria other than AIC and its variants are used in this method, then the theoretical justification based on the Kullback-Leibler discrepancy is lacking, but superior forecasting performance may still be expected because of the similarity of these criteria to the AIC.
A simple generalization is to compute raw weights as
ωj=exp(−λ·Δj/2),
where the constant λ is an adjustment that may be specified by the user, and Δj is equal to the difference between the information criteria value for a model and the smallest information criteria value of all of the models in the set of selected predictive models.
When λ=1 the usual Akaike weights are produced. When λ=0 the weights are all equal and wj=1/k, so the calculations reduce to the equally weighted simple-average-of-forecasts method. When 0<λ<1, the resulting method is a compromise between the information criterion weights and equal weights.
When λ is set to large values much greater than one (e.g., λ>10), the weight for the model with the smallest information criterion (m0, for which Δj=0) tends to w0=1, and the weights for all models with Δj>0 tend to wj=0. Thus, for large λ the resulting method is the minimum-information-criterion best single model selection approach. When λ>1 but λ is moderate in size (e.g., λ=1.5), the result is a compromise between the information criterion weighted combined forecast and the best single model forecast.
Thus, a continuum of forecast combination methods is supplied, indexed by λ, ranging between the two extremes of single best model selection and equally weighted average combination, with the default λ=1 case intermediate between these extremes and providing the theoretically optimum combination based on Kullback-Leibler information.
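This continuum can be illustrated with a small extension of the weight calculation sketched above; the information criteria values are again assumed for illustration.

```python
import math

def lambda_weights(ic_values, lam=1.0):
    """Generalized combining weights omega_j = exp(-lam * delta_j / 2), normalized.
    lam=0 gives equal weights, lam=1 gives the usual Akaike weights, and a
    large lam approaches selection of the single best model."""
    best = min(ic_values)
    raw = [math.exp(-lam * (ic - best) / 2.0) for ic in ic_values]
    total = sum(raw)
    return [r / total for r in raw]

ics = [100.0, 102.0, 110.0]
for lam in (0.0, 0.5, 1.0, 1.5, 10.0):
    print(lam, [round(w, 3) for w in lambda_weights(ics, lam)])
# lam = 0.0  -> [0.333, 0.333, 0.333]  (equally weighted simple average)
# lam = 1.0  -> [0.727, 0.268, 0.005]  (Akaike weights)
# lam = 10.0 -> [1.0, 0.0, 0.0]        (essentially best single model selection)
```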
FIGS. 7a-7d are a flow diagram of an example multimodel forecast creation process.
FIG. 7b illustrates step 170 in additional detail, where the selected models are fitted and combining weights are computed at 200. The model fitting and combining weight generation begins at step 202. As indicated at step 204, for each model within the selected candidate model subset 168 for an item, the loop of steps 204-210 is executed. In step 206, a model is fitted according to extracted data 162 for the item. Additionally, the information criteria calculation is performed for each model. The fitted model along with the information criteria are stored as shown at 208. The fitting and information criteria calculation loop of 204-210 is repeated for each of the candidate subset models stored at 168. Once the loop 204-210 is completed for each of the models, the combining weights are computed at step 212 using the information criteria values calculated in step 206. The fitting and combining weight generating step 170 is then completed, and the fitted models are stored at step 172 and combining weights are stored at step 176.
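A minimal sketch of this fitting loop is shown below; fit_model() and information_criterion() are hypothetical helpers standing in for the model fitter and the criterion calculation, and combining_weights() is the function sketched earlier.

```python
def fit_candidates(candidate_models, item_data):
    """Sketch of the FIG. 7b loop: fit each candidate model to the item's
    extracted data, record its information criteria value, and then compute
    combining weights from the recorded values."""
    fitted_models, ic_values = [], []
    for model in candidate_models:                     # loop of steps 204-210
        fitted = fit_model(model, item_data)           # step 206 (hypothetical helper)
        ic = information_criterion(fitted, item_data)  # hypothetical helper
        fitted_models.append(fitted)                   # step 208: store fitted model and criterion
        ic_values.append(ic)
    weights = combining_weights(ic_values)             # step 212: see earlier sketch
    return fitted_models, weights
```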
FIG. 7c illustrates at 300 a process for generating output forecasts based on the stored fitted models 172 and combining weights 176. The process for generating output forecasts begins at step 302. A set of items 154, such as individual stock keeping units, is identified for processing. The steps from 304 to 320 are then taken for each of the identified set of items 154. In step 308, past data for the item is extracted from a data store 306. The past data 306 in
FIG. 7d illustrates at 400 a process for generating a weighted average forecast for an item. The process begins at step 402 and proceeds to step 404, which begins a loop to be executed for each of the stored fitted models 172. In step 406, a forecast is computed for the fitted model being processed in the loop iteration in light of the data 310 extracted for the item. The computed forecast is multiplied by the combining weight 176 associated with the fitted model 172 and stored as shown at 408. The loop 404-410 is then repeated for each of the fitted models 172. After weighted forecasts 408 have been generated for each of the fitted models, the weighted forecasts 408 are summed to compute a combined multimodel forecast in step 412. The process then returns to the item loop of steps 304-320.
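A corresponding sketch of this combination loop follows; forecast() is a hypothetical helper that produces a list of future values for a fitted model.

```python
def weighted_average_forecast(fitted_models, weights, item_data, horizon):
    """Sketch of the FIG. 7d loop: forecast with each stored fitted model,
    scale by its combining weight, and sum into the multimodel forecast."""
    weighted = []
    for model, w in zip(fitted_models, weights):      # loop of steps 404-410
        f = forecast(model, item_data, horizon)       # step 406 (hypothetical helper)
        weighted.append([w * value for value in f])   # step 408: weight and store
    # step 412: sum the weighted forecasts across models at each time step
    return [sum(values) for values in zip(*weighted)]
```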
While examples have been used to disclose the invention, including the best mode, and also to enable any person skilled in the art to make and use the invention, the patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. Accordingly, the examples disclosed herein are to be considered non-limiting. As an illustration, the systems and methods may be implemented on various types of computer architectures, such as, for example, on a single general purpose computer or workstation (as shown at 800) or in other computing configurations.
It is further noted that the systems and methods may include data signals conveyed via networks (e.g., local area network, wide area network, internet, etc.), fiber optic medium, carrier waves, wireless networks, etc. for communication with one or more data processing devices. The data signals can carry any or all of the data disclosed herein that is provided to or from a device.
Additionally, the methods and systems described herein may be implemented on many different types of processing devices by program code comprising program instructions that are executable by the device processing subsystem. The software program instructions may include source code, object code, machine code, or any other stored data that is operable to cause a processing system to perform methods described herein. Other implementations may also be used, however, such as firmware or even appropriately designed hardware configured to carry out the methods and systems described herein.
The systems' and methods' data (e.g., associations, mappings, etc.) may be stored and implemented in one or more different types of computer-implemented ways, such as different types of storage devices and programming constructs (e.g., data stores, RAM, ROM, Flash memory, flat files, databases, programming data structures, programming variables, IF-THEN (or similar type) statement constructs, etc.). It is noted that data structures describe formats for use in organizing and storing data in databases, programs, memory, or other computer-readable media for use by a computer program.
The systems and methods may be provided on many different types of computer-readable media including computer storage mechanisms (e.g., CD-ROM, diskette, RAM, flash memory, computer's hard drive, etc.) that contain instructions for use in execution by a processor to perform the methods' operations and implement the systems described herein.
The computer components, software modules, functions, data stores and data structures described herein may be connected directly or indirectly to each other in order to allow the flow of data needed for their operations. It is also noted that a module or processor includes but is not limited to a unit of code that performs a software operation, and can be implemented for example as a subroutine unit of code, or as a software function unit of code, or as an object (as in an object-oriented paradigm), or as an applet, or in a computer script language, or as another type of computer code. The software components and/or functionality may be located on a single computer or distributed across multiple computers depending upon the situation at hand.
| Number | Name | Date | Kind |
|---|---|---|---|
| 5615109 | Eder | Mar 1997 | A |
| 5918232 | Pouschine et al. | Jun 1999 | A |
| 5953707 | Huang et al. | Sep 1999 | A |
| 6611726 | Crosswhite | Aug 2003 | B1 |
| 7080026 | Singh et al. | Jul 2006 | B2 |
| 7130822 | Their et al. | Oct 2006 | B1 |
| 7240019 | Delurgio et al. | Jul 2007 | B2 |
| 7660734 | Neal et al. | Feb 2010 | B1 |
| 7689456 | Schroeder et al. | Mar 2010 | B2 |
| 7693737 | Their et al. | Apr 2010 | B2 |
| 8010404 | Wu et al. | Aug 2011 | B1 |
| 20030200134 | Leonard et al. | Oct 2003 | A1 |
| 20050055275 | Newman et al. | Mar 2005 | A1 |
| 20050159997 | John | Jul 2005 | A1 |
| 20070094168 | Ayala et al. | Apr 2007 | A1 |
| 20070106550 | Umblijs et al. | May 2007 | A1 |
| 20070203783 | Beltramo | Aug 2007 | A1 |
| 20070208608 | Amerasinghe et al. | Sep 2007 | A1 |
| 20090018996 | Hunt et al. | Jan 2009 | A1 |
| Number | Date | Country |
|---|---|---|
| 2005124718 | Dec 2005 | WO |
| Entry |
|---|
| Burnham, Kenneth P. et al., “Multimodel Inference: Understanding AIC and BIC in Model Selection”, Sociological Methods & Research, vol. 33, No. 2, pp. 261-304 [Nov. 2004]. |
| Hibon, Michele et al., “To combine or not to combine: selecting among forecasts and their combinations”, International Journal of Forecasting, vol. 21, pp. 15-24 [2005]. |
| Kapetanios, George et al., “Forecasting Using Bayesian and Information-Theoretic Model Averaging: An Application to U.K. Inflation”, Journal of Business & Economic Statistics, vol. 26, No. 1, pp. 33-41 [Jan. 2008]. |
| McQuarrie, Allan D.R. et al., "Regression and Time Series Model Selection", World Scientific Publishing Co. Pte. Ltd. (1998). |
| Akaike, Hirotugu (1974). “A new look at the statistical model identification”. IEEE Transactions on Automatic Control 19 (6): 716-723. |
| Burnham, K. P., and Anderson, D.R. (2002). Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach, 2nd ed. Springer-Verlag. ISBN 0-387-95364-7. |
| Schwarz, Gideon E. (1978). “Estimating the dimension of a model”. Annals of Statistics 6 (2): 461-464. |
| Aiolfi, Marco et al., “Forecast Combinations,” CREATES Research Paper 2010-21, School of Economics and Management, Aarhus University, 35 pp. (May 6, 2010). |
| Costantini, Mauro et al., “Forecast Combination Based on Multiple Encompassing Tests in a Macroeconomic DSGE System,” Reihe Okonomie/ Economics Series 251, 24 pp. (May 2010). |
| Harvey, Andrew, “Forecasting with Unobserved Components Time Series Models,” Faculty of Economics, University of Cambridge, Prepared for Handbook of Economic Forecasting, pp. 1-89 (Jul. 2004). |
| SAS Institute Inc., SAS/ETS User's Guide, Version 8, Chapter 25—Specifying Forecasting Models, pp. 1279-1305 (1999). |
| Simoncelli, Eero, “Least Squares Optimization,” Center for Neural Science, and Courant Institute of Mathematical Sciences, pp. 1-8 (Mar. 9, 2005). |
| Weiss, Jack, “Lecture 16—Wednesday, Feb. 8, 2006,” http://www.unc.edu/courses/2006spring/eco1/145/001/docs/lectures/lecture16.htm, 9 pp. (Feb. 9, 2006). |
| Yu, Lean et al., “Time Series Forecasting with Multiple Candidate Models: Selecting or Combining?”, Journal of Systems Science and Complexity, vol. 18, No. 1, pp. 1-18 (Jan. 2005). |
| Number | Date | Country |
|---|---|---|
| 20090319310 A1 | Dec 2009 | US |