Computer-Implemented Forecast Accuracy Systems And Methods

Information

  • Patent Application
  • Publication Number
    20080255924
  • Date Filed
    December 04, 2007
  • Date Published
    October 16, 2008
Abstract
Computer-implemented systems and methods are provided to perform accuracy analysis with respect to forecasting models, wherein the forecasting models provide predictions based upon a pool of production data. As an example, a forecast accuracy monitoring system is provided to monitor the accuracy of the forecasting models over time based upon the pool of production data. A forecast model construction system builds and rebuilds the forecasting models based upon the pool of production data.
Description
TECHNICAL FIELD

This document relates generally to computer-implemented forecasting systems and methods, and more particularly to computer-implemented forecast accuracy systems and methods.


BACKGROUND

A typical forecasting system allows the user to explore the data, build forecasting models, and analyze forecasting accuracy. Forecast accuracy is essential and needs to be monitored on an ongoing basis, such as weekly. If the accuracy falls below a desirable level, the models should be rebuilt. For retail applications, datasets are massive, timely and efficient model building is critical, and reasonable forecast accuracy is essential to meeting business needs.


As illustrated in FIG. 1, forecasting accuracy analysis and model (re)building 34 are traditionally done on a different copy of the database (e.g., data mart 32) than the production system's database (e.g., data mart 42). Typically, the analysis is performed in the user's sandbox 30, wherein a sandbox is a testing environment that separates or isolates untested code or model (re)building operations and changes from the production environment or repository. This separation is used because the production environment 40 and its programs 44 should remain stable and functional while exploration and monitoring continue in the sandbox environment 30. However, the maintenance of two separate data marts (32 and 42) can be costly in terms of resources, and it can present logistical problems whenever the data marts (32 and 42) must be updated, maintained, etc. These problems are particularly acute for retailers, as their data marts are extremely large and model rebuilding/exploration exercises require a significant investment of both time and space.


SUMMARY

In accordance with the teachings provided herein, systems and methods for operation upon data processing devices are provided to perform accuracy analysis with respect to forecasting models, wherein the forecasting models generate predictions based upon a pool of production data. As an example, a forecast accuracy monitoring system is provided to monitor the accuracy of the forecasting models over time based upon the pool of production data. A forecast model construction system builds and rebuilds the forecasting models based upon the pool of production data.


As another example, a forecast accuracy monitoring system is provided to monitor the accuracy of the forecasting models over time based upon the pool of production data. A forecast model construction system builds and rebuilds the forecasting models based upon the pool of production data. The forecast accuracy monitoring system and the forecast model construction system are configured to operate concurrently. The forecast accuracy monitoring system is configured to provide an indication in response to forecast accuracy of one or more of the forecasting models not satisfying pre-specified forecast accuracy criteria. The forecast model construction system is configured to rebuild one or more of the forecast models in response to the provided indication.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram depicting a traditional way of handling forecasts with respect to production data.



FIG. 2 is a block diagram depicting an environment wherein users can interact with a set of forecasting-related systems (e.g., forecasting-related computer programs or software).



FIGS. 3-5 illustrate forecasting-related systems operating in different modes.



FIG. 6 depicts a hold out technique and a forecast out technique for use in forecast accuracy determinations.



FIG. 7 is a flowchart depicting forecast accuracy monitoring.



FIG. 8 is a block diagram depicting use of data re-alignment and data re-use in model rebuilding and accuracy analysis.



FIG. 9 is a flowchart depicting a hold out method for use in forecast accuracy determinations.



FIG. 10 depicts an example of forecast accuracy results.



FIG. 11 is a flowchart depicting an accuracy monitoring operational scenario.



FIG. 12 is a flowchart depicting an operational scenario involving a hold out procedure.



FIG. 13 is a block diagram depicting a single general purpose computer environment wherein a user can interact with a set of forecasting-related systems (e.g., forecasting-related computer programs or software).





DETAILED DESCRIPTION


FIG. 2 depicts a forecasting system 100 which generates forecasts for one or a number of different applications and purposes. For example, the forecasting system 100 can generate forecasts or predictions based on large amounts of data collected from retail transactional databases, such as Internet websites or point-of-sale (POS) devices. Such data may be analyzed using time series techniques to model marketing mix (e.g., prices and promotions) effects and to forecast demand for the vast array of items that the websites or stores may be selling.


The accuracy of the forecasts generated by the forecasting system 100 is monitored by system 110. If the accuracy of the forecasts generated by system 100 falls below a desirable level, system 120 creates new models or rebuilds existing ones for use by the forecasting system 100 in generating its forecasts.


Users 130 can interact with systems 100, 110, and 120 in a number of ways, such as over one or more networks 140. Server(s) 150 accessible through the network(s) 140 can host the systems 100, 110, and 120. One or more data stores 160 can store the data to be analyzed by the systems 100, 110, and 120, as well as any intermediate or final data generated by those systems.


The systems 100, 110, and 120 can each be an integrated web-based reporting and analysis tool that provides users with flexibility and functionality for analyzing forecasts and their models. It should be understood that the systems 100, 110, and 120 could also be provided on a stand-alone computer for access by a user.



FIG. 3 illustrates that both the forecast accuracy monitoring system 110 and the model construction system 120 can access and utilize the production data mart 200 (i.e., the data that exists in the production environment). The user can operate within this environment in one of two modes 210. The first mode is forecast accuracy monitoring, as performed by system 110, and the other mode is model (re)building/exploration, as performed by system 120. The two modes can be configured to operate concurrently. In other words, each of systems 110 and 120 can access the production data and perform its respective operations without interfering with the other while accessing and processing the production data 200.



FIG. 4 illustrates that the forecast accuracy monitoring system 110 can be configured to use model degradation criteria 300 (e.g., degradation threshold(s)) in order to determine whether the forecast accuracy has significantly declined. When the model degradation threshold has been reached, the forecast accuracy monitoring system 110 provides an indication that the forecast model construction system 120 needs to build or rebuild models that will provide improved predictive capability. It is noted that, with respect to the forecast accuracy monitoring system 110, the production models are assumed to already be built, and the forecasting is done with those models and information about future data (e.g., future pricing information).
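
The following is a minimal sketch (in Python, which this disclosure does not prescribe) of how such degradation criteria might be applied; the threshold value and the callback wiring are illustrative assumptions rather than details taken from the figures:

```python
# Hypothetical sketch: the threshold value and callback wiring are
# assumptions for illustration; the disclosure requires only that an
# indication be provided when the degradation threshold is reached.
from typing import Callable

def check_degradation(current_mape: float,
                      degradation_threshold: float,
                      request_rebuild: Callable[[], None]) -> None:
    """Provide an indication to the model construction system when the
    model degradation threshold has been reached (FIG. 4)."""
    if current_mape > degradation_threshold:
        request_rebuild()

if __name__ == "__main__":
    check_degradation(current_mape=0.27, degradation_threshold=0.20,
                      request_rebuild=lambda: print("rebuild models"))
```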



FIG. 5 illustrates that in both modes, the production data 200 is used either by temporarily re-aligning the data 400 where necessary and/or by (re)using the data 410 “as is” in the production environment. For example, the systems 110 and 120 can measure and monitor forecast accuracy by re-aligning and using the data as follows:

    • Hold out method: As shown in FIG. 6 at 500, given some history of data (e.g., two years or more), the most recent history of the data is held out (e.g., hold out period 510); that is, the models are built upon the history before the hold out period, as indicated at 520, and the forecast accuracy is calculated against the hold out data.
    • Forecast out method: The whole history 560 is used to train the models, and some period(s) in the future (i.e., a forecast out period 570) are forecast and measured for forecast accuracy. (A minimal sketch of both data arrangements follows this list.)
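
As a minimal sketch of the two data arrangements, assuming weekly observations held in a simple list (the function names and the four-week hold out length are illustrative, not prescribed by this disclosure):

```python
# Sketch of the two accuracy arrangements. The demo series and the
# four-week hold out length are assumptions for illustration.
from typing import List, Tuple

def hold_out_split(history: List[float],
                   hold_out_weeks: int) -> Tuple[List[float], List[float]]:
    """Hold out method: train on the history before the hold out period;
    the held-out tail is treated as the 'future' for accuracy scoring."""
    return history[:-hold_out_weeks], history[-hold_out_weeks:]

def forecast_out_data(history: List[float]) -> List[float]:
    """Forecast out method: the whole history trains the models; accuracy
    is measured later, once actuals for the forecast out period arrive."""
    return history  # nothing is withheld

if __name__ == "__main__":
    weekly_sales = [100.0 + w for w in range(104)]  # two years of weeks
    train, held_out = hold_out_split(weekly_sales, hold_out_weeks=4)
    print(len(train), len(held_out))  # 100 4
```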



FIG. 7 depicts an example operational scenario of accuracy monitoring and its use of the forecast out method for accuracy determinations. The operational scenario also illustrates how certain complications can be addressed when determining whether the models have degraded sufficiently to warrant rebuilding. Such complications include the fact that retailers are typically interested in the performance of forecasts over a fixed period of time (e.g., the performance of weekly forecasts over a one-month period into the future). Yet models are often updated, at least partially, week by week, with the concomitant effect that the forecasts are modified on the same time basis. Further, the planned future marketing mixes used to make the forecasts often differ from the executed marketing mixes for various reasons.


With reference to FIG. 7, the operational scenario begins with decision process 600 determining whether enough periods in the data have been archived. If a sufficient number of periods have not been archived, then process 602 archives the necessary information, such as the baseline forecasts. As an illustration, each time the models are updated and new forecasts are made, the system archives a user-specified window of forecasts out into the future. For the example given above, this would be four weeks of future forecasts each week the model is updated. In other words, the base forecasts are archived without the pricing effect, because the planned future marketing mixes may not match the marketing mixes actually executed in retail practice. This archiving operation relies on the separation between the base forecasts and the marketing mix effect of the models. (When the actual marketing mixes and sales become available, the sales forecasts with the marketing mix effect are calculated for the accuracy analysis.)
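
A minimal sketch of this weekly archiving step follows; the dictionary-based archive and the four-week window are assumptions for illustration, not the disclosure's storage design:

```python
# Sketch of process 602: each time the models are updated, archive a
# user-specified window of base forecasts (marketing mix effect excluded).
# The dict keyed by update week is an assumed stand-in for the archive.
from typing import Dict, List

def archive_base_forecasts(archive: Dict[int, List[float]],
                           update_week: int,
                           base_forecasts: List[float],
                           window: int = 4) -> None:
    """Store 'window' weeks of base forecasts made at 'update_week'."""
    archive[update_week] = base_forecasts[:window]

if __name__ == "__main__":
    archive: Dict[int, List[float]] = {}
    archive_base_forecasts(archive, update_week=200801,
                           base_forecasts=[120.0, 118.5, 121.0, 119.0, 117.0])
    print(archive)  # {200801: [120.0, 118.5, 121.0, 119.0]}
```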


If a sufficient number of periods have been archived, as determined by decision process 600, then process 604 extracts the period for analysis from the archives. Process 606 extracts the actual pricing information from the data mart, and process 608 re-forecasts with the actual pricing information. Based upon the new forecasts, the accuracy of the model is calculated by process 610. Decision process 612 examines whether the accuracy is satisfactory (e.g., whether model degradation thresholds have been met). If the accuracy is satisfactory, then model rebuilding is not performed at this time, and processing returns to decision process 600. However, if the accuracy is not satisfactory, as determined by decision process 612, then process 614 rebuilds the model using information that includes the actual pricing information.
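
A compact sketch of this flow follows, under the assumption (not stated in the disclosure) that the marketing mix effect enters multiplicatively; all names and the error threshold are illustrative:

```python
# Sketch of processes 604-614 of FIG. 7. The multiplicative price effect
# and the 20% error threshold are assumptions for illustration.
from typing import Dict, List

def monitor_period(archive: Dict[int, List[float]],
                   period: int,
                   actual_price_effects: List[float],
                   actual_sales: List[float],
                   max_error: float = 0.20) -> bool:
    """Return True when the model should be rebuilt for this period."""
    base = archive[period]                                       # process 604
    final = [b * e for b, e in zip(base, actual_price_effects)]  # 606/608
    error = sum(abs(a - f) / a                                   # process 610
                for a, f in zip(actual_sales, final)) / len(final)
    return error > max_error                                     # decision 612

if __name__ == "__main__":
    archive = {200801: [100.0, 105.0, 98.0, 110.0]}  # archived base forecasts
    print("rebuild needed:",
          monitor_period(archive, 200801,
                         actual_price_effects=[1.10, 0.95, 1.00, 1.20],
                         actual_sales=[112.0, 101.0, 97.0, 130.0]))
```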


In summary, rather than archiving final sales forecasts, the system archives what is referred to as a base forecast. This base forecast includes the effect of all factors affecting the final sales forecast except the marketing mix effects. A number of weeks (a moving window size) can be specified to measure the average accuracy or the total accuracy within the moving window. When actual marketing mixes become available for the forecasted weeks, a final sales forecast is derived using these implemented marketing mixes, and this final forecast is compared to actual sales. The forecast errors over the user-specified window length are then monitored using various statistical measures. It is the performance of these forecasts that is typically monitored, as it provides an indication of the health of the forecasting system. As an illustration, when the actual sales for the next week are available, the forecasts are compared to the actual sales to obtain the accuracy.
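
A minimal sketch of moving-window accuracy tracking follows, assuming an absolute-percentage-error measure and a four-week window (both illustrative choices; the disclosure leaves the window length and statistical measures to the user):

```python
# Sketch of moving-window accuracy monitoring. The error metric and the
# four-week window are assumptions for illustration.
from collections import deque
from typing import Deque

class MovingWindowAccuracy:
    def __init__(self, window_weeks: int = 4) -> None:
        # Keep only the most recent 'window_weeks' weekly errors.
        self.errors: Deque[float] = deque(maxlen=window_weeks)

    def add_week(self, actual_sales: float, final_forecast: float) -> None:
        """Record one week's absolute percentage error."""
        self.errors.append(abs(actual_sales - final_forecast) / actual_sales)

    def average_error(self) -> float:
        """Average accuracy over the moving window."""
        return sum(self.errors) / len(self.errors)

if __name__ == "__main__":
    monitor = MovingWindowAccuracy(window_weeks=4)
    for actual, forecast in [(100, 95), (102, 108), (99, 97), (105, 120)]:
        monitor.add_week(actual, forecast)
    print(f"4-week average error: {monitor.average_error():.3f}")
```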



FIG. 8 illustrates a model rebuilding/building mode and use of the hold out method described above. In the model rebuilding/building mode, a user explores different kinds of model specifications and selects the best model for forecasting. To avoid possible over-fitting, the hold out method described above is used; that is, some portion of the historical data (e.g., the most recent history) is held out while the remaining data is used for model building, and the accuracy of the model is calculated against the held out data, which is not used for model training. To better integrate with the accuracy mode, or for use without the accuracy mode, data re-alignment and/or data re-use processing (710 and 720) is used.


With respect to data re-alignment processing 710, while accuracy monitoring requires the actual week for the future forecasting, a hold out method as shown in FIG. 9 holds out the most recent history (to be regarded as the future) for testing the accuracy of a model being constructed. First, decision process 800 examines whether model data exists in order to build a model. If model data does not exist, then process 802 builds the model data based upon the production data before processing continues at process 806. However, if model data does exist, as determined by decision process 800, then process 804 extracts the model data from storage.


Process 806 then performs a re-alignment of the data in order to create a hold out period for later use in determining the accuracy of a constructed model. Decision process 808 examines whether a model has already been built by the user. If a model has not already been built, then process 810 facilitates the construction of a model. However, if a model has already been built, as determined by decision process 808, then process 812 uses the model in order to forecast the hold out period. Process 814 constrains the forecast by the available on-hand inventory for the hold out period in order to keep the sales forecasts in line with actual sales, which could be constrained by inventory availability. The accuracy of the forecast with respect to the hold out period is then used to calculate the model forecast accuracy at process 816.
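
As a minimal sketch of the inventory constraint at process 814 (the element-wise capping is an assumption about how the constraint is applied; the disclosure states only that forecasts are constrained by on-hand inventory):

```python
# Sketch of process 814: cap each week's sales forecast by the on-hand
# inventory, so forecasts stay in line with sales the inventory could support.
from typing import List

def constrain_by_inventory(forecasts: List[float],
                           inventory_on_hand: List[float]) -> List[float]:
    return [min(f, inv) for f, inv in zip(forecasts, inventory_on_hand)]

if __name__ == "__main__":
    print(constrain_by_inventory([120.0, 80.0, 95.0], [100.0, 100.0, 60.0]))
    # [100.0, 80.0, 60.0]
```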


As illustrated in the operational scenario of FIG. 9, the model building process is structured so as not to interfere with accuracy monitoring. For example, the time-sensitive data is re-aligned at process 806 so that the data can be used by process 810 for model building as well as for accuracy monitoring. A separate directory can be created and reserved for the data that is re-aligned. With this data re-alignment, the hold out method first looks up the data in the re-alignment directory. If the data exists in the re-alignment directory, it uses the data from the directory; otherwise, it uses the original data mart for accuracy monitoring.


As an illustration, suppose a user wants to try different model specifications and compare these models with existing models; the user wants to see how the proposed models will forecast in the future. Because the user cannot wait for the future to occur to assess the accuracy of these models, the sample hold out method is used. In this example, the “current” time is assumed to be sometime in the past, such as one month ago. That is, the most recent weeks of data (in this example, four weeks) are excluded from the model building system, the model is estimated, and then it is forecast into the “future.” Because the “future” has already happened, the user can compare forecasted data (e.g., forecasted sales) to actual data (e.g., actual sales) in order to assess the forecasting accuracy.


When forecasting into the “future,” the user can also use the actual marketing mixes implemented by the retailer, as these have already been observed. This maneuvering of time involves a re-alignment of the data, a re-alignment which ultimately leaves the original production data mart in its production state. To ensure the original production data mart is preserved, a separate directory can be created and reserved for the data that needs to be re-aligned. Only the data that is related to time-based partitions needs to be re-aligned. For example, data is re-aligned so that the entries in the hold out period will be regarded as the future for the model. With this data re-alignment, the hold out method first looks up the data in the re-alignment directory. If the data did not need to be re-aligned, and thus is not in the re-alignment directory, it is read directly from the original data mart.
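
The lookup order described above might be sketched as follows; the path layout and raw-bytes read are assumptions for illustration, since the disclosure specifies only the precedence of the re-alignment directory over the original data mart:

```python
# Sketch of the two-level lookup: prefer a re-aligned copy when one exists,
# otherwise read the untouched partition from the production data mart.
from pathlib import Path

def read_partition(name: str,
                   realign_dir: Path,
                   datamart_dir: Path) -> bytes:
    realigned = realign_dir / name
    if realigned.exists():            # re-aligned copy takes precedence
        return realigned.read_bytes()
    return (datamart_dir / name).read_bytes()  # original production data
```

Because only time-partitioned data is copied into the re-alignment directory, everything else continues to be served from the production data mart unchanged.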


With respect to the data re-use operations 720 of FIG. 8, data re-alignment 710 can be configured to re-use as much data as possible from the original data mart; that is, all data that does not need re-alignment can stay in the original data mart and is used directly from it. The system also tries to re-use model data where possible. Model data contains all the necessary information for building models; it is pre-processed by merging information from different sources. The system allows the user to specify where the model data is stored. If the requested model data already exists, the system uses the model data without rebuilding it. Such model data can be re-used for building different models and for forecasting different hold out periods.


The system can introduce another separate directory which is reserved for storing model data. If the requested model data already exists, the system uses the portion of the already constructed data that it needs, thereby saving significant time in the assessment process. The portion of the model data that is used by the system depends on the hold out period specified and the particular model specification.
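
A minimal sketch of such model-data re-use follows, assuming a pickle-based store keyed by a model-data identifier (both assumptions; the disclosure says only that a user-specified location holds pre-processed model data that is reused when present):

```python
# Sketch of the model-data directory: reuse pre-processed model data when
# it already exists; otherwise build it (an expensive merge of sources)
# and store it for later runs.
import pickle
from pathlib import Path
from typing import Any, Callable

def get_model_data(store_dir: Path, key: str,
                   build: Callable[[], Any]) -> Any:
    """Return cached model data when present; otherwise build and store it."""
    path = store_dir / f"{key}.pkl"
    if path.exists():
        return pickle.loads(path.read_bytes())  # re-use without rebuilding
    data = build()
    path.write_bytes(pickle.dumps(data))
    return data
```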



FIG. 10 depicts at 900 an example of forecast accuracy results. The output of FIG. 10 shows forecasts of a product (column PROD_HIER_SK) at different stores (column GEO_HIER_SK). The forecasted values (column FORECAST) are compared with the actual values (column ACTUAL), and their accuracy is analyzed with the MAPE (mean absolute percentage error) and RMSE (root mean square error) statistical measures (e.g., columns MAPE and RMSE). It is noted that a system and method can be configured as disclosed herein to be scalable, so that such output results can be generated for massive retail data with data re-alignment and data re-use.
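
The two statistical measures can be sketched as follows (the plain-list representation of the FORECAST and ACTUAL columns is an assumption for illustration):

```python
# Sketch of the accuracy statistics shown in FIG. 10 for one
# product/store (PROD_HIER_SK/GEO_HIER_SK) combination.
import math
from typing import List

def mape(actual: List[float], forecast: List[float]) -> float:
    """Mean absolute percentage error."""
    return sum(abs(a - f) / abs(a)
               for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual: List[float], forecast: List[float]) -> float:
    """Root mean square error."""
    return math.sqrt(sum((a - f) ** 2
                         for a, f in zip(actual, forecast)) / len(actual))

if __name__ == "__main__":
    actual = [120.0, 98.0, 143.0, 110.0]     # ACTUAL column values
    forecast = [115.0, 102.0, 150.0, 108.0]  # FORECAST column values
    print(f"MAPE={mape(actual, forecast):.4f}  "
          f"RMSE={rmse(actual, forecast):.2f}")
```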


While examples have been used to disclose the invention, including the best mode, and also to enable any person skilled in the art to make and use the invention, the patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. Accordingly, the examples disclosed herein are to be considered non-limiting. As an illustration, the accuracy monitoring process and the hold out process can be configured in many different ways, such as shown respectively in FIGS. 11 and 12.



FIG. 11 depicts an operational scenario for accuracy monitoring. In this operational scenario, an extract, transformation, and load (ETL) data operation 1002 is performed upon a weekly batch job 1000. The weekly batch job 1000 contains data to be loaded into the production data mart 1004. Model calibration and forecasting 1006 are performed with the data that is contained within the production data mart 1004. The accuracy monitoring process 1008 analyzes the results of the model forecasting generated by process 1006. The estimated demand components are archived by process 1010 in the production data mart 1004 for future use, such as for future accuracy analysis.


A re-forecast 1012 is done based on the observed input data (e.g., previously archived forecasts 1018, incremental data 1020, etc.). Forecast accuracy analysis 1014 is then performed upon these forecasts. If the forecasts are not biased, as determined by decision process 1016 through the forecast accuracy analysis, then processing resumes with a new batch job at 1000. However, if bias is detected in the forecasts, then decision process 1022 determines whether manual intervention is required. If manual intervention is not required, then the forecast override process 1026 is performed, and its results are stored in the production data mart 1004. However, if manual intervention is required, then a user intervenes at 1024 before the forecast override process is performed, so that the user can personally inspect the accuracy results.
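
The bias decision and override branch might be sketched as follows; the mean-error bias test and the tolerance value are assumptions, as FIG. 11 does not specify how bias is detected:

```python
# Sketch of decision 1016 and the branch at 1022-1026. The bias test
# (relative mean error vs. a tolerance) is an assumption for illustration.
from typing import List

def forecasts_biased(actual: List[float], forecast: List[float],
                     tolerance: float = 0.05) -> bool:
    """Decision 1016: flag a systematic over- or under-forecast."""
    mean_error = sum(f - a for a, f in zip(actual, forecast)) / len(actual)
    mean_actual = sum(actual) / len(actual)
    return abs(mean_error) / mean_actual > tolerance

def handle_bias(actual: List[float], forecast: List[float],
                manual_intervention: bool) -> None:
    if not forecasts_biased(actual, forecast):
        return                                    # resume next batch (1000)
    if manual_intervention:
        print("user inspects accuracy results (1024)")
    print("apply forecast override (1026)")

if __name__ == "__main__":
    handle_bias(actual=[100.0, 102.0, 99.0],
                forecast=[110.0, 113.0, 108.0],
                manual_intervention=True)
```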



FIG. 12 depicts an operational scenario for a hold out process. In this operational scenario, a model group (MG) subset 1102 is extracted from the production data mart 1128 so that a base set of data can be built by process 1104 for model group analysis 1106.


Through use of this data, model configuration scenario analysis 1112 can be performed in order to explore and select the best model specification(s). After the model scenario configurations are set or modified at process 1114, the hold out method 1116 is performed using the extracted model group data subset from process 1102 and the base data from process 1104. The model fit results are analyzed by process 1118, the forecast accuracy results are analyzed by process 1120, and these results are used as the basis for selecting, at 1122, the best performing model for forecasting.
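
The selection step might be sketched as follows; the configuration dictionaries and scoring callback are assumptions, as the disclosure does not prescribe how model specifications are represented:

```python
# Sketch of processes 1116-1122: score each candidate configuration with
# the hold out method and select the best performer (lowest error wins).
from typing import Callable, Dict, List

def select_best_model(configs: List[Dict],
                      hold_out_score: Callable[[Dict], float]) -> Dict:
    return min(configs, key=hold_out_score)

if __name__ == "__main__":
    configs = [{"name": "baseline"}, {"name": "with_promo_terms"}]
    scores = {"baseline": 0.22, "with_promo_terms": 0.17}  # assumed results
    best = select_best_model(configs, lambda c: scores[c["name"]])
    print("selected:", best["name"])
```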


As another example of the wide scope of the systems and methods disclosed herein, a system and method can be configured as disclosed herein to satisfy the following requirements: (1) accuracy monitoring should not be interfered with by model rebuilding, even though model rebuilding may require changing the data mart; and (2) the forecast accuracy calculation for model rebuilding should be performed in a manner consistent with the accuracy monitoring.


It is further noted that the systems and methods may be implemented on various types of computer architectures, such as for example on a single general purpose computer or workstation (as depicted at 1200 in FIG. 13), or on a networked system, or in a client-server configuration, or in an application service provider configuration.


It is further noted that the systems and methods may include data signals conveyed via networks (e.g., local area network, wide area network, internet, combinations thereof, etc.), fiber optic medium, carrier waves, wireless networks, etc. for communication with one or more data processing devices. The data signals can carry any or all of the data disclosed herein that is provided to or from a device.


Additionally, the methods and systems described herein may be implemented on many different types of processing devices by program code comprising program instructions that are executable by the device processing subsystem. The software program instructions may include source code, object code, machine code, or any other stored data that is operable to cause a processing system to perform methods described herein. Other implementations may also be used, however, such as firmware or even appropriately designed hardware configured to carry out the methods and systems described herein.


The systems' and methods' data (e.g., associations, mappings, etc.) may be stored and implemented in one or more different types of computer-implemented ways, such as different types of storage devices and programming constructs (e.g., data stores, RAM, ROM, Flash memory, flat files, databases, programming data structures, programming variables, IF-THEN (or similar type) statement constructs, etc.). It is noted that data structures describe formats for use in organizing and storing data in databases, programs, memory, or other computer-readable media for use by a computer program.


The systems and methods may be provided on many different types of computer-readable media including computer storage mechanisms (e.g., CD-ROM, diskette, RAM, flash memory, computer's hard drive, etc.) that contain instructions (e.g., software) for use in execution by a processor to perform the methods' operations and implement the systems described herein.


The computer components, software modules, functions, data stores and data structures described herein may be connected directly or indirectly to each other in order to allow the flow of data needed for their operations. It is also noted that a module or processor includes but is not limited to a unit of code that performs a software operation, and can be implemented for example as a subroutine unit of code, or as a software function unit of code, or as an object (as in an object-oriented paradigm), or as an applet, or in a computer script language, or as another type of computer code. The software components and/or functionality may be located on a single computer or distributed across multiple computers depending upon the situation at hand.


It should be understood that, as used in the description herein and throughout the claims that follow, the meaning of “a,” “an,” and “the” includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise. Finally, as used in the description herein and throughout the claims that follow, the meanings of “and” and “or” include both the conjunctive and disjunctive and may be used interchangeably unless the context expressly dictates otherwise; the phrase “exclusive or” may be used to indicate situations where only the disjunctive meaning may apply.

Claims
  • 1. A computer-implemented system for performing accuracy analysis with respect to forecasting models, wherein the forecasting models provide predictions based upon a pool of production data, comprising: a forecast accuracy monitoring system for monitoring the accuracy of the forecasting models over time based upon the pool of production data; a forecast model construction system for building and rebuilding the forecasting models based upon the pool of production data; wherein the forecast accuracy monitoring system and the forecast model construction system are configured to operate concurrently; wherein the forecast accuracy monitoring system is configured to provide an indication in response to forecast accuracy of one or more of the forecasting models not satisfying pre-specified forecast accuracy criteria; wherein the forecast model construction system is configured to rebuild one or more of the forecast models in response to the provided indication.
  • 2. The system of claim 1, wherein the pool of production data includes retail data from multiple stores.
  • 3. The system of claim 2, wherein the forecast accuracy monitoring system is configured to monitor the pool of production data on a weekly basis.
  • 4. The system of claim 3, wherein the forecast models provide marketing mix effect estimates and demand predictions for retail products based upon the pool of production data.
  • 5. The system of claim 1, wherein the pool of production data is a single data mart which is used by both the forecast accuracy monitoring system and the forecast model construction system; wherein the forecast accuracy monitoring system uses the data mart for monitoring the accuracy of the forecasting models; wherein the forecast model construction system uses the data mart for building and rebuilding the forecasting models.
  • 6. The system of claim 5, wherein the use of the same single data mart by both the forecast accuracy monitoring system and the forecast model construction system obviates requirement of maintenance of separate data marts for use by the forecast accuracy monitoring system and the forecast model construction system; wherein the separate data marts include a testing data mart for use by the forecast accuracy monitoring system and include a production data mart for use by the forecast model construction system.
  • 7. The system of claim 1 further comprising: archiving software instructions configured to archive a base forecast from the pool of production data; wherein the archived base forecast does not include marketing mix effects; wherein the forecast accuracy monitoring system is configured to determine accuracy of one or more of the forecast models based upon the archived base forecast and based upon actual marketing mixes and sales information.
  • 8. The system of claim 7, wherein the accuracy determination is based upon sales forecasts that are calculated by the one or more forecast models with the marketing mix effects.
  • 9. The system of claim 7, wherein the base forecasts include factors affecting the final sales forecast except marketing mix effects.
  • 10. The system of claim 1, wherein to avoid model over-fitting, a data hold out method is used upon a portion of production data while the remaining data is used for model building by the forecast model construction system.
  • 11. The system of claim 10, wherein accuracy of the model building is calculated against the held out data which was not used for model building.
  • 12. The system of claim 10, wherein, in order to avoid interference with accuracy monitoring by the forecast accuracy monitoring system, a portion of the production data is re-aligned, so that the data can be used for model building by the forecast model construction system as well as for accuracy monitoring by the forecast accuracy monitoring system.
  • 13. The system of claim 12, wherein the re-aligned data is stored in a separate data storage location than the production data.
  • 14. The system of claim 13, wherein the storage of the re-aligned data in the separate data storage location ensures that the original production data is preserved.
  • 15. The system of claim 14, wherein the forecast accuracy monitoring system directly accesses and uses the production data and the re-aligned data that is stored in the separate data storage location in order to monitor the accuracy of the forecasting models.
  • 16. The system of claim 15, wherein the forecast model construction system stores model data in a separate storage location in order to use the stored model data when constructing models at a subsequent time.
  • 17. The system of claim 1, wherein statistical measures are used to assess the accuracy of the forecasting models.
  • 18. The system of claim 17, wherein the statistical measures include MAPE (mean absolute percentage error) and RMSE (root mean square error) statistical measures.
  • 19. A computer-implemented method for performing accuracy analysis with respect to forecasting models, wherein the forecasting models provide predictions based upon a pool of production data, said method comprising: monitoring the accuracy of the forecasting models over time based upon the pool of production data; building and rebuilding the forecasting models based upon the pool of production data; said monitoring step and said building and rebuilding step operating concurrently with respect to each other; providing an indication in response to forecast accuracy of one or more of the forecasting models not satisfying pre-specified forecast accuracy criteria; and rebuilding one or more of the forecast models in response to the provided indication.
  • 20. A computer-readable storage medium encoded with instructions that cause a computer to perform a method for performing accuracy analysis with respect to forecasting models, wherein the forecasting models provide predictions based upon a pool of production data, said method comprising: monitoring the accuracy of the forecasting models over time based upon the pool of production data; building and rebuilding the forecasting models based upon the pool of production data; said monitoring step and said building and rebuilding step operating concurrently with respect to each other; providing an indication in response to forecast accuracy of one or more of the forecasting models not satisfying pre-specified forecast accuracy criteria; and rebuilding one or more of the forecast models in response to the provided indication.
CROSS-REFERENCE TO RELATED APPLICATION

This application claims priority to and the benefit of U.S. Application Ser. No. 60/911,720 (entitled “Computer-Implemented Forecast Accuracy Systems And Methods” and filed on Apr. 13, 2007), the entire disclosure of which (including any and all figures) is incorporated herein by reference.

Provisional Applications (1)
  • Number: 60/911,720
  • Date: Apr 2007
  • Country: US