The present invention relates to systems and methods for pricing markdown planning. In particular, the present invention includes generating an optimized markdown plan, and tuning the markdown plan, including demand model refresh and re-optimization, as needed.
In business and other areas, large quantities of information need to be recorded, processed, and mathematically manipulated to make various determinations. From these determinations, decisions may be made. These decisions may heavily influence the ultimate success of the business.
Likewise, the discount schedule, or markdown plan, is important for the management of inventory and to ensure competitiveness in the market.
For example, in businesses, prices of various products must be set. Such prices may be set with the goal of maximizing margin or demand or for a variety of other objectives. Margin is the difference between total revenue and costs. Total sales revenue is a function of demand and price, where demand is a function of price. Demand may also depend on the day of the week, the time of the year, the price of related products, location of a store, the location of the products within the store, advertising and other promotional activity both current and historical, and various other factors. As a result, the function for forecasting demand may be very complex. Costs may be fixed or variable and may be dependent on sales volume, which in turn depends on demand. As a result, the function for forecasting margin may be very complex. For a chain of stores with tens of thousands of different products, identifying the relevant factors for each product and store, and then determining a function representing that demand, are difficult tasks.
The enormous amount of data that must be processed for such determinations is too cumbersome, even when done by computer. Further, the methodologies used to forecast demand and the factors that contribute to it require the utilization of non-obvious, highly sophisticated statistical processes.
Such processes are described in U.S. patent application Ser. No. 09/742,472, entitled IMPUTED VARIABLE GENERATOR, filed Dec. 20, 2000 by Valentine et al., and U.S. patent application Ser. No. 09/741,958, entitled PRICE OPTIMIZATION SYSTEM, filed Dec. 20, 2000 by Neal et al., which both are incorporated by reference for all purposes.
These aforementioned methodologies for forecasting demand may be utilized to generate a markdown plan which optimizes some goal. Currently, such markdown plan generation systems are inefficient and are not capable of dealing with changes in goals, constraints, or unexpected factors that occur post-optimization.
Therefore, it is desirable to provide an efficient process and methodology for determining the pricing markdown of individual products, through a markdown plan, such that the markdown plan is optimized, and wherein the markdown plan is capable of being dynamically re-optimized on demand. Moreover, the markdown plans may be executed such that they are easily and automatically updated to reflect current business conditions.
To achieve the foregoing and other objects and in accordance with the purpose of the present invention, a system and method for tuning markdown plans is provided. Such a system and method may be useful in association with a price optimization system.
The system and method for markdown plan tuning may include configuring an initial rule set. This configuration may be done by a user or may be automatically implemented. Default rules may also be provided in some embodiments.
The initial rule set may include at least one of enforced markdowns, objective, start date, markdown tolerance, point-of-sales handling rules, cost rules, salvage rules, continuous markdown and item selection. The objective rules may include selecting at least one of a volume objective, a profit objective and an inverse weighing objective. Additionally, in some embodiments, a combination of the volume objective, the profit objective and the inverse weighing objective may be utilized. The inverse weighing objective may include applying a weighing coefficient to the maximization of sales. This weighing coefficient may be a function of time, or other suitable measure.
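As an illustration only (the specification does not give the weighting function), an inverse weighing objective might blend a profit objective and a volume objective using a coefficient that grows as a function of time, so that clearing inventory dominates near the out date. The function name and the linear schedule below are assumptions.

```python
def inverse_weighted_objective(profit, sales, weeks_remaining, total_weeks):
    """Blend a profit objective with a sales-volume objective, with the
    weight on sales maximization growing as the out date approaches.

    Illustrative sketch only; the actual weighting coefficient may be any
    suitable function of time or other measure.
    """
    # Early in the plan (many weeks remaining) the profit objective
    # dominates; at the out date the sales-volume objective dominates.
    w = 1.0 - (weeks_remaining / total_weeks)
    return (1.0 - w) * profit + w * sales
```

For example, with ten weeks total, the objective starts as pure profit and shifts linearly toward pure sales volume by the final week.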
In some embodiments, a first optimization for inventory pricing may be received from the price optimization system. This optimization may utilize demand models generated by an econometric engine, and cost data from a financial engine. Occasionally, the optimization may include failures. In these situations the failure(s) may result in a partial solve for the optimization.
A plan may then be generated by applying the initial rule set to the first optimization. The plan may then be implemented.
In some embodiments, updated data may be received which mandates a re-optimization of the plan. The updated data may include at least one of point-of-sales data, user implemented rule changes, out date changes, cost data changes, and inventory changes. Demand models may be refreshed using the updated data. Additionally, rules may be updated.
Updating the initial rule set includes looking up plan history, cross-referencing the plan history with the initial rule set, and updating rule parameters. Cross-referencing the plan history with the initial rule set includes identifying rule events that have occurred in the plan history. The rule parameters are updated by subtracting rule events that have occurred in the plan history from the initial rule set.
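A minimal sketch of this update, assuming a dictionary of rule parameters and a list of recorded plan-history events (both layouts, and the event type names, are hypothetical):

```python
def update_rule_set(initial_rules, plan_history):
    """Cross-reference plan history with the initial rule set and subtract
    rule events that have already occurred.

    initial_rules: dict mapping a rule parameter to its remaining
                   allowance, e.g. {"max_markdowns": 3}.
    plan_history:  list of event dicts recorded as the plan executed.
    """
    updated = dict(initial_rules)
    # Identify rule events that have occurred in the plan history ...
    markdowns_taken = sum(1 for e in plan_history if e["type"] == "markdown")
    # ... and subtract them from the corresponding rule parameter, so the
    # re-optimization only plans the markdowns still available.
    updated["max_markdowns"] = max(0, updated["max_markdowns"] - markdowns_taken)
    return updated
```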
Then, a second optimization of inventory prices may be received from the optimization system. The second optimization may be generated from the refreshed demand models and cost data.
Then the markdown plan may be re-optimized by applying the updated rule set to the second optimization. The re-optimized plan may be reported. The reporting may include generating at least one of an overall report which highlights all markdown plan changes, a cross category report which summarizes specific schedule changes to the markdown plan, an exception based report and a financial forecast for the re-optimized markdown plan.
The re-optimized markdown plan then requires approval from a user. Lastly, the re-optimized markdown plan may be implemented.
These and other features of the present invention will be described in more detail below in the detailed description of the invention and in conjunction with the following figures.
The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:
The present invention will now be described in detail with reference to several embodiments thereof as illustrated in the accompanying drawings. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without some or all of these specific details. In other instances, well-known process steps and/or structures have not been described in detail in order to not unnecessarily obscure the present invention. The features and advantages of the present invention may be better understood with reference to the drawings and discussions that follow.
The present invention relates generally to systems and methods for pricing markdown. In particular, the present invention includes generating an optimized pricing markdown plan, and tuning the markdown plan, including re-optimization, as needed. The markdown process consists of an initial optimization of a markdown plan, followed by continuous monitoring of that plan versus the originally expected result. Since markdown plans are often for products with volatile short lifecycles, it is important to continually adjust and update the markdown plan to ensure the best possible result.
These adjustments to the plan may include refreshing the demand models to ensure they account for the most current understanding of consumer demand and any changes to the target out date (which impacts the lifecycle curve); re-running the markdown optimizations (reoptimization) to reflect current inventory positions, while incorporating information about actual markdown occasions taken to date; and providing a summary of the changes to the markdown plan. This summary may include an overall reforecast of the plan, as well as exception reporting that highlights specific changes to plans.
Since markdowns typically occur in a very short time window, the speed with which these processes are turned around is critical, which implies the need for an ability to kick off an automated batch process.
In price markdown planning, it is desirable to use data to create optimization plans. For example, in the retail industry, it is desirable to use sales data to optimize margin (profit) by setting optimized prices or by optimizing promotions. For retail chains that carry a large variety of items, the optimizations may be performed fewer than three times a year because of the slowness of processing the large quantities of data and the complexity of the processing involved. As a result, changes in the market or a flaw in an optimization may not be noticed for several months, or may never be noticed.
The present invention is enabled to process large amounts of data, perform complex operations in a short time period, and provide frequently updated data analysis. Additionally, the invention has the unique ability to undergo re-optimization as needed due to changes in rules, goals, or empirical data. Thus, if a six-month sales markdown plan is created and implemented, within the first few weeks of the markdown plan, an updated analysis may be made to determine if the markdown plan is incorrect or if conditions of the market have changed, and then generate an updated (tuned) markdown plan, if needed. The invention may provide a flag or some other indicator to suggest whether tuning is desirable, provide updated information to a user, and allow the user to revise and implement an updated markdown plan.
Moreover, a data transformation and synthesis platform is provided, which allows a scalable and parallel system for processing large amounts of data.
I. Optimization System
To facilitate understanding, an embodiment of the invention will be provided as part of a price optimization system. The purpose of the price optimization system is to receive raw data that relates to a specified econometric problem and to produce coefficients for econometric modeling variables that represent the significant factors affecting the behaviors represented by the data. In one example, the price optimization system produces coefficients that represent the driving factors for consumer demand, synthesized from sales volume and other retail-business related data inputs.
II. Business Planning System
A plan is then generated (step 244). In order to generate a plan, the planner 117 provides optimization rules to the support tool 116. The optimization engine 112 may use the demand equation, the variable and fixed costs, and the rules to compute an optimal set of prices that meet the rules. The planner 117 may be able to provide different sets of rules to create different scenarios to determine different “What if” outcomes. From the various scenarios and outcomes, the planner is able to create a plan.
For example, if a rule specifies the maximization of profit, the optimization engine would find a set of prices that cause the largest difference between the total sales and the total cost of all products being measured. If a rule providing a promotion of one of the products by specifying a discounted price is provided, the optimization engine may provide a set of prices that allow for the promotion of the one product and the maximization of profit under that condition. In the specification and claims, the phrases “optimal set of prices” or “preferred set of prices” are defined as a set of computed prices for a set of products where the prices meet all of the rules. The rules normally include an optimization objective, such as optimizing profit or optimizing volume of sales of a product, and constraints, such as a limit on the variation of prices. The optimal (or preferred) set of prices is defined as prices that define a local optimum of an econometric model, which lies within constraints specified by the rules. When profit is maximized, it may be maximized for a sum of all measured products. Such a maximization may not maximize profit for each individual product, but may instead have an ultimate objective of maximizing total profit.
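As a toy illustration of an optimal set of prices under rules (not the optimization engine 112 itself, and assuming a simple linear demand function), an exhaustive search over candidate prices subject to a price-variation constraint and a total-profit objective might look like:

```python
from itertools import product as cartesian

def optimal_prices(items, candidate_prices, max_change=0.20):
    """Search candidate price sets for the one maximizing total profit,
    subject to a rule limiting each price's variation from its base price.

    items: list of dicts with 'cost', 'base_price', and linear demand
           parameters 'a' and 'b' (demand = a - b * price). Illustrative
           only; a real demand model would be far more complex.
    """
    best, best_profit = None, float("-inf")
    for prices in cartesian(candidate_prices, repeat=len(items)):
        # Constraint rule: each price may move at most max_change
        # (as a fraction) away from its base price.
        if any(abs(p - it["base_price"]) > max_change * it["base_price"]
               for p, it in zip(prices, items)):
            continue
        # Objective rule: maximize profit summed over all products, even
        # if no individual product is at its own optimum.
        profit = sum((p - it["cost"]) * max(0.0, it["a"] - it["b"] * p)
                     for p, it in zip(prices, items))
        if profit > best_profit:
            best, best_profit = prices, profit
    return best, best_profit
```

An exhaustive search is exponential in the number of products; it is shown only to make the rules-plus-objective structure concrete.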
For a price optimization plan, the optimal set of prices is the plan. The plan may be for a long term. For example, the plan may set weekly prices for the next six months.
The plan is then implemented (step 248). This may be done by having the planner 117 send the plan 118 to the stores 124 so that the stores carry out the plan. In one embodiment, the support tool provides a graphic user interface that provides a button that allows the planner to implement the plan. The support tool would also have software to signal to the stores to implement the plan. In another embodiment, software on a computer used by the planner would integrate the user interface of the support tool with software that allows the implementation of the plan displayed by the support tool by signaling to the stores to implement the plan.
The results of the plan are measured with updated data (step 252). Updated data may be provided on a weekly or daily basis. The updated data may be sent to the processing system 103.
The updated data is used to generate a tuning recommendation (step 256). This may be done in various ways. One way is by generating a new plan, which may be compared with the long range plan. Another way may be to use the updated data to see how accurate the long range plan was for optimization or for prediction of sales. Other data may be measured to determine if tuning should be recommended without modeling the updated data.
In one embodiment, the detection of changes to externally defined cost and competitive price information, and updates to the plan required to maintain business rule conformance, are used as factors to determine whether tuning is needed. To detect such factors, the econometric model is not needed; instead, other factors are used. The econometric model may then be updated based on such changes to “tune” the optimized plan for changing conditions.
In another embodiment, tuning is performed when certain threshold conditions are reached—i.e., changes are substantial enough to materially impact the quality of the previously optimized plan. In such processes, the econometric model may be used to provide predictions and then compared to actual data.
The system is able to provide a tuning recommendation (step 260). This may be implemented by setting a range or limits either on the data itself or on the values it produces. In the first case, if changes to the updated data relative to the original data exceed a limit or move beyond a certain range, a flag or other indicator may be used to recommend tuning to the user. In the second case, if the updated data creates prediction errors beyond the specified range or limits, a flag may be used to recommend tuning to a user.
For example, a competitor price index may be used in the optimization and in generation of a tuning indicator. A competitor price index is a normalized index of competitor prices on a set of items sold at a set of locations in relation to those provided by the plan, using competitor price data that is provided through various services. As a specific example, a user might define a competitor price index on all brands and sizes of paper towels sold at stores with a WalMart located less than five miles away (the identification of WalMart locations may be done outside the system). An indicator can then be provided to identify when prices provided by the plan exceed a competitor price index of 105—in other words when they are above the competitor's prices by more than 5% on some subset of items (in the case above, when WalMart has lowered paper towel prices, resulting in a change to that competitor price index relative to the plan). In another example, costs are always changing. It is usually undesirable to change prices immediately every time costs change. Therefore, in another example, the system provides a tuning recommendation when either small cost changes cause an aggregate change of more than 5% or a single cost change causes a cost change of more than 3%. Therefore, the tuning indicators are based on formulas that measure either changes in individual data or changes in relationships between values of the data.
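One way these threshold tests might be wired together is sketched below. The function, its argument layout, and the equal-weight price index are illustrative assumptions, not the patented formulas; the default limits mirror the examples in the text (an index of 105, a 5% aggregate cost change, a 3% single cost change).

```python
def needs_tuning(old_costs, new_costs, plan_prices, competitor_prices,
                 cpi_limit=105.0, aggregate_limit=0.05, single_limit=0.03):
    """Return True when any threshold condition from the text is reached,
    recommending that the plan be tuned."""
    # Competitor price index: plan prices normalized against competitor
    # prices on the same items; 105 means 5% above the competitor.
    cpi = 100.0 * sum(plan_prices) / sum(competitor_prices)
    if cpi > cpi_limit:
        return True
    # Small cost changes aggregating to more than 5%.
    if abs(sum(new_costs) - sum(old_costs)) / sum(old_costs) > aggregate_limit:
        return True
    # Any single cost change of more than 3%.
    return any(abs(new - old) / old > single_limit
               for old, new in zip(old_costs, new_costs))
```

In use, a True result would raise the flag or indicator that recommends tuning to the user.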
In viewing the re-predicted outcome and the tuning recommendation, the planner 117 is able to have the processing system 103 tune the plan (step 264). The planner 117 may then send out a message to implement the tuned plan (step 248). A single screen may show both the information that the planner needs to use to make a decision and provide a button to allow the planner to implement a decision. The button may also allow tuning on demand, whenever desired by the user.
This process allows for a long term plan to be corrected over the short term. This allows for corrections if the long term plan has an error, which in the short term may be less significant, but over the long term may be more significant. In addition, current events may change the accuracy of a long term model. Such current events may be a change in the economy or a natural disaster. Such events may make a six-month plan using data from the previous year less accurate. The ability to tune the plan on at least a weekly basis with data from the previous week makes the plan more responsive to current events.
In addition, the optimization system provides a promotional plan that plans and schedules product pricing markdowns and other promotions. Without the optimization system, poor-performing promotions may go unidentified until it is too late to make changes that materially affect their performance. The use of constant updates helps to recognize if such a plan creates business problems and also allows a short term tuning to avoid further damage. For example, a promotion plan may predict that a discount coupon for a particular product for a particular week will increase sales of the product by 50%. A weekly update will within a week determine the accuracy of the prediction and will allow a tuning of the plan if the prediction is significantly off.
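A weekly accuracy check of the kind described, comparing a promotion's predicted lift (for example, 50%) against realized sales, might be sketched as follows; the function and its tolerance rule are assumptions for illustration.

```python
def promotion_on_track(baseline, predicted_lift, actual_sales, tolerance=0.5):
    """Return True when a promotion's realized lift is close enough to the
    predicted lift (e.g. a coupon predicted to raise sales by 50%) that no
    tuning is recommended.

    Illustrative sketch: the tolerance, here a fraction of the predicted
    lift, would be configurable in practice.
    """
    actual_lift = (actual_sales - baseline) / baseline
    # The plan is flagged for tuning when the realized lift misses the
    # prediction by more than the stated fraction of the predicted lift.
    return abs(actual_lift - predicted_lift) <= tolerance * predicted_lift
```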
The system may provide that if a long term plan is accurate within a certain percentage, the long term plan is not changed. In such an embodiment, the system may allow an automatic tuning when a long term plan is not accurate within a certain percentage. In another embodiment, the planner may be allowed to decide whether the long term plan is in enough agreement with the updated data so that the long term plan is kept without tuning.
Thus, the invention allows the integration of the operational system of a business, which sets prices and promotions and performs other sales or business functions, with the analytical system of the business, which examines sales or other performance information. This integration allows a planner to receive timely analytical information, change the operational system, and then quickly see, through the analytical system, the results of the change in the operational system, in order to determine whether further changes to the operational system need to be made.
Such a constant tuning of a plan is made difficult by the large amount of data that needs to be processed and the complexity of the processing, which could take weeks to process or would be too expensive to process to make such tuning profitable. Therefore, the invention provides the ability to process large amounts of data with the required complexity quickly and inexpensively enough to provide weekly or daily tuning. A balance is made between the benefit of more frequent tuning and the cost and time involved for tuning, so that the tuning is done at a frequency where the benefit from tuning is greater than the cost of tuning at the desired frequency.
In addition, the sales data that is to be updated arrives as a set of records organized by time, product, and location—a data flow. The numeric operations that synthesize demand coefficients are performed as matrix operations, and require their inputs to be in a very specific format—one much different from the format in which the raw customer data arrives. One choke point that slows such operations is transforming customer data so that numerical matrix operations may be performed on the data.
For this purpose, the above inventive system uses data flow processing to transform input data into matrices that are partially in memory and partially on disk at any given time. Matrices are saved wholly on disk and references to the matrices are passed to numerical functions, which process the matrices. The numeric functions process the matrices to provide output data sets, which are kept partially on disk and partially in memory. Upstream data flow processing must complete a matrix before the matrix may be processed by a numerical function.
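The scheme of matrices saved wholly on disk, with references passed to numeric functions and only portions resident in memory at any given time, resembles memory-mapped files. A minimal sketch using NumPy's `memmap` (an assumption; the specification does not name a particular mechanism or library):

```python
import os
import tempfile

import numpy as np

def disk_backed_matrix(rows, cols, path):
    """Create a matrix stored wholly on disk. Callers receive only a
    reference (the memmap); the operating system pages portions into
    memory on demand, so the matrix is partially in memory and partially
    on disk at any given time."""
    return np.memmap(path, dtype="float64", mode="w+", shape=(rows, cols))

def numeric_function(matrix_ref):
    """A numeric function that receives a matrix reference rather than the
    data itself; a simple column sum stands in for real matrix algebra."""
    return np.asarray(matrix_ref).sum(axis=0)

# Usage sketch: upstream data flow processing must complete (and flush)
# the matrix before a numerical function processes it.
path = os.path.join(tempfile.mkdtemp(), "matrix.dat")
m = disk_backed_matrix(2, 3, path)
m[:] = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
m.flush()
column_sums = numeric_function(m)
```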
In addition to matrix processing, there are numerous other numerical functions that operate on different types of structures, including vectors, and tabular data. The data flow processing mechanism allows raw input data to be transformed into the appropriate structure for input to any numerical function, and allows the outputs of those functions to be further transformed as inputs to downstream functions.
Data flow transformations and numeric functions may not always read data row by row. Reading large amounts of data from a disk in a nonsequential manner is time intensive and may create another choke point. The invention provides for the use of parallel readers, the creation of smaller data subsets, and the processing of data while part of the data is in memory and part of the data is on disk, to avoid the time-intensive data reading process.
For a six-month plan, a weekly analysis could allow the tuning of the plan up to 26 times. Preferably, the plan is tuned at least 6 times. More preferably, the plan is tuned at least 15 times. In other embodiments, the tuning may be done on a daily basis.
Data 120 is provided to the processing system 103. The data 120 may be raw data generated from cash register data, which may be generated by scanners used at the cash registers; this data 120 is known as point-of-sale (POS) data. The first data transformation engine 101 and the second data transformation engine 102 format the data so that it is usable in the econometric engine and financial model engine. Some data cleansing, such as removing duplicate entries, may be performed by the first and second data transformation engines 101, 102.
III. System Architecture
The data flow and numerics core 1304 processes large amounts of data and performs numerical operations on the data. An embodiment of the data flow and numerics core 1304 that provides economic processing is an Econometric Data Transformation and Synthesis Engine (EDTSE). The data flow and numerics core 1304 forms a combination of ETL (Extract/Transform/Load), a data processing term, and numerical analytics. The data flow and numerics core 1304 is able to perform complex mathematical operations on large amounts of data. The modeling and optimization services 1308 may be a configurable optimization engine. The applications component 1312 supports applications.
The modeling and optimization vertical applications module 1340 provides vertical applications supported directly by the modeling and optimization services module 1308. Such applications may include oil and gas well optimization, financial services portfolio optimization, retail price optimization, and other applications that can be described by a mathematical model, which can be modeled and optimized using the platform. The data flow and numeric applications module 1344 provides vertical applications that are supported directly by the data flow and numerics core module 1304.
The EDTSE allows the creation of complex econometric data outputs by breaking down the problem into a graph of operations on intermediate data sets. The EDTSE then executes this graph, allowing independent nodes to run simultaneously and sequencing dependent node execution. EDTSE graphs partition the data as well, allowing multiple subsets of data to be processed in parallel by those operations that have no intra-dataset dependencies.
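The execution strategy described above, running independent nodes simultaneously while sequencing dependent nodes, can be sketched as a level-by-level graph executor. The names and the graph encoding here are illustrative, not the EDTSE's actual interface.

```python
from concurrent.futures import ThreadPoolExecutor

def execute_graph(graph, operations):
    """Execute a graph of operations on intermediate data sets.

    graph:      dict mapping a node to the list of nodes it depends on.
    operations: dict mapping a node to a zero-argument callable.
    Returns a dict of node -> result.
    """
    done, results = set(), {}
    with ThreadPoolExecutor() as pool:
        while len(done) < len(graph):
            # Nodes whose prerequisites have all completed are independent
            # of one another and may run simultaneously.
            ready = [n for n in graph
                     if n not in done and all(d in done for d in graph[n])]
            futures = {n: pool.submit(operations[n]) for n in ready}
            # Dependent nodes are sequenced: they wait for the next pass.
            for n, f in futures.items():
                results[n] = f.result()
            done.update(ready)
    return results
```

Here nodes "a" and "b" would run in parallel, and "c" would run only after both complete; partitioned data sets could be dispatched the same way.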
This example illustrates the types of top-level operations performed by the EDTSE. All operations may accept multiple inputs and may produce multiple outputs. Operations fall into two primary types: Transformation Operations and Econometric Operations.
Transformation Operations change the structure of the input data set, but do not synthesize new information. These transformations may be simple from a structural perspective (such as filtering to remove selected elements) or may be complex from a structural perspective (such as partial transposition and extraction of non-transposed values in a different format).
Econometric Operations synthesize new values from one or more input data sets, and produce new output data sets from them. As with Transformation Operations, there is a range of complexity. Examples of Econometric Operations include missing value imputation, outlier detection and culling, etc.
Data provided to the EDTSE 400 may be provided by a first input data 404, a second input data 406, and a third input data 408, which may provide different types of data. For example, the first input data 404 may be point-of-sale input data, the second input data 406 may be cost data, and the third input data 408 may be product data. A first transformation operation 410 receives the first input data 404 and the second input data 406. A second transformation operation 412 receives the second input data 406 and the third input data 408. The first and second transformation operations 410, 412 perform transformation operations generally related to changing the structure, content, and format of the data. Such transformation operations do not perform complex mathematical operations to synthesize new information. Output from the first transformation operation 410 is stored as a first scratch data 414 as a first temporary file. Output from the second transformation operation 412 is stored as a second scratch data 416 as a second temporary file.
A first econometric operation 418 receives data from the first scratch data 414 and the second scratch data 416 and performs at least one mathematical operation on the data to synthesize new data, which is outputted as third scratch data 422 in a third temporary file and fourth scratch data 424 in a fourth temporary file. The mathematical operation may be a matrix operation, such as matrix inversion, transposition, multiplication, addition, or subtraction, or another arithmetic operation. In addition, it may perform extremely complex numerical algorithms that use matrices as their inputs and outputs; for example, regression analysis with a mix of linear and non-linear variables. In this example, the first econometric operation 418 is performed in parallel with a third transformation operation 420, which receives as input the second scratch data 416, performs transformational operations on that scratch data, and then outputs fifth scratch data 426 in a fifth temporary file.
In this example, a second econometric operation 428 receives as input the third scratch data 422 and performs mathematical operations on the third scratch data to synthesize new data, which is outputted as first output data 432 and second output data 434. One example of new data would be the generation of demand coefficients 128. The fourth transformation operation 430 receives as input the fourth scratch data 424 and the fifth scratch data 426, performs transformational operations, and outputs a third output data 436. Preferably, the first, second, and third output data 432, 434, 436 are stored on a shared storage.
CPU 622 is also coupled to a variety of input/output devices, such as display 604, keyboard 610, mouse 612, and speakers 630. In general, an input/output device may be any of: video displays, track balls, mice, keyboards, microphones, touch-sensitive displays, transducer card readers, magnetic or paper tape readers, tablets, styluses, voice or handwriting recognizers, biometrics readers, or other computers. CPU 622 optionally may be coupled to another computer or telecommunications network using network interface 640. With such a network interface, it is contemplated that the CPU might receive information from the network, or might output information to the network in the course of performing the above-described method steps. Furthermore, method embodiments of the present invention may execute solely upon CPU 622 or may execute over a network such as the Internet in conjunction with a remote CPU that shares a portion of the processing.
In addition, embodiments of the present invention further relate to computer storage products with a computer-readable medium that have computer code thereon for performing various computer-implemented operations. The media and computer code may be those specially designed and constructed for the purposes of the present invention, or they may be of the kind well known and available to those having skill in the computer software arts. Examples of computer-readable media include, but are not limited to: magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and holographic devices; magneto-optical media such as floptical disks; and hardware devices that are specially configured to store and execute program code, such as application-specific integrated circuits (ASICs), programmable logic devices (PLDs) and ROM and RAM devices. Examples of computer code include machine code, such as produced by a compiler, and files containing higher level code that are executed by a computer using an interpreter. Computer readable media may also be computer code transmitted by a computer data signal embodied in a carrier wave and representing a sequence of instructions that are executable by a processor.
The EDTSE flow segment 700 in
A first set of EDTSE flows 716 may be a plurality of EDTSE flows with each EDTSE flow running on a different computer on the network 500. A second set of EDTSE flows 718 may be a plurality of EDTSE flows with each EDTSE flow running on a different computer on the network 500. Each scratch data of the first scratch data set 712 and each scratch data of the second scratch data set 714 are used to signal a computer running an EDTSE flow of the first set of EDTSE flows 716 to cause the EDTSE flow to process scratch data from the first scratch data set 712 and scratch data from the second scratch data set 714. For example, a first scratch data from the first scratch data set 712 and a first scratch data from the second scratch data set 714 may be used to signal a computer running a first EDTSE flow of the first set of EDTSE flows 716 on a first computer, which processes the first scratch data from the first scratch data set 712 and the first scratch data from the second scratch data set 714 and outputs a first scratch data of a third scratch data set 720 and a first scratch data of a fourth scratch data set 724. A second scratch data from the first scratch data set 712 and a second scratch data from the second scratch data set 714 may be used to signal a computer running a second EDTSE flow of the first set of EDTSE flows 716 on a second computer, which processes the second scratch data from the first scratch data set 712 and the second scratch data from the second scratch data set 714 and outputs a second scratch data of the third scratch data set 720 and a second scratch data of the fourth scratch data set 724.
A third scratch data from the first scratch data set 712 and a third scratch data from the second scratch data set 714 may be used to signal a computer running a third EDTSE flow of the first set of EDTSE flows 716 on a third computer, which processes the third scratch data from the first scratch data set 712 and the third scratch data from the second scratch data set 714 and outputs a third scratch data of a third scratch data set 720 and a third scratch data of a fourth scratch data set 724.
In a similar manner, the second set of EDTSE flows 718 takes input from the second scratch data set 714 and in a parallel manner produces a fifth scratch data set 726.
The third scratch data set 720 is inputted into a third EDTSE flow 728 to produce a first output data 732. The fourth scratch data set 724 and the fifth scratch data set 726 are inputted into a fourth EDTSE flow 730 to produce a second output data 734. The third EDTSE flow 728 and the fourth EDTSE flow 730 are examples of how data sets created in parallel may be consolidated into a final form.
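The fan-out and fan-in pattern described above may be sketched in Python. This is a minimal illustration only: the function names, the per-flow computation, and the use of threads in place of networked computers are all assumptions made for the example, not details of the invention.

```python
from concurrent.futures import ThreadPoolExecutor

def edtse_flow(scratch_a, scratch_b):
    """Hypothetical worker flow: combines one item from each input
    scratch data set and emits two intermediate results, analogous
    to one EDTSE flow producing entries for two output sets."""
    return scratch_a + scratch_b, scratch_a * scratch_b

def run_parallel_flows(set_712, set_714):
    # Fan out: each (a, b) pair signals one flow, as each pair of
    # scratch data signals one EDTSE flow in the description above.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(edtse_flow, set_712, set_714))
    # Fan in: consolidate the per-flow outputs into two result sets,
    # analogous to the third and fourth scratch data sets.
    set_720 = [r[0] for r in results]
    set_724 = [r[1] for r in results]
    return set_720, set_724

set_720, set_724 = run_parallel_flows([1, 2, 3], [10, 20, 30])
```

Each worker could equally run on a separate computer on the network; only the dispatch mechanism would change.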
This example illustrates how the invention allows for a scalable process using parallel flows. Because of the scalability of this platform, the platform may be run on a single laptop computer or on a large network of computers with several racks of servers.
Flows can be made parallel along the process domain, the data domain, or both. In each of these domains, the parallelization can be implicit, explicit, or both.
The creator of a flow may also choose to make explicit choices about how to partition along the process domain. For example, in an implementation that uses a network of computers to solve large problems, the creator of a flow may choose to mark specific subflows as being of appropriate granularity for separate execution on a distinct computer. The system can then distribute the execution of those subflows across the network of computers. Within each individual computer, the subflow remains implicitly parallel along the process domain, meaning that any operations within it that accept the same inputs (or whose inputs simply do not depend on each other) can be executed in parallel. Flows can also be made parallel along the data domain. This can be done either explicitly or implicitly. Implicit data partitioning is performed by the system itself.
A first impute stockout process 820 receives as input the first category sales subset 812 and provides as output a first stock out adjusted category sales subset 828. A second impute stockout process 822 receives as input the second category sales subset 814 and provides as output a second stock out adjusted category sales subset 830. A third impute stockout process 824 receives as input the third category sales subset 816 and provides as output a third stock out adjusted category sales subset 832.
An impute stockout process reviews entries where no items were sold and determines whether this was caused by the item being out of stock. If it is determined that an item was out of stock, an adjustment is made in the data. This may be done by providing a flag to indicate that there was a stockout. The impute stockout process requires a mathematical operation that analyzes sales of related items over a series of weeks to determine whether a stockout occurred, and a transformational operation that flags stockout events. Demand group data 826 may also be provided as input to the first, second, and third impute stockout processes 820, 822, 824, since sales of other items in the same demand group as the item being checked are used to gauge demand across the group. If the demand for other items in the demand group was normal, that would help to indicate that the item's lack of sales was due to a stockout.
Demand groups are groups of highly substitutable products as perceived by buyers, such as, but not limited to, different sizes or brands of the same product, or alternative products.
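The stockout test described above can be sketched as follows. The function name and the "normal" threshold are illustrative assumptions; the specification does not prescribe a particular test.

```python
def impute_stockouts(item_sales, group_sales, normal_fraction=0.5):
    """For each week, flag a stock-out when the item sold nothing but
    the rest of its demand group sold at a roughly normal level.
    `normal_fraction` is an illustrative threshold, not from the text."""
    group_avg = sum(group_sales) / len(group_sales)
    flags = []
    for item, group in zip(item_sales, group_sales):
        # Zero item sales alongside normal demand-group sales suggests
        # the item was out of stock rather than simply unwanted.
        stocked_out = (item == 0) and (group >= normal_fraction * group_avg)
        flags.append(stocked_out)
    return flags

# Week 3: the item sold nothing while the demand group sold normally.
flags = impute_stockouts(item_sales=[12, 9, 0, 11],
                         group_sales=[100, 95, 102, 98])
```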
A first synthesize baseline prices and volumes process 834 receives as input the first stock out adjusted category sales subset 828 and provides as output a first synthesized category sales subset 840. A second synthesize baseline prices and volumes process 836 receives as input the second stock out adjusted category sales subset 830 and provides as output a second synthesized category sales subset 842. A third synthesize baseline prices and volumes process 838 receives as input the third stock out adjusted category sales subset 832 and provides as output a third synthesized category sales subset 844.
The synthesize baseline prices and volume processes impute normalized values for base price and base sales volume by examining the time series of sales for a given product/location and mathematically factoring out promotional, seasonal, and other effects. For example, baseline sales volume represents the amount of a product that would sell in a truly normal week, excluding promotional, seasonal, and all other related factors. This value may never appear in the actual sales data. It is strictly a mathematical construct. Base price similarly represents a normalized baseline sale price for a given item/location combination, excluding promotional and any other factors that affect a product's sale price.
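The baseline synthesis can be illustrated with a minimal sketch. Only the promotional effect is factored out here; the actual process also removes seasonal and other effects, and the function name and inputs are assumptions for the example.

```python
def synthesize_baseline(prices, volumes, promo_weeks):
    """Impute a base price and baseline volume by excluding promoted
    weeks from the time series. A real implementation would also
    factor out seasonality and other effects."""
    normal = [i for i in range(len(prices)) if i not in promo_weeks]
    # Base price: a representative (middle) price over normal weeks.
    base_price = sorted(prices[i] for i in normal)[len(normal) // 2]
    # Baseline volume: average volume over normal weeks; this value
    # may never appear in the actual sales data.
    base_volume = sum(volumes[i] for i in normal) / len(normal)
    return base_price, base_volume

base_price, base_volume = synthesize_baseline(
    prices=[2.00, 2.00, 1.50, 2.00],
    volumes=[50.0, 52.0, 120.0, 54.0],
    promo_weeks={2},  # week 2 had a promotional price cut
)
```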
A first imputed display variables process 846 receives as input the first synthesized category sales subset 840 and provides as output a first imputed category sales subset 854. A second imputed display variables process 848 receives as input the second synthesized category sales subset 842 and provides as output a second imputed category sales subset 856. A third imputed display variables process 850 receives as input the third synthesized category sales subset 844 and provides as output a third imputed category sales subset 858. Customer promotional sales data 852 may also be provided as input to the first, second, and third imputed display variable processes 846, 848, 850.
Customer promotional data is data which provides a promotional program for particular items, such as in-store promotional displays. Even though a chain may schedule a promotional display in all stores, some stores may not comply and not carry the promotional display. The impute display variables process examines sales data to determine whether a store actually had a promotional display as indicated by the customer promotional data. If it is determined that a store did not actually have a display, then the customer promotional data may be changed accordingly. In addition, if other types of promotion, such as a flyer, are being used concurrently with a promotional display, an imputed display variables process can determine whether a change in sales is due to the promotional display or other type of promotion.
A generate output datasets process 860 combines the parallel flow outputs of the first, second, and third imputed category sales subsets 854, 856, 858 and provides first and second sales model input data sets 862, 864. The data is eventually provided to the econometric engine. Additional imputed variable generation steps may be performed before the data is provided to the econometric engine.
In the preferred embodiment, an entire flow for an entire program is put on every computer. The network controls can be used to set which computers on the network perform which part of the entire flow. In another embodiment, different flow segments may be placed on different computers. Output from one flow segment on one computer may then be sent to a subsequent flow segment on another computer.
Threads 907 are used so that each thread processes a normalize demand group volume process of a set of normalize demand group volume processes 910. The normalize demand group volume processes normalize the demand group volumes between zero and one. Each thread then processes a cluster by sales volume process of a set of cluster by sales volume processes 912. The cluster by sales volume processes find clusters of data and group them together.
Each thread then processes an evaluate clusters for statistical significance process of a set of evaluate clusters for statistical significance processes 914. If sales volume fluctuates from one cluster to another randomly, it may be deemed noise and ignored. If sales volume is in one cluster for several weeks and then in another cluster for several weeks, that may be deemed statistically significant and therefore is not ignored. In addition, the evaluate clusters for statistical significance processes may use customer promotional data 852 to determine if customer promotions are related to the clusters.
Each thread then processes a generate display variable values process of a set of generate display variable values processes 918. The generate display variable values processes generate a set of display variable values 920 to indicate whether or not a cluster is significant. In this example, if the clusters are significant then a value of one is assigned as a display variable and if the clusters are not significant then a value of zero is assigned as a display variable.
Each thread then processes an add display variable to category sales process of a set of add display variable to category sales processes 922. The add display variable to category sales processes receive as input the display values and category sales 924 and output imputed category sales 926. The add display variable to category sales processes are pure transformational operations, since each takes an existing data set and creates a new value that applies to all of the items in the data set. Data that is generated to determine the imputed display variables by this flow may be discarded.
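The per-thread pipeline above (normalize, cluster, test significance, emit display values) can be sketched as one function. The two-cluster split and the run-length test standing in for statistical significance are simplifying assumptions for illustration.

```python
def impute_display_variables(weekly_volumes, min_run=3):
    """Sketch of the threaded pipeline: normalize volumes to [0, 1],
    cluster weeks into low/high sales, and treat a high cluster as a
    display only if it persists for `min_run` consecutive weeks (a
    crude stand-in for the statistical-significance evaluation)."""
    lo, hi = min(weekly_volumes), max(weekly_volumes)
    normalized = [(v - lo) / (hi - lo) for v in weekly_volumes]
    clusters = [1 if n > 0.5 else 0 for n in normalized]  # two clusters
    # Keep only high-cluster runs long enough to be non-noise; an
    # isolated high week is treated as random fluctuation and ignored.
    display = [0] * len(clusters)
    run_start = 0
    for i in range(len(clusters) + 1):
        if i == len(clusters) or clusters[i] != clusters[run_start]:
            if clusters[run_start] == 1 and i - run_start >= min_run:
                for j in range(run_start, i):
                    display[j] = 1
            run_start = i
    return display

# Weeks 3-5 show a sustained high cluster; week 7 is a one-week spike.
display = impute_display_variables([10, 11, 40, 42, 41, 12, 39, 10])
```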
Although each of the first, second, and third imputed display variable processes 846, 848, 850 is run on a separate computer, a computer running the first imputed display variable process 846 may provide parallel processing by dividing the first imputed display variable process 846 into multiple threads. While in this example all of the threads are run on a single computer, in an alternative embodiment each thread could be run on a different computer.
The flow 1000 therefore acts as a bucket brigade. To avoid a bottleneck, the parallel reader 1006 may take data from a disk for multiple flow segments 1008 in a single seek operation. Because the parallel reader 1006 knows the structure of the data files of the input data set 1004, it may put the data for each different flow segment in a different buffer. This is analogous to filling three buckets with water at the same time and then making each bucket available to a different recipient, so that the recipients may act in parallel. Acting as state machines, when a buffer for a flow segment of the first set of flow segments 1008 is filled, the flow segment acts on the data in the buffer and then outputs the intermediate data for the intermediate data set 1010 to a second buffer. Acting as state machines, when the second buffers for the flow segments of the second set of flow segments 1012 are filled, the flow segments of the second set of flow segments 1012 operate on the intermediate data in the buffers and provide output to the parallel writer 1014. The parallel writer 1014 is able to combine the data from the second set of flow segments 1012 into a file on a disk as the output data set 1016. This is analogous to passing buckets from the first recipients, the first set of flow segments 1008, to the second recipients, the second set of flow segments 1012, which pass them to a common place, the parallel writer 1014, which is able to dump all three buckets into a single location. As mentioned above, the parallel processing may be performed with each parallel flow run on a different computer or on a different thread of the same computer.
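The bucket-brigade structure can be sketched with threads and queues standing in for flow segments and buffers. The number of lanes, the doubling transformation, and the sentinel-based shutdown are assumptions made for this example.

```python
import threading
import queue

def bucket_brigade(records, n_lanes=3):
    """Sketch of the parallel reader / flow segment / parallel writer
    pipeline: the reader fills one buffer (bucket) per lane, each
    lane's flow segment transforms its buffer contents, and the
    writer merges the results into a single output data set."""
    buffers = [queue.Queue() for _ in range(n_lanes)]  # one bucket per lane
    out = queue.Queue()

    def segment(lane):
        # Each flow segment acts as a state machine on its buffer.
        while True:
            item = buffers[lane].get()
            if item is None:            # sentinel: reader is finished
                out.put(None)
                return
            out.put(item * 2)           # the segment's transformation

    workers = [threading.Thread(target=segment, args=(i,))
               for i in range(n_lanes)]
    for w in workers:
        w.start()
    # Parallel reader: deal records round-robin into the lane buffers.
    for i, rec in enumerate(records):
        buffers[i % n_lanes].put(rec)
    for b in buffers:
        b.put(None)
    # Parallel writer: drain all lanes into one output data set.
    done, results = 0, []
    while done < n_lanes:
        item = out.get()
        if item is None:
            done += 1
        else:
            results.append(item)
    for w in workers:
        w.join()
    return sorted(results)

result = bucket_brigade([1, 2, 3, 4, 5, 6])
```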
Each of these high level econometric operations may be broken into smaller econometric operations. For example, the read data segment 1404 may be broken into its constituent data flow and econometric operations. A simplified description of this process for the read data segment is provided in
The operations are examples of various kinds of operations. The first, second, and third operations 1116, 1120, 1124 each use a single dataset to provide another single dataset. The fourth operation 1128 combines two datasets to obtain a single dataset. The fifth operation 1132 does not have any input data but generates data. An example of such an operation would be a timestamp.
The ability to provide updating using large amounts of data and complex operations, which may be used for demand modeling, may also be used in ad or display performance modeling, brand management, supply chain modeling, financial services such as portfolio management, and even in seemingly unrelated areas such as capacity optimization for the airline or shipping industries, yield optimization for the geological or oil/gas industries, and network optimization for voice/data or other types of networks.
Since the segment flows are created to automatically process data when data is received, the platform provides a more automated process. Such a process is considered an operations process instead of an ad hoc process, which may require a user to receive data and then initiate a program to process the received data to produce output data and then possibly initiate another program to process the output data. The user can configure the system to perform processes automatically as new data arrives, or to set thresholds and other rules so that users can be notified automatically about changes or processes for which they desire human or other approval.
The invention provides a system that is able to quickly process large amounts of sales data and present the resulting distilled, comprehensible information to a user (planner) in real time, at the moment the user needs to make a decision. The system then allows the user to make a decision and implement it.
IV. System for Pricing Markdowns with Reoptimization
A Senior Buyer, or user, may have multiple sets of products on markdown over different sets of stores. Each combination of products and stores may be at different points in their respective markdown plans, but nonetheless, the buyer will often monitor the performance of all products under his purview at the same time. Typically this happens on a weekly basis and consists of measuring the actual performance and comparing it against forecasted performance. Besides measuring performance of the products on markdown, users also keep tabs on the status of the replacement products and whether they are on schedule to arrive as planned. If actual sales and the forecasted sales are wildly at odds or if the replacement dates for products change, the buyer may take several corrective actions.
For example, if the actual sales are considerably worse than expected, the user may choose to accelerate the timing of the next planned markdown and increase the depth of the current markdown. These changes may occur at the product-store level, or at any higher level as deemed acceptable by corporate guidelines.
If the actual sales are higher than forecasted, then the user may choose to delay the next planned markdown. In some cases, users may order extra inventory to meet demand for the product. Although this appears to be counter-intuitive to the stated purpose of markdown pricing, which is to remove products from the assortment, such steps are sometimes necessary since exiting too quickly may lead to thin assortment and narrow consumer choice in the period before the replacement products arrive.
If the arrival date for replacement products moves forward by a few days or weeks, the inventory of the incoming products will build up rapidly, and therefore the user may choose to accelerate the markdown of the obsolete product by bringing the out-date forward and by decreasing the price.
Conversely, if the replacement date for an incoming product is delayed, the user may choose to slow down the markdown cycle for the existing product so that stores have sufficient assortment choice until the new product arrives.
In a typical retail setting, the user may be responsible for anywhere from 5-10 products on markdown to over 500 at any one time.
The Econometric Engine 104 may generate demand coefficients for the products using past sales data, or estimates generated from industry standards. These demand coefficients may be provided to the Optimization System 100 for generation of optimizations for the products' pricing. The Optimization System 100 may then supply the pricing optimizations to the Planner 1814 via the Coupler 1812.
The User 1802 may provide rule configurations and business goals to the Support Tool 116. The rules may then be provided to the Planner 1814. The Planner 1814 may utilize the configured rules and pricing optimizations to generate a pricing plan for the products of the Stores 124. Plans may include pricing schedules, promotion schedules and discount schedules. The plan generated by the Planner 1814 may then be provided to the Distributor 1818 for dissemination and implementation by the Stores 124.
The Stores 124 may provide feedback POS data to a Receiver 1820. This data may be used to determine relative success of the markdown plan. The Receiver 1820 may provide this data to the Econometric Engine 104 and the Rule Updater 1824. The Econometric Engine 104 may provide new demand coefficients, where necessary. These demand coefficients may be used to provide a new set of price optimizations. The Rule Updater 1824 may update the configured rules. The rule updates along with the new price optimizations may then be provided to the Plan Re-optimizer 1822 where the plan is re-optimized. The re-optimized plan may be provided to the Stores 124 via the Distributor 1818. Also, the Reporter 1816 may provide a reoptimization report to the User 1802.
The embodiment illustrated at
Additionally, in some embodiments, the user may be able to select at least one plan “disposition”, wherein each disposition includes a set of preconfigured defaults which enable the particular goals of the disposition. For example, an ‘aggressive’ disposition may have a default configuration which includes high thresholds, large markdown allowances and an emphasis in expansion of market share as a primary goal over profitability. Conversely, a ‘conservative’ disposition may be available. Such a configuration preset may include limited markdown allowances, and an emphasis on profitability.
Lastly, in some embodiments, the user may be able to manually configure the initial rules. In such embodiments, the user may configure each initial rule category individually. Alternatively, the user may select only particular rules to configure. In these situations, the rules not configured by the user may utilize the default preconfigured settings as explained above. In this way, the user may generate a personalized configuration scheme. In some embodiments, the user may be able to save this configured rule scheme for use on later planning sessions.
The process then proceeds to step 1904 where inventory pricing is optimized. Plan optimization may occur at the Optimization System 100 in the manner detailed above. Optimization may be restrained by the initial rules that were configured at step 1902. As noted above, the Econometric Engine 104 processes the transformed data to provide Demand Coefficients 128 for a set of equations that may be used to estimate demand (volume sold) given certain marketing conditions (i.e., a particular store in the chain), including a price point. The equations utilized may include linear algebraic equations, Bayesian statistical methodologies or any appropriate modeling technique. The Demand Coefficients 128 are provided to the Optimization Engine 112. Additional processed data from the Econometric Engine 104 may also be provided to the Optimization Engine 112. The Financial Model Engine 108 may receive transformed data from the Second Data Transformation Engine 102 and processed data from the Econometric Engine 104. The transformed data is generally cost related data, such as average store labor rates, average distribution center labor rates, cost of capital, the average time it takes a cashier to scan an item (or unit) of product, how long it takes to stock a received unit of product and fixed cost data. The Financial Model Engine 108 may process the data to provide a variable cost and fixed cost for each unit of product in a store. The processing by the Econometric Engine 104 and the processing by the Financial Model Engine 108 may be done in parallel. Cost Data 136 is provided from the Financial Model Engine 108 to the Optimization Engine 112. The Optimization Engine 112 utilizes the Demand Coefficients 128 to create a demand equation. The Optimization Engine 112 is able to forecast demand and cost for a set of prices to calculate net profit (margin).
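The demand equation and margin calculation can be illustrated with a minimal sketch. The log-linear functional form, the coefficient values, and the price grid are assumptions for the example; the specification does not fix a particular demand model.

```python
import math

def forecast_demand(price, coefficients):
    """Illustrative log-linear demand equation built from demand
    coefficients (intercept and price elasticity); the actual
    functional form used by the engine is not specified here."""
    intercept, price_elasticity = coefficients
    return math.exp(intercept + price_elasticity * math.log(price))

def net_profit(price, coefficients, variable_cost, fixed_cost):
    """Margin: forecast volume times unit margin, less fixed cost."""
    volume = forecast_demand(price, coefficients)
    return volume * (price - variable_cost) - fixed_cost

# Scan candidate prices and keep the one maximizing net profit.
coeffs = (6.0, -1.5)  # hypothetical intercept and elasticity
prices = [round(1.0 + 0.25 * i, 2) for i in range(17)]  # $1.00 .. $5.00
best_price = max(prices, key=lambda p: net_profit(p, coeffs, 1.0, 50.0))
```

In practice the optimization engine works over many products and stores at once, subject to the configured rules, rather than a one-dimensional price scan.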
In some embodiments, the Optimization Engine 112 may be configured to generate Demand Coefficients 128 for each item in the store separately. Moreover, the Optimization Engine 112 may be configured to generate Demand Coefficients 128 for select subsets of products. Such subsets may include items that are to be discontinued, products in high demand, products with subpar performance, products with cost changes, or any other desired criteria.
Moreover, Demand Coefficients 128 may be generated for each product separately, or more accurate Demand Coefficients 128 may be generated that take into account cross elasticity between products. While optimizing including cross elasticity effects may be more accurate, the processing requirements are greatly increased for such calculations. In some embodiments, the user may select whether to account for such cross elasticity effects. In some alternate embodiments, the Optimization System 100 may provide the user suggestions as to whether to account for such cross elasticity effects, or may even automatically determine whether to account for such cross elasticity effects.
In order to facilitate such a system of automated modeling equation decisions, every product may include an aggregate cross elasticity indicator. Said indicator may rapidly provide information as to the relative degree of cross elasticity any particular product is engaged in. For example, a product such as hamburger buns may include a high cross elasticity indicator, since sales of hamburger buns may exert a large degree of elasticity upon a number of other products such as charcoal, hamburger meat, ketchup and other condiments. Alternatively, apples may have a low relative cross elasticity indicator. The Optimization System 100 may aggregate the cross elasticity indicators of the products to be optimized. A threshold may be configured, and if the aggregate indicators are above the threshold then the set of products that are being optimized for may be assumed to have a relatively strong degree of cross elasticity effects. In such a situation, the Optimization System 100 may then opt to utilize models which include cross elasticity. Alternatively, the Optimization System 100 may simply utilize cross elasticity models only when the optimization includes fewer than a particular number of products. This ensures that a large optimization is not helplessly mired in massive calculations.
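The automated model choice can be sketched as a simple threshold check. The indicator values, threshold, and product cap are illustrative assumptions, not values taken from the specification.

```python
def choose_demand_model(products, indicator_threshold=5.0, max_products=200):
    """Sketch of the automated model choice: aggregate each product's
    cross-elasticity indicator and use cross-elasticity models only
    when the aggregate exceeds a threshold and the optimization is
    small enough to keep the computation tractable."""
    aggregate = sum(p["cross_elasticity"] for p in products)
    if aggregate > indicator_threshold and len(products) < max_products:
        return "cross-elasticity"
    return "independent"

model = choose_demand_model([
    {"name": "hamburger buns", "cross_elasticity": 4.2},
    {"name": "charcoal", "cross_elasticity": 3.1},
])
```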
After optimization, the process then proceeds to step 1906 where the initial plan is generated. The plan typically includes the optimization from step 1904 as restrained by the rule set from step 1902. The initial markdown plan may include a set of prices, promotions and markdown schedules for the products.
At step 1908 the markdown plan generated at step 1906 is implemented. Plan implementation may include dissemination of pricing to individual stores for posting to consumers. This may be done by having the planner 117 send the plan 118 to the stores 124 so that the stores carry out the plan. In one embodiment, the support tool provides a graphic user interface that provides a button that allows the planner to implement the plan. The support tool would also have software to signal to the stores to implement the plan. In another embodiment, software on a computer used by the planner would integrate the user interface of the support tool with software that allows the implementation of the plan displayed by the support tool by signaling to the stores to implement the plan. In some alternate embodiments, the pricing of the products may be automatically implemented, as is more typical for bulk and limited order sales, and in virtual, catalog or web-based store settings.
The process then proceeds to step 1910 where an inquiry is made as to whether there is a plan condition change that may warrant a markdown plan re-optimization. Such condition changes may include cost changes, divergence of actual sales from forecasts, business rule changes, world event changes, product changes, or other condition changes. If there is a condition change, the process then proceeds to step 1912 where the rules are updated. Rule updates may include reconfiguration of any of the rules that were set at step 1902. After the rule update, the process proceeds to step 1914 where the markdown plan is re-optimized. Re-optimization may include application of the updated rules to preexisting demand forecasts, or may include new forecast generation. Additionally, if all the rules cannot be satisfied, the system may be configured to selectively relax the lowest priority rules in order to satisfy the higher priority rules. Thus, the system also allows the user to specify the relative hierarchy or importance of the rules. The decision whether to regenerate product demand models for forecasts may depend heavily upon what kind of condition change warranted the re-optimization. For example, if the condition change includes a market-wide event, such as a hurricane, demand models may become invalid and new modeling and forecasts may be necessary. However, if the condition change is a cost change, or change of business policy, old forecasts may be still relevant and usable. After re-optimization of the markdown plan, this markdown plan may be implemented at step 1908, in the manner discussed above.
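The priority-based rule relaxation described above can be sketched as follows. The rule names, the feasibility callback, and the relax-from-the-tail strategy are illustrative assumptions; a real optimizer would test feasibility against the full pricing problem.

```python
def optimize_with_relaxation(rules, feasible):
    """Sketch of rule handling during re-optimization: try to satisfy
    the full rule set; if infeasible, drop the lowest-priority rule
    and retry until a feasible set remains. `feasible` is a caller-
    supplied check standing in for the real optimizer."""
    # Highest priority first, so relaxation proceeds from the tail.
    active = sorted(rules, key=lambda r: r["priority"], reverse=True)
    while active and not feasible(active):
        active.pop()  # relax the lowest-priority rule
    return [r["name"] for r in active]

rules = [
    {"name": "max_two_markdowns", "priority": 3},
    {"name": "min_margin_10pct", "priority": 2},
    {"name": "round_to_99_cents", "priority": 1},
]
# Pretend the optimizer is only feasible with at most two rules active.
kept = optimize_with_relaxation(rules, feasible=lambda rs: len(rs) <= 2)
```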
Markdown plan reoptimization allows for a long term markdown plan to be corrected over the short term. This enables corrections if the long term plan has an error, which in the short term may be less significant, but over the long term may be more significant.
As noted, current events may change the accuracy of a long term model. Such current events may be a change in the economy or a natural disaster. Such events may make a six-month markdown plan using data from the previous year less accurate. The ability to re-optimize the markdown plan on at least a weekly basis with data from the previous week makes the plan more responsive to current events.
Tuning and re-optimization of the markdown plan may, additionally, identify poor-performing promotions. The use of constant updates helps to recognize if such a plan creates business problems and also allows a short term tuning to avoid further damage. For example, a promotion plan may predict that a discount coupon for a particular product for a particular week will increase sales of the product by 50%. A weekly update will within a week determine the accuracy of the prediction and will allow a tuning of the plan if the prediction is significantly off.
The system may provide that if a long term markdown plan is accurate within a certain percentage, the long term markdown plan is not changed. In such embodiments, the system may allow an automatic reoptimization when a long term plan is not accurate within a certain percentage. In another embodiment, the planner may be allowed to decide whether the long term markdown plan is in enough agreement with the updated data so that the long term markdown plan is kept without re-optimization.
Otherwise, if at step 1910 re-optimization of the markdown plan is not desired, the process ends.
The embodiment illustrated at
The process then proceeds to step 1952 where initial demand models are generated. As noted above, the Econometric Engine 104 processes the transformed data to provide Demand Coefficients 128 for a set of equations that may be used to estimate demand (volume sold) given certain marketing conditions (i.e. a particular store in the chain), including a price point. The equations utilized may include linear algebraic equations, Bayesian statistical methodologies or any appropriate modeling technique.
Demand Coefficients 128 may be generated for each product separately, or more accurate Demand Coefficients 128 may be generated that take into account cross elasticity between products. While optimizing including cross elasticity effects may be more accurate, the processing requirements are greatly increased for such calculations. In some embodiments, the user may select whether to account for such cross elasticity effects. In some alternate embodiments, the Optimization System 100 may provide the user suggestions as to whether to account for such cross elasticity effects, or may even automatically determine whether to account for such cross elasticity effects.
The process then proceeds to step 1954 where the initial Demand Coefficients 128 are provided to the Optimization Engine 112. As previously discussed, the Financial Model Engine 108 may also provide a variable cost and fixed cost for each unit of product in a store. The processing by the Econometric Engine 104 and the processing by the Financial Model Engine 108 may be done in parallel. Cost Data 136 is provided from the Financial Model Engine 108 to the Optimization Engine 112.
The process then proceeds to step 1904 where inventory pricing is optimized. Plan optimization may occur at the Optimization System 100 in the manner detailed above. Optimization may be restrained by the initial rules that were configured at step 1902. The Optimization Engine 112 utilizes the Demand Coefficients 128 to create a demand equation. The Optimization Engine 112 is able to forecast demand and cost for a set of prices to calculate net profit (margin).
In order to facilitate such a system of automated modeling equation decisions, every product may include an aggregate cross elasticity indicator as detailed above. The Optimization System 100 may aggregate the cross elasticity indicators of the products to be optimized. A threshold may be configured, and if the aggregate indicators are above the threshold then the set of products that are being optimized for may be assumed to have a relatively strong degree of cross elasticity effects. In such a situation, the Optimization System 100 may then opt to utilize models which include cross elasticity. Alternatively, the Optimization System 100 may simply utilize cross elasticity models only when the optimization includes fewer than a particular number of products. This ensures that a large optimization is not helplessly mired in massive calculations.
After optimization, the process then proceeds to step 1906 where the initial markdown plan is generated. The plan typically includes the optimization from step 1904 as restrained by the rule set from step 1902. The initial markdown plan may include a set of prices, promotions and markdown schedules for the products.
At step 1908 the markdown plan generated at step 1906 is implemented. Plan implementation may include dissemination of pricing to individual stores for posting to consumers. This may be done by having the planner 117 send the plan 118 to the stores 124 so that the stores carry out the plan in the manner described previously.
The process then proceeds to step 1956 where new Point of Sales (POS) data is received. POS data may be received on a regular schedule, such as on a weekly basis. Other relevant data may additionally be received at step 1958. Such relevant data may include inventory data, changes in out dates, and other significant data.
The process then proceeds to step 1960 where models are refreshed. Markdown models need to be updated frequently since initial models often run on short life cycle products with little data history. As POS data arrives, the model should pull in this data to reflect updates to the coefficients in the models. Significant variances (greater or less than expected) should result in a modification of the coefficients and the resulting forecasts. Model refresh may have different scope levels depending upon the size of the model refresh and available processing resources. For example, a minimum scope for the model refresh is the products already implemented in a markdown plan and active in stores. An intermediate scope could be to use triggers to update products on draft/approved markdown plans that have not yet been implemented. A broad scope, which is preferable but demands more processing resources, includes updating the coefficients of all products, so that any markdown plans created (even those without a formal trigger) reflect the best understanding of consumer demand. In some embodiments, model refresh will be performed on one or more scopes dependent upon user configuration, size of model refresh, available processing resources and other recent model refresh activity.
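The three refresh scopes can be illustrated with a simple selection routine. The status labels, the per-product cost estimate, and the idea of picking the broadest affordable scope are assumptions layered on the description above, not a specified algorithm.

```python
# Hypothetical sketch: choose the broadest model-refresh scope that
# fits the available processing resources. Status labels and the
# cost-per-product heuristic are illustrative assumptions.

def refresh_scope(products, available_cpu_hours, cost_per_product=0.01):
    """Pick a refresh scope and the products it covers.

    `products` maps product id -> status: 'active' (implemented in a
    markdown plan), 'draft' (on a draft/approved but unimplemented
    plan), or 'other' (everything else).
    """
    for scope, statuses in (
        ("broad", {"active", "draft", "other"}),      # all products
        ("intermediate", {"active", "draft"}),        # triggered plans too
        ("minimum", {"active"}),                      # implemented plans only
    ):
        selected = [p for p, s in products.items() if s in statuses]
        if len(selected) * cost_per_product <= available_cpu_hours:
            return scope, selected
    # Fall back to the minimum scope regardless of resources.
    return "minimum", [p for p, s in products.items() if s == "active"]
```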
The out date of a product may be defined as the plan date beyond which the product will not be available for sale at a particular location or group of locations. If the out date of a product is supplied, it impacts the estimation of the lifecycle coefficient for that product. Forecasts for products beyond their scheduled out date go to zero. Customer interviews have shown that out dates are not always precisely known, and modifications to targeted out dates do occur either because of a delay in the arrival of the replacement merchandise, or because of a merchant decision to extend a season due to weather, etc.
As a result, model refreshes may account for changes in out dates, so that if these out dates shift to a later date (the most common scenario) there is an adequate forecast on the impact on the product and sales are not arbitrarily reduced to zero.
Model refresh may additionally be enabled to allow the addition of new products to the model. This then enables products to be added to a planned markdown. This may be desired when a user accidentally missed these products initially, or when new sales information suggests that the products are now candidates for markdowns. In cases where these products have a strong relationship to products already on markdown, the preference may be to add these to an existing plan, rather than create a new plan.
Further, the system may also allow for linking newer products to existing products to allow for information sharing for an accurate read on the elasticity. For example, a retailer is selling different types of cell phone chargers and plans to put a charger labeled UPC_B on the markdown for the first time. In cases when sufficient history is not available, the user may decide to link it to an existing product (another cell phone charger, say UPC_A). This is interpreted as the user intent to borrow information (consumer preferences, elasticity estimates) from UPC_A, in cases when sufficient data is not available from UPC_B. This process may be referred to as “product linking”.
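The product-linking behavior can be sketched as a blending of elasticity estimates. The blending rule below (weighting by weeks of available history) is an illustrative assumption; the text only specifies that information is borrowed from UPC_A when UPC_B lacks sufficient data.

```python
# Hedged sketch of "product linking": when the new product (UPC_B)
# lacks history, borrow the elasticity estimate of the linked product
# (UPC_A). The linear blend by weeks of history is an assumption.

def linked_elasticity(own_estimate, own_weeks_of_history,
                      linked_estimate, min_weeks=13):
    """Blend own and linked elasticity estimates by data availability."""
    if own_estimate is None or own_weeks_of_history == 0:
        return linked_estimate          # no own data: borrow entirely
    w = min(own_weeks_of_history / min_weeks, 1.0)
    return w * own_estimate + (1.0 - w) * linked_estimate
```

With a full season of history (`min_weeks` or more) the linked product's estimate stops contributing, matching the intent that borrowing applies only "in cases when sufficient data is not available from UPC_B."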
The process then proceeds to step 1962 where the refreshed Demand Coefficients 128 are provided to the Optimization Engine 112. Also, updated Cost Data 136 may be provided from the Financial Model Engine 108 to the Optimization Engine 112.
The process then proceeds to step 1912 where the rules are updated. Rule updates may include reconfiguration of any of the rules that were set at step 1902. Rule updates may include both user reconfigurations of rules as business strategy changes, and automatic rule updates to account for previous pricing markdown activity to reflect what has actually occurred to date. For example, if one markdown has already occurred and a maximum of 3 markdown occasions are permitted, then the new optimization will have to adjust the constraint to 2 markdowns. After rule updating, plan timing is adjusted based on known shifts in out dates (not shown).
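The automatic rule update in the example above reduces to simple arithmetic on the markdown budget. The function name is illustrative; the clamp to zero covers the case, discussed later, where past markdowns already exceed a newly edited maximum.

```python
# Sketch of the automatic rule update described above: the constraint
# passed to the next optimization is the original maximum minus the
# markdowns already taken. The function name is an assumption.

def update_markdown_rule(max_markdowns, markdowns_taken):
    """Return the markdown-count constraint for the next optimization."""
    remaining = max_markdowns - markdowns_taken
    if remaining < 0:
        # More markdowns were taken than the (possibly edited) rule
        # now allows; the re-optimization may take no further markdowns.
        remaining = 0
    return remaining
```

For the example in the text, one markdown taken against a maximum of 3 leaves a constraint of 2.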
After rule update and timing adjustments, the process proceeds to 1914 where the markdown plan is re-optimized. Re-optimization may include application of the updated rules to refreshed demand forecasts.
After re-optimization of the markdown plan, the re-optimized markdown plan may be reported at step 1964. Reporting may include an overall report that highlights all plans with changes. Also, a cross category plan may be reported that summarizes all specific schedule changes. The cross category plan may be configured to have the ability to review changes without having to drill down separately into each plan.
Reporting may additionally be enabled to include exception based reporting on schedule changes (e.g., only show schedules in which markdown percentages changed by more than a pre-configured percentage, or in which dates shifted by more than a configured number of weeks). Likewise, reporting may also include an updated financial forecast for the plan that blends actual to-date data with new forecasted results for the plan.
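The exception-based filter can be expressed as follows. The record field names and threshold defaults are assumptions for illustration; only the two filtering criteria come from the description above.

```python
# Illustrative exception-based report filter: keep only schedules whose
# markdown percentage changed by more than a configured amount, or whose
# dates shifted by more than a configured number of weeks. The record
# layout (old_pct, new_pct, week_shift) is a hypothetical assumption.

def exception_report(schedules, pct_threshold=5.0, week_threshold=2):
    """Filter schedule-change records down to significant exceptions."""
    return [
        s for s in schedules
        if abs(s["new_pct"] - s["old_pct"]) > pct_threshold
        or abs(s["week_shift"]) > week_threshold
    ]
```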
Then, at step 1966, the plan may be approved by the user. The re-optimized plan may then be implemented (not shown). The process then ends.
It should be noted that markdown plan refresh and reoptimization may occur at regularly scheduled intervals, as well as in response to condition changes. As such, the embodiments illustrated at
The model refresh process may be configured to be scheduled to occur at a set time, for example a particular day of each week. This refresh may generally occur simultaneously across categories, but the configuration of the schedule may enable the possibility of only certain categories being updated in a given time period. This may be of use because some retailers may stagger the number of markdowns they update each time period to account for variances in store labor.
Additionally, automated refresh may be configured to allow for a priority sequence to be set by category/department so that the highest value categories are sequenced first in the process. While the given example envisions this process occurring on a weekly basis, for super-seasonal or perishable products, it is possible that refresh and reoptimization occurs more frequently, such as on a daily basis. Therefore the configuration of the automation schedule may be readily configurable by user or automatically by product type and seasonality.
The model refresh process may additionally be scheduled to run an update based on when the POS, Product Status, and Inventory data arrives. For example, the process may be configured to automatically run when this data is actually received. This enables optimal turnaround despite variances in when the data is collected.
The process then proceeds to step 2004 where the objective is configured. The objective may include the maximization of profit, or the maximization of volume. When profit is maximized, it may be maximized for a sum of all measured products. Such a maximization may not maximize profit for each individual product, but may instead have an ultimate objective of maximizing total profit. Optionally, the user may select any subset from the universe of the products to measure profit maximization.
The process then proceeds to step 2006 where the start date is configured. The start date may include a price execution date, as well as markdown start dates. In some embodiments, users may want to be able to specify different markdown start dates for each store group or product group. This means that in the same scenario, different store-SKUs may have to start their markdowns on different dates. This is slightly different from the price execution date. The price execution date denotes the date by which the retailer can get its prices into the store. A markdown prior to price execution is not relevant or practical since the retailer does not have time to take action on it.
Prior to the markdown start date, the system may use previously recommended prices. In some embodiments, previously recommended prices may simply be the initial prices; thus price may stay constant at the initial prices and there will be no markdowns. However, in re-optimization, the situation may arise where the previously recommended prices might contain a markdown. If the markdown start date has not changed between the first optimization and the re-optimization, previously recommended prices may stay constant. Else, if the markdown start-date is changed, a new optimization may be run, as opposed to a re-optimization.
The process then proceeds to step 2008 where the markdown tolerance may be configured. Markdown tolerance may be provided to the optimizer for generation of solution. In some embodiments, the optimizer may include a 3rd party solver, such as General Algebraic Modeling System (GAMS). A narrower tolerance may provide a more accurate optimization; however, the modeling may take longer and consume greater processing resources. On the other hand, a wider tolerance may provide a faster, rougher optimization. In some embodiments, a default tolerance may be provided, such as 95%.
The process then proceeds to step 2010 where the handling of Point-of-Sale (POS) data is configured. POS handling rules may come into play when there is missing, or otherwise deficient, POS data. In some embodiments, POS handling may be configured to utilize forecasts for the missing or deficient data. In some alternate embodiments, zero or place-marker values may be provided for these missing data points. POS data deficiencies may be the result of communication errors, or data transmission latency.
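The two POS handling options above (forecast substitution versus place-marker values) can be sketched as a gap-filling routine. The weekly record layout and function name are assumptions for illustration.

```python
# Sketch of the POS handling rule described above: fill missing weekly
# POS observations either from the forecast or with a placeholder zero.
# The weekly dict layout is a hypothetical assumption.

def fill_pos(pos_by_week, forecast_by_week, weeks, mode="forecast"):
    """Return a complete weekly sales series, filling gaps per `mode`."""
    filled = {}
    for week in weeks:
        if week in pos_by_week and pos_by_week[week] is not None:
            filled[week] = pos_by_week[week]        # actual POS data
        elif mode == "forecast":
            filled[week] = forecast_by_week[week]   # forecast substitute
        else:
            filled[week] = 0                        # place-marker value
    return filled
```

Forecast substitution keeps the demand-model refresh from interpreting a transmission gap as a collapse in sales, which is why it may be preferred over zero place-markers when latency, rather than a true stockout, caused the gap.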
The process then proceeds to step 2012 where cost rules may be configured. Likewise, at step 2014, salvage rules may be configured. In many cases users want to be able to manage leftover inventory while getting rid of the excess inventory as profitably as possible. For example, during the holiday season the shelf space for baking goods (sugar, baking mixes, butter, etc.) is expanded. After the holidays this space is reduced to everyday levels and there is a need to reduce the baking goods inventory to a lower everyday level. In some embodiments, users have the ability to specify what leftover inventory they should have at the stores to eliminate this holiday overstock.
Cost rules may limit markdown to the cost, or some percentage of the cost, of the product. This rule may become effective when a given product goes into closeout mode. Likewise, the salvage rule may provide the absolute minimum allowable price for markdown. This is the “last ditch” effort to recoup at least some portion of the cost when liquidating a product. The importance of a salvage rule includes that the retailer may have a better margin (or revenue) by selling the product at a salvage value than by marking it below the salvage value on the store shelves. Again, salvage rules may be dependent upon cost data, or some percentage thereof.
Alternatively, in some embodiments, a maximum lifetime markdown rule is also configured (not shown). The maximum lifetime markdown may be dependent upon some percentage of the initial price value. This value may represent the maximum discount level a particular manufacturer or retailer desires to have for a product. For some products considered “high end” it may be important that the purchasing public perceive the item as exclusive. Part of this image may include limits on discounts from the full price. In such instances, maximum lifetime markdowns may be of particular use.
Moreover, cost rules, salvage rules and maximum lifetime markdowns may be combined. In such instances the lower bound for the price may then be set to the mean of these rules, the median of the rules, or the highest or lowest threshold of these rules. The default may set the lower bound of the price to the highest of the cost, salvage, and maximum lifetime markdown rules; however, this rule combination may be configurable.
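The rule-combination options can be sketched directly. Each floor is assumed to already be expressed as a price; the function and parameter names are illustrative.

```python
# Sketch of combining the three price floors described above. The
# default takes the highest floor; mean, median, and lowest are the
# other configurable combinations mentioned in the text.
from statistics import mean, median

def price_floor(cost_floor, salvage_floor, lifetime_floor, rule="highest"):
    """Combine cost, salvage, and maximum-lifetime-markdown price floors."""
    floors = [cost_floor, salvage_floor, lifetime_floor]
    if rule == "highest":
        return max(floors)       # default: most conservative lower bound
    if rule == "lowest":
        return min(floors)
    if rule == "mean":
        return mean(floors)
    if rule == "median":
        return median(floors)
    raise ValueError(f"unknown combination rule: {rule!r}")
```

Taking the highest floor as the default is the conservative choice: no individual rule is ever violated, whereas the mean or median combinations allow the optimizer to price below one of the configured floors.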
The process then proceeds to step 2016 where continuous markdown may be configured. Continuous markdown may include a markdown limit which may be configured. The optimizer may then set the markdown to any amount within the markdown limit, as is desired to fulfill a particular goal. Configuring the markdown limits may include setting limits as to the allowed degree of a markdown. These limits may include an upper as well as lower limit. Markdown limits may be provided in terms of dollar amounts, percentages, or may be tied to external data such as cost data and competitor pricing.
In some embodiments, a steepest markdown may be configured (not shown). Steepest markdown may limit the rate of markdown for a particular product. For example, the steepest markdown may be configured to be a maximum of a 5% drop over any week period. Thus, in this example, even if a 10% markdown is optimal, the first week may be a 5% markdown and then at the second week the markdown may be increased to 10%.
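The steepest-markdown rule is a simple rate limit on the weekly markdown step, as in the 5%-per-week example above. The function name is illustrative; percentages are whole numbers (5 means 5%).

```python
# Sketch of the steepest-markdown rule: even if a deeper markdown is
# optimal, each weekly step is capped, so the discount ramps toward
# the target. The function name is a hypothetical assumption.

def ramp_markdown(current_pct, target_pct, max_step_pct=5):
    """Return next week's markdown %, limited by the steepest-markdown rule."""
    return min(target_pct, current_pct + max_step_pct)
```

Applied to the example in the text, a 10% optimal markdown with a 5% weekly cap gives 5% in the first week and 10% in the second.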
Likewise, in some embodiments, markdown timing may be configured (not shown). Configuring markdown timing may restrict the number of times markdowns may occur in a given time period. This may prevent markdowns from occurring too close together.
The process then proceeds to step 2018 where item selection is configured. Item selection may include user choice of products for optimization and/or re-optimization. Item selection may be user configured for each individual product, by grouping of related products, or by store levels. In some embodiments, item selection may be automated, such that items are selected by certain trigger events. Such events may include cost changes in the product, seasonality effects, competitor action, or any additional criteria.
In some embodiments, sell-through may additionally be configured (not shown). Configuring sell-through may include setting a percentage of starting inventory that is required to be sold by a certain date. For example, the user may configure a particular product to sell at least 80% of the starting inventory within a two week period. Such a rule may apply pressures to the volume maximization functions within the optimizer. Sell-through may be configured as a percentage of the original inventory, or as a number of products (i.e., sell 50,000 widgets in the first quarter).
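Both forms of the sell-through rule (percentage of starting inventory, or an absolute unit count) can be checked with one routine. Names and the parameter shape are illustrative assumptions.

```python
# Illustrative sell-through check: a rule such as "sell at least 80% of
# starting inventory within two weeks" may be expressed either as a
# percentage or as an absolute unit count. Names are hypothetical.

def sell_through_met(starting_inventory, units_sold,
                     pct_target=None, unit_target=None):
    """True if the configured sell-through target has been reached."""
    if unit_target is not None:
        return units_sold >= unit_target            # absolute form
    if pct_target is not None:
        return units_sold >= starting_inventory * pct_target  # % form
    raise ValueError("configure either pct_target or unit_target")
```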
The process then concludes by progressing to step 1904 of
Else, if at step 2102 volume is not the desired primary objective, the process then proceeds to step 2104 where an inquiry is made as to whether an inverse weight objective is desired. Inverse weighting provides a primary profit maximization goal; however, as time progresses the secondary objective, maximizing volume, may be given increasing weight. This enables greater sell through over time. Inverse weighting will be discussed below in more detail at
If inverse weight objective is desired, the process then proceeds to step 2114 where the inverse weighting objective is applied. The process then concludes by progressing to step 2006 of
Otherwise, if at 2104 an inverse weighting function is not desired, the process then proceeds to step 2106 where profit is set as the primary objective. Volume is set as the secondary objective at step 2108. The process then concludes by progressing to step 2006 of
In this example, VolthenPFT is the inverse weighting function. The SalesVol (or SalesVol(t)) term refers to the sales objective. The added argument t indicates the allowance for a simple or complicated dependence on the time dimension. Note that implicitly the summation would typically cover other dimensions such as the product set, store set, etc. This sales objective is multiplied by the weighting coefficient, W (or W(t); the argument t denotes dependence on time). This weighting coefficient, W(t), may be a linear function dependent upon time. In some embodiments, the weighting coefficient, W, may be a more complicated weighting function that incorporates time, sell-through rates, events, POS data adhesion to forecasts, or any other desired factor. The sales objective multiplied by the weighting coefficient may then be summed, and the maximum may be taken to give the inverse weighting function.
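A minimal sketch of this objective follows. The exact form of VolthenPFT is not fully specified above, so this assumes a blend of profit with a time-weighted volume term, with W(t) linear in time so that volume gains weight as the season progresses; the blend, the linear W(t), and all names are assumptions.

```python
# Hedged sketch of the inverse weighting objective. Assumes the form
# sum_t [ profit(t) + W(t) * sales_volume(t) ], with W(t) rising
# linearly over the horizon. The blend and the function names are
# assumptions; the text specifies only a time-dependent W(t) applied
# to the sales objective.

def inverse_weight(t, horizon, w0=0.0, w1=1.0):
    """Linear weighting coefficient W(t), rising from w0 to w1."""
    return w0 + (w1 - w0) * (t / horizon)

def vol_then_pft(profit_by_week, volume_by_week, horizon):
    """Objective value: profit plus time-weighted sales volume, summed."""
    return sum(
        profit_by_week[t] + inverse_weight(t, horizon) * volume_by_week[t]
        for t in range(horizon)
    )
```

An optimizer would take the maximum of this value over candidate markdown schedules; early weeks are driven almost entirely by profit, while later weeks increasingly reward volume, which produces the greater sell-through over time described above.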
For
The process then proceeds to step 2204 where the weighting coefficient, W, is configured as a function of time. As previously mentioned, the weighting coefficient, W, may additionally be configured to incorporate sell-through rates, events, POS data adhesion to forecasts, or any other desired factor. Thus, the weighting coefficient, W, may be a function comprised of the initial weighting coefficient plus any time, or other factor, dependencies.
The process then proceeds to step 2206 where the weighting coefficient, W, is applied to maximization of sales. That is, to multiply the weighting coefficient, W, by the sales volume and take the maximum of the resulting sum.
The process then concludes by progressing to step 2006 of
Else, if at step 2302 the user does not desire to select POS handling, then the process proceeds to step 2306 where POS handling rules are automatically selected. The process then concludes by progressing to step 2012 of
Else, if at step 2402 there is no optimization failure, the initial rule set, as configured at
Else, if a rule incompatibility exists the process then proceeds to step 2410 where an inquiry is made as to whether the rule incompatibility is beyond a tolerance level. If the incompatibility is below the tolerance, then GAMS may be run on the configured tolerance, at step 2412. This enables minor rule incompatibilities to be overlooked.
Otherwise, if at step 2410 the incompatibility is beyond tolerance then the process then proceeds to step 2416 where the rule is broken. In some embodiments, this may occur through rule relaxation, wherein rules are prioritized and the least priority rule which resolves the conflict is incrementally relaxed. The process then proceeds to step 2412 where a GAMS run may be performed to the configured tolerance. The GAMS run may result in a markdown plan which may be reported at step 2414. The process then concludes by progressing to step 1908 of
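The rule-relaxation step above can be sketched as an iterative loop. The rule record layout, the step size, and the `is_feasible` callback (standing in for a solver feasibility check such as a GAMS pre-solve) are all illustrative assumptions.

```python
# Sketch of rule relaxation: rules are prioritized, and the lowest-
# priority rule is incrementally relaxed until the rule set becomes
# feasible. `is_feasible` is an assumed callback into the solver.

def relax_rules(rules, is_feasible, step=0.05, max_iters=100):
    """Relax the lowest-priority rule bound until feasibility is reached.

    `rules` is a list of dicts with 'name', 'priority' (higher = more
    important), and 'bound'; relaxation loosens 'bound' by `step`.
    """
    rules = [dict(r) for r in rules]              # do not mutate the input
    for _ in range(max_iters):
        if is_feasible(rules):
            return rules
        weakest = min(rules, key=lambda r: r["priority"])
        weakest["bound"] += step                  # incremental relaxation
    raise RuntimeError("could not relax rules into feasibility")
```

Relaxing only the least important rule, and only by small increments, keeps the broken rule as close to the user's intent as the conflict allows.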
Else, if the user is not choosing to change rule configuration, the process then proceeds to step 2504 where an inquiry is made as to whether new POS data is provided which does not conform to the forecast. This may occur when there is an unexpected event, or when the demand models used to develop the forecasts are deficient. If the new POS data conforms to forecast data, the process then concludes by progressing to step 1918 of
Otherwise, if the new POS data is nonconforming to forecasts, then the process proceeds to step 2506 where an inquiry is made as to whether the discrepancy between POS data and forecasts are above a threshold. By checking against a threshold, minor deviations between POS data and forecasts may be ignored. However, discrepancies deemed significant may prompt a model refresh and a pricing markdown plan re-optimization. Thus, if the discrepancy is below the threshold, the process may conclude by progressing to step 1918 of
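The threshold check above can be sketched as a relative-deviation test. Aggregating over the period and the default threshold value are assumptions; the text specifies only that minor deviations are ignored and significant ones trigger a refresh.

```python
# Sketch of the discrepancy check: compare new POS data with the
# forecast and trigger a model refresh and re-optimization only when
# the relative deviation exceeds a configured threshold. Aggregation
# over the period and the default threshold are assumptions.

def needs_refresh(pos_actuals, forecasts, threshold=0.15):
    """True if POS data deviates from forecast by more than `threshold`."""
    total_actual = sum(pos_actuals)
    total_forecast = sum(forecasts)
    if total_forecast == 0:
        return total_actual != 0          # any sales against a zero forecast
    discrepancy = abs(total_actual - total_forecast) / total_forecast
    return discrepancy > threshold
```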
Else, if at step 2602 the user did not choose to reconfigure the rules, the process then proceeds to step 2604 where any rules that require changes due to new POS data are reconfigured. The process then proceeds to step 2608 where an inquiry is made as to whether the rule change is infeasible. If the new rule set is feasible, the process then proceeds to step 2614 where the final rule set is approved by the user. The process then concludes by progressing to step 1914 of
Otherwise, if the new rule set is found infeasible at step 2608, the process then proceeds to step 2610 where an inquiry is made as to whether user input is required. If user input is required the process then proceeds to step 2606, where the user updates the rules. The process then proceeds to step 2614 where final rule set is approved by the user. The process then concludes by progressing to step 1914 of
Else, if user input is not required the process then proceeds to step 2612, where rules are relaxed. The process then proceeds to step 2614 where final rule set is approved by the user. The process then concludes by progressing to step 1914 of
With re-optimization, the user is free to edit most of the rules involved. This may lead to infeasibilities in the previously recommended prices. For anything prior to the price execution date, the system may be configured to ignore that the user did not adhere to the rules, as rules are meant to be forward looking. However, some of these infeasibilities will affect the prices going forward.
For example, in general, infeasibilities can be divided into the following: 1) where the previously recommended price in the week before price execution is in itself an infeasible price. This can be because the allowable percent offs have changed, or because the price points have changed, or the maximum lifetime markdown % has changed. Overridden prices might also have been infeasible; 2) where the previously recommended prices prior to the price execution date do not adhere to the rules in the new optimization. This is of little concern, as optimization is forward looking; and 3) where the previously recommended prices prior to the price execution date, in addition to the new rules, make optimization after the price execution date infeasible. This can happen if more markdowns were taken in the past than the new total allows. This may also occur if the user changed the maximum lifetime markdown to something that is lower than a markdown taken in the past.
In some embodiments, such infeasibilities may be resolved, respectively, in the following ways: 1) the optimization may be changed to allow for infeasible pePrices. However, the system may be configured to move everything to a set of prices that are on the same schedule and on the same markdown level as soon as the lead time is passed; 2) the system may ignore non-adherence of previously recommended prices prior to the price execution date; and 3) the system may be configured to check to see if a product exceeded the maximum lifetime markdown allowed or has taken more than the total number of markdowns. If either of these conditions is true, then the system may be configured to not optimize for the entire schedule.
Additionally, implemented markdowns could very well be different across the schedule. Thus the system may be configured to allow for infeasible pePrices, and markdown them down to the same schedule as soon as possible.
Also, if the user has changed the maximum number of markdowns, it is possible to have surpassed this number of allowed markdowns. If a product has been marked down more than the maximum number allowed, the system may stop marking down the entire schedule.
Moreover, if the user changes the allowable percent off, it is possible that the previously recommended price is no longer feasible. Since prices can only go down, there might not be a feasible price point to go down to. In such a situation the system may remove all products from the optimization that do not have a feasible price point to go down to. All the other products may still be optimized. This check may be done together with the maximum lifetime markdown check. Alternatively, the system may be configured to not allow users to edit the percent offs field.
The process then proceeds to step 2708 where an inquiry is made as to whether to utilize the previous optimization. When re-optimization is prompted by user rule changes, the previous optimization may be an acceptable demand model. Thus, by using the previous optimization, time and computational resources may be conserved.
If the previous optimization may be utilized, the process then proceeds to step 2710 where the previous optimization coefficients are utilized. The optimization may then be applied to the updated rule parameters at step 2714. The process then concludes by progressing to step 1908 of
Else, if at step 2708 the previous optimization is not to be utilized, such as in the situation where there are event changes that make the previous demand models inaccurate, then the process then proceeds to step 2712 where new optimization coefficients are generated. The optimization may then be applied to the updated rule parameters at step 2714. The process then concludes by progressing to step 1908 of
In the specification, examples of product are not intended to limit products covered by the claims. Products may for example include food, hardware, software, real estate, financial devices, intellectual property, raw material, and services. The products may be sold wholesale or retail, in a brick and mortar store or over the Internet, or through other sales methods.
Additionally, it should be noted that the present invention may be embodied as entirely hardware, entirely software, or some combination thereof.
While this invention has been described in terms of several preferred embodiments, there are alterations, permutations, and substitute equivalents, which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and apparatuses of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations, and substitute equivalents as fall within the true spirit and scope of the present invention.
This is a continuation-in-part of co-pending U.S. application Ser. No. 11/365,634 filed on Feb. 28, 2006, entitled “Computer Architecture”, which is hereby fully incorporated by reference.
| | Number | Date | Country |
|---|---|---|---|
| Parent | 11365634 | Feb 2006 | US |
| Child | 12208342 | | US |