The present invention relates to an optimization system, an optimization method, and an optimization program that perform optimization based on a predictive model.
Various methods for generating predictive models based on past historical data have been proposed in recent years. For example, Patent Literature (PTL) 1 describes a learning method that automatically separates and analyzes mixture data.
As a method for performing optimization on a quantitative problem (hereafter, “mathematical optimization”), numerical optimization (mathematical programming) is known. Examples of mathematical programming include continuous variable-related methods such as linear programming, quadratic programming, and semidefinite programming, and discrete variable-related methods such as mixed integer programming. PTL 2 describes a method for determining an optimal charging schedule by applying mathematical programming to collected data.
PTL 1: U.S. Pat. No. 8,909,582
PTL 2: Japanese Patent Application Laid-Open No. 2012-213316
Mathematical optimization is typically performed on the premise that data input to mathematical programming is observed. For example, in the case of optimizing production lines of industrial products, data such as the quantity of material, cost, and production time necessary to produce a product in each line is input.
In the case where data is unobserved, however, data needs to be prepared manually, which hinders extensive optimization or frequent optimization. For example, if a future demand predictive line of each product in a retail store is obtained, it is possible to optimize order and stock based on the demand. However, the number of products for which a demand predictive line can be drawn manually is limited. Besides, it is not practical to repeat manual demand prediction upon each order that takes place once every several hours.
For example, to optimize the price of each product for a future period so as to maximize the sales in the period, a complex correlation between the price and demand of a large volume of products needs to be recognized, but doing so manually is difficult.
In view of these findings, the inventors of the present invention made an invention relating to a method of learning a model for predicting unobservable data from past data by, for example, the method described in PTL 1, automatically generating an objective function of mathematical programming and a constraint condition based on a future prediction result obtained based on the predictive model, and performing optimization. According to such an invention, appropriate optimization can be performed even in a situation where there is massive unobservable input data in mathematical optimization or there are complex correlations between massive data.
In a process of performing such optimization, there are instances where the predictive model based on machine learning is based on a non-linear basis function. For example, consider the case of performing, for the above-mentioned price prediction problem, non-linear transformation such as squaring of price and logarithmic transformation of price as a feature value input to the predictive model based on machine learning. In this case, the objective function (sales for a future period) of mathematical optimization is a function of the feature value obtained by complex non-linear transformation of the price, so that efficiently solving such mathematical optimization using a typical method is difficult. Accordingly, it is preferable if mathematical optimization can be solved at high speed with high accuracy even in the case where a predictive model used for optimization is based on a non-linear basis function.
The present invention therefore has an object of providing an optimization system, an optimization method, and an optimization program that can solve mathematical optimization at high speed with high accuracy even in the case where a predictive model used for optimization is based on a non-linear basis function.
An optimization system according to the present invention is an optimization system for optimizing a value of an objective variable so that a value of an objective function is optimal, the optimization system including: a model input unit for receiving a linear regression model represented by a function having the objective variable as an explanatory variable; a candidate point input unit for receiving, for the objective variable included in the linear regression model, at least one candidate point which is a discrete candidate for a possible value of the objective variable; and an optimization unit for calculating the objective variable that optimizes the objective function having the linear regression model as an argument, wherein the optimization unit selects a candidate point that optimizes the objective variable, to calculate the objective variable.
An optimization method according to the present invention is an optimization method for optimizing a value of an objective variable so that a value of an objective function is optimal, the optimization method including: receiving a linear regression model represented by a function having the objective variable as an explanatory variable; receiving, for the objective variable included in the linear regression model, at least one candidate point which is a discrete candidate for a possible value of the objective variable; and calculating the objective variable that optimizes the objective function having the linear regression model as an argument, wherein in the optimization, a candidate point that optimizes the objective variable is selected to calculate the objective variable.
An optimization program according to the present invention is an optimization program applied to a computer for optimizing a value of an objective variable so that a value of an objective function is optimal, the optimization program causing the computer to execute: a model input process of receiving a linear regression model represented by a function having the objective variable as an explanatory variable; a candidate point input process of receiving, for the objective variable included in the linear regression model, at least one candidate point which is a discrete candidate for a possible value of the objective variable; and an optimization process of calculating the objective variable that optimizes the objective function having the linear regression model as an argument, wherein in the optimization process, the computer is caused to select a candidate point that optimizes the objective variable, to calculate the objective variable.
According to the present invention, a technically advantageous effect of solving mathematical optimization at high speed with high accuracy even in the case where a predictive model used for optimization is based on a non-linear basis function can be achieved by the technical means described above.
An overview of the present invention is given first. According to the present invention, in a situation where there is massive unobservable input data in mathematical optimization or there are complex correlations between massive data, massive unobservable data or complex data correlations are learned by a machine learning technique to perform appropriate optimization. In detail, according to the present invention, a model for predicting unobservable data from past data is learned by, for example, the method described in PTL 1, and an objective function and a constraint condition in mathematical programming are automatically generated based on a future prediction result obtained based on the predictive model, to perform optimization.
Exemplary embodiments of the present invention are described below, with reference to drawings. The following describes, where appropriate, an example where, based on sales prediction of a plurality of products, the prices of the plurality of products are optimized so as to maximize the total sales revenue of the plurality of products. However, the optimization target is not limited to such an example. In the following description, a variable subjected to prediction by machine learning is referred to as “explained variable”, a variable used for prediction is referred to as “explanatory variable”, and a variable as optimization output is referred to as “objective variable”. These variables are not in an exclusive relationship. For example, a part of explanatory variables can be an objective variable.
A method of automatically generating an objective function and a constraint condition in mathematical programming based on a future prediction result obtained based on a predictive model and performing optimization is described first.
The training data storage unit 10 stores each type of training data used by the learner 20 to learn a predictive model. In this exemplary embodiment, the training data storage unit 10 stores historical data acquired in the past, for a variable (objective variable) output as an optimization result by the below-mentioned optimization device 30. For example, in the case where the optimization device 30 is to optimize the prices of a plurality of products, the training data storage unit 10 stores the price of each product corresponding to an explanatory variable and the sales amount of each product corresponding to an explained variable, as historical data acquired in the past.
The training data storage unit 10 may also store external information such as weather and calendar information, other than the explained variable historical data and explanatory variable historical data acquired in the past.
The learner 20 learns a predictive model for each set explained variable by machine learning, based on each type of training data stored in the training data storage unit 10. The predictive model learned in this exemplary embodiment is expressed as a function of a variable (objective variable) output as an optimization result by the below-mentioned optimization device 30. In other words, the objective variable (or its function) is the explanatory variable of the predictive model.
For example, in the case of optimizing the prices so as to maximize the total sales revenue, the learner 20 generates, for each target product, a predictive model of sales amount having the price of the product as an explanatory variable, based on past sales information (price, sales amount, etc.) and external information (weather, temperature, etc.). By generating such a predictive model using the sales amounts of the plurality of products as an explained variable, it is possible to model price-demand relationships and market cannibalization caused by competing products, while taking complex external relationships such as weather into account.
The predictive model generation method may be any method. For example, a simple regression approach may be used, or the learning method described in PTL 1 may be used.
Here, an optimization target index set is denoted as {m|m=1, . . . , M}. In the above-mentioned example, the optimization target is the price of each product, and M corresponds to the number of products. An object of prediction for each optimization target m is denoted as Sm. In the above-mentioned example, Sm corresponds to the sales amount of product m. An object of optimization (i.e. objective variable of optimization) for each optimization target m is denoted as Pm or P′m. In the above-mentioned example, Pm corresponds to the price of product m. When modeling the dependency between Sm (e.g. sales amount (demand)) and Pm (e.g. price) using linear regression, a predictive model for predicting Sm is represented by the following Formula 1 as an example.
In Formula 1, fd is a feature generation function, and represents a transformation of P′m. D is the number of feature generation functions, and indicates the number of transformations performed on P′m. fd may be any function, such as a linear transformation function or a non-linear transformation function, e.g. logarithm or polynomial. In the case where Pm denotes the price of product m and Sm denotes the sales amount of product m as mentioned above, fd represents, for example, the sales reaction to the price. The sales reaction is, for example, as follows: a certain price reduction leads to a better or worse sales reaction, or the sales amount responds quadratically to the price reduction.
In Formula 1, gd is an external feature (weather, etc. in the above-mentioned example), and D′ is the number of external features. Regarding the external feature, transformation may be performed beforehand. Moreover, α, β, and γ in Formula 1 are constant terms and coefficients of a regression equation obtained as a result of machine learning by the learner 20. As is clear from the above description, the predictive model is learned based on the explained variable (Sm) and the explanatory variable (Pm, each type of external feature, etc.), indicates the relationship between the explained variable and the explanatory variable, and is represented by a function of the explanatory variable.
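For reference, a minimal sketch of learning a predictive model of the form of Formula 1 is given below. It assumes squared and logarithmic price transforms as the feature generation functions fd and uses scikit-learn's LinearRegression as the learner; the data, feature choices, and variable names are illustrative assumptions, not part of the embodiment.

# Minimal sketch of a Formula 1 style predictive model: the sales amount S_m of one
# product is regressed on non-linear transforms f_d of every product's price plus
# external features g_d' (weather, etc.). Feature choices and data are toy values.
import numpy as np
from sklearn.linear_model import LinearRegression

def make_features(prices, external):
    # prices: (n_samples, M) prices of all products; external: (n_samples, D') external features
    feats = [prices, prices ** 2, np.log(prices)]   # f_d: identity, square, logarithm
    return np.hstack(feats + [external])

# toy historical data: 100 days, M = 4 products, 2 external features
rng = np.random.default_rng(0)
prices = rng.uniform(100, 300, size=(100, 4))
external = rng.normal(size=(100, 2))
sales_of_product_0 = 500 - 1.2 * prices[:, 0] + 0.5 * prices[:, 1] + rng.normal(size=100)

model = LinearRegression().fit(make_features(prices, external), sales_of_product_0)
# model.intercept_ plays the role of alpha; model.coef_ contains the beta (price) and gamma (external) terms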
The foregoing Formula 1 may be modified as shown in the following Formula 2, based on the passage of time.
In Formula 2, superscript t represents a time index. For example, this corresponds to the case of temporally sliding the training data set by a window function and updating the predictive formula with time t. Thus, the predictive model is learned based on historical data of the objective variable of optimization acquired in the past, and represented by a function having this objective variable as an explanatory variable. Since the learner 20 uses historical data acquired in the past in this way, there is no need to manually generate training data. Moreover, since the predictive model is learned by machine learning, even massive data can be handled, and the model can be automatically relearned to follow the sales amount trend which varies with time. The learner 20 outputs the generated predictive model to the optimization device 30.
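A hedged sketch of the idea behind Formula 2 follows: the regression is refit on a sliding window of past days so that the coefficients are updated with time t. The window length and data below are illustrative assumptions.

# Refit the regression on a sliding window so the coefficients follow the trend over time t.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
X_hist = rng.normal(size=(200, 5))                 # feature rows (f_d(P), g_d') per day, toy values
y_hist = X_hist @ rng.normal(size=5) + rng.normal(size=200)

window = 28                                        # days; illustrative
models_by_t = {t: LinearRegression().fit(X_hist[t - window:t], y_hist[t - window:t])
               for t in range(window, len(y_hist))}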
The optimization device 30 performs objective optimization. In detail, the optimization device 30 optimizes the value of an objective variable so that the value of an objective function is optimal (maximum, minimum, etc.), while satisfying each type of constraint condition (described in detail later) set for the objective variable and the like. In the above-mentioned example, the optimization device 30 optimizes the prices of the plurality of products.
The optimization device 30 includes a predictive model input unit 31, an external information input unit 32, a storage unit 33, a problem storage unit 34, a constraint condition input unit 35, an optimization unit 37, an output unit 38, and an objective function generation unit 39.
The predictive model input unit 31 is a device that receives a predictive model. In detail, the predictive model input unit 31 receives the predictive model learned by the learner 20. When receiving the predictive model, the predictive model input unit 31 also receives parameters necessary for the optimization process. The predictive model input unit 31 may receive a predictive model obtained by an operator manually correcting the predictive model learned by the learner 20. The predictive model input unit 31 thus receives a predictive model used in the optimization device 30, and so can be regarded as the predictive model reception unit for receiving a predictive model.
The external information input unit 32 receives external information used for optimization other than the predictive model. As an example, in the case of optimizing the prices for next week in the above-mentioned example, the external information input unit 32 may receive information about next week's weather. As another example, in the case where next week's store traffic is predictable, the external information input unit 32 may receive information about next week's store traffic. As in this example, the external information may be generated by the predictive model resulting from machine learning. The external information received here is, for example, applied to the explanatory variable of the predictive model.
The storage unit 33 stores the predictive model received by the predictive model input unit 31. The storage unit 33 also stores the external information received by the external information input unit 32. The storage unit 33 is realized by a magnetic disk device as an example.
The problem storage unit 34 stores an evaluation scale of optimization by the optimization unit 37. In detail, the problem storage unit 34 stores a mathematical programming problem to be solved by optimization. The mathematical programming problem is stored in the problem storage unit 34 beforehand by a user or the like. The problem storage unit 34 is realized by a magnetic disk device as an example.
In this exemplary embodiment, the objective function or constraint condition of the mathematical programming problem is defined so that the predictive model is a parameter. In other words, the objective function or constraint condition in this exemplary embodiment is defined as a functional of the predictive model. In the above-mentioned example, the problem storage unit 34 stores a mathematical programming problem for maximizing the total sales revenue. In this case, the optimization unit 37 optimizes the price of each product so as to maximize the total sales revenue. Since the sales revenue of each product can be defined by multiplication of the price of the product by the sales amount predicted by the predictive model, the problem storage unit 34 may store, for example, a mathematical programming problem specified by the following Formula 3.
In Formula 3, Tte is a time index of a period subjected to optimization. For example, in the case of maximizing the total sales revenue for next week where the unit of time is “day”, Tte is a set of dates for one week from the next day.
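For reference, a minimal sketch of evaluating the Formula 3 objective is given below. It assumes that make_features and a list sales_models of per-product models fitted as in the learning sketch above are available, and that external_next_week holds forecast external features for the days in Tte; these names are illustrative assumptions.

# Formula 3 sketch: total sales revenue over the target period T_te is the sum, over
# days and products, of price times the sales amount predicted by each product's model.
import numpy as np

def total_revenue(price_vector, sales_models, external_next_week):
    # price_vector: (M,) candidate prices; external_next_week: (days, D') forecast features
    revenue = 0.0
    for day_features in external_next_week:                       # t in T_te
        X = make_features(price_vector[None, :], day_features[None, :])
        for m, model_m in enumerate(sales_models):                # m = 1 .. M
            revenue += price_vector[m] * model_m.predict(X)[0]    # P_m * S_m(t)
    return revenue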
The constraint condition input unit 35 receives a constraint condition in optimization. The constraint condition may be any condition. An example of the constraint condition is a business constraint. For example, in the case where a quota is imposed for the sales amount of a product, a constraint condition “Sm(t)≥quota” may be used. Moreover, a constraint condition (e.g. P1≥P2) specifying the magnitude relationship between the prices P1 and P2 of respective two products may be used.
In the case where the constraint condition has the predictive model as an argument, the constraint condition input unit 35 may operate as the predictive model reception unit for receiving a predictive model, or read the predictive model stored in the storage unit 33. The constraint condition input unit 35 may then generate the constraint condition having the acquired predictive model as an argument.
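As a non-limiting illustration of such constraints, the small sketch below checks a sales quota on a model prediction and a price ordering P1 ≥ P2; the model, make_features, and the quota value are assumptions carried over from the sketches above.

# Illustrative constraint checks: a business quota on predicted sales (S_1(t) >= quota)
# and a magnitude relation between two prices (P1 >= P2). Names and values are assumptions.
def satisfies_constraints(price_vector, model_for_product_0, day_features, quota=50.0):
    X = make_features(price_vector[None, :], day_features[None, :])
    predicted_sales = model_for_product_0.predict(X)[0]
    return predicted_sales >= quota and price_vector[0] >= price_vector[1]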
The objective function generation unit 39 generates the objective function of the mathematical programming problem. In detail, the objective function generation unit 39 generates the objective function of the mathematical programming problem having the predictive model as a parameter. For example, the objective function generation unit 39 reads, from the storage unit 33, the predictive model to be applied to the mathematical programming problem stored in the problem storage unit 34, and generates the objective function.
A plurality of predictive models are learned by machine learning depending on the object of prediction, as in the above-mentioned example. In this case, the storage unit 33 stores the plurality of predictive models. Here, the objective function generation unit 39 may read, from the storage unit 33, the plurality of predictive models to be applied to the mathematical programming problem stored in the problem storage unit 34, and generate the objective function.
The optimization unit 37 performs objective optimization based on the received various information. In detail, the optimization unit 37 optimizes the value of the objective variable so that the value of the objective function is optimal. Since each type of constraint condition is set for the objective variable and the like, the optimization unit 37 optimizes the value of the objective variable so that the value of the objective function is optimal (maximum, minimum, etc.), while satisfying the constraint conditions.
In this exemplary embodiment, the optimization unit 37 can be regarded as solving the mathematical programming problem so as to optimize the value of the objective function having the predictive model as a parameter as mentioned above. For example, the optimization unit 37 may optimize the prices of the plurality of products by solving the mathematical programming problem specified in the foregoing Formula 3. In the case where a constraint condition has the predictive model as an argument, the optimization unit 37 can be regarded as calculating the objective variable that optimizes the objective function under this constraint condition.
The output unit 38 outputs the optimization result by the optimization unit 37.
The predictive model input unit 31, the external information input unit 32, the constraint condition input unit 35, the optimization unit 37, the output unit 38, and the objective function generation unit 39 are realized by a CPU of a computer operating according to a program (information processing program or optimization program). For example, the program may be stored in the storage unit 33 in the optimization device 30, with the CPU reading the program and, according to the program, operating as the predictive model input unit 31, the external information input unit 32, the constraint condition input unit 35, the optimization unit 37, the output unit 38, and the objective function generation unit 39.
Alternatively, the predictive model input unit 31, the external information input unit 32, the constraint condition input unit 35, the optimization unit 37, the output unit 38, and the objective function generation unit 39 may each be realized by dedicated hardware. The predictive model input unit 31, the external information input unit 32, the constraint condition input unit 35, the optimization unit 37, the output unit 38, and the objective function generation unit 39 may each be realized by electric circuitry. The term “circuitry” here conceptually covers a single device, multiple devices, a chipset, and a cloud. The optimization system according to the present invention may be realized by two or more physically separate devices that are connected wiredly or wirelessly.
The operation of the optimization system in this exemplary embodiment is described below.
The predictive model input unit 31 receives the predictive model generated by the learner 20 (step S12), and stores the predictive model in the storage unit 33. The external information input unit 32 receives external information (step S13), and stores the external information in the storage unit 33.
The objective function generation unit 39 reads one or more predictive models received by the predictive model input unit 31 and a mathematical programming problem stored in the problem storage unit 34. The objective function generation unit 39 then generates an objective function of the mathematical programming problem (step S14). The constraint condition input unit 35 receives a constraint condition in optimization (step S15).
The optimization unit 37 optimizes the value of the objective variable so that the value of the objective function is optimal, under the received constraint condition (step S16).
As described above, in this exemplary embodiment, the predictive model input unit 31 receives a predictive model that is learned based on an explained variable and an explanatory variable, indicates the relationship between the explained variable and the explanatory variable, and is represented by a function of the explanatory variable. The optimization unit 37 calculates, for an objective function having the received predictive model as an argument, an objective variable that optimizes the objective function, under a constraint condition.
In detail, the objective function generation unit 39 defines the objective function of the mathematical programming problem using the predictive model as an argument, and the optimization unit 37 optimizes the value of the objective variable so as to maximize the value of the objective function of the mathematical programming problem, under the constraint condition having the predictive model as an argument. With such a structure, appropriate optimization can be performed even in a situation where there is unobservable input data in mathematical optimization.
This exemplary embodiment describes the method of optimizing the prices of the plurality of products so as to maximize the total sales revenue. Alternatively, the optimization unit 37 may optimize the prices of the plurality of products so as to maximize the profit.
Application examples of Exemplary Embodiment 1 are described below using simple specific examples, to facilitate the understanding of Exemplary Embodiment 1. First, an example of optimizing, based on sales prediction for a plurality of products, the prices of the plurality of products so as to maximize the total sales revenue of the plurality of products is described below as a first application example.
For example, consider the case of maximizing the total sales revenue of a sandwich group for the next month in a retail store. The sandwich group includes four types of sandwiches: sandwiches A, B, C, and D. In this case, a problem of optimizing the sales price of each of sandwiches A, B, C, and D so as to maximize the total sales revenue of the sandwich group, i.e. the total sales revenue of the four types of sandwiches A, B, C, and D, is to be solved.
The training data storage unit 10 stores data indicating the past sales revenue of each sandwich and the past sales price of each sandwich. The training data storage unit 10 may store external information such as weather and calendar information.
The learner 20 learns, for example, a predictive model for predicting the sales amount of each sandwich by machine learning, based on each type of training data stored in the training data storage unit 10.
A predictive model for predicting the sales amount of sandwich A is described below, as an example. The sales amount of sandwich A is expected to be influenced by the sales price of sandwich A. The sales amount of sandwich A is expected to be also influenced by the sales prices of the sandwiches displayed together with sandwich A on the product shelves, namely, sandwiches B, C, and D. This is because customers who visit the retail store are likely to selectively purchase a favorable sandwich from among sandwiches A, B, C, and D displayed together on the product shelves.
In such a situation, for example, suppose there is a day when sandwich B is sold at a greatly reduced price. Even a customer who usually prefers sandwich A may select and purchase not sandwich A but sandwich B on such a day. Given that the amount of sandwich a customer (person) can eat at one time is limited, a typical customer is unlikely to purchase both sandwiches A and B.
In such a case, selling sandwich B at a reduced price results in a decrease in the sales amount of sandwich A. This relationship is called cannibalization (market cannibalization).
In other words, cannibalization is such a relationship in which reducing the price of a product increases the sales amount of the product but decreases the sales amount of other competing products (a plurality of products similar in property or feature).
Therefore, the predictive model for predicting sales amount SA (explained variable) of sandwich A can be represented, for example, as a function including price PA of sandwich A, price PB of sandwich B, price PC of sandwich C, and price PD of sandwich D as explanatory variables.
The learner 20 generates each of a predictive model for predicting sales amount SA of sandwich A, a predictive model for predicting sales amount SB of sandwich B, a predictive model for predicting sales amount SC of sandwich C, and a predictive model for predicting sales amount SD of sandwich D, based on each type of training data stored in the training data storage unit 10.
Here, based on the assumption that the sales of sandwiches are influenced by external information (weather, temperature, etc.), each predictive model may be generated while also taking this external information into account. Moreover, the predictive model may be generated while taking the passage of time into account. The predictive model is, for example, represented by the foregoing Formula 1 or 2.
As is clear from the above description, the predictive model is learned based on the explained variable (the sales amount of a sandwich in this exemplary embodiment) and the explanatory variable (the sales price of the sandwich, the sales prices of its competing sandwiches, etc. in this exemplary embodiment), indicates the relationship between the explained variable and the explanatory variable, and is represented by a function of the explanatory variable.
The optimization device 30 performs objective optimization, i.e. optimization of each of the respective sales prices (i.e. PA, PB, PC, and PD) of sandwiches A, B, C, and D. In detail, the optimization device 30 optimizes the value of the objective variable (i.e. PA, PB, PC, and PD) so as to maximize the value of the objective function (i.e. the total sales revenue of the sandwich group), while satisfying each type of constraint condition set for the objective variable (i.e. PA, PB, PC, and PD), etc. The objective function is represented by the foregoing Formula 3 as an example.
This application example is an example in which the objective function is defined using the predictive model as an argument, where the objective function (i.e. the total sales revenue of the sandwich group) handled by the optimization device 30 can be represented by the foregoing Formula 3.
Suppose the optimization device 30 stores the “form” of objective function represented by the foregoing Formula 3 beforehand. The optimization device 30 generates the objective function of the optimization problem, by assigning the predictive model generated by the learner 20 (i.e. the predictive model for predicting SA, the predictive model for predicting SB, the predictive model for predicting SC, and the predictive model for predicting SD) to the “form” of objective function.
The optimization device 30 calculates, for the objective function having the predictive model as an argument, the value of the objective variable (i.e. the values of PA, PB, PC, and PD) that optimizes the objective function under the constraint condition. An application example of Exemplary Embodiment 1 has been described above using a simple specific example. Although the above describes the case where the sales price of each individual product is optimized so as to maximize the total sales revenue of only four products for simplicity's sake, the number of optimization targets is not limited to four, and may be two, three, or five or more. Moreover, the prediction target is not limited to a product, and may be a service or the like.
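For reference, a brute-force illustration of this application example is sketched below: every combination of a few candidate prices for sandwiches A to D is tried and the combination maximizing predicted total revenue is kept. The demand function is a toy stand-in (a higher own price lowers demand, higher competitor prices raise it, mimicking cannibalization), and the candidate prices are illustrative assumptions.

# Brute-force illustration of the first application example: enumerate candidate prices
# for the four sandwiches and keep the combination maximizing predicted total revenue.
import itertools
import numpy as np

def predicted_sales(prices):
    # toy demand: base level minus own-price effect plus cross-price (cannibalization) effect
    own = 1.5 * prices
    cross = 0.3 * (prices.sum() - prices)
    return np.maximum(600 - own + cross, 0.0)

candidate_prices = [300, 320, 350]                          # yen, illustrative
best_value, best_prices = -np.inf, None
for combo in itertools.product(candidate_prices, repeat=4):
    prices = np.array(combo, dtype=float)
    value = float(prices @ predicted_sales(prices))         # Formula 3: sum of price x sales amount
    if value > best_value:
        best_value, best_prices = value, combo
print(best_prices, best_value)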
Next, consider the case of handling a problem of optimizing the sales price of each individual product so as to maximize the total sales amount of a large volume of products in an actual retail store. Manually defining an objective function of such a mathematical programming problem (optimization problem) is too complicated and is not practical.
For example, if a future demand predictive line of each product in the retail store is obtained, it is possible to optimize order and stock based on the demand. However, the number of products for which a demand predictive line can be drawn manually is limited. Besides, it is not practical to repeat demand prediction upon each order that takes place once in several hours. For example, to optimize the price of each product for a future period so as to maximize the sales in the period, a complex correlation between the price and demand of a large volume of products needs to be recognized, but manually doing so is difficult.
By such design that defines the “form” of objective function beforehand and defines an actual objective function using a predictive model as an argument as in the foregoing application example, an objective function of the mathematical programming problem can be efficiently generated even in a situation where there is massive unobservable input data in mathematical optimization. Moreover, in this exemplary embodiment, appropriate optimization can be performed even in a situation where there is a complex correlation between massive data as in the case of cannibalization.
Other than determining the price of each product so as to maximize the sales or profit of the product, the optimization system in this exemplary embodiment may be applied to, for example, optimization of shelving allocation for products. In this case, for example, the learner 20 learns a predictive model of sales amount Sm of product m by a linear regression model, as follows. Here, P is a product price, H is a shelf position, and θm is a parameter.
Sm = linear_regression(P, H, θm)
The optimization device 30 then optimizes P and H so as to maximize the sales (specifically, the sum of the multiplication results of price Pm and sales amount Sm of product m). Any business constraint (e.g. price condition, etc.) may be set in this case, too.
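A hedged sketch of such a shelving-allocation model follows. The shelf position H is encoded here as a one-hot vector alongside price transforms; the encoding, data, and variable names are illustrative assumptions.

# Sketch of S_m = linear_regression(P, H, theta_m): price transforms plus a one-hot shelf position.
import numpy as np
from sklearn.linear_model import LinearRegression

def shelf_features(price, shelf_position, n_shelves=3):
    onehot = np.eye(n_shelves)[shelf_position]              # H encoded as a one-hot vector
    return np.hstack([[price, price ** 2], onehot])

X = np.array([shelf_features(p, h) for p, h in [(120, 0), (110, 1), (130, 2), (125, 0)]])
y = np.array([40, 55, 30, 42])                              # toy sales amounts
shelf_model = LinearRegression().fit(X, y)                   # fitted coefficients play the role of theta_m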
Other than such shelving allocation, the optimization method according to the present invention is also applicable to optimization of an objective function represented by the multiplication result of the price of each commercial material (including both service and product) and the demand (function of the price of each of multiple commercial materials) of the commercial material in retail price optimization, hotel room price optimization, plane ticket price optimization, parking fee optimization, campaign optimization, and the like.
Following the foregoing first application example, application examples of Exemplary Embodiment 1 in these cases are described below using simple specific examples. As a second application example, hotel price optimization is described below. In this application example, since an objective is to maximize sales revenue or profit, an objective function is represented by a function for calculating sales revenue or profit. An objective variable is, for example, package rate setting for each room of a hotel. In comparison with the above-mentioned retail, for example, “sandwich” in the retail corresponds to “bed and breakfast package of single room” in this application example. External information is, for example, weather, season, any event held near the hotel, etc.
As a third application example, hotel price and stock optimization is described below. In this application example, too, since an objective is to maximize sales revenue or profit, an objective function is represented by a function for calculating sales revenue or profit. An objective variable is selected that takes price and stock into account. For example, a first objective variable is a variable indicating, for each package, when and how much a room in the package is sold, and a second objective variable is a variable indicating, for each package, when and how many rooms in the package are sold. External information is, for example, weather, season, any event held near the hotel, etc., as in the second application example.
As a fourth application example, plane ticket price and stock optimization is described below. In this application example, too, since an objective is to maximize sales revenue or profit, an objective function is represented by a function for calculating sales revenue or profit. An objective variable is selected that takes price and stock into account, as in the third application example. Suppose each plane ticket represents the route to the destination and the seat type (class). For example, a first objective variable is a variable indicating, for each plane ticket, when and how much the plane ticket is sold, and a second objective variable is a variable indicating, for each plane ticket, when and how many tickets are sold. External information is, for example, season, any event held, etc.
As a fifth application example, optimization of the parking fee of each parking lot is described below. In this application example, too, since an objective is to maximize sales revenue or profit, an objective function is represented by a function for calculating sales revenue or profit. An objective variable is, for example, a time- and location-specific parking fee. External information is, for example, the parking fee of a nearby parking lot, location information (residential area, business district, the distance from a station, etc.), and the like.
A modification of the optimization system in Exemplary Embodiment 1 is described below based on the flow of a predictive model and data (predictive data) necessary for prediction, in comparison with the structure depicted in
The optimization system depicted in
First, analytical data 110d and predictive data 120d are generated from analysis/prediction target data 100d. The analysis/prediction target data 100d includes, for example, external information 101d such as weather and calendar data, sales/price information 102d, and product information 103d.
The analytical data 110d is data used by the learning engine 170 for learning, and corresponds to data stored in the training data storage unit 10 in Exemplary Embodiment 1. The predictive data 120d is external data and other data necessary for prediction, and is specifically a value of an explanatory variable in a predictive model. The predictive data 120d corresponds to part or whole of data stored in the storage unit 33 in Exemplary Embodiment 1.
In the example depicted in
The learning engine 170 performs learning using the analytical data 110d, and outputs a predictive model 130d. The optimization device 180 receives the predictive model 130d and the predictive data 120d, and performs an optimization process.
Each type of data (analysis/prediction target data 100d (i.e. external information 101d, sales/price information 102d, and product information 103d), analytical data 110d, and predictive data 120d) depicted in
An objective function subjected to optimization is defined using a predictive model as an argument, as described in Exemplary Embodiment 1. Moreover, predictive data is also input data for optimization, as depicted in
Exemplary Embodiment 2 of an optimization system according to the present invention is described below. Exemplary Embodiment 1 describes a method of machine learning a model for predicting unobservable data from past data, automatically generating an objective function of mathematical programming and a constraint condition based on a future prediction result obtained based on the predictive model, and performing optimization.
In a process of performing such optimization, there are instances where the predictive model based on machine learning is based on a non-linear basis function, as mentioned earlier. For example, consider the case of performing, for the above-mentioned price prediction problem, non-linear transformation such as squaring of price and logarithmic transformation of price as a feature value input to the predictive model based on machine learning. In this case, the objective function (sales for a future period) of mathematical optimization is a function of the feature value obtained by complex non-linear transformation of the price, so that efficiently solving such mathematical optimization using a typical method is difficult.
Accordingly, Exemplary Embodiment 2 describes a method that can solve mathematical optimization at high speed with high accuracy even in the case where a predictive model used for optimization is based on a non-linear basis function.
The optimization device 40 includes the predictive model input unit 31, the external information input unit 32, the storage unit 33, the problem storage unit 34, the constraint condition input unit 35, a candidate point input unit 36, the optimization unit 37, the output unit 38, and the objective function generation unit 39.
The optimization device 40 is a device that performs objective optimization, as in Exemplary Embodiment 1. The optimization device 40 differs from the optimization device 30 in Exemplary Embodiment 1 in that the candidate point input unit 36 is further included. The optimization unit 37 in this exemplary embodiment performs optimization while also taking the input of the candidate point input unit 36 into account. The other components are the same as those in Exemplary Embodiment 1.
The candidate point input unit 36 receives a candidate point of optimization. A candidate point is a discrete value that is a candidate for an objective variable. In the above-mentioned example, price candidates (e.g. no discount, 5% discount, 7% discount, etc.) are candidate points. Input of such a candidate point can reduce the optimization cost.
In the example depicted in
A mathematical programming problem in the case where one or more candidate points are input is described below, using a specific example. Here, an optimization object index set is denoted as {k|k=1, . . . , K}. In the above-mentioned example, K corresponds to the number of price candidates. For example, in the case where there are four price candidates “no discount, 1% discount, 2% discount, and 5% discount” for the product “sandwich A”, K=4. An optimization object candidate set for product m is denoted as written below, where the overlined Pmk indicates a price candidate for product m in the above-mentioned example.
{P̄mk | k=1, . . . , K}
The k-th indicator of m is denoted as Zmk. Zmk satisfies the following condition.
Zmk ∈ {0, 1}, where ΣKk=1 Zmk = 1 [Math. 5]
With such definition, price Pm of product m is defined in the following Formula 4. This definition indicates that price Pm, which is an objective variable, is discretized.
The foregoing Formula 1 can then be modified as follows.
where Fkmm′ = ΣDd=1 βdmm′ fd(P̄m′k)
Moreover, the foregoing Formula 3 can be modified as shown in the following Formula 5, where Z=(Z11, . . . , Z1K, . . . , ZMK).
For example, in the case where no candidate point is input, the optimization unit 37 may optimize the prices of the plurality of products by solving the mathematical programming problem defined in the foregoing Formula 3. In the case where a candidate point is input, the optimization unit 37 may optimize the prices of the plurality of products by solving the mathematical programming problem defined in the foregoing Formula 5.
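For reference, a minimal sketch of the Formula 4 discretization is given below: each price Pm is expressed as a choice among K candidate points through 0-1 indicators Zmk with exactly one indicator per product set to 1. The candidate prices are illustrative assumptions.

# Formula 4 sketch: P_m = sum_k Z_mk * Pbar_mk with Z_mk in {0,1} and sum_k Z_mk = 1.
import numpy as np

price_candidates = np.array([[300, 297, 294, 285],     # Pbar_mk for one product (no/1%/2%/5% discount)
                             [250, 247, 245, 237]])    # and a second product; values are toys
M, K = price_candidates.shape

def price_from_indicators(Z):
    # Z: (M, K) 0-1 matrix with exactly one 1 per row; returns the selected price P_m per product
    assert np.all(Z.sum(axis=1) == 1)
    return (Z * price_candidates).sum(axis=1)

Z = np.zeros((M, K), dtype=int)
Z[0, 2] = 1    # first product at 2% discount
Z[1, 0] = 1    # second product at no discount
print(price_from_indicators(Z))   # -> [294 250]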
Here, the constraint condition input unit 35 may receive input that takes a candidate point into account. A specific example of a constraint condition set in the above-mentioned product optimization is given below. Typically, when comparing the price of a single ballpoint pen and the price of a set of six ballpoint pens of the same brand, the price per ballpoint pen in the set of six is expected to be lower than the price of a single ballpoint pen. This type of constraint condition is defined in the following Formula 6.
In Formula 6, PC denotes a set of index pairs to which the constraint condition is applied, and wm,n denotes a weight. PC and wm,n are given beforehand.
A specific example of the optimization process performed by the optimization unit 37 is described below, using the foregoing Formula 5. In the case of the foregoing Formula 5, the objective function can be modified as follows.
where [Q]mk,m′k′ = Σt∈Tte P̄mk Fk′mm′ and [r] is defined analogously from the constant and external-feature terms multiplied by P̄mk
Here, [Q]i,j is the (i,j)-th element of matrix Q, and [r]i is the i-th element of vector r. Hence, the above-mentioned Q is not a symmetric matrix, and is not positive semidefinite. This problem is a sort of mixed integer quadratic programming problem called nonconvex cardinality (0-1 integer) quadratic programming. This problem can be efficiently solved by transforming it into a mixed integer linear programming problem.
A method for solving the mathematical programming problem defined in the foregoing Formula 5 using mixed integer programming relaxation is described below. A modification process shown in the following Formula 7 is performed using a new overlined variable Zi,j.
Here, a constraint that the overlined variable Zi,j satisfies in an optimal solution is defined as shown in the following Formula 8.
Adding the equality shown in the foregoing Formula 8 allows the foregoing Formula 7 to be newly formulated as shown in the following Formula 9.
To reduce the number of constraint conditions for more efficient calculation, the following inequality may be deleted from the condition of the foregoing Formula 9.
Z̄ij ≥ Zi + Zj − 1 [Math. 14]
The optimization unit 37 optimizes the prices of the plurality of products so as to maximize such modified formula. In the case where the candidate point input unit 36 receives no candidate point, the optimization unit 37 may solve the mathematical programming problem in the foregoing Formula 3. The constraint condition in the foregoing Formula 6 may be applied in mixed integer linear programming (MILP) relaxation.
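For reference, a hedged cvxpy sketch of this mixed integer linear relaxation is given below. The product ZiZj is replaced by an auxiliary variable Z̄ij constrained as in Formulas 8 and 9 (the lower-bound inequality of Math. 14 is kept here but may be dropped as noted above), which turns the nonconvex 0-1 quadratic objective into a linear one. The matrices Q and r and the sizes are toy assumptions, and a MIP-capable solver (e.g. GLPK_MI, CBC, or SCIP) is assumed to be installed for the boolean variables.

# MILP relaxation of Formula 5 (Formulas 7-9): linearize Z_i * Z_j with auxiliary Zbar_ij.
import cvxpy as cp
import numpy as np

M, K = 2, 3                      # products and candidate points per product (toy sizes)
n = M * K
rng = np.random.default_rng(0)
Q = rng.normal(size=(n, n))      # stand-ins for the Q and r built from the learned coefficients
r = rng.normal(size=n)

Z = cp.Variable(n, boolean=True)
Zbar = cp.Variable((n, n), nonneg=True)

constraints = [Zbar[i, j] <= Z[i] for i in range(n) for j in range(n)]
constraints += [Zbar[i, j] <= Z[j] for i in range(n) for j in range(n)]
constraints += [Zbar[i, j] >= Z[i] + Z[j] - 1 for i in range(n) for j in range(n)]   # Math. 14, optional
constraints += [cp.sum(Z[m * K:(m + 1) * K]) == 1 for m in range(M)]                 # one candidate per product

objective = cp.Maximize(cp.sum(cp.multiply(Q, Zbar)) + r @ Z)
cp.Problem(objective, constraints).solve()                                           # needs a MIP solver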
The predictive model input unit 31, the external information input unit 32, the constraint condition input unit 35, the candidate point input unit 36, the optimization unit 37, the output unit 38, and the objective function generation unit 39 are realized by a CPU of a computer operating according to a program (information processing program or optimization program).
Alternatively, the predictive model input unit 31, the external information input unit 32, the constraint condition input unit 35, the candidate point input unit 36, the optimization unit 37, the output unit 38, and the objective function generation unit 39 may each be realized by dedicated hardware. The predictive model input unit 31, the external information input unit 32, the constraint condition input unit 35, the candidate point input unit 36, the optimization unit 37, the output unit 38, and the objective function generation unit 39 may each be realized by electric circuitry.
The operation by the optimization system in this exemplary embodiment is described below.
The candidate point input unit 36 receives a candidate point which is a candidate for a value that can be taken by the objective variable (step S18). The number of candidate points received may be one or more. The optimization unit 37 then optimizes the value of the objective variable so that the value of the objective function is optimal, under the received candidate point and constraint condition (step S19).
Thus, this exemplary embodiment describes the optimization system that optimizes the value of an objective variable so that the value of an objective function of a mathematical programming problem is optimal. In detail, the predictive model input unit 31 receives a linear regression model represented by a function having the objective variable of the mathematical programming problem as an explanatory variable. The candidate point input unit 36 receives, for the objective variable included in the linear regression model, a discrete candidate (candidate point) for the value that can be taken by the objective variable. The optimization unit 37 then calculates the objective variable that optimizes the objective function of the mathematical programming problem having the linear regression model as an argument. Here, the optimization unit 37 selects a candidate point that optimizes the objective variable, to calculate the objective variable.
With such a structure, mathematical optimization can be solved at high speed with high accuracy even in the case where a predictive model used for optimization is based on a non-linear basis function.
In detail, the optimization unit 37 optimizes an objective function having, as a parameter, a predictive model represented by the linear regression equation in the foregoing Formula 1. Here, the linear regression equation in Formula 1 has at least part of the explanatory variable represented by non-linear function fd.
For example, even for an objective variable, such as price, for which all kinds of candidates can be assumed, actual optimization is often performed by setting certain price candidates beforehand. Predictive model Sm represented in the form of the foregoing Formula 1 results from applying function fd to objective variable Pm which is an optimization target. In the case where an explanatory variable is represented by non-linear function fd, even a function represented in the form of a linear regression equation is a non-linear function as far as the price is concerned, and so its optimization is difficult.
In this exemplary embodiment, however, by discretizing an objective variable to provide candidate points, a non-linear formula relating to an objective function of optimization can be modified to a linear formula relating to the discrete variables Zmk, regardless of fd. In other words, the optimization process can be performed at high speed for a linear regression equation that is expressed as linear regression but is non-linearly transformed, by setting candidate points for the objective variable of optimization (e.g. provided by a person) beforehand.
The use of the method in this exemplary embodiment also enables application of a method in the below-mentioned Exemplary Embodiment 3, so that the optimization process can be performed at high speed.
This exemplary embodiment describes the method of optimizing the prices of the plurality of products so as to maximize the total sales revenue. Alternatively, the optimization unit 37 may optimize the prices of the plurality of products so as to maximize the profit. In this case, the objective function generation unit 39 may generate the following objective function as an example, where c is a term that does not depend on Z.
where [F]m′k = Σt∈Tte . . .
This objective function is again a nonconvex cardinality (0-1 integer) quadratic programming problem in which c does not depend on Z, and so the above-mentioned solving method can be applied to it as a problem mathematically equivalent to the foregoing Formula A.
This exemplary embodiment describes the case of optimizing the sales revenue (price×sales amount) by subjecting the sales amount to regression, as in Exemplary Embodiment 1. Alternatively, not the sales amount but the sales revenue may be subjected to regression. In the case of directly subjecting the sales revenue to regression, the learner 20 learns the sales revenue by a regression equation based on non-linear transformation of a quadratic function of the objective variable. The regression equation in such a case is represented by the following Formula B1 as an example.
where φd: ℝ2 → ℝ and ψd: ℝ → ℝ
In Formula B1, φd and ψd are each any basis function, and x corresponds to the price. This function is set as the objective function of optimization. As in the foregoing Formula 4, x is discretized as shown in the following Formula B2.
After discretizing x in this way, the following modification can result in a BQP problem shown in the following Formula B3. Thus, the method in this exemplary embodiment can be used to perform optimization even in the case where the sales revenue is subjected to regression.
Exemplary Embodiment 3 of an optimization system according to the present invention is described below. A binary quadratic programming (BQP) problem is known as an optimization approach. Since the foregoing Formula A can be generated by applying discretization to linear prediction as described in Exemplary Embodiment 2, the problem in Exemplary Embodiment 2 can be transformed to BQP. BQP is an NP-hard problem; since no efficient exact solution method is known, it is commonly solved using a framework called integer programming.
Exemplary Embodiment 2 describes the method for solving BQP by mixed integer programming relaxation. This exemplary embodiment describes a method for solving BQP shown in the foregoing Formula A at higher speed. The optimization system in this exemplary embodiment has the same structure as the optimization system in Exemplary Embodiment 2, but differs from Exemplary Embodiment 2 in the method by which the optimization unit 37 performs the optimization process.
In detail, the optimization unit 37 in this exemplary embodiment relaxes BQP to an easy-to-solve problem called semidefinite programming (SDP) problem, and optimizes BQP based on the solution of SDP.
As an example, BQP is first formulated as shown in the following Formula 10. In Formula 10, M and K are natural numbers. Moreover, in Formula 10, Q is a KM×KM square matrix, and r is a KM-dimensional vector.
Let Symn be the set of all symmetric matrices of size n. In detail, Symn is written as follows.
Symn = {X ∈ ℝn×n | XT = X} [Math. 22]
The all-ones vector may be written as boldfaced 1, where boldfaced 1 = (1, 1, . . . , 1)T. The inner product on Symn is defined as follows, using a black dot sign.
X·Y = Σni=1 Σnj=1 Xij Yij for X, Y ∈ Symn [Math. 23]
The following Formula 11 holds for all vectors x. Accordingly, Q in the foregoing Formula 10 can be replaced with the following Formula 12. Q is therefore assumed to be a symmetric matrix, without loss of generality.
[Math. 24]
xTQx = xT(Q + QT)x/2 (Formula 11)
(Q + QT)/2 (Formula 12)
An SDP relaxation method is described below. First, the optimization unit 37 transforms the BQP in Formula 10 into a formulation over variables that take values in {1, −1}. Suppose t = −1 + 2Z. The foregoing Formula 10 is then modified to the following Formula 13.
The foregoing Formula 10 is thus equivalent to the following Formula 14.
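For reference, a minimal numerical sketch of this change of variables is given below. It substitutes Z = (t + 1)/2 into ZTQZ + rTZ after symmetrizing Q (Formula 12), and checks on random toy data that the two forms agree; the helper names are illustrative assumptions.

# Change of variables t = 2Z - 1 (Formula 13): rewrite Z^T Q Z + r^T Z as an equivalent
# quadratic in t with symmetrized Q, plus a constant.
import numpy as np

def to_pm1_form(Q, r):
    n = len(r)
    Qs = (Q + Q.T) / 2                                   # Formula 12: symmetrize without loss of generality
    Q_t = Qs / 4
    r_t = Qs @ np.ones(n) / 2 + r / 2
    const = np.ones(n) @ Qs @ np.ones(n) / 4 + r.sum() / 2
    return Q_t, r_t, const

# sanity check on random toy data: both forms give the same objective value
rng = np.random.default_rng(1)
Q, r = rng.normal(size=(5, 5)), rng.normal(size=5)
Z = rng.integers(0, 2, size=5)
t = 2 * Z - 1
Q_t, r_t, const = to_pm1_form(Q, r)
assert np.isclose(Z @ Q @ Z + r @ Z, t @ Q_t @ t + r_t @ t + const)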
Next, the optimization unit 37 relaxes each variable ti that takes a value in S0 = {1, −1}, to a variable xi that takes a value in SKM. Sn represents the unit n-sphere, as shown in the following Formula 15.
[Math. 27]
Sn = {x ∈ ℝn+1 | ∥x∥2 = 1} (Formula 15)
In this case, the foregoing Formula 14 is relaxed to the problem of the following Formula 16.
Here, “1” in the objective function of the foregoing Formula 14 is replaced with the unit vector x0. For a feasible solution t of the foregoing Formula 14, a feasible solution of Formula 16 is defined by the following Formula 17 without changing the value of the objective function. Hence, the problem of the foregoing Formula 16 is a result of relaxing the foregoing Formula 14.
[Math. 29]
x0 = [1, 0 . . . 0]T, xi = [ti, 0 . . . 0]T (i = 1 . . . KM) (Formula 17)
The optimization unit 37 transforms the problem of the foregoing Formula 16 to an SDP problem. The objective function in Formula 16 is transformed to the following Formula 18.
According to this definition, Y is positive semidefinite and satisfies the following Formula 19.
[Math. 31]
yij = xTi xj (i = 0, 1, . . . KM, j = 0, 1, . . . KM) (Formula 19)
Conversely, if Y is positive semidefinite, there exist (KM+1)-dimensional vectors x0, x1, . . . , xKM that satisfy the conditions defined in the foregoing Formulas 18 and 19.
Setting yii = 1 using matrix Y enables the constraint condition ∥xi∥2 = 1 to be expressed. Since x0 is a unit vector, the following Formula 21 holds only in the case where the following Formula 20 is satisfied.
Using matrix Y, these conditions can be expressed as shown in the following Formula 22.
Thus, the optimization unit 37 can generate an SDP problem shown in the following Formula 23. This problem is equivalent to the problem shown in the foregoing Formula 16, and is a result of relaxing the foregoing Formula 10. Hence, an optimal value of Formula 23 is the upper bound of an optimal value of the foregoing Formula 10.
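For reference, a hedged sketch of the SDP relaxation of Formula 23 in cvxpy is given below. The quadratic objective in t, with “1” replaced by the unit vector x0, is expressed as C·Y over the Gram matrix Y = [xiTxj], constrained to be positive semidefinite with unit diagonal; Q_t and r_t are the ±1-form coefficients as in the sketch above, the sizes are toy values, and an SDP-capable solver (e.g. SCS) is assumed to be installed.

# SDP relaxation (Formula 23): maximize C . Y over Y PSD with diag(Y) = 1,
# where C packs the +/-1-form coefficients (r_t on the first row/column, Q_t inside).
import cvxpy as cp
import numpy as np

def sdp_relaxation(Q_t, r_t):
    n = len(r_t)
    C = np.zeros((n + 1, n + 1))
    C[1:, 1:] = (Q_t + Q_t.T) / 2
    C[0, 1:] = r_t / 2
    C[1:, 0] = r_t / 2
    Y = cp.Variable((n + 1, n + 1), symmetric=True)
    problem = cp.Problem(cp.Maximize(cp.trace(C @ Y)),
                         [Y >> 0, cp.diag(Y) == 1])
    problem.solve()                       # requires an SDP solver such as SCS
    return Y.value, problem.value         # problem.value upper-bounds the BQP optimum

rng = np.random.default_rng(2)
Q_t, r_t = rng.normal(size=(6, 6)), rng.normal(size=6)
Y_tilde, upper_bound = sdp_relaxation(Q_t, r_t)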
A method of, when an optimal solution of the problem shown in Formula 23 is given, transforming the optimal solution to Z of the problem shown in Formula 10 is described below. This transformation operation is hereafter referred to as rounding. Let tilde Y be the optimal solution derived by SDP relaxation.
In the derivation of the foregoing Formula 16, “1” has been replaced with vector x0, and ti (i=1, . . . , KM) has been replaced with vector xi. Accordingly, the relationship shown in the following Formula 24 exists between Z and Y.
[Math. 35]
2Zi−1=ti=1·ti≈xT0xi=y0i (i=1 . . . KM) (Formula 24)
It can therefore be assumed that it is appropriate to fix Zi to 1 for the index i whose tilde y0i is larger than the others. Based on this premise, the operation of the optimization unit 37 solving the BQP shown in the foregoing Formula 10 by SDP relaxation is described below.
The optimization unit 37 transforms the BQP shown in the foregoing Formula 10 to the problem shown in Formula 23 resulting from SDP relaxation (step S21), and sets the optimal solution as tilde Y. The optimization unit 37 searches for a value (hereafter denoted as tilde k) that satisfies the following Formula 25 (step S22), where tilde k is an element of {1, . . . , K}.
[Math. 36]
ỹ_{0,Km+k̃} = max{ỹ_{0,Km+k} | k = 1, …, K} (Formula 25)
The optimization unit 37 sets Z_{Km+k̃} to 1 (and the others to 0) (step S23).
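A minimal sketch of this rounding, assuming that steps S22 and S23 are repeated for each block m = 1, …, M and that tilde Y is available as a (KM+1) × (KM+1) array whose row and column 0 correspond to x_0, is as follows.

```python
import numpy as np

def round_solution(Y, M, K):
    """Greedy rounding of the SDP optimum Y (a (KM+1) x (KM+1) array)."""
    Z = np.zeros(M * K)
    for m in range(M):
        # y~_{0,Km+1}, ..., y~_{0,Km+K} for block m (column 0 corresponds to x_0)
        scores = Y[0, 1 + m * K: 1 + (m + 1) * K]
        Z[m * K + int(np.argmax(scores))] = 1.0      # step S23
    return Z
```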
The optimization unit 37 first initializes index set U={1, . . . , M} (step S31). The optimization unit 37 performs the following process for each index included in U (steps S32 to S36).
First, the optimization unit 37 partially fixes Z and transforms the problem shown in the foregoing Formula 10 into the SDP problem shown in Formula 23 (step S32). The optimization unit 37 solves the problem shown in Formula 23 and sets the optimal solution as tilde Y (step S33). The optimization unit 37 searches for tilde m and tilde k that satisfy the following Formula 26 (step S34). The optimization unit 37 then partially fixes Z based on the following Formula 27 (step S35).
The optimization unit 37 updates U as follows (step S36).
[Math. 38]
U := U \ {m̃}
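A minimal sketch of steps S31 to S36, assuming a hypothetical helper solve_relaxed_sdp(fixed) that re-solves the SDP of Formula 23 with the listed entries of Z held at their fixed values and returns tilde Y, is as follows.

```python
import numpy as np

def iterative_rounding(solve_relaxed_sdp, M, K):
    fixed = {}                                   # index of Z -> fixed 0/1 value
    U = set(range(M))                            # step S31: unfixed blocks
    while U:                                     # steps S32 to S36
        Y = solve_relaxed_sdp(fixed)             # steps S32 and S33
        # step S34 (cf. Formula 26): largest y~_{0,Km+k} over the unfixed blocks
        m_t, k_t, best = None, None, -np.inf
        for m in U:
            for k in range(K):
                if Y[0, 1 + m * K + k] > best:
                    m_t, k_t, best = m, k, Y[0, 1 + m * K + k]
        for k in range(K):                       # step S35 (cf. Formula 27)
            fixed[m_t * K + k] = 1 if k == k_t else 0
        U.remove(m_t)                            # step S36
    Z = np.zeros(M * K)
    for i, v in fixed.items():
        Z[i] = v
    return Z
```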
The optimization unit 37 acquires the following three values by applying the foregoing algorithm, and the relationship in the following Formula 28 holds among them.
0 < (computed optimal value of Formula 10) ≤ (optimal value of Formula 10) ≤ (optimal value of Formula 23) (Formula 28)
Thus, the computed solution is guaranteed to satisfy the following Formula 29.
(approximation rate of computed solution) = (computed optimal value of Formula 10) / (optimal value of Formula 10) ≥ (computed optimal value of Formula 10) / (optimal value of Formula 23) (Formula 29)
With this inequality, the quality of the calculated solution can be evaluated, and a more sophisticated algorithm such as branch and bound method can be derived.
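For reference, the lower bound on the approximation rate in Formula 29 can be computed from the two values that the algorithm actually obtains, as in the following sketch.

```python
def approximation_rate_lower_bound(rounded_value, sdp_optimal_value):
    # optimal value of Formula 10 <= optimal value of Formula 23 (Formula 28),
    # so this ratio lower-bounds the true approximation rate of Formula 29.
    return rounded_value / sdp_optimal_value
```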
The optimization unit 37 may perform exhaustive search for a solution based on a parameter defined by the user.
An operation example (algorithm) of this exhaustive search is described below.
The optimization unit 37 transforms the BQP shown in the foregoing Formula 10 to the problem shown in Formula 23 resulting from SDP relaxation (step S41), and sets the optimal solution as tilde Y. The optimization unit 37 searches for a value (tilde k) that satisfies the following Formula 30 (step S42). The optimization unit 37 also initializes index set Cm as shown in the following Formula 31 (step S43).
[Math. 39]
ỹ_{0,Km+k̃} = max{ỹ_{0,Km+k} | k = 1, …, K} (Formula 30)
C_m = {k̃} ⊆ {1, …, K} (Formula 31)
The optimization unit 37 repeatedly performs the following process while the following Formula 32 is satisfied (steps S44 to S45).
[Math. 40]
∏_{m=1}^{M} |C_m| < T (Formula 32)
The optimization unit 37 searches for two values (tildes m and k) that satisfy the following Formula 33 (step S44), where tilde m is an element of {1, . . . , M}, and tilde k is an element of {1, . . . , K}.
[Math. 41]
k̃ ∉ C_m̃, ỹ_{0,Km̃+k̃} = max{ỹ_{0,Km+k} | m = 1, …, M, k = 1, …, K, k ∉ C_m} (Formula 33)
The optimization unit 37 further adds k̃ to the set C_m̃ (step S45). In detail, this is represented by the following Formula 34.
[Math. 42]
C_m̃ ← C_m̃ ∪ {k̃} (Formula 34)
The optimization unit 37 sets D as a set of Z (step S46), where Z is given in the following form. In this case, D satisfies the following Formula 35.
The optimization unit 37 computes the value of the objective function for all Z (step S47), and rearranges the elements of D with the computed values (step S48).
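A minimal sketch of steps S41 to S48, assuming tilde Y is given as a NumPy array, that the objective of Formula 10 is the quadratic form z^T Q z for a given matrix Q, and that T is the user-defined budget of Formula 32, is as follows.

```python
import itertools
import numpy as np

def exhaustive_rounding(Y, Q, M, K, T):
    # steps S42 and S43: start each candidate set C_m with the best k for block m
    C = [{int(np.argmax(Y[0, 1 + m * K: 1 + (m + 1) * K]))} for m in range(M)]
    # steps S44 and S45: grow the sets while prod_m |C_m| < T (Formula 32)
    while np.prod([len(c) for c in C]) < T:
        best, m_t, k_t = -np.inf, None, None
        for m in range(M):
            for k in range(K):
                if k not in C[m] and Y[0, 1 + m * K + k] > best:
                    best, m_t, k_t = Y[0, 1 + m * K + k], m, k
        if m_t is None:                          # every candidate is already included
            break
        C[m_t].add(k_t)                          # Formula 34
    # steps S46 to S48: enumerate the candidate set D, evaluate and rank
    D = []
    for combo in itertools.product(*C):
        z = np.zeros(M * K)
        for m, k in enumerate(combo):
            z[m * K + k] = 1.0
        D.append((float(z @ Q @ z), z))
    D.sort(key=lambda pair: pair[0], reverse=True)
    return D
```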
The algorithm described above thus yields a plurality of candidate solutions Z, ranked by the value of the objective function.
Thus, this exemplary embodiment describes an optimization system that solves a mathematical programming problem formulated as a BQP problem. In detail, the optimization unit 37 relaxes the BQP problem to an SDP problem and derives a solution of the SDP problem. An optimal solution can thus be derived at very high speed as compared with known typical BQP solving methods.
In a computer experiment using the method in this exemplary embodiment, a process that required several hours to obtain an optimal solution of the BQP by a typical method was accelerated to about 1 second.
Although this exemplary embodiment describes the operation of the optimization unit 37 using BQP formulated as shown in the foregoing Formula 10 as an example, BQP may be formulated as shown in the following Formula 36.
Let A be defined by the foregoing Formula 13. In this case, the problem shown in Formula 36 is equivalent to the problem shown in the following Formula 37. Relaxing the problem shown in Formula 37 yields the following Formula 38.
The problem shown in Formula 38 can be rewritten in a standard form including equalities and inequalities as shown in the following Formula 39. Here, B4u, B5u, and B6u are defined in the following Formula 40, and B1i, B2s, and B3s which are elements of Symn+1 are defined in the following Formula 41.
Meanwhile, the problem shown in the foregoing Formula 39 can be rewritten in a standard form represented by equalities as shown in the following Formula 42. Here, A′, B′1i, B′2s, B′3s, B′4u, B′5u, and B′6v are defined in the following Formula 43, and Kv which is an element of Symv is given by the following Formula 44.
A dual problem of the problem shown in the foregoing Formula 36 is described below. The dual problem of the problem shown in Formula 36 is defined in the following Formula 45.
In Formula 45, fj is given in the right side of the constraint of the foregoing Formula 42, and xj is a variable.
When feasible solution Z is given in Formula 36, the feasible solution in Formula 42 can be represented by the following Formula 46.
The feasible solution of the dual problem shown in Formula 45 is given by the following Formula 47.
Thus, the optimization unit 37 can use the foregoing Formulas 46 and 47 as an initial solution of the problem shown in the foregoing Formula 42.
This is summarized as follows. The optimization unit 37 relaxes a BQP problem shown in the following Formula 48 to an SDP problem shown in the following Formula 49. In detail, the optimization unit 37 relaxes a BQP problem with 1-of-K constraint (one-hot constraint), linear equality constraint, and linear inequality constraint as shown in Formula 48, to an SDP problem. The optimization unit 37 then transforms a solution derived from the problem shown in Formula 49 to a solution of the problem shown in Formula 48, thus deriving an optimal solution of the problem shown in Formula 48.
In Formula 48, S is the number of 1-of-K constraints (one-hot constraints), U is the number of linear equality constraints, and V is the number of linear inequality constraints. Of the input in Formula 48, a and c are n-dimensional vectors, and b and d are scalar values. In Formula 49, vector a_u = (a_{u,1}, a_{u,2}, …, a_{u,n})^T and vector c_u = (c_{u,1}, c_{u,2}, …, c_{u,n})^T. Here, superscript T indicates transposition.
The present invention is summarized below.
With such a structure, mathematical optimization can be solved at high speed with high accuracy even in the case where a predictive model used for optimization is based on a non-linear basis function.
In detail, the linear regression model is represented by a function based on non-linear transformation of the objective variable, and the optimization unit 86 discretizes the objective variable for which the candidate points are received, thereby reducing the optimization of the objective function to a binary quadratic programming problem.
As an example, the linear regression model is a model that includes a demand of a service or product as an explained variable and a price of the service or product as the explanatory variable. The objective function is a function indicating a total sales revenue for a plurality of services or products. The objective variable indicates a price of each of the plurality of services or products.
As another example, the linear regression model is a model that includes a sales revenue of a service or product as an explained variable and a price of the service or product as the explanatory variable. The objective function is a function indicating a total sales revenue for a plurality of services or products. The objective variable indicates a price of each of the plurality of services or products.
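As an illustration of the former example, the following sketch shows how an assumed linear demand model q_m(p) = alpha_m + Σ_j beta[m, j]·p_j, together with K candidate prices per product, turns the total-sales objective into the binary quadratic form z^T Q z of Formula 10; all numerical values are arbitrary.

```python
import numpy as np

M, K = 3, 4                                   # products and price candidates per product
alpha = np.array([120.0, 80.0, 60.0])         # assumed demand intercepts
beta = np.array([[-0.8, 0.1, 0.0],            # assumed (cross-)price sensitivities
                 [0.05, -0.5, 0.1],
                 [0.0, 0.1, -0.4]])
candidates = np.array([[80.0, 90.0, 100.0, 110.0],   # candidate prices c[m, k]
                       [40.0, 45.0, 50.0, 55.0],
                       [25.0, 30.0, 35.0, 40.0]])

def idx(m, k):                                # position of the indicator z_{m,k} in z
    return m * K + k

# total sales = sum_m p_m * (alpha_m + sum_j beta[m, j] * p_j), with p_m = sum_k c[m, k] * z_{m,k}
Q = np.zeros((M * K, M * K))
for m in range(M):
    for k in range(K):
        # the linear term alpha_m * p_m can sit on the diagonal because z_{m,k}^2 = z_{m,k}
        Q[idx(m, k), idx(m, k)] += alpha[m] * candidates[m, k]
        for j in range(M):
            for l in range(K):
                Q[idx(m, k), idx(j, l)] += beta[m, j] * candidates[m, k] * candidates[j, l]

# any z with exactly one 1 per product block evaluates the predicted total sales as z^T Q z
z = np.zeros(M * K)
for m in range(M):
    z[idx(m, 0)] = 1.0                        # choose the first candidate price for every product
print("predicted total sales:", z @ Q @ z)
```

A matrix Q constructed in this way is what the optimization described in the foregoing exemplary embodiment then maximizes under the 1-of-K constraints.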
Moreover, the candidate point input unit 85 may display a list of objective variables subjected to optimization and one or more candidates for a possible value of each objective variable, and receive a candidate selected by a user as the candidate point.
The optimization system may include a predictive model generation unit (e.g. the learner 20) for generating a predictive model by machine learning, based on objective variable historical data acquired in the past. The model input unit 84 may receive the generated predictive model. With such a structure, a large number of objective variables or more complex objective functions can be automatically generated from past data.
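As an illustration (not the learner 20 itself), such a predictive model can be fitted from past records as in the following sketch, which uses scikit-learn to fit a linear regression on assumed non-linear basis functions of the price; the historical data is synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
price = rng.uniform(80, 120, size=200)                       # assumed historical prices
demand = 150 - 0.9 * price + 5.0 * np.log(price) + rng.normal(0, 3, size=200)

# non-linear basis functions of the price: p, p^2, log(p)
X = np.column_stack([price, price ** 2, np.log(price)])
model = LinearRegression().fit(X, demand)
print(model.intercept_, model.coef_)
```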
An example of optimizing product prices is given below. With a typical optimization method, in the case of simultaneously optimizing the prices of a large volume of products (e.g. 1000 products), it is difficult to manually optimize the price of each product. Besides, such optimization can rely only on very simple predictions performed manually.
In this exemplary embodiment, on the other hand, the predictive model generation unit can automatically generate various models from data by machine learning, so that the generation of a large number of predictive models, the generation of complex objective variables, and the like can be automated. By automating such processes, for example when the tendency of the data changes, the machine learning model can be automatically updated and optimization can be automatically performed again (operation automation).
The present invention realizes a process of solving a mathematical programming problem and a process of generating a predictive model by using the capability of a processor (computer) to process massive data at high speed in a short time. Accordingly, the present invention is not limited to simple mathematical processing, but makes full use of a computer to acquire a prediction result and an optimization result from massive data at high speed through the use of a mathematical programming problem.
The learner 20 and the optimization device 30 are each implemented by the computer 1000. The computer 1000 implementing the learner 20 may be different from the computer 1000 implementing the optimization device 30. The operation of each processing unit described above is stored in the auxiliary storage device 1003 in the form of a program (information processing program or optimization program). The CPU 1001 reads the program from the auxiliary storage device 1003, expands the program in the main storage device 1002, and executes the above-mentioned process according to the program. The learner 20 and the optimization device 30 may each be realized by electric circuitry. The term “circuitry” here conceptually covers a single device, multiple devices, a chipset, and a cloud.
In at least one exemplary embodiment, the auxiliary storage device 1003 is an example of a non-transitory tangible medium. Examples of the non-transitory tangible medium include a magnetic disk, magneto-optical disk, CD-ROM, DVD-ROM, and semiconductor memory connected via the interface 1004. In the case where the program is distributed to the computer 1000 through a communication line, the computer 1000 to which the program has been distributed may expand the program in the main storage device 1002 and execute the above-mentioned process.
The program may realize part of the above-mentioned functions. The program may be a differential file (differential program) that realizes the above-mentioned functions in combination with another program already stored in the auxiliary storage device 1003.
Although the present invention has been described with reference to the exemplary embodiments and examples, the present invention is not limited to the foregoing exemplary embodiments and examples. Various changes understandable by those skilled in the art can be made to the structures and details of the present invention within the scope of the present invention.
This application claims priority based on U.S. Provisional Application No. 62/235,044 filed on Sep. 30, 2015, the disclosure of which is incorporated herein in its entirety.
10 training data storage unit
20 learner
30 optimization device
31 predictive model input unit
32 external information input unit
33 storage unit
34 problem storage unit
35 constraint condition input unit
36 candidate point input unit
37 optimization unit
38 output unit
39 objective function generation unit
Filing Document | Filing Date | Country | Kind
PCT/JP2016/003687 | 8/9/2016 | WO | 00

Number | Date | Country
62235044 | Sep 2015 | US