INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM

Information

  • Patent Application
  • 20180218380
  • Publication Number
    20180218380
  • Date Filed
    August 09, 2016
  • Date Published
    August 02, 2018
Abstract
A predictive model reception unit 81 receives a predictive model that is learned based on an explained variable and an explanatory variable, indicates a relationship between the explained variable and the explanatory variable, and is represented by a function of the explanatory variable. An optimization unit 82 calculates, for an objective function having the received predictive model as an argument, an objective variable that optimizes the objective function, under a constraint condition.
Description
TECHNICAL FIELD

The present invention relates to an information processing system, an information processing method, and an information processing program that perform optimization based on a learned predictive model.


BACKGROUND ART

Various methods for generating predictive models based on past historical data have been proposed in recent years. For example, Patent Literature (PTL) 1 describes a learning method that automatically separates and analyzes mixture data.


As a method for performing optimization on a quantitative problem (hereafter, “mathematical optimization”), numerical optimization (mathematical programming) is known. Examples of mathematical programming include continuous variable-related methods such as linear programming, quadratic programming, and semidefinite programming, and discrete variable-related methods such as mixed integer programming. PTL 2 describes a method for determining an optimal charging schedule by applying mathematical programming to collected data.


CITATION LIST
Patent Literature

PTL 1: U.S. Pat. No. 8,909,582


PTL 2: Japanese Patent Application Laid-Open No. 2012-213316


SUMMARY OF INVENTION
Technical Problem

Mathematical optimization is typically performed on the premise that data input to mathematical programming is observed. For example, in the case of optimizing production lines of industrial products, data such as the quantity of material, cost, and production time necessary to produce a product in each line is input.


There are, however, problems that cannot be solved without using data that is unobservable to the analyst at the time when the analyst performs mathematical programming. An example is the technical problem of optimizing the price of each product belonging to a product group in a retail store so as to maximize the total sales revenue of the product group.


To perform mathematical programming for solving this problem, for example, a predictive value of the future sales amount of each product is needed as input data to mathematical programming. However, at the time when the analyst performs mathematical programming, such a predictive value of the future sales amount of each product is unobservable to the analyst. Besides, it is not practical to manually repeat demand prediction for every order, which takes place once every several hours. Thus, with conventional methods, such a problem cannot be solved using mathematical programming.


The present invention therefore has an object of providing an information processing system, an information processing method, and an information processing program that can perform appropriate optimization even in a situation where there is unobservable input data in mathematical optimization.


Solution to Problem

An information processing system according to the present invention includes: a predictive model reception unit for receiving a predictive model that is learned based on an explained variable and an explanatory variable, indicates a relationship between the explained variable and the explanatory variable, and is represented by a function of the explanatory variable; and an optimization unit for calculating, for an objective function having the received predictive model as an argument, an objective variable that optimizes the objective function, under a constraint condition.


Another information processing system according to the present invention includes: a predictive model reception unit for receiving a predictive model that is learned based on an explained variable and an explanatory variable, indicates a relationship between the explained variable and the explanatory variable, and is represented by a function of the explanatory variable; and an optimization unit for calculating an objective variable that optimizes an objective function, under a constraint condition having the received predictive model as an argument.


An information processing method according to the present invention includes: receiving a predictive model that is learned based on an explained variable and an explanatory variable, indicates a relationship between the explained variable and the explanatory variable, and is represented by a function of the explanatory variable; and calculating, for an objective function having the received predictive model as an argument, an objective variable that optimizes the objective function, under a constraint condition.


Another information processing method according to the present invention includes: receiving a predictive model that is learned based on an explained variable and an explanatory variable, indicates a relationship between the explained variable and the explanatory variable, and is represented by a function of the explanatory variable; and calculating an objective variable that optimizes an objective function, under a constraint condition having the received predictive model as an argument.


An information processing program according to the present invention causes a computer to execute: a predictive model reception process of receiving a predictive model that is learned based on an explained variable and an explanatory variable, indicates a relationship between the explained variable and the explanatory variable, and is represented by a function of the explanatory variable; and an optimization process of calculating, for an objective function having the received predictive model as an argument, an objective variable that optimizes the objective function, under a constraint condition.


Another information processing program according to the present invention causes a computer to execute: a predictive model reception process of receiving a predictive model that is learned based on an explained variable and an explanatory variable, indicates a relationship between the explained variable and the explanatory variable, and is represented by a function of the explanatory variable; and an optimization process of calculating an objective variable that optimizes an objective function, under a constraint condition having the received predictive model as an argument.


Advantageous Effects of Invention

According to the present invention, a technically advantageous effect of performing appropriate optimization even in a situation where there is unobservable input data in mathematical optimization can be achieved by the technical means described above.





BRIEF DESCRIPTION OF DRAWINGS


FIG. 1 is a block diagram depicting an example of the structure of Exemplary Embodiment 1 of an optimization system according to the present invention.



FIG. 2 is a flowchart depicting an example of operation by the optimization system in Exemplary Embodiment 1.



FIG. 3 is an explanatory diagram depicting a modification of the optimization system in Exemplary Embodiment 1.



FIG. 4 is a block diagram depicting an example of the structure of Exemplary Embodiment 2 of an optimization system according to the present invention.



FIG. 5 is an explanatory diagram depicting an example of a screen for receiving input of candidate points.



FIG. 6 is a flowchart depicting an example of operation by the optimization system in Exemplary Embodiment 2.



FIG. 7 is a flowchart depicting an example of operation of solving BQP by SDP relaxation.



FIG. 8 is a flowchart depicting another example of operation of solving BQP by SDP relaxation.



FIG. 9 is a flowchart depicting yet another example of operation of solving BQP by SDP relaxation.



FIG. 10 is a block diagram schematically depicting an information processing system according to the present invention.



FIG. 11 is a schematic block diagram depicting the structure of a computer according to at least one exemplary embodiment.





DESCRIPTION OF EMBODIMENT

An overview of the present invention is given first. According to the present invention, in a situation where mathematical optimization involves a large amount of unobservable input data or complex correlations among a large amount of data, the unobservable data or the data correlations are learned by a machine learning technique to perform appropriate optimization. In detail, according to the present invention, a model for predicting unobservable data from past data is learned by, for example, the method described in PTL 1, and an objective function and a constraint condition in mathematical programming are automatically generated based on a future prediction result obtained from the predictive model, to perform optimization.


Exemplary embodiments of the present invention are described below, with reference to drawings. The following describes, where appropriate, an example where the prices of a plurality of products are optimized so as to maximize the total sales revenue of the plurality of products, based on sales prediction of the plurality of products. However, the optimization target is not limited to such an example. In the following description, a variable subjected to prediction by machine learning is referred to as "explained variable", a variable used for prediction is referred to as "explanatory variable", and a variable output by optimization is referred to as "objective variable". These variables are not in an exclusive relationship. For example, some explanatory variables can also be objective variables.


Exemplary Embodiment 1


FIG. 1 is a block diagram depicting an example of the structure of Exemplary Embodiment 1 of an optimization system according to the present invention. The optimization system in this exemplary embodiment includes a training data storage unit 10, a learner 20, and an optimization device 30. The optimization system depicted in FIG. 1 corresponds to the information processing system according to the present invention.


The training data storage unit 10 stores each type of training data used by the learner 20 to learn a predictive model. In this exemplary embodiment, the training data storage unit 10 stores historical data acquired in the past, for a variable (objective variable) output as an optimization result by the below-mentioned optimization device 30. For example, in the case where the optimization device 30 is to optimize the prices of a plurality of products, the training data storage unit 10 stores the price of each product corresponding to an explanatory variable and the sales amount of each product corresponding to an explained variable, as historical data acquired in the past.


The training data storage unit 10 may also store external information such as weather and calendar information, other than the explained variable historical data and explanatory variable historical data acquired in the past.


The learner 20 learns a predictive model for each set explained variable by machine learning, based on each type of training data stored in the training data storage unit 10. The predictive model learned in this exemplary embodiment is expressed as a function of a variable (objective variable) output as an optimization result by the below-mentioned optimization device 30. In other words, the objective variable (or its function) is the explanatory variable of the predictive model.


For example, in the case of optimizing the prices so as to maximize the total sales revenue, the learner 20 generates, for each target product, a predictive model of sales amount having the price of the product as an explanatory variable, based on past sales information (price, sales amount, etc.) and external information (weather, temperature, etc.). By generating such a predictive model using the sales amounts of the plurality of products as an explained variable, it is possible to model price-demand relationships and market cannibalization caused by competing products, while taking complex external relationships such as weather into account.


The predictive model generation method may be any method. For example, a simple regression approach may be used, or the learning method described in PTL 1 may be used.


Here, an optimization target index set is denoted as {m | m = 1, . . . , M}. In the above-mentioned example, the optimization target is the price of each product, and M corresponds to the number of products. An object of prediction for each optimization target m is denoted as Sm. In the above-mentioned example, Sm corresponds to the sales amount of product m. An object of optimization (i.e. objective variable of optimization) for each optimization target m is denoted as Pm or Pm′. In the above-mentioned example, Pm corresponds to the price of product m. When modeling the dependency between Sm (e.g. sales amount (demand)) and Pm (e.g. price) using linear regression, a predictive model for predicting Sm is represented by the following Formula 1 as an example.






[Math. 1]

$$S_m = \alpha_m + \sum_{m'=1}^{M} \sum_{d=1}^{D} \beta_{mm'}^{d} \, f_d(P_{m'}) + \sum_{d=1}^{D'} \gamma_d^{m} \, g_d \qquad (\text{Formula 1})$$







In Formula 1, fd is a feature generation function, and represents a transformation of Pm′. D is the number of feature generation functions, and indicates the number of transformations performed on Pm′. fd may be any function, such as a linear transformation function or a non-linear transformation function, e.g. logarithm or polynomial. In the case where Pm denotes the price of product m and Sm denotes the sales amount of product m as mentioned above, fd represents, for example, the sales reaction to the price, such as whether a certain price reduction leads to a better or worse sales response, or whether the sales amount grows quadratically with the price reduction.


In Formula 1, gd is an external feature (weather, etc. in the above-mentioned example), and D′ is the number of external features. The external features may be transformed beforehand. Moreover, α, β, and γ in Formula 1 are the constant terms and coefficients of the regression equation obtained as a result of machine learning by the learner 20. As is clear from the above description, the predictive model is learned based on the explained variable (Sm) and the explanatory variable (Pm, each type of external feature, etc.), indicates the relationship between the explained variable and the explanatory variable, and is represented by a function of the explanatory variable.
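
To make Formula 1 concrete, the following is a minimal sketch of how such a regression model could be fitted; it assumes NumPy and ordinary least squares, and all data values, the choice of transforms fd, and the helper names are illustrative assumptions rather than part of the specification.

```python
import numpy as np

def feature_map(prices, externals, fs):
    """Build one row of the design matrix for Formula 1: the non-linear
    transforms f_d of every product's price, followed by the external
    features g_d (weather, calendar information, etc.)."""
    cols = [f(p) for p in prices for f in fs]          # f_d(P_m') for all m', d
    return np.array(cols + list(externals))

# illustrative feature generation functions f_d: identity, square, logarithm
fs = [lambda p: p, lambda p: p ** 2, lambda p: np.log(p)]

# toy history: daily price vectors for two products, one external feature, and
# the observed sales amount of product m (all values are illustrative)
price_history    = np.array([[200.0, 180.0], [190.0, 180.0], [200.0, 170.0]])
external_history = np.array([[25.0], [28.0], [22.0]])
sales_m          = np.array([120.0, 95.0, 140.0])

X = np.vstack([feature_map(p, g, fs) for p, g in zip(price_history, external_history)])
X = np.hstack([np.ones((X.shape[0], 1)), X])           # column of ones for alpha_m

coef, *_ = np.linalg.lstsq(X, sales_m, rcond=None)     # alpha_m, beta_mm'^d, gamma_d^m

def predict_sales_m(prices, externals):
    row = np.concatenate(([1.0], feature_map(prices, externals, fs)))
    return float(row @ coef)
```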


The foregoing Formula 1 may be modified as shown in the following Formula 2, to take the passage of time into account.






[Math. 2]

$$S_m(t) = \alpha_m(t) + \sum_{m'=1}^{M} \sum_{d=1}^{D} \beta_{mm'}^{d}(t) \, f_d(P_{m'}) + \sum_{d=1}^{D'} \gamma_d^{m}(t) \, g_d(t) \qquad (\text{Formula 2})$$







In Formula 2, t represents a time index. For example, this corresponds to the case of temporally sliding the training data set by a window function and updating the predictive formula with time t. Thus, the predictive model is learned based on historical data of the objective variable of optimization acquired in the past, and is represented by a function having this objective variable as an explanatory variable. Since the learner 20 uses historical data acquired in the past in this way, there is no need to manually generate training data. Moreover, since the predictive model is learned by machine learning, even massive data can be handled, and the model can be automatically relearned to follow the sales amount trend, which varies with time. The learner 20 outputs the generated predictive model to the optimization device 30.
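
One simple way to obtain the time-indexed coefficients of Formula 2 is to refit the same regression on a sliding window of recent history. The following is a minimal sketch reusing the design matrix and target from the previous example; the window length is an arbitrary assumption.

```python
def fit_window(X_all, y_all, t, window=28):
    """Refit the Formula 2 coefficients at time t using only the most recent
    `window` rows of training data (a simple sliding-window update)."""
    lo = max(0, t - window)
    coef_t, *_ = np.linalg.lstsq(X_all[lo:t], y_all[lo:t], rcond=None)
    return coef_t   # alpha_m(t), beta_mm'^d(t), gamma_d^m(t)
```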


The optimization device 30 performs objective optimization. In detail, the optimization device 30 optimizes the value of an objective variable so that the value of an objective function is optimal (maximum, minimum, etc.), while satisfying each type of constraint condition (described in detail later) set for the objective variable and the like. In the above-mentioned example, the optimization device 30 optimizes the prices of the plurality of products.


The optimization device 30 includes a predictive model input unit 31, an external information input unit 32, a storage unit 33, a problem storage unit 34, a constraint condition input unit 35, an optimization unit 37, an output unit 38, and an objective function generation unit 39.


The predictive model input unit 31 is a device that receives a predictive model. In detail, the predictive model input unit 31 receives the predictive model learned by the learner 20. When receiving the predictive model, the predictive model input unit 31 also receives parameters necessary for the optimization process. The predictive model input unit 31 may receive a predictive model obtained by an operator manually correcting the predictive model learned by the learner 20. The predictive model input unit 31 thus receives a predictive model used in the optimization device 30, and so can be regarded as the predictive model reception unit for receiving a predictive model.


The external information input unit 32 receives external information used for optimization other than the predictive model. As an example, in the case of optimizing the prices for next week in the above-mentioned example, the external information input unit 32 may receive information about next week's weather. As another example, in the case where next week's store traffic is predictable, the external information input unit 32 may receive information about next week's store traffic. As in this example, the external information may itself be generated by a predictive model resulting from machine learning. The external information received here is, for example, substituted into an explanatory variable of the predictive model.


The storage unit 33 stores the predictive model received by the predictive model input unit 31. The storage unit 33 also stores the external information received by the external information input unit 32. The storage unit 33 is realized by a magnetic disk device as an example.


The problem storage unit 34 stores an evaluation scale of optimization by the optimization unit 37. In detail, the problem storage unit 34 stores a mathematical programming problem to be solved by optimization. The mathematical programming problem is stored in the problem storage unit 34 beforehand by a user or the like. The problem storage unit 34 is realized by a magnetic disk device as an example.


In this exemplary embodiment, the objective function or constraint condition of the mathematical programming problem is defined so that the predictive model is a parameter. In other words, the objective function or constraint condition in this exemplary embodiment is defined as a functional of the predictive model. In the above-mentioned example, the problem storage unit 34 stores a mathematical programming problem for maximizing the total sales revenue. In this case, the optimization unit 37 optimizes the price of each product so as to maximize the total sales revenue. Since the sales revenue of each product can be defined by multiplication of the price of the product by the sales amount predicted by the predictive model, the problem storage unit 34 may store, for example, a mathematical programming problem specified by the following Formula 3.






[Math. 3]

$$\max_{P} \; \sum_{t \in T_{te}} \sum_{m=1}^{M} P_m \, S_m(t) \qquad (\text{Formula 3})$$







In Formula 3, Tte is the set of time indexes of the period subjected to optimization. For example, in the case of maximizing the total sales revenue for next week where the unit of time is "day", Tte is the set of dates for one week from the next day.
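
As a sketch, the objective of Formula 3 can be evaluated by summing the price multiplied by the predicted sales over the products and over the optimization period Tte; the `predict_sales` callable below stands in for the learned predictive models and is an assumption of this example.

```python
def total_revenue(prices, horizon, externals_by_t, predict_sales):
    """Formula 3: sum over t in T_te and over products m of P_m * S_m(t).
    predict_sales(m, prices, externals, t) wraps the learned model of product m."""
    return sum(prices[m] * predict_sales(m, prices, externals_by_t[t], t)
               for t in horizon
               for m in range(len(prices)))
```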


The constraint condition input unit 35 receives a constraint condition in optimization. The constraint condition may be any condition. An example of the constraint condition is a business constraint. For example, in the case where a quota is imposed on the sales amount of a product, a constraint condition "Sm(t) ≥ quota" may be used. Moreover, a constraint condition (e.g. P1 ≥ P2) specifying the magnitude relationship between the prices P1 and P2 of two products may be used.
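
A minimal sketch of how the optimization unit could maximize the objective above under such business constraints, using SciPy's SLSQP solver purely as an illustration; the bounds, quota handling, and variable names are assumptions, not part of the specification.

```python
from scipy.optimize import minimize

def optimize_prices(p0, horizon, externals_by_t, predict_sales, quota):
    # maximize total revenue = minimize its negative
    objective = lambda p: -total_revenue(p, horizon, externals_by_t, predict_sales)
    constraints = [
        # business constraint P1 >= P2 (SLSQP inequality constraints must be >= 0)
        {"type": "ineq", "fun": lambda p: p[0] - p[1]},
        # sales quota on the first product for the first period: S_1(t) >= quota
        {"type": "ineq",
         "fun": lambda p: predict_sales(0, p, externals_by_t[horizon[0]], horizon[0]) - quota},
    ]
    bounds = [(50.0, 500.0)] * len(p0)          # illustrative price bounds
    result = minimize(objective, p0, method="SLSQP", bounds=bounds, constraints=constraints)
    return result.x
```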


In the case where the constraint condition has the predictive model as an argument, the constraint condition input unit 35 may operate as the predictive model reception unit for receiving a predictive model, or read the predictive model stored in the storage unit 33. The constraint condition input unit 35 may then generate the constraint condition having the acquired predictive model as an argument.


The objective function generation unit 39 generates the objective function of the mathematical programming problem. In detail, the objective function generation unit 39 generates the objective function of the mathematical programming problem having the predictive model as a parameter. For example, the objective function generation unit 39 reads, from the storage unit 33, the predictive model to be applied to the mathematical programming problem stored in the problem storage unit 34, and generates the objective function.


A plurality of predictive models may be learned by machine learning depending on the objects of prediction, as in the above-mentioned example. In this case, the storage unit 33 stores the plurality of predictive models. Here, the objective function generation unit 39 may read, from the storage unit 33, the plurality of predictive models to be applied to the mathematical programming problem stored in the problem storage unit 34, and generate the objective function.


The optimization unit 37 performs objective optimization based on the received various information. In detail, the optimization unit 37 optimizes the value of the objective variable so that the value of the objective function is optimal. Since each type of constraint condition is set for the objective variable and the like, the optimization unit 37 optimizes the value of the objective variable so that the value of the objective function is optimal (maximum, minimum, etc.), while satisfying the constraint conditions.


In this exemplary embodiment, the optimization unit 37 can be regarded as solving the mathematical programming problem so as to optimize the value of the objective function having the predictive model as a parameter as mentioned above. For example, the optimization unit 37 may optimize the prices of the plurality of products by solving the mathematical programming problem specified in the foregoing Formula 3. In the case where a constraint condition has the predictive model as an argument, the optimization unit 37 can be regarded as calculating the objective variable that optimizes the objective function under this constraint condition.


The output unit 38 outputs the optimization result by the optimization unit 37.


The predictive model input unit 31, the external information input unit 32, the constraint condition input unit 35, the optimization unit 37, the output unit 38, and the objective function generation unit 39 are realized by a CPU of a computer operating according to a program (information processing program or optimization program). For example, the program may be stored in the storage unit 33 in the optimization device 30, with the CPU reading the program and, according to the program, operating as the predictive model input unit 31, the external information input unit 32, the constraint condition input unit 35, the optimization unit 37, the output unit 38, and the objective function generation unit 39.


Alternatively, the predictive model input unit 31, the external information input unit 32, the constraint condition input unit 35, the optimization unit 37, the output unit 38, and the objective function generation unit 39 may each be realized by dedicated hardware. The predictive model input unit 31, the external information input unit 32, the constraint condition input unit 35, the optimization unit 37, the output unit 38, and the objective function generation unit 39 may each be realized by electric circuitry. The term "circuitry" here conceptually covers a single device, multiple devices, a chipset, and a cloud. The optimization system according to the present invention may be realized by two or more physically separate devices that are connected by wire or wirelessly.


The operation of the optimization system in this exemplary embodiment is described below. FIG. 2 is a flowchart depicting an example of the operation by the optimization system in this exemplary embodiment. First, the learner 20 learns a predictive model for each set explained variable, based on each type of training data stored in the training data storage unit 10 (step S11).


The predictive model input unit 31 receives the predictive model generated by the learner 20 (step S12), and stores the predictive model in the storage unit 33. The external information input unit 32 receives external information (step S13), and stores the external information in the storage unit 33.


The objective function generation unit 39 reads one or more predictive models received by the predictive model input unit 31 and a mathematical programming problem stored in the problem storage unit 34. The objective function generation unit 39 then generates an objective function of the mathematical programming problem (step S14). The constraint condition input unit 35 receives a constraint condition in optimization (step S15).


The optimization unit 37 optimizes the value of the objective variable so that the value of the objective function is optimal, under the received constraint condition (step S16).


As described above, in this exemplary embodiment, the predictive model input unit 31 receives a predictive model that is learned based on an explained variable and an explanatory variable, indicates the relationship between the explained variable and the explanatory variable, and is represented by a function of the explanatory variable. The optimization unit 37 calculates, for an objective function having the received predictive model as an argument, an objective variable that optimizes the objective function, under a constraint condition.


In detail, the objective function generation unit 39 defines the objective function of the mathematical programming problem using the predictive model as an argument, and the optimization unit 37 optimizes the value of the objective variable so as to maximize the value of the objective function of the mathematical programming problem, under the constraint condition having the predictive model as an argument. With such a structure, appropriate optimization can be performed even in a situation where there is unobservable input data in mathematical optimization.


This exemplary embodiment describes the method of optimizing the prices of the plurality of products so as to maximize the total sales revenue. Alternatively, the optimization unit 37 may optimize the prices of the plurality of products so as to maximize the profit.


Application examples of Exemplary Embodiment 1 are described below using simple specific examples, to facilitate the understanding of Exemplary Embodiment 1. First, an example of optimizing, based on sales prediction for a plurality of products, the prices of the plurality of products so as to maximize the total sales revenue of the plurality of products is described below as a first application example.


For example, consider the case of maximizing the total sales revenue of a sandwich group for the next month in a retail store. The sandwich group includes four types of sandwiches: sandwiches A, B, C, and D. In this case, a problem of optimizing the sales price of each of sandwiches A, B, C, and D so as to maximize the total sales revenue of the sandwich group, i.e. the total sales revenue of the four types of sandwiches A, B, C, and D, is to be solved.


The training data storage unit 10 stores data indicating the past sales revenue of each sandwich and the past sales price of each sandwich. The training data storage unit 10 may store external information such as weather and calendar information.


The learner 20 learns, for example, a predictive model for predicting the sales amount of each sandwich by machine learning, based on each type of training data stored in the training data storage unit 10.


A predictive model for predicting the sales amount of sandwich A is described below, as an example. The sales amount of sandwich A is expected to be influenced by the sales price of sandwich A. The sales amount of sandwich A is expected to be also influenced by the sales prices of the sandwiches displayed together with sandwich A on the product shelves, namely, sandwiches B, C, and D. This is because customers who visit the retail store are likely to selectively purchase a favorable sandwich from among sandwiches A, B, C, and D displayed together on the product shelves.


In such a situation, for example, suppose there is a day when sandwich B is sold at a greatly reduced price. Even a customer who usually prefers sandwich A may select and purchase not sandwich A but sandwich B on such a day. Given that the amount of sandwich a customer (person) can eat at one time is limited, a typical customer is unlikely to purchase both sandwiches A and B.


In such a case, selling sandwich B at a reduced price results in a decrease in the sales amount of sandwich A. This relationship is called cannibalization (market cannibalization).


In other words, cannibalization is such a relationship in which reducing the price of a product increases the sales amount of the product but decreases the sales amount of other competing products (a plurality of products similar in property or feature).


Therefore, the predictive model for predicting sales amount SA (explained variable) of sandwich A can be represented, for example, as a function including price PA of sandwich A, price PB of sandwich B, price PC of sandwich C, and price PD of sandwich D as explanatory variables.


The learner 20 generates each of a predictive model for predicting sales amount SA of sandwich A, a predictive model for predicting sales amount SB of sandwich B, a predictive model for predicting sales amount SC of sandwich C, and a predictive model for predicting sales amount SD of sandwich D, based on each type of training data stored in the training data storage unit 10.


Here, based on the assumption that the sales of sandwiches are influenced by external information (weather, temperature, etc.), each predictive model may be generated while also taking this external information into account. Moreover, the predictive model may be generated while taking the passage of time into account. The predictive model is, for example, represented by the foregoing Formula 1 or 2.


As is clear from the above description, the predictive model is learned based on the explained variable (the sales amount of a sandwich in this exemplary embodiment) and the explanatory variable (the sales price of the sandwich, the sales prices of its competing sandwiches, etc. in this exemplary embodiment), indicates the relationship between the explained variable and the explanatory variable, and is represented by a function of the explanatory variable.


The optimization device 30 performs objective optimization, i.e. optimization of each of the respective sales prices (i.e. PA, PB, PC, and PD) of sandwiches A, B, C, and D. In detail, the optimization device 30 optimizes the value of the objective variable (i.e. PA, PB, PC, and PD) so as to maximize the value of the objective function (i.e. the total sales revenue of the sandwich group), while satisfying each type of constraint condition set for the objective variable (i.e. PA, PB, PC, and PD), etc. The objective function is represented by the foregoing Formula 3 as an example.


This application example is an example in which the objective function is defined using the predictive model as an argument, where the objective function (i.e. the total sales revenue of the sandwich group) handled by the optimization device 30 can be represented by the foregoing Formula 3.


Suppose the optimization device 30 stores the “form” of objective function represented by the foregoing Formula 3 beforehand. The optimization device 30 generates the objective function of the optimization problem, by assigning the predictive model generated by the learner 20 (i.e. the predictive model for predicting SA, the predictive model for predicting SB, the predictive model for predicting SC, and the predictive model for predicting SD) to the “form” of objective function.
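
A sketch of this "form plus assignment" design for the sandwich example: the revenue form is written once, and the four learned predictive models are simply passed in as arguments (the callables and names here are illustrative assumptions).

```python
def revenue_objective(models):
    """Given one predictive model per sandwich (A, B, C, D), return a function of
    the price vector that evaluates the total sales revenue of the sandwich group."""
    def objective(prices):                  # prices = (P_A, P_B, P_C, P_D)
        return sum(price * model(prices)    # each model predicts S_X from all four prices
                   for price, model in zip(prices, models))
    return objective

# usage sketch: models_abcd is a list of four callables produced by the learner 20
# objective = revenue_objective(models_abcd)
# objective((200.0, 180.0, 220.0, 150.0))
```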


The optimization device 30 calculates, for the objective function having the predictive model as an argument, the value of the objective variable (i.e. the values of PA, PB, PC, and PD) that optimizes the objective function under the constraint condition. An application example of Exemplary Embodiment 1 has been described above using a simple specific example. Although the above describes the case where the sales price of each individual product is optimized so as to maximize the total sales revenue of only four products for simplicity's sake, the number of optimization targets is not limited to four, and may be two, three, or five or more. Moreover, the prediction target is not limited to a product, and may be a service or the like.


Next, consider the case of handling a problem of optimizing the sales price of each individual product so as to maximize the total sales amount of a large number of products in an actual retail store. Manually defining an objective function of such a mathematical programming problem (optimization problem) is too complicated and is not practical.


For example, if a future demand predictive line of each product in the retail store is obtained, it is possible to optimize ordering and stock based on the demand. However, the number of products for which a demand predictive line can be drawn manually is limited. Besides, it is not practical to repeat demand prediction for every order, which takes place once every several hours. For example, to optimize the price of each product for a future period so as to maximize the sales in the period, the complex correlations between the prices and demands of a large number of products need to be recognized, but doing so manually is difficult.


By such a design that defines the "form" of the objective function beforehand and defines an actual objective function using a predictive model as an argument as in the foregoing application example, an objective function of the mathematical programming problem can be efficiently generated even in a situation where there is a large amount of unobservable input data in mathematical optimization. Moreover, in this exemplary embodiment, appropriate optimization can be performed even in a situation where there are complex correlations among a large amount of data, as in the case of cannibalization.


Other than determining the price of each product so as to maximize the sales or profit of the product, the optimization system in this exemplary embodiment may be applied to, for example, optimization of shelving allocation for products. In this case, for example, the learner 20 learns a predictive model of sales amount Sm of product m by a linear regression model, as follows. Here, P is a product price, H is a shelf position, and θm is a parameter.






Sm = linear_regression(P, H, θm)


The optimization device 30 then optimizes P and H so as to maximize the sales (specifically, the sum of the multiplication results of price Pm and sales amount Sm of product m). Any business constraint (e.g. price condition, etc.) may be set in this case, too.
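
Because shelf positions are naturally discrete, one simple (purely illustrative) way to handle this shelving-allocation variant is to enumerate shelf assignments and coarse price candidates and keep the best combination; `predict_sales_ph` stands in for the learned model Sm = linear_regression(P, H, θm).

```python
from itertools import permutations

def optimize_price_and_shelf(price_grid, shelves, predict_sales_ph):
    """Brute-force sketch: try every assignment of products to shelf positions H and
    every price vector in a coarse grid, keeping the combination with the best sales."""
    best = (None, None, float("-inf"))
    for H in permutations(shelves):
        for prices in price_grid:
            sales = sum(prices[m] * predict_sales_ph(m, prices, H)
                        for m in range(len(prices)))
            if sales > best[2]:
                best = (prices, H, sales)
    return best   # (best price vector, best shelf assignment, attained sales)
```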


Other than such shelving allocation, the optimization method according to the present invention is also applicable to optimization of an objective function represented by the multiplication result of the price of each commercial material (including both service and product) and the demand (function of the price of each of multiple commercial materials) of the commercial material in retail price optimization, hotel room price optimization, plane ticket price optimization, parking fee optimization, campaign optimization, and the like.


Following the foregoing first application example, application examples of Exemplary Embodiment 1 in these cases are described below using simple specific examples. As a second application example, hotel price optimization is described below. In this application example, since the objective is to maximize sales revenue or profit, the objective function is represented by a function for calculating sales revenue or profit. An objective variable is, for example, the package rate setting for each room of a hotel. In comparison with the above-mentioned retail example, "sandwich" in the retail example corresponds to "bed and breakfast package for a single room" in this application example. External information is, for example, weather, season, any event held near the hotel, etc.


As a third application example, hotel price and stock optimization is described below. In this application example, too, since the objective is to maximize sales revenue or profit, the objective function is represented by a function for calculating sales revenue or profit. Objective variables are selected that take price and stock into account. For example, a first objective variable is a variable indicating, for each package, when and at what price a room in the package is sold, and a second objective variable is a variable indicating, for each package, when and how many rooms in the package are sold. External information is, for example, weather, season, any event held near the hotel, etc., as in the second application example.


As a fourth application example, plane ticket price and stock optimization is described below. In this application example, too, since the objective is to maximize sales revenue or profit, the objective function is represented by a function for calculating sales revenue or profit. Objective variables are selected that take price and stock into account, as in the third application example. Suppose each plane ticket represents the route to the destination and the seat type (class). For example, a first objective variable is a variable indicating, for each plane ticket, when and at what price the plane ticket is sold, and a second objective variable is a variable indicating, for each plane ticket, when and how many tickets are sold. External information is, for example, season, any event held, etc.


As a fifth application example, optimization of the parking fee of each parking lot is described below. In this application example, too, since the objective is to maximize sales revenue or profit, the objective function is represented by a function for calculating sales revenue or profit. An objective variable is, for example, a time- and location-specific parking fee. External information is, for example, the parking fee of a nearby parking lot and location information (residential area, business district, distance from a station, etc.).


A modification of the optimization system in Exemplary Embodiment 1 is described below based on the flow of a predictive model and data (predictive data) necessary for prediction, in comparison with the structure depicted in FIG. 1. FIG. 3 is an explanatory diagram depicting an example of the structure of an optimization system in this modification.


The optimization system depicted in FIG. 3 includes a data preprocessing unit 150, a data preprocessing unit 160, a learning engine 170, and an optimization device 180. The data preprocessing units 150 and 160 each have a function of performing general processes such as filling in missing values in the data. The learning engine 170 corresponds to the learner 20 in Exemplary Embodiment 1. The optimization device 180 corresponds to the optimization device 30 in Exemplary Embodiment 1.


First, analytical data 110d and predictive data 120d are generated from analysis/prediction target data 100d. The analysis/prediction target data 100d includes, for example, external information 101d such as weather and calendar data, sales/price information 102d, and product information 103d.


The analytical data 110d is data used by the learning engine 170 for learning, and corresponds to data stored in the training data storage unit 10 in Exemplary Embodiment 1. The predictive data 120d is external data and other data necessary for prediction, and is specifically a value of an explanatory variable in a predictive model. The predictive data 120d corresponds to part or whole of data stored in the storage unit 33 in Exemplary Embodiment 1.


In the example depicted in FIG. 3, the data preprocessing unit 150 generates the analytical data 110d from the analysis/prediction target data 100d, and the data preprocessing unit 160 generates the predictive data 120d from the analysis/prediction target data 100d.


The learning engine 170 performs learning using the analytical data 110d, and outputs a predictive model 130d. The optimization device 180 receives the predictive model 130d and the predictive data 120d, and performs an optimization process.


Each type of data (analysis/prediction target data 100d (i.e. external information 101d, sales/price information 102d, and product information 103d), analytical data 110d, and predictive data 120d) depicted in FIG. 3 is, for example, held in a database of a storage unit (not depicted) in the optimization system.


An objective function subjected to optimization is defined using a predictive model as an argument, as described in Exemplary Embodiment 1. Moreover, predictive data is also input data for optimization. The present invention is thus characterized in that a predictive model and predictive data are input for optimization, as depicted in FIG. 3.


Exemplary Embodiment 2

Exemplary Embodiment 2 of an optimization system according to the present invention is described below. Exemplary Embodiment 1 describes a method of machine learning a model for predicting unobservable data from past data, automatically generating an objective function of mathematical programming and a constraint condition based on a future prediction result obtained based on the predictive model, and performing optimization.


In a process of performing such optimization, there are instances where the predictive model obtained by machine learning is based on a non-linear basis function. For example, consider the case of performing, for the above-mentioned price prediction problem, non-linear transformation such as squaring of the price or logarithmic transformation of the price to obtain a feature value input to the predictive model based on machine learning. In this case, the objective function (sales for a future period) of mathematical optimization is a function of the feature value obtained by complex non-linear transformation of the price, so that efficiently solving such mathematical optimization using a typical method is difficult.


Accordingly, Exemplary Embodiment 2 describes a method that can solve mathematical optimization at high speed with high accuracy even in the case where a predictive model used for optimization is based on a non-linear basis function.



FIG. 4 is a block diagram depicting an example of the structure of Exemplary Embodiment 2 of an optimization system according to the present invention. The optimization system in this exemplary embodiment includes the training data storage unit 10, the learner 20, and an optimization device 40. The optimization system depicted in FIG. 4 corresponds to the information processing system according to the present invention. The training data storage unit 10 and the learner 20 are the same as those in Exemplary Embodiment 1.


The optimization device 40 includes the predictive model input unit 31, the external information input unit 32, the storage unit 33, the problem storage unit 34, the constraint condition input unit 35, a candidate point input unit 36, the optimization unit 37, the output unit 38, and the objective function generation unit 39.


The optimization device 40 is a device that performs objective optimization, as in Exemplary Embodiment 1. The optimization device 40 differs from the optimization device 30 in Exemplary Embodiment 1 in that the candidate point input unit 36 is further included. The optimization unit 37 in this exemplary embodiment performs optimization while also taking the input of the candidate point input unit 36 into account. The other components are the same as those in Exemplary Embodiment 1.


The candidate point input unit 36 receives a candidate point of optimization. A candidate point is a discrete value that is a candidate for an objective variable. In the above-mentioned example, price candidates (e.g. no discount, 5% discount, 7% discount, etc.) are candidate points. Input of such a candidate point can reduce the optimization cost.



FIG. 5 is an explanatory diagram depicting an example of a screen on which the candidate point input unit 36 receives input of candidate points from the user. In the example depicted in FIG. 5, the candidate point input unit 36 displays a list of the prices of products used in a linear regression model on the left side, and a list of price candidates set for the price of each product on the right side. Thus, the candidate point input unit 36 displays a list of objective variables subjected to optimization and candidates for the value that can be taken by each objective variable, and receives selected objective variable candidates.


In the example depicted in FIG. 5, the operator sets four candidates: no discount, 1% discount, 2% discount, and 5% discount, as candidates for the price of sandwich A (200 yen). Although discount information is displayed as objective variable candidates in the example depicted in FIG. 5, the candidate point input unit 36 may display actual price candidate values (e.g. candidate values such as 190 yen, 200 yen, 210 yen, and 220 yen).


A mathematical programming problem in the case where one or more candidate points are input is described below, using a specific example. Here, an optimization object index set is denoted as {k|k=1, . . . , K}. In the above-mentioned example, K corresponds to the number of price candidates. For example, in the case where there are four price candidates “no discount, 1% discount, 2% discount, and 5% discount” for the product “sandwich A”, K=4. An optimization object candidate set for product m is denoted as overlined Pmk, as written below. In the above-mentioned example, overlined Pmk indicates a price candidate for product m.





$$\{\overline{P}_{mk}\} \qquad [\text{Math. 4}]$$


The k-th indicator of m is denoted as Zmk. Zmk satisfies the following condition.






$$Z_{mk} \in \{1, 0\} \quad \text{where} \quad \sum_{k=1}^{K} Z_{mk} = 1 \qquad [\text{Math. 5}]$$


With such definition, price Pm of product m is defined in the following Formula 4. This definition indicates that price Pm which is an objective variable is discretized.









[Math. 6]

$$P_m = \sum_{k=1}^{K} Z_{mk} \, \overline{P}_{mk} \qquad (\text{Formula 4})$$







The foregoing Formula 1 can then be modified as follows.













[Math. 7]

$$\begin{aligned}
S_m &= \alpha_m + \sum_{m'=1}^{M} \sum_{d=1}^{D} \beta_{mm'}^{d} \, f_d\!\left(\sum_{k=1}^{K} Z_{m'k} \, \overline{P}_{m'k}\right) + \sum_{d=1}^{D'} \gamma_d^{m} \, g_d \\
&= \alpha_m + \sum_{m'=1}^{M} \sum_{d=1}^{D} \beta_{mm'}^{d} \sum_{k=1}^{K} Z_{m'k} \, f_d(\overline{P}_{m'k}) + \sum_{d=1}^{D'} \gamma_d^{m} \, g_d \\
&= \alpha_m + \sum_{m'=1}^{M} \sum_{k=1}^{K} Z_{m'k} \sum_{d=1}^{D} \beta_{mm'}^{d} \, f_d(\overline{P}_{m'k}) + \sum_{d=1}^{D'} \gamma_d^{m} \, g_d \\
&= \alpha_m + \sum_{m'=1}^{M} \sum_{k=1}^{K} Z_{m'k} \, F_{mm'k} + \sum_{d=1}^{D'} \gamma_d^{m} \, g_d
\end{aligned}$$

$$\text{where} \quad F_{mm'k} = \sum_{d=1}^{D} \beta_{mm'}^{d} \, f_d(\overline{P}_{m'k})$$
















Moreover, the foregoing Formula 3 can be modified as shown in the following Formula 5, where Z=(Z11, . . . , Z1K, . . . , ZMK).









[Math. 8]

$$\begin{aligned}
\max_{Z} \;\; & \sum_{t \in T_{te}} \sum_{m=1}^{M} P_m \, S_m(t) \\
\text{s.t.} \;\; & \sum_{k=1}^{K} Z_{mk} = 1, \qquad Z_{mk} \in \{1, 0\}
\end{aligned} \qquad (\text{Formula 5})$$







For example, in the case where no candidate point is input, the optimization unit 37 may optimize the prices of the plurality of products by solving the mathematical programming problem defined in the foregoing Formula 3. In the case where a candidate point is input, the optimization unit 37 may optimize the prices of the plurality of products by solving the mathematical programming problem defined in the foregoing Formula 5.
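
The reason the candidate-point formulation works is that exactly one indicator Zmk equals 1 for each product, so any non-linear transform fd of the price can be precomputed at the candidate points and the predictive model becomes linear in Z, as in Math. 7. A small numeric sketch of that identity follows (the candidate prices and the transform are illustrative).

```python
import numpy as np

candidates = np.array([200.0, 198.0, 196.0, 190.0])   # P_bar_mk: no discount, 1%, 2%, 5% off
Z = np.array([0, 0, 1, 0])                            # exactly one indicator is 1
f = lambda p: np.log(p)                               # a non-linear transform f_d

lhs = f(Z @ candidates)      # f_d applied to the selected price P_m
rhs = Z @ f(candidates)      # Z-weighted sum of f_d precomputed at every candidate point
assert np.isclose(lhs, rhs)  # holds because Z is one-hot
```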


Here, the constraint condition input unit 35 may receive input that takes candidate points into account. A specific example of a constraint condition set in the above-mentioned product optimization is given below. Typically, when comparing the price of a single ballpoint pen and the price of a set of six ballpoint pens of the same brand, the price per ballpoint pen in the set of six is expected to be lower than the price of a single ballpoint pen. This type of constraint condition is defined in the following Formula 6.









[Math. 9]

$$\sum_{i=mK+1}^{mK+K} Z_{m,i} \, \overline{P}_{m,i} \;\geq\; \omega_{mn} \sum_{i=nK+1}^{nK+K} Z_{n,i} \, \overline{P}_{n,i}, \qquad \forall (m, n) \in PC \qquad (\text{Formula 6})$$







In Formula 6, PC denotes a set of index pairs to which the constraint condition is applied, and ωm,n denotes a weight. PC and ωm,n are given beforehand.


A specific example of the optimization process performed by the optimization unit 37 is described below, using the foregoing Formula 5. In the case of the foregoing Formula 5, the objective function can be modified as follows.














[Math. 10]

$$\begin{aligned}
\sum_{t \in T_{te}} \sum_{m=1}^{M} P_m \, S_m(t)
&= \sum_{t \in T_{te}} \sum_{m=1}^{M} \left(\sum_{k=1}^{K} Z_{mk} \, \overline{P}_{mk}\right)
   \left(\alpha_m(t) + \sum_{m'=1}^{M} \sum_{k=1}^{K} Z_{m'k} \, F_{mm'k}(t) + \sum_{d=1}^{D'} \gamma_d^{m}(t) \, g_d\right) \\
&= \sum_{t \in T_{te}} \sum_{m=1}^{M} \left(
   \left(\sum_{k=1}^{K} Z_{mk} \, \overline{P}_{mk}\right)
   \left(\sum_{m'=1}^{M} \sum_{k'=1}^{K} Z_{m'k'} \, F_{mm'k'}(t)\right)
   + \left(\alpha_m(t) + \sum_{d=1}^{D'} \gamma_d^{m}(t) \, g_d\right)
     \left(\sum_{k=1}^{K} Z_{mk} \, \overline{P}_{mk}\right)\right) \\
&= \sum_{t \in T_{te}} \left\{
   \sum_{m=1}^{M} \sum_{m'=1}^{M} \sum_{k=1}^{K} \sum_{k'=1}^{K} \overline{P}_{mk} \, F_{mm'k'}(t) \, Z_{mk} Z_{m'k'}
   + \sum_{m=1}^{M} \sum_{k=1}^{K} \left(\alpha_m(t) + \sum_{d=1}^{D'} \gamma_d^{m}(t) \, g_d\right) \overline{P}_{mk} \, Z_{mk}\right\} \\
&= Z^{\top} Q Z + r^{\top} Z
\end{aligned}$$

$$\text{where} \quad [Q]_{mk,\,m'k'} = \sum_{t \in T_{te}} \overline{P}_{mk} \, F_{mm'k'}(t), \qquad
[r]_{mk} = \sum_{t \in T_{te}} \left(\alpha_m(t) + \sum_{d=1}^{D'} \gamma_d^{m}(t) \, g_d\right) \overline{P}_{mk}
\qquad (\text{Formula A})$$







Here, [Q]i,j is the (i,j)-th element of matrix Q, and [r]i is the i-th element of vector r. In general, the above-mentioned Q is not a symmetric matrix and is not positive semidefinite. This problem is a sort of mixed integer quadratic programming problem called nonconvex cardinality (0-1 integer) quadratic programming. This problem can be efficiently solved by transforming it into a mixed integer linear programming problem.
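
A sketch of how the matrix Q and vector r of Formula A could be assembled from the learned coefficients; here F[t][m][mp][kp] holds Fmm'k'(t) as defined in Math. 7, alpha[t][m] holds αm(t), and gamma_g[t][m] holds the external term Σd γdm(t)gd, all of which are assumed array layouts for this example.

```python
import numpy as np

def build_Q_r(P_bar, F, alpha, gamma_g, horizon):
    """Formula A: [Q]_{mk,m'k'} = sum_t P_bar[m][k] * F[t][m][m'][k'],
                  [r]_{mk}      = sum_t (alpha[t][m] + gamma_g[t][m]) * P_bar[m][k]."""
    M, K = P_bar.shape
    Q = np.zeros((M * K, M * K))
    r = np.zeros(M * K)
    for t in horizon:
        for m in range(M):
            for k in range(K):
                i = m * K + k
                r[i] += (alpha[t][m] + gamma_g[t][m]) * P_bar[m][k]
                for mp in range(M):
                    for kp in range(K):
                        Q[i, mp * K + kp] += P_bar[m][k] * F[t][m][mp][kp]
    return Q, r
```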


A method for solving the mathematical programming problem defined in the foregoing Formula 5 using mixed integer linear programming relaxation is described below. A modification process shown in the following Formula 7 is performed using a new overlined variable Zi,j.









[Math. 11]

$$\begin{aligned}
\max \;\; & \sum_{i=1}^{KM} \left([r]_i + [Q]_{i,i}\right) Z_i
          + \sum_{i=1}^{KM} \sum_{j=i+1}^{KM} \left([Q]_{i,j} + [Q]_{j,i}\right) \overline{Z}_{i,j} \\
\text{s.t.} \;\;
& \sum_{i=mK+1}^{mK+K} Z_i = 1, \qquad m = 0, \ldots, M-1 \\
& \overline{Z}_{ij} \geq Z_i + Z_j - 1, \qquad \forall\, i < j \\
& \overline{Z}_{ij} \leq Z_i, \qquad \forall\, i < j \\
& \overline{Z}_{ij} \leq Z_j, \qquad \forall\, i < j \\
& \overline{Z}_{ij} \leq 1, \qquad \forall\, i \leq j \\
& Z_i \in \{0, 1\}, \qquad \forall\, i
\end{aligned} \qquad (\text{Formula 7})$$







Here, a constraint ensuring that the overlined variable Zi,j takes the value ZiZj in an optimal solution is defined as shown in the following Formula 8.














[Math. 12]

$$\sum_{i=mK+1}^{j} \overline{Z}_{i,j} + \sum_{i=j+1}^{mK+K} \overline{Z}_{j,i}
= \sum_{i=mK+1}^{mK+K} Z_i Z_j - Z_j
= \left(\sum_{i=mK+1}^{mK+K} Z_i - 1\right) Z_j = 0 \qquad (\text{Formula 8})$$







Adding the equality shown in the foregoing Formula 8 allows the foregoing Formula 7 to be newly formulated as shown in the following Formula 9.









[Math. 13]

$$\begin{aligned}
\max \;\; & \sum_{i=1}^{KM} \left([r]_i + [Q]_{i,i}\right) Z_i
          + \sum_{i=1}^{KM} \sum_{j=i+1}^{KM} \left([Q]_{i,j} + [Q]_{j,i}\right) \overline{Z}_{i,j} \\
\text{s.t.} \;\;
& \sum_{i=mK+1}^{mK+K} Z_i = 1, \qquad m = 0, \ldots, M-1 \\
& \overline{Z}_{ij} \geq Z_i + Z_j - 1, \qquad \forall\, i < j \\
& \overline{Z}_{ij} \leq Z_i, \qquad \forall\, i < j \\
& \overline{Z}_{ij} \leq Z_j, \qquad \forall\, i < j \\
& \sum_{i=mK+1}^{j} \overline{Z}_{i,j} + \sum_{i=j+1}^{mK+K} \overline{Z}_{j,i} = 0,
  \qquad mK < j \leq mK+K, \;\; m = 0, \ldots, M-1 \\
& 0 \leq \overline{Z}_{ij} \leq 1, \qquad \forall\, i \leq j \\
& Z_i \in \{0, 1\}, \qquad \forall\, i
\end{aligned} \qquad (\text{Formula 9})$$







To reduce the number of constraint conditions for more efficient calculation, the following inequality may be deleted from the condition of the foregoing Formula 9.







Z

i,j
≥Z
i
+Z
j−1  [Math. 14]


The optimization unit 37 optimizes the prices of the plurality of products so as to maximize the formula modified in this way. In the case where the candidate point input unit 36 receives no candidate point, the optimization unit 37 may solve the mathematical programming problem in the foregoing Formula 3. The constraint condition in the foregoing Formula 6 may also be applied in the mixed integer linear programming (MILP) relaxation.
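
The following is a sketch of this mixed integer linear program using the open-source PuLP modeler (PuLP is an illustrative choice, not part of the specification). Q and r are the arrays of Formula A, products are indexed in 0-based blocks of K candidate points, and the Math. 14 inequality is kept but may be dropped as noted above.

```python
import pulp

def solve_price_milp(Q, r, M, K):
    n = M * K
    prob = pulp.LpProblem("price_optimization", pulp.LpMaximize)
    Z = [pulp.LpVariable(f"Z_{i}", cat="Binary") for i in range(n)]
    Zbar = {(i, j): pulp.LpVariable(f"Zbar_{i}_{j}", lowBound=0, upBound=1)
            for i in range(n) for j in range(i + 1, n)}

    # objective of Formula 7/9: linear terms plus linearized cross terms
    prob += (pulp.lpSum((r[i] + Q[i][i]) * Z[i] for i in range(n))
             + pulp.lpSum((Q[i][j] + Q[j][i]) * Zbar[i, j]
                          for i in range(n) for j in range(i + 1, n)))

    for m in range(M):
        block = range(m * K, m * K + K)
        prob += pulp.lpSum(Z[i] for i in block) == 1            # one candidate per product
        for j in block:
            # block equality of Formula 8/9: cross terms inside one block vanish
            prob += (pulp.lpSum(Zbar[i, j] for i in block if i < j)
                     + pulp.lpSum(Zbar[j, i] for i in block if i > j)) == 0

    for (i, j), v in Zbar.items():
        prob += v <= Z[i]
        prob += v <= Z[j]
        prob += v >= Z[i] + Z[j] - 1    # Math. 14; may be deleted to shrink the model

    prob.solve()
    return [int(z.value()) for z in Z]  # chosen indicator Z_i per candidate point
```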


The predictive model input unit 31, the external information input unit 32, the constraint condition input unit 35, the candidate point input unit 36, the optimization unit 37, the output unit 38, and the objective function generation unit 39 are realized by a CPU of a computer operating according to a program (information processing program or optimization program).


Alternatively, the predictive model input unit 31, the external information input unit 32, the constraint condition input unit 35, the candidate point input unit 36, the optimization unit 37, the output unit 38, and the objective function generation unit 39 may each be realized by dedicated hardware. The predictive model input unit 31, the external information input unit 32, the constraint condition input unit 35, the candidate point input unit 36, the optimization unit 37, the output unit 38, and the objective function generation unit 39 may each be realized by electric circuitry.


The operation by the optimization system in this exemplary embodiment is described below. FIG. 6 is a flowchart depicting an example of the operation by the optimization system in this exemplary embodiment. The process from step S11 to step S15 of receiving a learned model and external information, generating an objective function, and receiving a constraint condition is the same as that in FIG. 2.


The candidate point input unit 36 receives a candidate point which is a candidate for a value that can be taken by the objective variable (step S18). The number of candidate points received may be one or more. The optimization unit 37 then optimizes the value of the objective variable so that the value of the objective function is optimal, under the received candidate point and constraint condition (step S19).


Thus, this exemplary embodiment describes the optimization system that optimizes the value of an objective variable so that the value of an objective function of a mathematical programming problem is optimal. In detail, the predictive model input unit 31 receives a linear regression model represented by a function having the objective variable of the mathematical programming problem as an explanatory variable. The candidate point input unit 36 receives, for the objective variable included in the linear regression model, discrete candidates (candidate points) for the value that can be taken by the objective variable. The optimization unit 37 then calculates the objective variable that optimizes the objective function of the mathematical programming problem having the linear regression model as an argument. Here, the optimization unit 37 selects the candidate point that optimizes the objective function, to calculate the objective variable.


With such a structure, mathematical optimization can be solved at high speed with high accuracy even in the case where a predictive model used for optimization is based on a non-linear basis function.


In detail, the optimization unit 37 optimizes an objective function having, as a parameter, a predictive model represented by the linear regression equation in the foregoing Formula 1. Here, the linear regression equation in Formula 1 has at least part of the explanatory variable represented by non-linear function fd.


For example, even for an objective variable, such as price, for which any value could in principle be a candidate, actual optimization is often performed by setting certain price candidates beforehand. Predictive model Sm represented in the form of the foregoing Formula 1 results from applying function fd to objective variable Pm, which is the optimization target. In the case where an explanatory variable is represented by non-linear function fd, even a function written in the form of a linear regression equation is non-linear with respect to price, and so its optimization is difficult.


In this exemplary embodiment, however, discretizing the objective variable into candidate points allows a non-linear formula in the objective function of the optimization to be rewritten as a linear formula in the discrete variable Zd, regardless of fd. In other words, by setting candidate values of the objective variable of optimization beforehand (e.g. provided by a person), the optimization process can be performed at high speed even for a regression equation that is linear in its weights but non-linearly transformed in the objective variable, as sketched below.
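As a non-authoritative illustration, the following Python sketch shows this discretization: a demand term that is non-linear in the price p, such as Σd wd fd(p), becomes linear in a one-hot vector z once p is restricted to candidate prices, because fd can be pre-computed at every candidate. The basis functions, weights, and candidate prices below are hypothetical, not values from this application.

import numpy as np

# Hypothetical non-linear basis functions f_d applied to a price p.
basis = [np.log, np.sqrt, lambda p: p ** 2]
w = np.array([1.5, -0.3, 0.002])                      # illustrative learned weights

candidates = np.array([98.0, 128.0, 158.0, 198.0])    # candidate prices for one product

# Pre-compute f_d at every candidate price; the model becomes linear in z:
#   sum_d w_d f_d(p) = w_tilde^T z, with z one-hot over the candidates.
w_tilde = np.array([sum(wd * fd(p) for wd, fd in zip(w, basis)) for p in candidates])

z = np.array([0, 1, 0, 0])                            # select the second candidate price
assert np.isclose(w_tilde @ z,
                  sum(wd * fd(128.0) for wd, fd in zip(w, basis)))

The same pre-computation applies to every non-linear term, which is why the resulting problem stays linear (or quadratic) in the discrete variables regardless of fd.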


The use of the method in this exemplary embodiment also enables application of a method in the below-mentioned Exemplary Embodiment 3, so that the optimization process can be performed at high speed.


This exemplary embodiment describes the method of optimizing the prices of the plurality of products so as to maximize the total sales revenue. Alternatively, the optimization unit 37 may optimize the prices of the plurality of products so as to maximize the profit. In this case, the objective function generation unit 39 may generate the following objective function as an example, where c is a term that does not depend on Z.













Σ_{t}^{t_e} Σ_{m=1}^{M} (P_m − C_m) S_m(t)
  = Σ_{t}^{t_e} Σ_{m=1}^{M} P_m S_m(t) − Σ_{t}^{t_e} Σ_{m=1}^{M} C_m S_m(t)
  = Z^T Q Z + r^T Z − Σ_{t}^{t_e} Σ_{m=1}^{M} C_m ( α_m(t) + Σ_{m′=1}^{M} Σ_{k=1}^{K} Z_{m′k} F_{m m′ k}(t) + Σ_{d=1}^{D} γ_{dm}(t) g_d )
  = Z^T Q Z + (r − F)^T Z + c
  = Z^T Q Z + r̃^T Z + c   (r̃ := r − F)

where [F]_{m′k} = Σ_{t}^{t_e} Σ_{m=1}^{M} C_m F_{m m′ k}(t)   [Math. 15]


This objective function is again a nonconvex binary (0-1 integer) quadratic program. Since c does not depend on Z, the problem is mathematically equivalent to the foregoing Formula A, and so the above-mentioned solving method can be applied. A minimal sketch of this reduction follows.
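The sketch below, under hypothetical random data, illustrates why the profit objective keeps the same form as the revenue objective: subtracting the cost term only shifts the linear coefficient, so the same BQP solver can be reused. Q, r, and F here are placeholders, not quantities computed from this application.

import numpy as np

rng = np.random.default_rng(0)
n = 6                                   # number of one-hot price variables (M products x K candidates)
Q = rng.normal(size=(n, n))             # quadratic coefficients of the revenue objective (illustrative)
r = rng.normal(size=n)                  # linear coefficients of the revenue objective
F = rng.normal(size=n)                  # cost-weighted terms corresponding to [F] in Math. 15 (illustrative)

def bqp_objective(Z, Q, lin):
    # Z^T Q Z + lin^T Z; the constant c is omitted because it does not affect the maximizer.
    return Z @ Q @ Z + lin @ Z

Z = rng.integers(0, 2, size=n)
revenue = bqp_objective(Z, Q, r)
profit = bqp_objective(Z, Q, r - F)     # profit objective: same Q, only the linear term changes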


This exemplary embodiment describes the case of optimizing the sales revenue (price×sales amount) by subjecting the sales amount to regression, as in Exemplary Embodiment 1. Alternatively, not the sales amount but the sales revenue may be subjected to regression. In the case of directly subjecting the sales revenue to regression, the learner 20 learns the sales revenue by a regression equation based on non-linear transformation of a quadratic function of the objective variable. The regression equation in such a case is represented by the following Formula B1 as an example.









[Math. 16]

f(x) = Σ_{i=1}^{n} Σ_{j=1}^{n} Σ_{d=1}^{D} w_{ijd} φ_d(x_i, x_j) + Σ_{i=1}^{n} Σ_{d=1}^{D} v_{id} ψ_d(x_i)

where φ_d : ℝ² → ℝ,  ψ_d : ℝ → ℝ   (Formula B1)


In Formula B1, φd and ψd are each an arbitrary basis function, and x corresponds to price. This function is set as the objective function of optimization. As in the foregoing Formula 4, x is discretized as shown in the following Formula B2.









[Math. 17]

x_i = Σ_{k=1}^{K_i} x̄_{ik} z_{ik}   (i = 1, . . . , n)   (Formula B2)

After discretizing x in this way, the following modification results in the BQP problem shown in Formula B3. Thus, the method in this exemplary embodiment can be used to perform optimization even in the case where the sales revenue itself is subjected to regression.














[Math. 18]

g_ij(x_i, x_j) = Σ_{d=1}^{D} w_{ijd} φ_d(x_i, x_j),   then   g_ij(x_i, x_j) = z_i^T Q_ij z_j

where Q_ij =
[ g_ij(x̄_{i1}, x̄_{j1})   g_ij(x̄_{i1}, x̄_{j2})   …   g_ij(x̄_{i1}, x̄_{j,K_j}) ;
  g_ij(x̄_{i2}, x̄_{j1})   g_ij(x̄_{i2}, x̄_{j2})   …   g_ij(x̄_{i2}, x̄_{j,K_j}) ;
  ⋮ ;
  g_ij(x̄_{i,K_i}, x̄_{j1})   g_ij(x̄_{i,K_i}, x̄_{j2})   …   g_ij(x̄_{i,K_i}, x̄_{j,K_j}) ]   (Formula B2)




[Math. 19]

h_i(x_i) = Σ_{d=1}^{D} v_{id} ψ_d(x_i),   then   h_i(x_i) = [ h_i(x̄_{i1}), h_i(x̄_{i2}), . . . , h_i(x̄_{i,K_i}) ] z_i =: r_i^T z_i

where r = [ r_1 ; r_2 ; ⋮ ; r_n ]



[Math. 20]

Maximize   z^T Q z + r^T z

subject to   z = [ z_{11}, . . . , z_{1,K_1}, z_{21}, . . . , z_{n,K_n} ]^T ∈ {0, 1}^{Σ_{i=1}^{n} K_i},
  Σ_{k=1}^{K_i} z_{ik} = 1   (i = 1, . . . , n)

where Q = [ Q_11  Q_12  …  Q_1n ; Q_21  Q_22  …  Q_2n ; ⋮ ; Q_n1  Q_n2  …  Q_nn ]   (Formula B3)

Exemplary Embodiment 3

Exemplary Embodiment 3 of an optimization system according to the present invention is described below. A binary quadratic programming (BQP) problem is known as an optimization approach. Since the foregoing Formula A can be generated by applying discretization to linear prediction as described in Exemplary Embodiment 2, the problem in Exemplary Embodiment 2 can be transformed to BQP. BQP is NP-hard, and because no efficient exact algorithm is known, it is commonly solved using a framework called integer programming.


Exemplary Embodiment 2 describes the method for solving BQP by mixed integer programming relaxation. This exemplary embodiment describes a method for solving BQP shown in the foregoing Formula A at higher speed. The optimization system in this exemplary embodiment has the same structure as the optimization system in Exemplary Embodiment 2, but differs from Exemplary Embodiment 2 in the method by which the optimization unit 37 performs the optimization process.


In detail, the optimization unit 37 in this exemplary embodiment relaxes BQP to a more tractable problem called a semidefinite programming (SDP) problem, and optimizes BQP based on the solution of the SDP.


As an example, BQP is first formulated as shown in the following Formula 10. In Formula 10, M and K are natural numbers. Moreover, in Formula 10, Q is a KM×KM square matrix, and r is a KM-dimensional vector.









[Math. 21]

Maximize   Z^T Q Z + r^T Z

subject to   Z = [ Z_1, . . . , Z_{KM} ]^T ∈ {0, 1}^{KM},
  Σ_{k=1}^{K} Z_{Km+k} = 1   (m = 0, . . . , M − 1)   (Formula 10)

Let Symn be a set of all symmetric matrices of size n. In detail, Symn is written as follows.






Sym_n = { X ∈ ℝ^{n×n} | X^T = X }   [Math. 22]


A vector whose elements are all 1 may be written as boldfaced 1, where 1 = (1, 1, . . . , 1)^T. The inner product on Sym_n is defined as follows, using a dot sign.






X · Y = Σ_{i=1}^{n} Σ_{j=1}^{n} X_ij Y_ij   for X, Y ∈ Sym_n   [Math. 23]


The following Formula 11 holds for all vectors x. Accordingly, Q in the foregoing Formula 10 can be replaced with the following Formula 12. Q is therefore assumed to be a symmetric matrix, without loss of generality.





[Math. 24]

x^T Q x = x^T (Q + Q^T) x / 2   (Formula 11)

(Q + Q^T) / 2   (Formula 12)


An SDP relaxation method is described below. First, the optimization unit 37 transforms the BQP in Formula 10 into a problem over variables that take values in {1, −1}. Let t = −1 + 2Z. The foregoing Formula 10 is then modified to the following Formula 13.














[Math. 25]

Z^T Q Z + r^T Z = (1/4) (t + 1)^T Q (t + 1) + (1/2) r^T (t + 1)
  = (1/4) t^T Q t + (1/2) (r + Q 1)^T t + (1/4) ( 1^T Q 1 + 2 r^T 1 )
  = [ 1  t^T ] A [ 1 ; t ]   (Formula 13)

where A = (1/4) [ 1^T Q 1 + 2 r^T 1   (r + Q 1)^T ; r + Q 1   Q ],   A ∈ Sym_{KM+1}


The foregoing Formula 10 is thus equivalent to the following Formula 14.









[Math. 26]

Maximize   [ 1  t^T ] A [ 1 ; t ]

subject to   t = [ t_1, . . . , t_{KM} ]^T ∈ {−1, 1}^{KM},
  Σ_{k=1}^{K} t_{Km+k} = −K + 2   (m = 0, . . . , M − 1)   (Formula 14)


Next, the optimization unit 37 relaxes each variable t_i, which takes a value in S^0 = {1, −1}, to a variable x_i that takes a value in S^{KM}. Here S^n denotes the unit n-sphere, as shown in the following Formula 15.





[Math. 27]

S^n = { x ∈ ℝ^{n+1} | ‖x‖_2 = 1 }   (Formula 15)


In this case, the foregoing Formula 14 is relaxed to the problem of the following Formula 16.









[Math. 28]

Maximize   tr( [ x_0, x_1, . . . , x_{KM} ] A [ x_0^T ; ⋮ ; x_{KM}^T ] )

subject to   x_i ∈ ℝ^{KM+1},  ‖x_i‖_2 = 1   (i = 0, 1, . . . , KM),
  Σ_{k=1}^{K} x_{Km+k} = (−K + 2) x_0   (m = 0, . . . , M − 1)   (Formula 16)


Here, the scalar "1" in the objective function of the foregoing Formula 14 is replaced with the unit vector x0. For a feasible solution t of the foregoing Formula 14, a feasible solution of Formula 16 is defined by the following Formula 17 without changing the value of the objective function. Hence, the problem of the foregoing Formula 16 is a relaxation of the foregoing Formula 14.





[Math. 29]

x_0 = [ 1, 0, . . . , 0 ]^T,   x_i = [ t_i, 0, . . . , 0 ]^T   (i = 1, . . . , KM)   (Formula 17)


The optimization unit 37 transforms the problem of the foregoing Formula 16 to an SDP problem. The objective function in Formula 16 is transformed to the following Formula 18.














[Math. 30]

tr( [ x_0, x_1, . . . , x_{KM} ] A [ x_0^T ; ⋮ ; x_{KM}^T ] ) = A · ( [ x_0^T ; ⋮ ; x_{KM}^T ] [ x_0, x_1, . . . , x_{KM} ] ) = A · Y

where Y = [ y_00  y_01  …  y_{0,KM} ; y_10  y_11  …  y_{1,KM} ; ⋮ ; y_{KM,0}  y_{KM,1}  …  y_{KM,KM} ]
        = [ x_0^T ; x_1^T ; ⋮ ; x_{KM}^T ] [ x_0, x_1, . . . , x_{KM} ],   Y ∈ Sym_{KM+1}   (Formula 18)

According to this definition, Y is positive semidefinite, and satisfies the following Formula 19.





[Math. 31]

y_ij = x_i^T x_j   (i = 0, 1, . . . , KM,  j = 0, 1, . . . , KM)   (Formula 19)


Conversely, if Y is positive semidefinite, there exist (KM+1)-dimensional vectors x0, x1, . . . , xKM that satisfy the condition defined in the foregoing Formula 18 and Formula 19.
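This Gram-matrix view can be made concrete with a short numerical sketch: any positive semidefinite Y factors as Y = X^T X, so vectors x_i with y_ij = x_i^T x_j can be recovered by an eigendecomposition. The following Python code is an illustration only and is not part of the claimed method.

import numpy as np

def factor_gram(Y):
    # Return a matrix X whose i-th column x_i satisfies y_ij = x_i^T x_j.
    w, V = np.linalg.eigh(Y)                              # Y = V diag(w) V^T, w >= 0 for PSD Y
    return np.sqrt(np.clip(w, 0.0, None))[:, None] * V.T  # X with X^T X = Y

# Example: a Gram matrix of random unit vectors (so diag(Y) = 1) is reproduced exactly.
rng = np.random.default_rng(2)
U = rng.normal(size=(4, 4))
U /= np.linalg.norm(U, axis=0)
Y = U.T @ U
X = factor_gram(Y)
assert np.allclose(X.T @ X, Y)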


Setting y_ii = 1 in matrix Y expresses the constraint condition ‖x_i‖_2 = 1. Since x_0 is a unit vector, the following Formula 21 holds exactly when the following Formula 20 is satisfied.









[Math. 32]

x_0^T Σ_{k=1}^{K} x_{Km+k} = −K + 2,   ‖ Σ_{k=1}^{K} x_{Km+k} ‖_2^2 = (−K + 2)^2   (Formula 20)

Σ_{k=1}^{K} x_{Km+k} = (−K + 2) x_0   (Formula 21)




Using matrix Y, these conditions can be expressed as shown in the following Formula 22.














[Math. 33]

Σ_{k=1}^{K} y_{0, Km+k} = −K + 2,   Σ_{k=1}^{K} Σ_{l=1}^{K} y_{Km+k, Km+l} = (−K + 2)^2   (Formula 22)



Thus, the optimization unit 37 can generate an SDP problem shown in the following Formula 23. This problem is equivalent to the problem shown in the foregoing Formula 16, and is a result of relaxing the foregoing Formula 10. Hence, the optimal value of Formula 23 is an upper bound on the optimal value of the foregoing Formula 10.














[Math. 34]

Maximize   A · Y

subject to   Y = ( y_ij )_{0 ≤ i, j ≤ KM} ∈ Sym_{KM+1},   Y ⪰ O,
  Y_{i,i} = 1   (i = 0, . . . , KM),
  Σ_{k=1}^{K} y_{0, Km+k} = −K + 2   (m = 0, . . . , M − 1),
  Σ_{k=1}^{K} Σ_{l=1}^{K} y_{Km+k, Km+l} = (−K + 2)^2   (m = 0, . . . , M − 1)   (Formula 23)



A method of transforming an optimal solution of the problem shown in Formula 23 into a solution Z of the problem shown in Formula 10 is described below. This transformation operation is hereafter referred to as rounding. Let tilde Y be the optimal solution derived by SDP relaxation.


In the derivation of the foregoing Formula 16, “1” has been replaced with vector x0, and ti (i=1, . . . , KM) has been replaced with vector xi. Accordingly, the relationship shown in the following Formula 24 exists between Z and Y.





[Math. 35]





2Zi−1=ti=1·ti≈x0Txi=y0i(i=1, . . . ,KM)  (Formula 24)


It can therefore be assumed that it is appropriate to fix Z_i to 1 for the index i at which tilde y_0i is largest among the tilde y_0j. Based on this premise, the operation of the optimization unit 37 solving the BQP shown in the foregoing Formula 10 by SDP relaxation is described below.



FIG. 7 is a flowchart depicting an example of the operation of the optimization unit 37 solving BQP by SDP relaxation. The operation example (algorithm) depicted in FIG. 7 performs rounding once.


The optimization unit 37 transforms the BQP shown in the foregoing Formula 10 to the problem shown in Formula 23 resulting from SDP relaxation (step S21), and sets the optimal solution as tilde Y. The optimization unit 37 searches for a value (hereafter denoted as tilde k) that satisfies the following Formula 25 (step S22), where tilde k is an element of {1, . . . , K}.





[Math. 36]

ỹ_{0, Km+k̃} = max { ỹ_{0, Km+k} | k = 1, . . . , K }   (Formula 25)


The optimization unit 37 sets Z_{Km+k̃} to 1 and sets the other Z_{Km+k} (k ≠ k̃) to 0 (step S23). A minimal code sketch of this procedure follows.
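The sketch below is one illustrative reading of FIG. 7, assuming the cvxpy library for the SDP of Formula 23; the matrix A is assembled from Q and r as in Formula 13. It is not the patented implementation, and problem sizes and the default solver are placeholders.

import numpy as np
import cvxpy as cp

def sdp_relax_and_round(Q, r, M, K):
    """SDP relaxation (Formula 23) followed by one rounding pass (roughly FIG. 7)."""
    KM = M * K
    Qs = (Q + Q.T) / 2.0
    one = np.ones(KM)
    A = np.block([[np.array([[one @ Qs @ one + 2 * r @ one]]), (r + Qs @ one)[None, :]],
                  [(r + Qs @ one)[:, None], Qs]]) / 4.0          # Formula 13
    Y = cp.Variable((KM + 1, KM + 1), symmetric=True)
    cons = [Y >> 0, cp.diag(Y) == 1]                             # step S21: build the relaxation
    for m in range(M):
        idx = list(range(1 + m * K, 1 + (m + 1) * K))            # entries of Y for product m (index 0 is x_0)
        cons.append(cp.sum([Y[0, i] for i in idx]) == -K + 2)
        cons.append(cp.sum([Y[i, j] for i in idx for j in idx]) == (-K + 2) ** 2)
    cp.Problem(cp.Maximize(cp.trace(A @ Y)), cons).solve()
    Ytil = Y.value
    Z = np.zeros(KM)
    for m in range(M):                                           # steps S22-S23: pick the largest y~_{0,Km+k}
        k_best = int(np.argmax([Ytil[0, 1 + m * K + k] for k in range(K)]))
        Z[m * K + k_best] = 1.0
    return Z

The approximation-rate bound of the later Formula 29 can then be evaluated by comparing Z^T Q Z + r^T Z for the rounded Z with the optimal value returned for the relaxation.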



FIG. 8 is a flowchart depicting another example of the operation of the optimization unit 37 solving BQP by SDP relaxation. The operation example (algorithm) depicted in FIG. 8 performs rounding repeatedly.


The optimization unit 37 first initializes index set U={1, . . . , M} (step S31). The optimization unit 37 performs the following process for each index included in U (steps S32 to S36).


First, the optimization unit 37 partially fixes Z, and transforms the problem shown in the foregoing Formula 10, with the fixed entries substituted, into the problem (i.e. the SDP) shown in Formula 23 (step S32). The optimization unit 37 solves the problem shown in Formula 23, and sets the optimal solution as tilde Y (step S33). The optimization unit 37 searches for tilde m and tilde k that satisfy the following Formula 26 (step S34). The optimization unit 37 then partially fixes Z based on the following Formula 27 (step S35). A minimal sketch of this loop follows the formulas below.









[Math. 37]

ỹ_{0, K m̃ + k̃} = max { ỹ_{0, Km+k} | m ∈ U,  k = 1, . . . , K }   (Formula 26)

Z_{K m̃ + k} := { 1 (k = k̃) ;  0 (k ∈ {1, . . . , K} \ {k̃}) }   (Formula 27)


The optimization unit 37 updates U as follows (step S36).






U := U \ { m̃ }   [Math. 38]
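The repeated-rounding loop of FIG. 8 can be outlined as follows. The helper solve_sdp_partial is hypothetical: it is assumed to solve the SDP of Formula 23 with the already-fixed entries of Z substituted and to return the resulting matrix tilde Y. This sketch only mirrors steps S31 to S36 described above.

import numpy as np

def iterative_rounding(solve_sdp_partial, M, K):
    # solve_sdp_partial(fixed) is a hypothetical helper: it solves the SDP of Formula 23
    # with the entries of Z listed in `fixed` (index -> 0/1) substituted, and returns Y~.
    fixed = {}
    U = set(range(M))                                 # step S31
    while U:
        Ytil = solve_sdp_partial(fixed)               # steps S32-S33
        # Step S34 (Formula 26): the largest y~_{0,Km+k} over products not yet fixed.
        m_t, k_t = max(((m, k) for m in U for k in range(K)),
                       key=lambda mk: Ytil[0, 1 + mk[0] * K + mk[1]])
        for k in range(K):                            # step S35 (Formula 27)
            fixed[m_t * K + k] = 1 if k == k_t else 0
        U.remove(m_t)                                 # step S36
    return np.array([fixed[i] for i in range(M * K)])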


The optimization unit 37 acquires the following three values by applying the algorithm depicted in FIG. 7 or 8 to the problem shown in the foregoing Formula 10. The first is a computed (approximate) solution to the problem shown in Formula 10. The second is the computed (approximate) optimal value of the problem shown in Formula 10. The third is the optimal value of the problem shown in Formula 23. This yields the inequality shown in the following Formula 28.





0 < computed optimal value of Formula 10 ≤ optimal value of Formula 10 ≤ optimal value of Formula 23   (Formula 28)


Thus, the computed solution is guaranteed to satisfy the following Formula 29.





approximation rate of computed solution = (computed optimal value of Formula 10) / (optimal value of Formula 10) ≥ (computed optimal value of Formula 10) / (optimal value of Formula 23)   (Formula 29)


With this inequality, the quality of the computed solution can be evaluated, and a more sophisticated algorithm such as a branch and bound method can be derived.


The optimization unit 37 may perform exhaustive search for a solution based on a parameter defined by the user. FIG. 9 is a flowchart depicting yet another example of the operation of the optimization unit 37 solving BQP by SDP relaxation.


The operation example (algorithm) depicted in FIG. 9 enumerates at least T solutions near an optimal solution, where T is a parameter defined by the user.


The optimization unit 37 transforms the BQP shown in the foregoing Formula 10 to the problem shown in Formula 23 resulting from SDP relaxation (step S41), and sets the optimal solution as tilde Y. The optimization unit 37 searches for a value (tilde k) that satisfies the following Formula 30 (step S42). The optimization unit 37 also initializes index set Cm as shown in the following Formula 31 (step S43).





[Math. 39]

ỹ_{0, Km+k̃} = max { ỹ_{0, Km+k} | k = 1, . . . , K }   (Formula 30)

C_m = { k̃ } ⊆ { 1, . . . , K }   (Formula 31)


The optimization unit 37 repeatedly performs the following process while the following Formula 32 is satisfied (steps S44 to S45).





[Math. 40]

Π_{m=1}^{M} |C_m| < T   (Formula 32)


The optimization unit 37 searches for two values (tildes m and k) that satisfy the following Formula 33 (step S44), where tilde m is an element of {1, . . . , M}, and tilde k is an element of {1, . . . , K}.





[Math. 41]

k̃ ∉ C_{m̃},   ỹ_{0, K m̃ + k̃} = max { ỹ_{0, Km+k} | m = 1, . . . , M,  k = 1, . . . , K,  k ∉ C_m }   (Formula 33)


The optimization unit 37 further adds k̃ to the set C_{m̃} (step S45). In detail, this is represented by the following Formula 34.





[Math. 42]

C_{m̃} ← C_{m̃} ∪ { k̃ }   (Formula 34)


The optimization unit 37 sets D as a set of Z (step S46), where Z is given in the following form. In this case, D satisfies the following Formula 35.









[Math. 43]

Z = [ Z_1^T, . . . , Z_M^T ]^T,   Z ∈ { 0, 1 }^{KM}

Z_m = [ 0, . . . , 0, 1, 0, . . . , 0 ]^T   (k ∈ C_m)

|D| = Π_{m=1}^{M} |C_m| ≥ T   (Formula 35)


The optimization unit 37 computes the value of the objective function for every Z in D (step S47), and sorts the elements of D by the computed values (step S48).


The algorithm depicted in FIG. 9 combines SDP relaxation and exhaustive search. When the optimization unit 37 performs optimization using the algorithm depicted in FIG. 9, the range of the exhaustive search can be limited using the solution of the SDP, as sketched below.
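A rough Python sketch of this combination is shown below. It takes a relaxed solution tilde Y (for example from the earlier SDP sketch), grows the candidate sets C_m as in steps S42 to S45, and then enumerates and sorts the restricted assignments as in steps S46 to S48. All names and data are illustrative assumptions.

import itertools
import numpy as np

def sdp_guided_search(Ytil, Q, r, M, K, T):
    y0 = lambda m, k: Ytil[0, 1 + m * K + k]          # y~_{0,Km+k} (index 0 of Y~ is x_0)
    # Steps S42-S43: start each C_m with its best candidate (Formulas 30 and 31).
    C = [{int(np.argmax([y0(m, k) for k in range(K)]))} for m in range(M)]
    # Steps S44-S45: grow the candidate sets while Formula 32 still holds.
    while np.prod([len(c) for c in C]) < T and any(len(c) < K for c in C):
        m_t, k_t = max(((m, k) for m in range(M) for k in range(K) if k not in C[m]),
                       key=lambda mk: y0(*mk))
        C[m_t].add(k_t)                               # Formula 34
    # Steps S46-S48: evaluate the objective for every Z in D and sort by value.
    results = []
    for choice in itertools.product(*[sorted(c) for c in C]):
        Z = np.zeros(M * K)
        for m, k in enumerate(choice):
            Z[m * K + k] = 1.0
        results.append((float(Z @ Q @ Z + r @ Z), choice))
    results.sort(reverse=True)
    return results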


Thus, this exemplary embodiment describes the optimization system that solves an optimization problem represented as a BQP problem. In detail, the optimization unit 37 relaxes the BQP problem to an SDP problem, and derives a solution of the SDP problem. As a result, an optimal solution can be derived at very high speed, as compared with a known typical BQP solving method.


As a result of an experiment using the method in this exemplary embodiment by means of a computer, a process requiring several hours to obtain an optimal solution of BQP by a typical method was able to be accelerated to about 1 second.


Although this exemplary embodiment describes the operation of the optimization unit 37 using BQP formulated as shown in the foregoing Formula 10 as an example, BQP may be formulated as shown in the following Formula 36.









[Math. 44]

Maximize   Z^T Q Z + r^T Z

subject to   Z = [ Z_1, . . . , Z_n ]^T ∈ { 0, 1 }^n,
  Σ_{i ∈ I_s} Z_i = 1   (s = 1, . . . , S),
  a_u^T Z = b_u   (u = 1, . . . , U),
  c_v^T Z ≤ d_v   (v = 1, . . . , V)   (Formula 36)

Let A be defined by the foregoing Formula 13. In this case, the problem shown in Formula 36 is equivalent to the problem shown in the following Formula 37. Relaxing the problem shown in Formula 37 yields the following Formula 38.














[Math. 45]

Maximize   [ 1  t^T ] A [ 1 ; t ]

subject to   t = [ t_1, . . . , t_n ]^T ∈ { −1, 1 }^n,
  Σ_{i ∈ I_s} t_i = 2 − |I_s|   (s = 1, . . . , S),
  a_u^T t = 2 b_u − 1^T a_u   (u = 1, . . . , U),
  c_v^T t ≤ 2 d_v − 1^T c_v   (v = 1, . . . , V)   (Formula 37)

Maximize   A · Y

subject to   Y = ( y_ij )_{0 ≤ i, j ≤ n} ∈ Sym_{n+1},   Y ⪰ O,
  y_ii = 1   (i = 0, . . . , n),
  Σ_{i ∈ I_s} y_{0,i} = 2 − |I_s|   (s = 1, . . . , S),
  Σ_{i ∈ I_s} Σ_{j ∈ I_s} y_{i,j} = ( 2 − |I_s| )^2   (s = 1, . . . , S),
  Σ_{i=1}^{n} a_{u,i} y_{0,i} = 2 b_u − 1^T a_u   (u = 1, . . . , U),
  Σ_{i=1}^{n} Σ_{j=1}^{n} a_{u,i} a_{u,j} y_{i,j} = ( 2 b_u − 1^T a_u )^2   (u = 1, . . . , U),
  Σ_{i=1}^{n} c_{v,i} y_{0,i} ≤ 2 d_v − 1^T c_v   (v = 1, . . . , V)   (Formula 38)


The problem shown in Formula 38 can be rewritten in a standard form including equalities and inequalities as shown in the following Formula 39. Here, B4u, B5u, and B6v are defined in the following Formula 40, and B1i, B2s, and B3s, which are elements of Sym_{n+1}, are defined in the following Formula 41.









[Math. 46]

Maximize   A · Y

subject to   Y = ( y_ij )_{0 ≤ i, j ≤ n} ∈ Sym_{n+1},   Y ⪰ O,
  B1_i · Y = 1   (i = 0, . . . , n),
  B2_s · Y = 2 − |I_s|   (s = 1, . . . , S),
  B3_s · Y = ( 2 − |I_s| )^2   (s = 1, . . . , S),
  B4_u · Y = 2 b_u − 1^T a_u   (u = 1, . . . , U),
  B5_u · Y = ( 2 b_u − 1^T a_u )^2   (u = 1, . . . , U),
  B6_v · Y ≤ 2 d_v − 1^T c_v   (v = 1, . . . , V)   (Formula 39)

B4_u = (1/2) [ 0   a_u^T ; a_u   O_{n,n} ],   B5_u = [ 0   0_n^T ; 0_n   a_u a_u^T ],   B6_v = (1/2) [ 0   c_v^T ; c_v   O_{n,n} ]   (Formula 40)

B1_i[k, j] = { 1 (k = j = i) ;  0 (otherwise) },
B2_s[k, j] = { 1/2 (k = 0, j ∈ I_s) ;  1/2 (j = 0, k ∈ I_s) ;  0 (otherwise) },
B3_s[k, j] = { 1 (k, j ∈ I_s) ;  0 (otherwise) }   (Formula 41)

Meanwhile, the problem shown in the foregoing Formula 39 can be rewritten in a standard form represented by equalities as shown in the following Formula 42. Here, A′, B′1i, B′2s, B′3s, B′4u, B′5u, and B′6v are defined in the following Formula 43, and Kv, which is an element of Sym_V, is given by the following Formula 44.









[Math. 47]

Maximize   A′ · Y′

subject to   Y′ ∈ Sym_{n+V+1},   Y′ ⪰ O,   where the upper-left (n+1)×(n+1) block of Y′ is Y and the remaining diagonal entries are the slack variables a_1, . . . , a_V,
  B′1_i · Y′ = 1   (i = 0, . . . , n),
  B′2_s · Y′ = 2 − |I_s|   (s = 1, . . . , S),
  B′3_s · Y′ = ( 2 − |I_s| )^2   (s = 1, . . . , S),
  B′4_u · Y′ = 2 b_u − 1^T a_u   (u = 1, . . . , U),
  B′5_u · Y′ = ( 2 b_u − 1^T a_u )^2   (u = 1, . . . , U),
  B′6_v · Y′ = 2 d_v − 1^T c_v   (v = 1, . . . , V)   (Formula 42)

A′ = [ A   O_{n+1,V} ; O_{V,n+1}   O_{V,V} ],
B′1_i = [ B1_i   O_{n+1,V} ; O_{V,n+1}   O_{V,V} ],   B′2_s = [ B2_s   O_{n+1,V} ; O_{V,n+1}   O_{V,V} ],   B′3_s = [ B3_s   O_{n+1,V} ; O_{V,n+1}   O_{V,V} ],
B′4_u = [ B4_u   O_{n+1,V} ; O_{V,n+1}   O_{V,V} ],   B′5_u = [ B5_u   O_{n+1,V} ; O_{V,n+1}   O_{V,V} ],   B′6_v = [ B6_v   O_{n+1,V} ; O_{V,n+1}   K_v ]   (Formula 43)

K_v[i, j] = { 1 (i = j = v) ;  0 (otherwise) }   (Formula 44)


A dual problem of the problem shown in the foregoing Formula 36 is described below. This dual problem is defined in the following Formula 45.














[Math. 48]

Minimize   [objective; partly illegible in the original filing]

subject to   x_1 ∈ ℝ^{n+1},   x_2, x_3 ∈ ℝ^S, . . . ,
  [left-hand side; partly illegible in the original filing] − A ⪰ O   (Formula 45)

(Parts of this formula are marked "text missing or illegible when filed" in the published application.)

In Formula 45, f_j is given by the right-hand side of the corresponding constraint of the foregoing Formula 42, and x_j is a variable.


When a feasible solution Z of Formula 36 is given, a corresponding feasible solution of Formula 42 can be represented by the following Formula 46.









[Math. 49]

Y′ = [ 1   Z^T ; Z   Z Z^T ],  with the remaining diagonal entries  2 ( d_1 − c_1^T Z ), . . . , 2 ( d_V − c_V^T Z )   (Formula 46)


A feasible solution of the dual problem shown in Formula 45 is given by the following Formula 47.









[Math. 50]

x_{1,i} = Σ_{j=0}^{n} | A[i, j] |,   x_m = 0   (m = 2, . . . )   (Formula 47)

Thus, the optimization unit 37 can use the foregoing Formulas 46 and 47 as initial solutions when solving the problem shown in the foregoing Formula 42 and its dual.


This is summarized as follows. The optimization unit 37 relaxes a BQP problem shown in the following Formula 48 to an SDP problem shown in the following Formula 49. In detail, the optimization unit 37 relaxes a BQP problem with 1-of-K constraints (one-hot constraints), linear equality constraints, and linear inequality constraints as shown in Formula 48, to an SDP problem. The optimization unit 37 then transforms a solution derived from the problem shown in Formula 49 to a solution of the problem shown in Formula 48, thus deriving an optimal solution of the problem shown in Formula 48.









[Math. 51]

Input:   Q ∈ ℝ^{n×n},  r ∈ ℝ^n,  { I_s }_{s=1}^{S},  { ( a_u, b_u ) }_{u=1}^{U},  { ( c_v, d_v ) }_{v=1}^{V}

Maximize   Z^T Q Z + r^T Z

subject to   Z = [ Z_1, . . . , Z_n ]^T ∈ { 0, 1 }^n,
  Σ_{i ∈ I_s} Z_i = 1   (s = 1, . . . , S),
  a_u^T Z = b_u   (u = 1, . . . , U),
  c_v^T Z ≤ d_v   (v = 1, . . . , V)   (Formula 48)

Input:   A ∈ ℝ^{(n+1)×(n+1)},  { I_s }_{s=1}^{S},  { ( a_u, b_u ) }_{u=1}^{U},  { ( c_v, d_v ) }_{v=1}^{V}

Maximize   A · Y

subject to   Y = ( y_ij )_{0 ≤ i, j ≤ n} ∈ Sym_{n+1},   Y ⪰ O,
  y_ii = 1   (i = 0, . . . , n),
  Σ_{i ∈ I_s} y_{0,i} = 2 − |I_s|   (s = 1, . . . , S),
  Σ_{i ∈ I_s} Σ_{j ∈ I_s} y_{ij} = ( 2 − |I_s| )^2   (s = 1, . . . , S),
  Σ_{i=1}^{n} a_{u,i} y_{0,i} = 2 b_u − 1^T a_u   (u = 1, . . . , U),
  Σ_{i=1}^{n} Σ_{j=1}^{n} a_{u,i} a_{u,j} y_{i,j} = ( 2 b_u − 1^T a_u )^2   (u = 1, . . . , U),
  Σ_{i=1}^{n} c_{v,i} y_{0,i} ≤ 2 d_v − 1^T c_v   (v = 1, . . . , V)   (Formula 49)


In Formula 48, S is the number of 1-of-K constraints (one-hot constraints), U is the number of linear equality constraints, and V is the number of linear inequality constraints. Of the input in Formula 48, a and c are n-dimensional vectors, and b and d are scalar values. In Formula 49, vector a_u = (a_{u,1}, a_{u,2}, . . . , a_{u,n})^T and vector c_v = (c_{v,1}, c_{v,2}, . . . , c_{v,n})^T. Here, superscript T indicates transposition.
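For reference, the following Python sketch builds the relaxation of Formula 49 directly from the inputs of Formula 48, again assuming the cvxpy library. It handles the three constraint families (one-hot, linear equality, linear inequality); all values are illustrative, and a rounding step such as the one in the earlier sketches would then be applied to the returned matrix.

import numpy as np
import cvxpy as cp

def solve_formula_49(Q, r, one_hot_sets, eqs, ineqs):
    # Q: (n, n) array, r: (n,) array, one_hot_sets: list of index lists I_s,
    # eqs: list of (a_u, b_u), ineqs: list of (c_v, d_v) with a_u, c_v as (n,) arrays.
    n = len(r)
    Qs = (Q + Q.T) / 2.0
    one = np.ones(n)
    A = np.block([[np.array([[one @ Qs @ one + 2 * r @ one]]), (r + Qs @ one)[None, :]],
                  [(r + Qs @ one)[:, None], Qs]]) / 4.0           # Formula 13
    Y = cp.Variable((n + 1, n + 1), symmetric=True)
    cons = [Y >> 0, cp.diag(Y) == 1]
    for I in one_hot_sets:                                        # 1-of-K (one-hot) constraints
        cons.append(cp.sum([Y[0, 1 + i] for i in I]) == 2 - len(I))
        cons.append(cp.sum([Y[1 + i, 1 + j] for i in I for j in I]) == (2 - len(I)) ** 2)
    for a, b in eqs:                                              # linear equalities a^T Z = b
        cons.append(cp.sum([a[i] * Y[0, 1 + i] for i in range(n)]) == 2 * b - a.sum())
        cons.append(cp.sum([a[i] * a[j] * Y[1 + i, 1 + j]
                            for i in range(n) for j in range(n)]) == (2 * b - a.sum()) ** 2)
    for c, d in ineqs:                                            # linear inequalities c^T Z <= d
        cons.append(cp.sum([c[i] * Y[0, 1 + i] for i in range(n)]) <= 2 * d - c.sum())
    prob = cp.Problem(cp.Maximize(cp.trace(A @ Y)), cons)
    prob.solve()
    return prob.value, Y.value        # upper bound on Formula 48 and the relaxed matrix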


An overview of the present invention is given below. FIG. 10 is a block diagram schematically depicting an information processing system according to the present invention. The information processing system according to the present invention includes: a predictive model reception unit 81 (e.g. the predictive model input unit 31) for receiving a predictive model (e.g. the predictive model in the foregoing Formula 1) that is learned (e.g. by the learner 20) based on an explained variable (e.g. Sm) and an explanatory variable (e.g. Pm), indicates a relationship between the explained variable and the explanatory variable, and is represented by a function of the explanatory variable; and an optimization unit 82 (e.g. the optimization unit 37) for calculating, for an objective function having the received predictive model as an argument, an objective variable that optimizes the objective function, under a constraint condition.


With such a structure, appropriate optimization can be performed even in a situation where there is unobservable input data in mathematical optimization.


The predictive model may be a predictive model that includes a demand of a service or product as the explained variable and a price of the service or product as the explanatory variable. For example, the objective function is a function indicating a total sales revenue for a plurality of services or products, and the objective variable indicates a price of each of the plurality of services or products.


In detail, the predictive model reception unit 81 receives a first predictive model and a second predictive model. As an example, the first predictive model is a predictive model that includes a demand (e.g. sales amount) of a first product as the explained variable and a price of the first product and a price of a second product each as the explanatory variable, and the second predictive model is a predictive model that includes a demand (e.g. sales amount) of the second product as the explained variable and the price of the second product and the price of the first product each as the explanatory variable.
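As a small illustration of such first and second predictive models, the sketch below fits two ordinary linear regressions with scikit-learn on made-up historical data: each product's demand is explained by its own price and the other product's price. The data, library choice, and model class are assumptions for illustration and are not dictated by this application.

import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up history: each row is [price of product 1, price of product 2].
prices = np.array([[100, 80], [100, 90], [120, 80], [120, 90], [110, 85]], dtype=float)
sales_1 = np.array([50, 55, 38, 43, 46], dtype=float)   # observed demand of product 1
sales_2 = np.array([70, 58, 75, 61, 66], dtype=float)   # observed demand of product 2

model_1 = LinearRegression().fit(prices, sales_1)       # first predictive model
model_2 = LinearRegression().fit(prices, sales_2)       # second predictive model

# The optimization unit would then search over candidate prices (p1, p2) for the pair
# maximizing the total sales revenue p1 * S1(p1, p2) + p2 * S2(p1, p2).
p = np.array([[115.0, 85.0]])
print(model_1.predict(p), model_2.predict(p))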


As another example, the first predictive model is a predictive model that includes a demand of a first service as the explained variable and a price of the first service and a price of a second service each as the explanatory variable, and the second predictive model is a predictive model that includes a demand of the second service as the explained variable and the price of the second service and the price of the first service each as the explanatory variable.


The optimization unit 82 may calculate an objective variable that optimizes an objective function, under a constraint condition having the received predictive model as an argument, instead of calculating, for an objective function having the received predictive model as an argument, an objective variable that optimizes the objective function, under a constraint condition.


The information processing system may include a predictive model generation unit (e.g. the learner 20) for generating a predictive model by machine learning, based on objective variable historical data acquired in the past. The predictive model reception unit 81 may receive the predictive model generated by the predictive model generation unit. With such a structure, a large number of objective variables or more complex objective functions can be automatically generated from past data.


An example of optimizing product price is given below. With a typical optimization method, in the case of simultaneously optimizing the prices of a large volume of products (e.g. 1000 products), it is difficult to manually optimize the price of each product. Besides, such optimization requires the prediction to be performed manually and therefore kept very simple.


In this exemplary embodiment, on the other hand, the predictive model generation unit can automatically generate various models from data by machine learning, so that the generation of a large number of predictive models, the generation of complex objective variables, and the like can be automated. By automating such processes, for example when the tendency of the data changes, the machine learning model can be automatically updated and the optimization automatically performed again (automation of operation).


The present invention realizes the process of solving a mathematical programming problem and the process of generating a predictive model by using the capability of a processor (computer) to process massive data at high speed in a short time. Accordingly, the present invention is not limited to simple mathematical processing, but makes full use of a computer to acquire a prediction result and an optimization result from massive data at high speed through the use of a mathematical programming problem.



FIG. 11 is a schematic block diagram depicting the structure of a computer according to at least one exemplary embodiment. A computer 1000 includes a CPU 1001, a main storage device 1002, an auxiliary storage device 1003, and an interface 1004.


The learner 20 and the optimization device 30 are each implemented by the computer 1000. The computer 1000 implementing the learner 20 may be different from the computer 1000 implementing the optimization device 30. The operation of each processing unit described above is stored in the auxiliary storage device 1003 in the form of a program (information processing program or optimization program). The CPU 1001 reads the program from the auxiliary storage device 1003, expands the program in the main storage device 1002, and executes the above-mentioned process according to the program. The learner 20 and the optimization device 30 may each be realized by electric circuitry. The term “circuitry” here conceptually covers single device, multiple devices, chipset, and cloud.


In at least one exemplary embodiment, the auxiliary storage device 1003 is an example of a non-transitory tangible medium. Examples of the non-transitory tangible medium include a magnetic disk, magneto-optical disk, CD-ROM, DVD-ROM, and semiconductor memory connected via the interface 1004. In the case where the program is distributed to the computer 1000 through a communication line, the computer 1000 to which the program has been distributed may expand the program in the main storage device 1002 and execute the above-mentioned process.


The program may realize part of the above-mentioned functions. The program may be a differential file (differential program) that realizes the above-mentioned functions in combination with another program already stored in the auxiliary storage device 1003.


Although the present invention has been described with reference to the exemplary embodiments and examples, the present invention is not limited to the foregoing exemplary embodiments and examples. Various changes understandable by those skilled in the art can be made to the structures and details of the present invention within the scope of the present invention.


This application claims priority based on U.S. Provisional Application No. 62/235,056 filed on Sep. 30, 2015, the disclosure of which is incorporated herein in its entirety.


REFERENCE SIGNS LIST






    • 10 training data storage unit


    • 20 learner


    • 30 optimization device


    • 31 predictive model input unit


    • 32 external information input unit


    • 33 storage unit


    • 34 problem storage unit


    • 35 constraint condition input unit


    • 36 candidate point input unit


    • 37 optimization unit


    • 38 output unit


    • 39 objective function generation unit




Claims
  • 1. An information processing system comprising: a hardware including a processor;a predictive model reception unit, implemented by the processor, for receiving a predictive model that is learned based on an explained variable and an explanatory variable, indicates a relationship between the explained variable and the explanatory variable, and is represented by a function of the explanatory variable; andan optimization unit, implemented by the processor, for calculating, for an objective function having the received predictive model as an argument, an objective variable that optimizes the objective function, under a constraint condition.
  • 2. The information processing system according to claim 1, wherein the predictive model is a predictive model that includes a demand of a service or product as the explained variable and a price of the service or product as the explanatory variable, wherein the objective function is a function indicating a total sales revenue for a plurality of services or products, andwherein the objective variable indicates a price of each of the plurality of services or products.
  • 3. The information processing system according to claim 2, wherein the predictive model reception unit receives a first predictive model and a second predictive model, wherein the first predictive model is a predictive model that includes a demand of a first product as the explained variable and a price of the first product and a price of a second product each as the explanatory variable, andwherein the second predictive model is a predictive model that includes a demand of the second product as the explained variable and the price of the second product and the price of the first product each as the explanatory variable.
  • 4. The information processing system according to claim 2, wherein the predictive model reception unit receives a first predictive model and a second predictive model, wherein the first predictive model is a predictive model that includes a demand of a first service as the explained variable and a price of the first service and a price of a second service each as the explanatory variable, andwherein the second predictive model is a predictive model that includes a demand of the second service as the explained variable and the price of the second service and the price of the first service each as the explanatory variable.
  • 5. (canceled)
  • 6. An information processing method comprising: receiving a predictive model that is learned based on an explained variable and an explanatory variable, indicates a relationship between the explained variable and the explanatory variable, and is represented by a function of the explanatory variable; andcalculating, for an objective function having the received predictive model as an argument, an objective variable that optimizes the objective function, under a constraint condition.
  • 7. (canceled)
  • 8. A non-transitory computer readable information recording medium storing an information processing program, when executed by a processor, which performs a method for: receiving a predictive model that is learned based on an explained variable and an explanatory variable, indicates a relationship between the explained variable and the explanatory variable, and is represented by a function of the explanatory variable; andcalculating, for an objective function having the received predictive model as an argument, an objective variable that optimizes the objective function, under a constraint condition.
  • 9. (canceled)
PCT Information
Filing Document Filing Date Country Kind
PCT/JP2016/003686 8/9/2016 WO 00
Provisional Applications (1)
Number Date Country
62235056 Sep 2015 US