The present application is based upon and claims the benefit of priority from the prior Japanese Patent Application No. 2020-032048 filed on Feb. 27, 2020, with the Japanese Patent Office, the entire contents of which are incorporated herein by reference.
The disclosures herein relate to an optimization apparatus, an optimization method, and an optimization program.
The multiple knapsack problem is a problem in combinatorial optimization. In the multiple knapsack problem, a plurality of items each having a given weight and a given value are packed into a plurality of knapsacks each having a weight capacity limit, such that the total weight is less than or equal to the limit. The solution of the multiple knapsack problem is obtained by finding the combination of knapsacks and items that maximizes the sum of values of items allocated to the plurality of knapsacks.
In combinatorial optimization problems, an increase in the number of dimensions of search space results in an explosive increase in the number of combinations of variables. In such a case, the use of exhaustive search, which calculates all possible combinations, requires lengthy computational time that is practically infeasible. Instead of finding the true optimum solution, thus, a general-purpose approximation algorithm (i.e., metaheuristic algorithm) based on a heuristic approach may be used, or an approximation algorithm that obtains a good approximate solution within a practically feasible computational time may be used.
A metaheuristic algorithm can obtain an optimum solution or a solution sufficiently close to the optimum solution, if given a sufficiently long computational time, through state transitions starting from an initial state to search for solutions attaining successively smaller values of an objective function. However, a solution that is sufficiently close to the optimum solution is not always readily obtained within a practically feasible computational time.
A greedy algorithm is one of the approximation algorithms that can obtain a good approximate solution within a feasible computational time. In the greedy algorithm, an evaluation value obtained by dividing a value by a weight, for example, is given to each item, and items are packed into knapsacks in a descending order of evaluation values. With this arrangement, a combination attaining a relatively large sum of evaluation values, among all the combinations of knapsacks and items, may be obtained at high speed. Precision of the solution, however, is lower than in the case of metaheuristic algorithms.
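The greedy procedure described above may be sketched as follows. This is an illustrative sketch, not the claimed implementation: items are assumed to be (name, weight, value) tuples, the evaluation value is value divided by weight, and a knapsack is closed as soon as the best remaining item no longer fits.

```python
def greedy_pack(items, capacities):
    """Plain greedy packing sketch. `items` is a list of (name, weight,
    value) tuples; the evaluation value is value / weight."""
    queue = sorted(items, key=lambda it: it[2] / it[1], reverse=True)
    packs = []
    for cap in capacities:
        pack, load = [], 0
        # keep taking the best remaining item while it still fits; move to
        # the next knapsack as soon as the top item no longer fits
        while queue and load + queue[0][1] <= cap:
            name, weight, value = queue.pop(0)
            pack.append(name)
            load += weight
        packs.append(pack)
    return packs, [name for name, w, v in queue]
```

Items left in the queue after every knapsack has been visited remain unpacked, which is how the greedy algorithm can leave value on the table.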
In order to obtain a solution sufficiently close to the optimum solution within a feasible computational time, it may be conceivable to use a greedy algorithm to fix allocations for some items first, and then to apply a metaheuristic algorithm with respect to the remaining items. In this case, the greedy algorithm can sufficiently reduce the size of the combinatorial optimization problem at a preprocessing stage prior to use of the metaheuristic algorithm, which may make it possible to obtain a high-quality solution within a feasible computational time.
In so doing, a high-quality solution close to the optimum solution should still be obtained even after allocations for some items are fixed by the greedy algorithm. In consideration of this, there is a need to fix suitable pairs only, among all the pairs each comprised of a knapsack and an item allocated thereto.
[Patent Document 1] Japanese Laid-open Patent Publication No. 2019-046031
[Patent Document 2] Japanese Laid-open Patent Publication No. 2011-100303
According to an aspect of the embodiment, an information processing apparatus for allocating a plurality of items each having a first-attribute value for a first attribute and a second-attribute value for a second attribute to a plurality of places of allocation each having a maximum limit for the first attribute such that a sum of first-attribute values is less than or equal to the maximum limit, such as to make as large as possible a sum of second-attribute values of items that have been allocated to the places of allocation, includes a memory and one or more arithmetic circuits coupled to the memory and configured to perform calculating an evaluation value for each of the plurality of items based on the first-attribute value and the second-attribute value, successively allocating as many unallocated items as possible in a descending order of evaluation values to a single place of allocation that has been selected from the places of allocation in a predetermined order, such that a sum of first-attribute values is less than or equal to the maximum limit, selecting one or more items from the items allocated to the single place of allocation in accordance with a predetermined selection rule based on at least one of the first-attribute value and the second-attribute value, to create a replica having a same evaluation value, a same first-attribute value, and a same second-attribute value as a respective one of the one or more selected items, followed by adding one or more created replicas to the unallocated items, deleting replicas and the items that have served as a basis for replica creation from the places of allocation after allocation of items inclusive of replicas comes to an end by repeating item allocation and replica addition, thereby fixing allocations to the places of allocation with respect to items left without being deleted, and executing a metaheuristic algorithm to allocate, to the places of allocation, items which are among the plurality of items and for which allocation to the places of allocation has not been fixed.
The object and advantages of the embodiment will be realized and attained by means of the elements and combinations particularly pointed out in the claims. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention, as claimed.
In the following, embodiments of the invention will be described with reference to the accompanying drawings.
In the multiple knapsack problem illustrated in
In the greedy algorithm, an evaluation value obtained by dividing a value by a weight, for example, is given to each item, and the items are packed into the knapsacks in a descending order of evaluation values. In so doing, as many items as possible may be packed in a descending order of evaluation values into a single knapsack that has been selected from the plurality of knapsacks #1 through #3 in a predetermined order (e.g., in an ascending order of sequence numbers), for example. Upon this knapsack becoming full, a next knapsack may be selected according to the predetermined order, followed by packing items in the same manner.
The arrangement that packs items in a descending order of evaluation values each obtained by dividing a value by a weight enables items having higher values per unit weight to be packed preferentially. With this arrangement, more preferable items having higher cost performance with respect to the weight limit can be packed preferentially, so that a relatively good solution may be obtained.
The first knapsack #1 to be packed can contain items up to a weight of 13 (kg). When items are selected in a descending order of evaluation values, the first four items #9, #3, #1, and #4 have a total weight of 12 (kg), so that these four items are packed into the knapsack #1. Similarly, the second knapsack #2 to be packed can contain items up to a weight of 15 (kg), and is thus packed with the fifth and sixth items in the descending order of evaluation values, i.e., the item #2 and the item #5 (having a total weight of 10 kg). The third knapsack #3 to be packed can contain items up to a weight of 11 (kg), and is thus packed with the seventh item in the descending order of evaluation values, i.e., the item #8 (having a weight of 10 kg). The eighth and ninth items in the descending order of evaluation values, i.e., the item #7 and the item #6, cannot be packed into the knapsacks according to the above-noted greedy algorithm.
As is understood from the example described above, packing items by use of a greedy algorithm results in a solution having lower quality than the optimum solution. In consideration of this, the optimization apparatus and the optimization method as will be described in the following use a greedy algorithm to pack some items, and then use a metaheuristic algorithm to determine allocations for the remaining items. Use of a metaheuristic algorithm after reducing the problem size with a greedy algorithm allows a high-quality solution to be obtained with a feasible computational time.
However, when the greedy algorithm fixes allocations for items, which of the items are fixedly allocated will affect the quality of a final solution obtained by the metaheuristic algorithm. In the example illustrated in
In consideration of the above, it is preferable to refrain from fixing allocations with respect to the items that are selected from the items for which the greedy algorithm has determined allocations as illustrated in
The optimization method first calculates an evaluation value based on a weight value and a worth value for each item. The item list 10 illustrated in
The optimization method successively allocates as many unallocated items as possible in a descending order of evaluation values to a single place of allocation that has been selected from the plurality of places of allocation (e.g., knapsacks #1 through #3) in a predetermined order, such that the sum of weights is less than or equal to the maximum limit.
Subsequently, one or more items are selected from the items allocated to the single selected knapsack #1 in accordance with a selection rule based on at least one of a weight value and a worth value. Then, a replica having the same evaluation value, the same weight value, and the same worth value as a corresponding one of the one or more selected items is created, followed by adding the one or more created replicas to the unallocated items in the item list 10.
After this, the allocation step and the replica adding step described above are repeated as many times as needed, until the process of allocating items inclusive of replicas to all the places of allocation comes to an end.
The knapsack #3 is also packed with as many unallocated items as possible that are successively fed in the descending order of evaluation values, followed by selecting the item #7 that has the smallest evaluation value among these allocated items. A replica 13 is created with respect to the selected item #7. This created replica 13 of the item #7 is then added to the item list 10.
In the optimization method, the replicas and the items that have served as a basis for the replica creation are deleted from the places of allocation (i.e., the knapsacks #1 through #3) after the allocation step comes to an end. Then, allocations to the places of allocation (i.e., the knapsacks #1 through #3) are fixed with respect to the items left without being deleted.
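The allocation, replication, and deletion steps described above may be sketched as follows. The representation is an assumption for illustration: items are (name, weight, value) tuples, the evaluation value is value divided by weight, one replica per knapsack is created for the lowest-evaluation item, and a replica is marked by a trailing apostrophe on its name.

```python
def greedy_fix(items, capacities, n_replicas=1):
    """Replica-based greedy pre-allocation sketch. `items` is a list of
    (name, weight, value) tuples; the evaluation value is value / weight,
    and a replica is marked by a trailing apostrophe on the name."""
    ratio = lambda it: it[2] / it[1]
    queue = sorted(items, key=ratio, reverse=True)
    packs, replicated = [], set()
    for cap in capacities:
        pack, load = [], 0
        while queue and load + queue[0][1] <= cap:
            item = queue.pop(0)
            pack.append(item)
            load += item[1]
        # the lowest-evaluation item(s) of a filled knapsack have a low
        # degree of certainty: put replicas back into the unallocated pool
        for name, w, v in sorted(pack, key=ratio)[:n_replicas]:
            replicated.add(name)
            queue.append((name + "'", w, v))
        queue.sort(key=ratio, reverse=True)
        packs.append(pack)
    # delete replicas and their originals; the survivors are fixed
    fixed = [[name for name, w, v in pack
              if name not in replicated and not name.endswith("'")]
             for pack in packs]
    fixed_names = {name for pack in fixed for name in pack}
    unfixed = sorted({name for name, w, v in items} - fixed_names)
    return fixed, unfixed
```

Only the items surviving the deletion step are fixed; every replicated item and every never-allocated item is handed to the metaheuristic stage.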
As illustrated in
The optimization method then uses a metaheuristic algorithm to allocate, to the places of allocation (i.e., the knapsacks #1 through #3), the items for which allocation to the places of allocation has not been fixed, i.e., the items in the item list 10 which are not marked as having been allocated. More specifically, the item #4, the item #5, the item #6, the item #7, and the item #8 are allocated by a metaheuristic algorithm to the available space in the knapsacks #1 through #3. Namely, the item #4, the item #5, the item #6, the item #7, and the item #8 are allocated by a metaheuristic algorithm to the three knapsacks, i.e., the knapsacks #1 through #3 that are regarded as having the maximum weight capacity limits of 5 kg, 10 kg, and 11 kg, respectively.
Examples of metaheuristic algorithms include a random-walk search algorithm, a simulated annealing algorithm, a genetic algorithm, a stochastic evolutionary algorithm, and the like. These approximation algorithms are designed such that a probabilistic element is introduced into state transitions that are performed from an initial state, i.e., a start point, to search for solutions attaining successively improved values of an objective function, thereby allowing the state to converge on as satisfactory a solution as possible without being stuck in an unfavorable local minimum. In the case of a genetic algorithm, for example, the selection of pairs, crossover, selection, mutation, and the like are controlled in a probabilistic manner during the process in which the fitness of the population serving as an objective function value increases in successive generations, thereby avoiding getting stuck at an unfavorable local solution. In the case of a simulated annealing algorithm, for example, state transitions are controlled in a probabilistic manner so as to allow a given state transition to occur with a certain probability even when the value of an objective function worsens as a result of such a state transition, thereby avoiding getting stuck at an unfavorable local solution.
Examples of a mechanism for performing simulated annealing include an Ising machine (i.e., Boltzmann machine) using an Ising energy function. In an Ising machine, the problem to be solved is translated into an Ising model, which represents the behavior of spins of a magnetic material, and, then, a solution to the problem is calculated.
A knapsack problem may be formulated as an Ising problem as follows. The number of items is denoted as N, the number of knapsacks denoted as K, the worth of an item #i denoted as ci, the weight of the item #i denoted as wi, and the maximum weight capacity limit of a knapsack #j denoted as Wj. Further, a variable xij indicates whether the item #i is contained in the knapsack #j. The variable xij being 1 indicates that the item #i is contained in the knapsack #j, and the variable xij being 0 indicates that the item #i is not contained in the knapsack #j.
It may be noted that the knapsack problem is formulated herein by using a QUBO (quadratic unconstrained binary optimization) form in which variables assume either +1 or 0, rather than using an Ising model in which variables assume either +1 or −1.
The objective function may be defined by an expression (1) as follows.
Further, an expression (2) and an expression (3) as follows may be used as constraints.
The constraint expression (2) indicates that the total weight of items packed into each knapsack is less than or equal to the maximum weight capacity limit of the knapsack. The constraint expression (3) indicates that no item is selected two or more times.
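The expressions (1) through (3) do not survive in this text. Based on the definitions above, a plausible reconstruction of their standard form is the following (to be checked against the original document):

```latex
% Objective (1): maximize total worth, i.e., minimize its negation
E = -\sum_{j=1}^{K}\sum_{i=1}^{N} c_i\, x_{ij} \tag{1}

% Constraint (2): each knapsack's load within its capacity
\sum_{i=1}^{N} w_i\, x_{ij} \le W_j \quad (j = 1,\dots,K) \tag{2}

% Constraint (3): each item packed at most once
\sum_{j=1}^{K} x_{ij} \le 1 \quad (i = 1,\dots,N) \tag{3}
```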
In order for a simulated annealing algorithm to search for a solution, xij are subjected to probabilistic transitions to find xij that minimize the objective function defined by the expression (1) under the condition in which the constraint conditions (2) and (3) are satisfied. It may be noted that the constraint expressions (2) and (3) may be incorporated into the objective function. In doing so, auxiliary variables may be introduced in order to allow the act of minimizing the objective function to produce a solution that satisfies the constraint expressions. Specifically, the condition requiring that a value Z (e.g., Σwixij) is less than or equal to K may be rewritten into the condition requiring that an expression (4) having an auxiliary variable yk shown below is minimized.
(1 − Σyk)^2 + (Σk·yk − Z)^2 (4)
The symbol “Σ” means obtaining the sum from k=1 to k=K. The first term in the expression (4) requires that only one of y1 through yK is set to 1. The second term requires that a value of Z is set equal to the value of the subscript of the auxiliary variable that is one of y1 through yK and that is set to 1. The value of the expression (4) is able to become zero when the value of Z is equal to one of the natural numbers from 1 to K. The value of the expression (4) is not able to become zero when the value of Z is greater than K. The optimization process that minimizes the original objective function while satisfying the constraint condition requiring Z≤K can be formulated as the process of minimizing a new objective function obtained by adding the expression (4) to the original objective function. Specifically, two expressions may be obtained by using each of the constraint expression (2) and the constraint expression (3) as the above-explained value of Z, and may be added to the expression (1). This arrangement allows the constraint conditions as defined by the constraint expressions (2) and (3) to be incorporated into the objective function that needs to be minimized.
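The stated behavior of expression (4) can be verified by brute force over all binary assignments of y1 through yK; the helper name `penalty` below is illustrative.

```python
from itertools import product

def penalty(Z, K):
    """Minimum of (1 - sum(y_k))^2 + (sum(k * y_k) - Z)^2
    over all binary assignments of y_1 .. y_K."""
    values = []
    for y in product((0, 1), repeat=K):
        one_hot = (1 - sum(y)) ** 2
        match = (sum(k * yk for k, yk in enumerate(y, start=1)) - Z) ** 2
        values.append(one_hot + match)
    return min(values)
```

For any Z from 1 to K the minimum is zero (set yZ = 1 and all other auxiliary variables to 0), while for Z outside that range the minimum is strictly positive, which is what makes expression (4) usable as a penalty term.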
In a simulated annealing algorithm, a state S may be defined as follows.
S=(x11, x12, . . . , x1N, x21, x22, . . . , x2N, . . . , xK1, xK2, . . . , xKN)
An objective function value E of the current state S is calculated, and, then, an objective function value E′ of the next state S′ obtained by making a slight change (e.g., 1 bit inversion) from the current state S is calculated, followed by calculating a difference ΔE (=E′−E) between these two states. In the case in which the Boltzmann distribution is used to represent the probability distribution of S and the Metropolis method is used, for example, probability P with which a transition to the next state S′ occurs may be defined by the following formula.
P=min[1, exp(−βΔE)] (5)
Here, β is thermodynamic beta (i.e., the reciprocal of absolute temperature). The function min[1, x] assumes a value of 1 or a value of x, whichever is smaller. According to the above formula, a transition to the next state occurs with probability “1” in the case of ΔE≤0, and a transition to the next state occurs with probability exp(−βΔE) in the case of 0<ΔE.
Lowering temperature at a sufficiently slow rate, while performing state transitions, allows the state to be converged, theoretically, on an optimum solution having the smallest objective function value. The Metropolis method is a non-limiting example, and other transition control algorithms such as Gibbs sampling may alternatively be used.
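A minimal Metropolis-style annealing loop following formula (5) might look like the following. The cooling schedule, the parameter values, and the function names are illustrative assumptions, not the implementation claimed here.

```python
import math
import random

def metropolis_anneal(energy, neighbor, state, beta0=0.1, beta_max=10.0,
                      rate=1.01, steps=10000, seed=0):
    """Metropolis acceptance per formula (5): a move is accepted with
    probability min(1, exp(-beta * dE)) while beta, the inverse
    temperature, is slowly increased (i.e., temperature is lowered)."""
    rng = random.Random(seed)
    E = energy(state)
    best, best_E = state, E
    beta = beta0
    for _ in range(steps):
        candidate = neighbor(state, rng)
        dE = energy(candidate) - E
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            state, E = candidate, E + dE
            if E < best_E:
                best, best_E = state, E
        beta = min(beta * rate, beta_max)  # geometric cooling schedule
    return best, best_E
```

With a toy objective such as (x − 3)² and a ±1 neighbor move, the loop accepts uphill moves early on and becomes nearly greedy as β approaches its cap.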
In the optimization method of the present disclosures, the state of allocation in a place of allocation (e.g., knapsack) is fixed with respect to an item for which allocation has been fixed by the greedy algorithm as described in connection with
As described above, the optimization method according to the embodiment creates replicas of some items and allocates items inclusive of the replicas to the places of allocation during the execution of a greedy algorithm. The items for which replicas are created are one or more items that are selected from the items allocated to a single place of allocation for which allocation has been completed, and that are selected in accordance with a selection rule based on at least one of a weight value and a worth value. In the example described above, one item having the lowest evaluation value is selected. Alternatively, two or more items having the lowest evaluation values may be selected.
In the state illustrated in
Further, a replica is created for the item having a low degree of certainty, followed by treating both the original item and the replica as items to be packed into the knapsacks by the greedy algorithm. Both the original item and the replica are then removed from the knapsacks after the greedy algorithm has completed allocations. This arrangement allows a sufficient space (i.e., available weight capacity) usable for subsequent allocation of items with low degrees of certainty to be saved for a metaheuristic algorithm. Namely, the degree of freedom in selecting places of allocation during the execution of a metaheuristic algorithm is increased, thereby increasing the probability of achieving a solution close to the optimum solution.
The description of the embodiment has been directed to the case in which one or more replicas are created. Alternatively, a greedy algorithm may allocate items such as to secure a space (i.e., weight capacity) for allocating an item of a low degree of certainty in both the knapsack of an original allocation and the next knapsack. For example, in the state illustrated in
In the above-noted example, the item having the lowest evaluation value is selected as an item having a low degree of certainty. Alternatively, an item having a low degree of certainty may be selected based on other selection rules, depending on the circumstances. For example, a knapsack for which item allocation has been completed may contain one or more items having the lowest evaluation value, and also contain other items having evaluation values which are not much different from the lowest evaluation value. In such a case, the evaluation value may not be used as the selection criterion, and, instead, one or more items having the lightest weight may be selected as items having a low degree of certainty. This is because items having a light weight provide flexibility (i.e., greater freedom) in packing items into knapsacks, compared with items having a heavy weight. In some cases, it is believed to be preferable for such flexible items to be kept in an unfixed state, rather than to be fixedly allocated. Selecting one or more items having the lightest weight as items having a low degree of certainty can increase the probability that a solution obtained by the metaheuristic algorithm is closer to the optimum solution than otherwise.
Further, a weight threshold may be set for each knapsack when selecting one or more items. One or more items packed into a knapsack in excess of the threshold of the knapsack may then be selected as items having a low degree of certainty. This arrangement makes it possible to select items corresponding in number to the weight threshold of each knapsack, rather than selecting a predetermined specific number of items. For example, the knapsack #1 may have a threshold of 9 kg. In the state in which items are allocated as in
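The threshold-based selection rule can be sketched as follows; representing a pack as a list of (name, weight) pairs in allocation order is an assumption for illustration.

```python
def select_over_threshold(pack, threshold):
    """Mark as low-certainty every item packed once the knapsack's
    cumulative weight exceeds the given threshold.
    `pack` is a list of (name, weight) pairs in allocation order."""
    load, doubtful = 0, []
    for name, weight in pack:
        load += weight
        if load > threshold:
            doubtful.append(name)
    return doubtful
```

Lowering the threshold marks more of the later-packed items as uncertain, so the number of selected items adapts to each knapsack rather than being a fixed count.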
When selecting a predetermined number X of items as items having a low degree of certainty, the number X may be determined or learned based on past data or test data such as to facilitate the finding of an optimal solution. Alternatively, X may be successively changed, followed by executing a metaheuristic algorithm to obtain solutions with respect to a plurality of cases in which respective, different numbers X are used. The best solution among the obtained solutions may then be presented to the user. These arrangements allow a solution to be obtained that is closer to the optimum solution than otherwise.
The input unit 23 provides a user interface, and receives various commands for operating the optimization apparatus and user responses responding to data requests or the like. The display unit 22 displays the results of processing by the optimization apparatus, and further displays various data that make it possible for a user to communicate with the optimization apparatus. The network interface 27 is used to communicate with peripheral devices and with remote locations.
The optimization apparatus illustrated in
Upon receiving a user instruction for program execution from the input unit 23, the CPU 21 loads the program to the RAM 25 from the memory medium M, the peripheral apparatus, the remote memory medium, or the HDD 26. The CPU 21 executes the program loaded to the RAM 25 by use of an available memory space of the RAM 25 as a work area, and continues processing while communicating with the user as such a need arises. The ROM 24 stores control programs for the purpose of controlling basic operations of the CPU 21 or the like.
By executing the computer program as described above, the optimization apparatus performs the greedy-algorithm-based allocation process. The metaheuristic calculation unit 29 may be dedicated hardware specifically designed to execute a metaheuristic algorithm, and may be dedicated hardware that performs simulated annealing to search for a solution of an Ising problem. In an alternative configuration, the metaheuristic calculation unit 29 may not be provided. In such a case, the CPU 21, which is the processor of the general-purpose computer, functions as a metaheuristic calculation unit to perform a metaheuristic algorithm.
It may be noted that boundaries between functional blocks illustrated as boxes indicate functional boundaries, and may not necessarily correspond to boundaries between program modules or separation in terms of control logic. One functional block and another functional block may be combined into one functional block that functions as one block. One functional block may be divided into a plurality of functional blocks that operate in coordination.
The data acquisition unit 31 stores, in the item database 30A and the knapsack database 30B, item data and knapsack data that are supplied from an external source to define a multiple knapsack problem. The evaluation value calculating unit 32 calculates an evaluation value for each item based on a first-attribute value (e.g., weight value) and a second-attribute value (e.g., worth value).
The allocation unit 33 successively allocates as many unallocated items as possible in a descending order of evaluation values to a single place of allocation that has been selected from the plurality of places of allocation (e.g., knapsacks) in a predetermined order, such that the sum of first-attribute values is less than or equal to the maximum limit. The replica selecting unit 34 may select one or more items from the items allocated to the single selected place of allocation (e.g., knapsack) in accordance with a predetermined selection rule based on at least one of a first-attribute value (e.g., weight value) and a second-attribute value (e.g., worth value). The replica creating unit 35 creates a replica having the same evaluation value, the same first-attribute value (e.g., weight value), and the same second-attribute value (e.g., worth value) as a respective one of the one or more selected items, followed by adding the one or more created replicas to the unallocated items in the item database 30A (e.g., the item list 10 previously described).
The allocation finalizing unit 36 deletes the replicas and the items that have served as a basis for replica creation from the places of allocation (e.g., knapsacks) after the allocation of items inclusive of replicas comes to an end, thereby fixing allocations to the places of allocation with respect to the items left without being deleted.
The metaheuristic calculation unit 37 uses a metaheuristic algorithm to allocate, to the places of allocation (e.g., knapsacks), the items which are among the plurality of items defined in the problem and for which allocation to the places of allocation has not been fixed. In so doing, the items to be allocated do not include replicas. It is not always the case that all of the plurality of items can be allocated to the places of allocation (knapsacks). The data output unit 38 outputs a solution (i.e., data indicative of a finally obtained combination of items and the places of allocation) obtained by the metaheuristic calculation unit 37. The output data may be supplied to a display screen via the display unit 22, to the HDD 26, to the memory medium M via the removable-memory-medium drive 28, or to an external device via the network interface 27.
In step S1, the input unit 23 receives input data. The input data are information regarding items and information regarding knapsacks.
In step S2, the CPU 21 makes a list from the information regarding the plurality of knapsacks to store the list in a stack “knapsackList”, and calculates an evaluation value for each of the items to store in an item list “itemList” the items which are arranged in the descending order of evaluation values. In the stack “knapsackList”, the knapsacks are arranged in a predetermined order (e.g., in the ascending order of sequence numbers).
In step S3, the CPU 21 checks whether the stack “knapsackList” is empty. If the stack is not empty, the procedure proceeds to step S4.
In step S4, the CPU 21 removes the top knapsack from the stack “knapsackList”, and assigns the removed knapsack as the place-of-allocation “knapsack”. In step S5, the CPU 21 checks whether the item list “itemList” is empty. If the item list is not empty, the procedure proceeds to step S6.
In step S6, the CPU 21 assigns the top item in the item list “itemList” as the allocation item “item”. Namely, the top item in the list in which a plurality of items are arranged in the descending order of evaluation values is assigned as the allocation item “item”.
In step S7, the CPU 21 checks whether allocating the allocation item “item” to the place-of-allocation “knapsack” results in the weight limit (i.e., the maximum weight capacity limit) being violated. If violation does not occur, the procedure proceeds to step S8.
In step S8, the CPU 21 assigns (i.e., allocates) the allocation item “item” to the place-of-allocation “knapsack”. Namely, the item identified by the allocation item “item” is packed into the knapsack identified by the place-of-allocation “knapsack”.
In step S9, the CPU 21 removes (i.e., deletes), from the item list “itemList”, the item that has been allocated in step S8. Thereafter, the procedure returns to step S5, from which the subsequent steps are repeated.
If the check in step S7 finds that the weight limit is violated, the CPU 21 in step S10 creates a replica of one or more items having a low degree of certainty among the items having been allocated to the place-of-allocation “knapsack”, followed by adding the one or more created replicas to the item list “itemList”. In step S11, the CPU 21 calculates evaluation values with respect to the items (including replicas) in the item list “itemList” as needed, followed by arranging the items including the replicas in the descending order of evaluation values in the item list “itemList”. Thereafter, the procedure returns to step S3, from which the subsequent steps are repeated.
If the check in step S3 finds that the stack “knapsackList” is empty, the CPU 21 in step S12 removes all the replicas from the knapsacks for which allocation has been completed, and returns all the items serving as a basis for the replicas to the item list “itemList”. The items having remained in the knapsacks are the items which are fixedly allocated (i.e., fixed).
In step S13, the metaheuristic calculation unit 29 (i.e., Ising machine) performs simulated annealing with respect to the items in the item list “itemList”. Alternatively, the CPU 21 may perform simulated annealing with respect to the items in the item list “itemList”.
In step S14, the CPU 21 presents the solution obtained by the simulated annealing to the user via a specified medium (e.g., a display screen or a memory medium). With this, the execution of the optimization method comes to an end.
In step S21, the input unit 23 receives input data. The input data are information regarding items and information regarding knapsacks.
In step S22, the CPU 21 sets a threshold value “threshold” to its initial value “0”. Subsequent steps S23 through S30 are identical to steps S2 through S9 illustrated in
In step S31, the CPU 21 creates a replica with respect to a number “threshold” of items which are last allocated among the items allocated to the place-of-allocation “knapsack” (i.e., a number “threshold” of items having the lowest evaluation values), followed by adding the created replicas to the item list “itemList”. Steps S32 through S34 are identical to steps S11 through S13 illustrated in
After simulated annealing is performed in step S34, the CPU 21 in step S35 increases the threshold value “threshold” by 1. In step S36, the CPU 21 checks whether the threshold value “threshold” is greater than a predetermined number N that has been set in advance. In the case of the threshold value “threshold” being no greater than the predetermined number N, the procedure goes back to step S23 to repeat the execution of the subsequent steps.
If the check in step S36 finds that the threshold value “threshold” is greater than the predetermined number N, the CPU 21 presents the best solution among the solutions obtained by the simulated annealing to the user via a specified medium (e.g., a display screen or a memory medium). With this, the execution of the optimization method comes to an end.
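The overall loop of steps S22 through S36 can be sketched as below. The function names `greedy_allocate`, `make_replicas`, and `anneal`, as well as the way a solution's objective value is returned, are hypothetical placeholders for the processing of steps S23 through S34 described above, injected as callables so the sketch stays self-contained.

```python
def optimize_with_growing_threshold(items, knapsacks, greedy_allocate,
                                    make_replicas, anneal, n_max):
    """Sketch of the second embodiment: grow the threshold value from 0
    up to the predetermined number N, and keep the best solution found
    by the simulated annealing runs."""
    best_solution, best_value = None, float("-inf")
    threshold = 0                          # step S22: initial value 0
    while threshold <= n_max:              # step S36: stop when threshold > N
        allocation = greedy_allocate(items, knapsacks)      # steps S23-S30
        free_items = make_replicas(allocation, threshold)   # step S31
        solution, value = anneal(free_items, allocation)    # steps S32-S34
        if value > best_value:             # remember the best solution so far
            best_solution, best_value = solution, value
        threshold += 1                     # step S35
    return best_solution                   # presented to the user
```

A larger threshold releases more of the last-allocated (lowest-evaluation) items back to the metaheuristic search, so the loop trades fixed greedy allocations for search freedom one item at a time.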
In the second embodiment of the optimization method described above, the predetermined number N is set in advance, and an allocation process by the greedy algorithm and a solution search by the metaheuristic algorithm are performed when the threshold value “threshold” is no greater than N. Upon the threshold value “threshold” becoming greater than N, the procedure comes to an end to present the best solution. Instead of utilizing such a predetermined number N, a different check criterion may be utilized to put an end to an allocation process by the greedy algorithm and a solution search by the metaheuristic algorithm. For example, the check in step S36 may check whether the number of knapsacks that contain one or more fixed items upon the completion of an allocation process by the greedy algorithm is less than or equal to one. Upon finding that the number is less than or equal to one, the procedure may come to an end to present the best solution. In this arrangement, a solution search by the metaheuristic algorithm continues to be performed until the number of fixed items allocated by the greedy algorithm becomes close to the minimum possible number. The probability of obtaining the optimum solution is thus increased. Further, without setting the predetermined number N in advance, the condition under which the procedure comes to an end is automatically set in accordance with the size and aspect of the problem. As a result, a solution close to the optimum solution can be obtained in an adaptive manner in accordance with the size and aspect of the problem.
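The alternative termination check described above may be sketched as follows. The data layout, in which each knapsack is a list of items carrying a `fixed` flag, is a hypothetical one chosen only for illustration.

```python
def should_terminate(knapsacks):
    """Alternative check for step S36: count the knapsacks that contain
    one or more fixed items after the greedy allocation, and terminate
    once that count is less than or equal to one."""
    count = sum(1 for knapsack in knapsacks
                if any(item["fixed"] for item in knapsack))
    return count <= 1
```

With this check, the loop runs until nearly all items have been released to the metaheuristic search, so no number N needs to be chosen for the problem at hand.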
In this example, a plurality of objects (i.e., tasks) are defined, each of which has a first-attribute value (i.e., the time needed for the task) for a first attribute (i.e., the amount of the task) and a second-attribute value (i.e., payment for the task) for a second attribute (i.e., the worth of the task). In the example illustrated in
Further, the tasks are allocated to the places of allocation (i.e., workers) each having the maximum limit for the first attribute (i.e., maximum work time limit) such that the sum of the first-attribute values (i.e., the sum of the time needed for the tasks) is less than or equal to the maximum limit (i.e., maximum work time limit). In the example illustrated in
In this problem, it is required to find a combination of workers and tasks that makes as large as possible the sum of second-attribute values (i.e., payments for tasks) associated with the objects (i.e., tasks) that have been allocated to the places of allocation (i.e., the workers). In finding the solution of the problem, a greedy algorithm may employ an evaluation value obtained by dividing the payment illustrated in the task list 40 by the time needed for the task.
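As a concrete illustration of this evaluation value, the greedy ordering for the worker-and-task setting could be computed as follows. The task data below are hypothetical examples, not the values from the task list 40.

```python
# Hypothetical tasks: (name, time needed in hours, payment).
tasks = [("task A", 2.0, 30.0),
         ("task B", 1.0, 20.0),
         ("task C", 4.0, 44.0)]

# Evaluation value = payment divided by the time needed for the task,
# as described above; the greedy algorithm then allocates tasks to
# workers in descending order of this value.
ranked = sorted(tasks, key=lambda t: t[2] / t[1], reverse=True)
# Evaluation values: task B = 20.0, task A = 15.0, task C = 11.0,
# so the allocation order is task B, task A, task C.
```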
As described above, a combinatorial optimization problem equivalent to a multiple knapsack problem exists under different problem settings than the problem settings comprised of items and knapsacks. The optimization apparatus and the optimization method of the present disclosures are applicable to such a combinatorial optimization problem that is equivalent to a multiple knapsack problem.
Further, although the present invention has been described with reference to the embodiments, the present invention is not limited to these embodiments, and various variations and modifications may be made without departing from the scope as defined in the claims.
According to at least one embodiment, items suitable for fixed allocations can be selected when using a greedy algorithm to fix allocations for items prior to use of a metaheuristic algorithm in a multiple knapsack problem.
All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the invention and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although the embodiment(s) of the present inventions have been described in detail, it should be understood that various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.