1. Field of Invention
The present invention relates to a method of optimizing solutions to complex problems, and more specifically to a method employing an innovative selection mechanism in genetic algorithms to solve constrained optimization problems, and to a system employing such optimization.
2. Related Art
A genetic algorithm (GA) is a search heuristic that mimics the process of natural selection to evolve a suitable solution. Engineers use nonlinear programming to develop optimization software employing GAs to help solve problems such as industrial optimization problems. The optimization software assists decision-makers in business and industry in improving organizational efficiency. Many decision-makers prefer to utilize GAs in their optimizations because these methods have inherent benefits over conventional search techniques employing gradient-based methods. Linear programming solvers are rarely preferred because they often require abstracting and simplifying the problem's assumptions.
One of the major advantages of GAs over conventional search algorithms is that they operate on a population of solutions rather than on a single point. This makes GAs more robust and accurate. GAs are less likely to be trapped by a local optimum, unlike Newton and gradient descent methods, and they require no derivative information about the fitness criterion. Additionally, GAs have been shown to be less sensitive to the presence of noise and uncertainty in measurements.
GAs follow a general methodology to find the optimal solution. First, a population of potential candidate solutions is generated. The system then evaluates the fitness of each candidate with respect to the problem. The candidates having greater fitness are selected and combined to form a new population. When candidates are combined, they may also be modified or mutated in some way; various methods of modification and mutation exist. The new population is then used to repeat the same process until the software terminates. Termination can occur after a predefined number of iterations or when a certain fitness level has been reached.
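For illustration only, this general loop can be sketched in Python as follows; the helper names (random_candidate, fitness, crossover, mutate) and all parameter values are hypothetical placeholders, not part of the claimed method, and the sketch assumes higher fitness values are better:

```python
# A minimal, non-authoritative sketch of the general GA loop described above.
import random

def run_ga(fitness, random_candidate, crossover, mutate,
           pop_size=50, generations=100, target_fitness=None):
    population = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        # Rank candidates by fitness (higher is assumed better) and keep the best half.
        ranked = sorted(population, key=fitness, reverse=True)
        if target_fitness is not None and fitness(ranked[0]) >= target_fitness:
            break  # terminate once the desired fitness level is reached
        parents = ranked[:pop_size // 2]
        # Combine selected candidates (crossover) and occasionally mutate the offspring.
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = crossover(a, b)
            if random.random() < 0.1:
                child = mutate(child)
            children.append(child)
        population = children
    return max(population, key=fitness)
```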
The algorithm in the system must typically contain a genetic representation of the solution domain and a fitness function to evaluate the solution domain. The fitness function is defined over the genetic representation and measures the quality of the represented solution. The fitness function is very difficult to define in many situations, and the difficulty is increased because the fitness function is problem dependent. In some cases, it is extremely difficult or impossible to guess what the fitness function may be. The inability to easily and accurately define the fitness function has led GAs to lose some effectiveness and has misled the evolutionary search. The ineffectiveness and deception can be due to such factors as the presence of many candidates in a given population lying outside of the search space. In order to increase the benefit of utilizing GAs, it is preferable for the system to be as effective as possible by curtailing the population to the candidates that are most suitable for assisting in finding the solution.
Evolutionary computation has shown success in managing constrained optimization problems. Evolutionary computation utilizes various methods to reject infeasible solutions. Genetic algorithms (GAs) are able to handle infeasible solutions by employing a penalty function. Prior researchers found it difficult to adopt a strategy for selecting which of the numerous penalty functions should apply to a given problem. There are at least five commonly accepted penalty functions for handling constraints: the Homaifar, Lai, and Qi method; the Joines and Houck method; the Schoenauer and Xanthakis method; the Michalewicz and Attia method; and the Powell and Skolnick method. Penalty functions have been arranged into three categories by those skilled in the art. The first category contains barrier penalty functions, in which no infeasible solution is considered. The next category contains partial penalty functions, in which a penalty is applied near the feasibility boundary. The last category contains global penalty functions, which apply penalties throughout the infeasible region.
An example of a fitness function employing a penalty function is given below:

eval(x) = f(x) + penalty(x)

where f(x) is the objective function and penalty(x) is the penalty term. Given the above example function, if the solution occurs within the feasible region, penalty(x) is zero and the fitness reduces to the objective value; otherwise, a positive penalty proportional to the constraint violation is added.
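A minimal sketch of such a penalized evaluation is given below, assuming a minimization problem with inequality constraints of the form g_i(x) ≤ 0; the quadratic "global" penalty and the helper names are illustrative choices rather than the patent's own:

```python
# Sketch of a penalized objective: eval(x) = f(x) when x is feasible,
# otherwise f(x) plus a penalty proportional to the total constraint violation.
def penalized_objective(f, inequality_constraints, x, r=1e3):
    # inequality_constraints: callables g_i with g_i(x) <= 0 when feasible
    violation = sum(max(0.0, g(x)) ** 2 for g in inequality_constraints)
    return f(x) + r * violation  # penalty is zero inside the feasible region
```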
In one aspect, a computational device implemented method of solving constrained optimization problems includes generating an initial population composed of individuals. The fitness value of each of the individuals is determined based on a fitness function. Each value is then evaluated against a convergence criterion. If none of the criteria is met, a plurality of individuals are selected and a crossover operator is applied to them. It is then determined whether the resulting individuals are in the feasible search domain. All feasible individuals are then mutated.
In another aspect, a computational device implemented method of solving constrained optimization problems includes running a genetic algorithm. While employing the genetic algorithm, it is determined if an offspring is in a feasible search space.
In yet another aspect, a computational device implemented method of solving constrained optimization problems includes performing a genetic algorithm. While performing the algorithm, it is determined whether an offspring is not in a feasible search space. An HSQPC or an NFC mechanism is applied. The application of the mechanism may be to an offspring that was found to be outside of a feasible search space.
The features of the invention believed to be novel are set forth with particularity in the appended claims. The invention itself, however, may be best understood by reference to the following detailed description of the invention, which describes an exemplary embodiment of the invention, taken in conjunction with the accompanying drawings, in which:
In cooperation with the attached drawings, the technical contents and detailed description of the present invention are described hereinafter according to a preferable embodiment, which is not intended to limit its scope. Any equivalent variation and modification made according to the appended claims is covered by the claims of the present invention.
Please refer to the flow chart in the accompanying drawings, which illustrates an embodiment of the method. In this embodiment, the method begins by generating an initial population of individuals.
Step 20 determines the fitness value for each individual based on the fitness function. After performing step 20, this embodiment employs step 30, which asks whether the convergence criteria have been achieved. If so, step 35 occurs by returning the best result. If not, the GA continues while the convergence criterion is not met. The GA continues by first selecting at least two individuals, step 40, and performing a crossover operation, step 50. As described above, there are various types of crossover that may be employed. The resulting offspring will be in either the feasible search space or the infeasible search space; determining such feasibility is step 60. If the offspring is in the infeasible search space, the infeasible solution will be processed by either an NFC or an HSQPC mechanism.
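Before turning to the two repair mechanisms, the feasibility test of step 60 and the hand-off to a repair routine can be sketched as follows, again assuming constraints of the form g_i(x) ≤ 0; the function names are placeholders for illustration only:

```python
# Illustrative sketch of step 60: test whether an offspring satisfies all
# constraints, and if not, hand it to a repair routine (NFC or HSQPC, step 65).
def is_feasible(x, inequality_constraints, tol=1e-9):
    return all(g(x) <= tol for g in inequality_constraints)

def handle_offspring(child, inequality_constraints, repair):
    # 'repair' stands in for either an NFC-style or an HSQPC-style mechanism.
    if is_feasible(child, inequality_constraints):
        return child
    return repair(child)
```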
An NFC mechanism works by employing a crossover between an infeasible chromosome and the nearest feasible chromosome in the search space. The nearest chromosome is determined by the following formula:
Min. distance = √((x2 − x1)² + (y2 − y1)²)
If the new child is located in the feasible domain, the GA mutates the child and continues on to the next generation. If the new child remains in the infeasible search domain, an additional crossover is performed utilizing the NFC mechanism. The process is repeated until the new child is in the feasible search domain. A graphical representation of how an NFC mechanism may function is provided in the accompanying drawings.
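A rough sketch of this NFC repair loop is given below. It generalizes the two-dimensional distance formula above to n-dimensional chromosomes, and the uniform blend crossover and the retry cap are illustrative assumptions rather than requirements of the method:

```python
# Non-authoritative sketch of an NFC-style repair: cross an infeasible offspring
# with the nearest feasible chromosome (Euclidean distance) until it is feasible.
import math
import random

def nearest_feasible(infeasible, feasible_population):
    def distance(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return min(feasible_population, key=lambda c: distance(c, infeasible))

def nfc_repair(child, feasible_population, is_feasible, max_tries=100):
    for _ in range(max_tries):
        if is_feasible(child):
            return child
        mate = nearest_feasible(child, feasible_population)
        alpha = random.random()
        # Blend the infeasible child toward its nearest feasible neighbour.
        child = [alpha * c + (1 - alpha) * m for c, m in zip(child, mate)]
    return nearest_feasible(child, feasible_population)  # fall back to a known feasible point
```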
An HSQPC mechanism is a type of sequential quadratic programming. Sequential quadratic programming is one of the most powerful techniques for solving complex non-linear constrained problems. It uses a quadratic model for the objective function and a linear model for the constraints. In order to utilize HSQPC, the problem to be solved must fit the abstract pattern:
min: f(x)

s.t.: c(x) = 0
where f(x) is a function which measures the error in the least squares polynomial fit, and c(x) is a vector of non-linear constraints. Sequential quadratic programming is an iterative method which solves, at the kth iteration, a quadratic program of the following form:
min: ½ dᵀH_k d + ∇f(x_k)ᵀd

subject to:

∇h_i(x_k)ᵀd + h_i(x_k) = 0, i = 1, . . . , p

∇g_i(x_k)ᵀd + g_i(x_k) ≤ 0, i = 1, . . . , p
where d is defined as the search direction and H_k is a positive definite approximation to the Hessian matrix of the Lagrangian function of the problem. The algorithm uses a pure Newton step in attempting to find the local minimum of the Lagrangian function. The Lagrangian function can be described as:

L(x, γ, β) = f(x) + Σ_i γ_i h_i(x) + Σ_i β_i g_i(x)

where γ and β are the Lagrangian multipliers. The developed quadratic sub-problems can then be solved using the active set strategy. The solution x_k at each iteration is updated according to the following equation:
x_{k+1} = x_k + α_k d_k
where α_k is defined as the step size and takes a value in the interval [0,1]. After each iteration, the matrix H_k is updated based on a quasi-Newton method. One known method to update the matrix H_k is the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method. Thus:

H_{k+1} = H_k + (γ_k γ_kᵀ)/(γ_kᵀ s_k) − (H_k s_k s_kᵀ H_k)/(s_kᵀ H_k s_k)

where:

s_k = x_{k+1} − x_k

γ_k = ∇L(x_{k+1}, γ_{k+1}, β_{k+1}) − ∇L(x_{k+1}, γ_k, β_k)
An example of a HSQPC mechanism can be seen in the accompanying drawings.
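The HSQPC mechanism described above is a hand-rolled sequential quadratic programming routine with a BFGS Hessian update. As a rough, non-authoritative sketch of how an SQP-based repair step could be realized in practice, the following uses SciPy's off-the-shelf SLSQP solver to move an infeasible offspring to a nearby feasible point; the projection objective and all function names are assumptions made for illustration, not the patent's own implementation:

```python
# Illustrative stand-in for an SQP-based repair step (step 65).
import numpy as np
from scipy.optimize import minimize

def sqp_repair(child, inequality_constraints, bounds=None):
    child = np.asarray(child, dtype=float)
    # Minimize the squared distance to the infeasible child...
    objective = lambda x: float(np.sum((x - child) ** 2))
    # ...subject to g_i(x) <= 0, expressed in SLSQP's "fun(x) >= 0" convention.
    cons = [{"type": "ineq", "fun": (lambda x, g=g: -g(x))}
            for g in inequality_constraints]
    result = minimize(objective, child, method="SLSQP",
                      bounds=bounds, constraints=cons)
    return result.x
```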
After performing step 65 to return the offspring to the feasible search space, a mutation operator is applied to the new feasible solution, step 70. Alternatively, if step 60 determined that the offspring was already in the feasible search space, the mutation operator can be applied directly, step 70, without first performing step 65. After mutation, the population is updated, step 80. The results obtained in any one simulation may be saved to a file; more specifically, the results may be saved in a Bestsofar file, which may contain statistics about the solution of the problem after each generation. The above process continues until the stopping criterion is met or the best solution is obtained.
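A brief sketch of step 70 and the per-generation bookkeeping follows. The Gaussian mutation clipped to the variable bounds, the file name bestsofar.txt, and the record layout are illustrative assumptions standing in for the Bestsofar file described above:

```python
# Sketch of a bounded mutation operator and per-generation statistics logging.
import random

def mutate(child, bounds, sigma=0.1, rate=0.1):
    mutated = []
    for value, (lo, hi) in zip(child, bounds):
        if random.random() < rate:
            # Perturb the gene and clip it back into its allowed interval.
            value = min(hi, max(lo, value + random.gauss(0.0, sigma * (hi - lo))))
        mutated.append(value)
    return mutated

def record_generation(generation, best, best_value, path="bestsofar.txt"):
    # Append one line per generation: index, best objective value, best individual.
    with open(path, "a") as out:
        out.write(f"{generation},{best_value},{' '.join(map(str, best))}\n")
```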
An exemplary water pumping system is shown in the accompanying drawings. The system is subject to the following constraints:
Δp (kPa) = 810 − 25w1 − 3.754w1²

Δp (kPa) = 900 − 65w2 − 30w2²
Mass balance:
w = w1 + w2
where w1 and w2 are the flow rates through pump 1 and pump 2, respectively.
The water pumping system was reformulated to:
Min. f = x3

subject to:

x3 = 250 + 30x1 − 6x1²

x3 = 300 + 20x2 − 12x2²

x3 = 150 + 0.5(x1 + x2)²

given that 0 ≤ x1 ≤ 9.422, 0 ≤ x2 ≤ 5.903, and 0 ≤ x3 ≤ 267.42.
Since equality constraints can be difficult to handle, it is often preferred to transform the equality constraints into inequality constraints. This can typically be accomplished in one of two ways: (1) eliminate some of the parameters, thus reducing the dimension of the problem; or (2) reformulate each equality as two inequalities by introducing deviation variables into the problem parameters. Thus, the above problem can be reformulated as:
Min. f = x3 = 150 + 0.5(x1 + x2)²

subject to:

6x1² − 30x1 − 249.99999 + 150 + 0.5(x1 + x2)² ≥ 0

12x2² − 20x2 − 299.99999 + 150 + 0.5(x1 + x2)² ≥ 0
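To see where the relaxed constants come from, note that an equality constraint h(x) = 0 can be replaced, for a small deviation ε (for example, ε = 0.00001), by the pair of inequalities

h(x) − ε ≤ 0 and −h(x) − ε ≤ 0

so that relaxing the equality 250 + 30x1 − 6x1² = x3 by such a deviation yields the constant 249.99999 appearing above.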
Utilizing the method described in detail above, the results for such a water pumping system would be as follows:
When the penalty type is set to NFC: x1=6.293426, x2=3.82190, and f(x1,x2)=201.15996.
When the penalty type is set to SQP: x1=6.293429, x2=3.82183, and f(x1,x2)=201.15933.
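As a small sanity check, and under the assumption that the relaxed constraints take the form reconstructed above, the following snippet evaluates the objective and the two inequality constraints at the two reported solutions; both should give f ≈ 201.16 with the constraints approximately active (≈ 0):

```python
# Evaluate the reformulated pump problem at the reported solutions.
def f(x1, x2):
    return 150.0 + 0.5 * (x1 + x2) ** 2

def g1(x1, x2):  # relaxed form of 250 + 30*x1 - 6*x1**2 = x3
    return 6 * x1 ** 2 - 30 * x1 - 249.99999 + f(x1, x2)

def g2(x1, x2):  # relaxed form of 300 + 20*x2 - 12*x2**2 = x3
    return 12 * x2 ** 2 - 20 * x2 - 299.99999 + f(x1, x2)

for label, (x1, x2) in {"NFC": (6.293426, 3.82190), "SQP": (6.293429, 3.82183)}.items():
    print(label, round(f(x1, x2), 5), round(g1(x1, x2), 5), round(g2(x1, x2), 5))
```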
This application claims the priority filing date of U.S. provisional application 61/553,734, filed Oct. 31, 2011. The contents of the priority provisional application are incorporated by reference in their entirety.