1. Field of the Invention
The invention pertains to the field of mathematical analysis and modeling, nonlinear programming, and optimization technology. More particularly, the invention pertains to dynamical methods for systematically obtaining local optimal solutions, as well as the global optimal solution, of continuous and discrete optimization problems.
2. Description of Related Art
A large variety of quantitative issues, such as decision, design, operation, planning, and scheduling, abound in practical systems in the sciences, engineering, and economics, and can be perceived and modeled as either continuous or discrete optimization problems. Typically, the overall performance (or measure) of a system can be described by a multivariate function, called the objective function. According to this generic description, one seeks the best solution of an optimization problem, often expressed by a real vector, in the solution space that satisfies all stated feasibility constraints and minimizes (or maximizes) the value of the objective function. This vector, if it exists, is termed the globally optimal solution. For most practical applications, the underlying objective functions are often nonlinear and depend on a large number of variables, making the task of searching the solution space for the globally optimal solution very challenging. The primary challenge is that, in addition to the high dimension of the solution space, there are many local optimal solutions in the solution space; the globally optimal solution is just one of them, and yet the globally optimal solution and the local optimal solutions share the same local properties.
In general, the solution space of an optimization problem has a finite (usually very large) or infinite number of feasible solutions. Among them there is one, and only one, global optimal solution, while there are multiple local optimal solutions (a local optimal solution is optimal within a local region of the solution space, but not over the entire solution space). Typically, the number of local optimal solutions is unknown, and it can be quite large. Furthermore, the values of the objective function at the local optimal solutions and at the global optimal solution may differ significantly. Hence, there are strong motivations to develop effective methods for finding the global optimal solution.
We next discuss the discrete optimization problem. The task of solving discrete optimization problems is very challenging: they are generally NP-hard (no solution algorithm of polynomial complexity is known for them). In addition, many discrete optimization problems belong to the class of NP-complete problems, for which no efficient algorithm is known. A precise definition of NP-complete problems is available in the literature. Roughly speaking, NP-complete problems are computationally difficult; any numerical algorithm would, in the worst case, require an exponential amount of time to correctly find the global optimal solution.
One popular approach to attack discrete optimization problems is to use the class of iterative improvement local search algorithms [1]. They can be characterized as follows: start from an initial feasible solution and search for a better solution in its neighborhood. If an improved solution exists, repeat the search process starting from the new solution as the initial solution; otherwise, the local search process will terminate. Local search algorithms usually get trapped at local optimal solutions and are unable to escape from them. In fact, the great majority of existing optimization techniques for solving discrete optimization problems usually come up with local optimal solutions but not the global optimal one.
The drawback of iterative improvement local search algorithms has motivated the development of a number of more sophisticated local search algorithms designed to find better solutions by introducing mechanisms that allow the search process to escape from local optimal solutions. The underlying 'escape' mechanisms use certain search strategies to accept a cost-deteriorating neighborhood move, making escape from a local optimal solution possible. These sophisticated local search algorithms include simulated annealing, genetic algorithms, Tabu search, and neural networks.
However, it has been found in many studies that these sophisticated local search algorithms, among other problems, require intensive computational efforts and usually cannot find the globally optimal solution.
In addition, several effective methods are developed in this invention for addressing the following two important and challenging issues in the course of searching for the globally optimal solution: (i) how to move away from a local optimal solution and find another local optimal solution; and (ii) how to avoid revisiting local optimal solutions that are already known.
In the past, significant efforts have been directed towards these two issues, but without much success. Issue (i) is difficult to solve, and existing methods all encounter this difficulty. Issue (ii), related to computational efficiency during the course of the search, is also difficult to solve, and, again, the majority of existing methods encounter this difficulty. Issue (ii) is a common problem that degrades the performance of many existing methods searching for the globally optimal solution: revisiting the same local optimal solution several times wastes computing resources without gaining new information regarding the location of the globally optimal solution. From the computational viewpoint, it is important to avoid revisiting the same local optimal solution in order to maintain a high level of efficiency.
The task of finding the global optimal solution of general optimization problems is important for a very broad range of engineering disciplines and the sciences. The invention presents dynamical methods for obtaining the global optimal solution of general optimization problems, comprising the steps of first finding, in a deterministic manner, one local optimal solution starting from an initial point, then finding another local optimal solution starting from the previously found one, repeating until all the local optimal solutions are found, and then finding from said solutions the global optimal solution.
We propose that an effective approach to solving general optimization problems is one that first finds multiple, if not all, local optimal solutions and then selects the best solution from among them.
In the present invention, we develop a new systematic methodology, which is deterministic in nature, to find all the local optimal solutions of general optimization problems. The following dynamical methodologies for solving various types of optimization problems are developed in this invention:
To address issue (i), we develop in this invention a DDP-based numerical method which, in combination with the DDP search method, performs a systematic procedure that, starting from a local optimal solution, moves away from that solution and finds another local optimal solution in an effective and deterministic manner. To address issue (ii), we develop in this invention an anti-revisiting search method to avoid revisiting the local optimal solutions that are already known. The theoretical basis of the anti-revisiting search method rests on the dynamical decomposition points developed in this invention.
In the present invention, the effective methods developed to overcome issues (i) and (ii) are then incorporated into two dynamical methodologies for finding all the local optimal solutions of general discrete optimization problems. One distinguishing feature of these two dynamical methodologies is that they can incorporate any existing local search algorithm to achieve computational efficiency in finding a local optimal solution, while maintaining the global ability to find all of the local optimal solutions.
The developed dynamical methods can easily incorporate any current optimization method. This is a big advantage for users of our invention because they do not have to abandon their customized optimization methods that are very efficient in finding a local optimal solution. Our methods will use the customized methods to efficiently find a local optimal solution and will direct the customized methods to move from one local optimal solution to another.
Dynamical methods developed in this invention can be programmed into computer packages, and can be combined with effective existing computer packages to find the complete set of local optimal solutions. The invention can be programmed to interface with any existing computer package without the need to modify the ‘environment’ of the existing computer package, including the graphical user interface and database. In particular, this invention imposes no new learning curve for the user in the resulting computer packages. This feature of intellectual property reuse of the existing computer package makes this invention very attractive.
General Optimization Problems
We consider in this invention general nonlinear optimization problems, which are conventionally divided into nonlinear continuous optimization problems and nonlinear discrete optimization problems. They can be further divided into the following four classes:
For each class of nonlinear optimization problems, we developed a dynamical method to find all of its local optimal solutions and, from them, the global optimal solution.
We next describe each class of nonlinear optimization problems.
Unconstrained Continuous Optimization Problems
A general unconstrained continuous optimization problem is of the form:
Minimize UC(x) (4-1)
where the objective function UC: Rn→R can be either a closed-form expression or a black-box expression. The function UC(x) is bounded below, so that its global minimal (optimal) solution exists and the number of local minimal (optimal) solutions is finite. An expression for a black-box objective function is not available; the function is only computable, and it can be viewed as an external subroutine which takes a state vector x and returns the corresponding objective function value UC(x).
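For illustration only (not part of the claimed method), such a black-box objective can be represented as a callable subroutine; the function below is a hypothetical stand-in whose internal expression is assumed:

```python
# Hypothetical illustration: a black-box objective has no closed-form
# expression visible to the optimizer; it is only computable, i.e. a
# subroutine mapping a state vector x to the value UC(x).

def black_box_uc(x):
    """External subroutine standing in for UC(x); the optimizer treats
    its internals as opaque (e.g. a simulation or legacy code)."""
    # Internally this could be anything; here, a simple multimodal function.
    return sum(xi ** 4 - 3.0 * xi ** 2 + xi for xi in x)

# The optimizer may only call the subroutine, never differentiate it
# symbolically.
value = black_box_uc([1.0, -1.0])
```

The optimizer interacts with such a function exclusively through evaluations, which is why the divided-difference constructions discussed later become necessary.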
Constrained Continuous Optimization Problems
A general constrained continuous optimization problem is of the form:
Minimize C(x)
Subject to G(x)≦0 (4-2)
where the objective function C: Rn→R can be either a closed-form expression or a black-box expression. The collection of state vectors satisfying the inequality expressed by the constraint equations, G(x), is called the feasible region. The function C(x) is bounded below over the feasible region, so that its global minimal (optimal) solution exists and the number of local minimal (optimal) solutions is finite.
Unconstrained Discrete Optimization Problems
A general unconstrained discrete optimization problem is of the form:
Minimize UC(x) (4-3)
where the objective function UC: S→R can be either a closed-form expression or a black-box expression. The function UC(x) is bounded below, so that its global minimal (optimal) solution exists and the number of local minimal (optimal) solutions is finite.
Constrained Discrete Optimization Problems
A general constrained discrete optimization problem is of the form:
Minimize C(x) (4-4)
Subject to x∈FS
where the objective function C: S→R can be either a closed-form expression or a black-box expression. The function C(x) is bounded below, so that its global minimal (optimal) solution exists and the number of local minimal (optimal) solutions is finite. The constraint set, FS, is composed of all the feasible solutions. The set FS is characterized either by a set of constraint equations with analytical expressions or by a computable black-box model, which can be viewed as an external subroutine that takes a state vector x and returns the satisfiability of the constraints. The state space S is a finite or countably infinite set.
Obviously, this class of discrete optimization problems includes the following problems.
Minimize C(x)
Subject to G(x)≦0 (4-5)
where the objective function C: S→R can be either a closed-form expression or a black-box expression. The function C(x) is bounded below over the feasible region, so that its global minimal solution exists and the number of local minimal solutions is finite. The constraint function vector G: S→Rm has a closed-form expression. The state space S is a finite or countably infinite set.
We developed in this invention dynamical methods for solving the optimization problems described by (4-1), (4-3) and (4-5), to be presented in the following sections.
5. Dynamical Methods for Solving Unconstrained Continuous Optimization Problems
We present in this section dynamical methods for solving unconstrained optimization problems (4-1).
To solve the unconstrained continuous optimization problem (4-1), we develop a dynamical methodology by utilizing certain trajectories of the following associated nonlinear dynamical system to guide the search for all local optimal solutions:
ẋ=f(x) (5-1)
where x∈Rn and the vector field f(x) satisfies the existence and uniqueness of solutions.
We propose the following sufficient conditions under which certain trajectories of such nonlinear dynamical systems can be employed to locate all the local optimal solutions of the optimization problem (4-1).
Next we will show that, given any unconstrained continuous optimization problem (4-1), either with an analytical closed-form objective function or with a black-box objective function, we can construct a nonlinear dynamical system, satisfying (LC1)˜(LC3), in the form of (5-1).
We first develop the following two theorems to provide some guidelines for finding the nonlinear dynamical system.
We say a function, say E(x), is an energy function of (5-1) if the following two conditions are satisfied: (i) the derivative of E(x) along any system trajectory of (5-1) is non-positive, i.e., Ė(x)≦0; (ii) if x(t) is not an equilibrium point of (5-1), then the set {t∈R: Ė(x(t))=0} along x(t) has measure zero in R.
It can be shown that for the optimization problem (4-1) with an analytical closed-form objective function, say UC, one example of a nonlinear dynamical system satisfying conditions (LC1)˜(LC3) is the following:
ẋ=−∇UC(x) (5-2)
where ∇UC: Rn→Rn is the gradient vector of the objective function UC. Generally speaking, there are nonlinear dynamical systems other than the one described by equation (5-2) that satisfy conditions (LC1) through (LC3) and whose trajectories can be employed to compute all the local optimal solutions of problem (4-1) with analytical objective functions. We can put these nonlinear dynamical systems into the following general form:
ẋ=FCU(x) (5-3)
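As an illustrative sketch (not the claimed method), the trajectories of the gradient system (5-2) can be followed numerically to reach a stable equilibrium point, which is a local minimum of UC. The toy objective UC(x)=(x²−1)², the step size, and the iteration count below are assumptions:

```python
# Minimal sketch: trajectories of the gradient system x' = -grad UC(x),
# integrated by forward Euler, flow to stable equilibrium points, which
# are the local minima of UC. Here UC(x) = (x^2 - 1)^2 is an assumed
# one-dimensional toy objective with local minima at x = 1 and x = -1.

def grad_uc(x):
    # Gradient of UC(x) = (x^2 - 1)^2.
    return 4.0 * x * (x * x - 1.0)

def follow_trajectory(x, step=0.05, iters=500):
    """Forward-Euler approximation of the trajectory phi_t(x)."""
    for _ in range(iters):
        x = x - step * grad_uc(x)
    return x

# Starting inside the stability region of x = 1, the trajectory
# converges to that stable equilibrium point (a local minimum).
x_star = follow_trajectory(0.5)
```

A starting point in the other stability region (e.g. x = −0.5) converges instead to the stable equilibrium point x = −1, illustrating how each trajectory yields the local optimal solution of its own stability region.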
For the optimization problem (4-1) with a black-box objective function, say UC, the task of constructing a nonlinear dynamical system satisfying conditions (LC1)˜(LC3) is difficult. One can develop several methods to resolve this difficulty; one of them, described below, is based on the divided differences of the black-box objective function:
ẋi=−[UC(x+hei)−UC(x)]/h, i=1, 2, . . . , n (5-4)
where the derivative of UC with respect to the i-th component of x is approximated by the above one-sided differences, ei is the i-th Cartesian basis vector and h is a sufficiently small scalar. Another nonlinear dynamical system can be constructed based on the following central differences of the black-box objective function:
ẋi=−[UC(x+hei)−UC(x−hei)]/(2h), i=1, 2, . . . , n (5-5)
where the derivative of UC(x) with respect to the i-th component of x is approximated by the above central differences, ei is the i-th Cartesian basis vector and h is a sufficiently small scalar.
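As a hedged sketch of the central-difference construction, the vector field can be assembled from objective-value calls alone; the function `uc` and the step `h` below are illustrative assumptions:

```python
# Each component of the vector field is the approximated negative
# partial derivative -[UC(x + h*e_i) - UC(x - h*e_i)]/(2h), computed
# using only evaluations of the black-box objective `uc`.

def central_difference_field(uc, x, h=1e-5):
    """Approximate -grad UC(x) using only objective-value calls."""
    n = len(x)
    field = []
    for i in range(n):
        xp = list(x); xp[i] += h   # x + h*e_i
        xm = list(x); xm[i] -= h   # x - h*e_i
        field.append(-(uc(xp) - uc(xm)) / (2.0 * h))
    return field

# Assumed test objective with a known gradient for comparison.
uc = lambda x: x[0] ** 2 + 3.0 * x[1] ** 2
f = central_difference_field(uc, [1.0, 2.0])   # close to [-2.0, -12.0]
```

The same loop with the one-sided quotient −[UC(x+hei)−UC(x)]/h gives the divided-difference variant; the central form is typically more accurate for the same h.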
There may exist other nonlinear dynamical systems, in addition to those described by (5-4) or (5-5) above, satisfying conditions (LC1)˜(LC3) for solving the unconstrained continuous optimization problem (4-1) with black-box objective functions. We put these nonlinear dynamical systems into the following general form:
For clarity of explanation, we put both the nonlinear dynamical system (5-3) associated with an analytical closed-form objective function and the nonlinear dynamical system (5-6) associated with a black-box objective function into the following general form:
ẋ=FU(x) (5-7)
Several basic definitions and facts about the nonlinear dynamical system (5-7) are presented as follows [2][4].
The solution of (5-7) starting from x∈Rn at t=0 is called a trajectory, denoted by φt(x): R→Rn.
A state vector x*∈Rn is called an equilibrium point of system (5-7) if FU(x*)=0. We say that an equilibrium point x*∈Rn of (5-7) is hyperbolic if the Jacobian matrix of FU(·) at x* has no eigenvalues with a zero real part. If the Jacobian of a hyperbolic equilibrium point has exactly k eigenvalues with positive real part, we call it a type-k equilibrium point. It can be shown that a hyperbolic equilibrium point is an (asymptotically) stable equilibrium point if all the eigenvalues of its corresponding Jacobian have a negative real part, and an unstable equilibrium point if at least one eigenvalue of its corresponding Jacobian has a positive real part. For a type-k hyperbolic equilibrium point x*, its stable and unstable manifolds Ws(x*), Wu(x*) are defined as follows:
Ws(x*)={x∈Rn: φt(x)→x* as t→∞}
Wu(x*)={x∈Rn: φt(x)→x* as t→−∞}
where the dimension of Wu(x*) and Ws(x*) is k and n−k, respectively.
A set K in Rn is called an invariant set of (5-7) if every trajectory of (5-7) starting in K remains in K for all t∈R. A dynamical system is called completely stable if every trajectory of the system converges to one of its equilibrium points.
The stability region (or region of attraction) of a stable equilibrium point xs is the collection of all points whose trajectories converge to xs, defined as:
A(xs)={x∈Rn: limt→∞φt(x)=xs}
The quasi-stability region Ap(xs) of a stable equilibrium point xs is defined as:
Ap(xs)=int(Ā(xs))
where Ā(xs) is the closure of the stability region A(xs), and int(Ā(xs)) is the interior of Ā(xs). From a topological point of view, A(xs) and Ap(xs) are open, invariant, and path-connected sets; moreover, they are diffeomorphic to Rn.
The following theorem [3], together with the above two theorems, presents the theoretical foundation for the dynamical methods developed in this invention for solving the unconstrained continuous optimization problem (4-1) by utilizing its associated nonlinear dynamical system (5-7) satisfying conditions (LC1) through (LC3).
We develop a hybrid local search method, by combining a trajectory-based method and one effective local method, for reliably finding local optimal solutions. The hybrid local search method will be then incorporated into the dynamical methods developed in this invention.
A hybrid local search method for reliably obtaining a local optimal solution for the unconstrained continuous optimization problem (4-1) starting from an initial point is presented below:
Given a local optimal solution of the unconstrained continuous optimization problem (4-1) (i.e., a stable equilibrium point (s.e.p.) of the associated nonlinear dynamical system (5-7)), say xs, and a pre-defined search path starting from the s.e.p., we develop a method for computing the exit point of the nonlinear dynamical system (5-7) associated with the optimization problem (4-1), termed herein the method for computing the exit point, presented below:
Given a local optimal solution of the unconstrained continuous optimization problem (4-1) (i.e., a stable equilibrium point (s.e.p.) of the associated nonlinear dynamical system (5-7)), say xs, and a pre-defined search path starting from the s.e.p., we develop a method for computing the DDP of the associated nonlinear dynamical system (5-7) with respect to the local optimal solution xs (which is also a s.e.p. of (5-7)) and with respect to the pre-defined search path, termed herein the method for computing the dynamical decomposition point (DDP), as follows:
For the optimization problem (4-1), this invention develops an exit-point-based (EP-based) method for finding a local optimal solution starting from a known local optimal solution.
For the optimization problem (4-1), this invention develops a DDP-based method for finding a local optimal solution starting from a known local optimal solution.
Before presenting the methods, we first present the definitions for the tier-one local optimal solutions (i.e. stable equilibrium points) and tier-N local optimal solutions (i.e. stable equilibrium points) with respect to a known local optimal solution in the continuous solution space of the optimization problem (4-1) (i.e. the state space of the associated nonlinear dynamical system (5-7)) as follows.
For the optimization problem (4-1), this invention develops two groups of methods for finding all tier-one local optimal solutions. The first group is EP-based while the second group is DDP-based.
EP-based Method
For the optimization problem (4-1), this invention develops two dynamical methods for finding all local optimal solutions. The first method is an EP-based method while the second method is a DDP-based method.
EP-based Method
is empty, continue with step (6); otherwise set j=j+1 and go to step (3).
If it does, go to step (viii); otherwise set
and proceed to the next step.
where ε is a small number.
6. Dynamical Methods for Solving Unconstrained Discrete Optimization Problems
We present in this section dynamical methods for solving the unconstrained discrete optimization problem (4-3).
The task of finding a globally optimal solution of the given instance (4-3) can be prohibitively difficult, but it is often possible to find a local optimal solution that is best in the sense that there is nothing better in its neighborhood. For this purpose, we need a neighborhood that can be identified. A neighborhood function N: S→2^S is a mapping which defines for each solution y a set of solutions, N(y), that are in some sense close to y. The set N(y) is called a neighborhood of the solution y. We say x* is a local optimal solution with respect to a neighborhood N(x*) (or simply a local optimal solution whenever N(x*) is understood by context) if UC(x)≧UC(x*) for all x∈N(x*).
Many discrete optimization problems of the form (4-3) are NP-hard. It is generally believed that NP-hard problems cannot be solved optimally within polynomially bounded computation time. There are three approaches available to tackle discrete optimization problems of the form (4-3). The first is the enumerative method, which is guaranteed to find the optimal solution at the expense of tremendous computational effort. The second uses an approximation algorithm that runs in polynomial time to find a 'near' optimal solution. The third resorts to some type of heuristic technique or iterative improvement technique, called local search or local improvement, without any a priori guarantee in terms of solution quality or running time.
Local search is often the approach of choice for solving many discrete optimization problems. Factors such as problem size or lack of insight into the problem structure often prohibit the application of enumerative methods to practical discrete optimization problems. On the other hand, polynomial-time approximation algorithms, in spite of their performance bounds, may give unsatisfactory solutions. Nevertheless, local search provides a robust approach for obtaining, in reasonable time, reasonable or perhaps high-quality solutions to practical problems. A basic version of local search algorithms is iterative improvement, which starts with some initial solution and searches its neighborhood for a solution with a lower value of the cost function (in the case of minimization). If such a solution is found, it replaces the current solution, and the search continues; otherwise, the current solution is a local optimal solution and the local search terminates.
We next present a basic version of local search algorithms called iterative improvement for the discrete optimization problem (4-3):
Iterative Local Improvement (Local Search) Algorithm
For the discrete optimization problem (4-3), a local search algorithm proceeds from a current solution xk and attempts to improve xk by looking for a superior solution in an appropriate neighborhood N(xk) around xk, using the following steps.
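By way of a hedged sketch, the iterative improvement loop just described can be written as follows; the 0–1 immediate neighborhood and the objective `uc` are illustrative assumptions, not part of the claimed method:

```python
# Minimal iterative-improvement (local search) sketch for 0-1 vectors,
# using the immediate neighborhood (flip exactly one bit).

def immediate_neighbors(x):
    # Yield every vector differing from x in exactly one component.
    for i in range(len(x)):
        y = list(x)
        y[i] = 1 - y[i]
        yield y

def iterative_improvement(uc, x):
    """Repeat: move to a strictly better neighbor until none exists;
    the returned point is a local optimal solution w.r.t. N(x)."""
    while True:
        best = min(immediate_neighbors(x), key=uc)
        if uc(best) >= uc(x):        # no improving neighbor: local optimum
            return x
        x = best

# Assumed toy objective: minimized when exactly two bits are set.
uc = lambda x: (sum(x) - 2) ** 2
x_local = iterative_improvement(uc, [0, 0, 0, 0])
```

Starting from the all-zero vector, the loop improves the solution one bit-flip at a time and terminates at a point with two bits set, where no single flip lowers the objective.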
The task of finding efficient neighborhood functions for obtaining high-quality local optimal solutions is a great challenge for local search algorithms. No general rules for finding efficient neighborhood functions are currently available. Even for the same problem, several neighborhood functions may be available, and different definitions of neighborhood yield different results from the same local search algorithm. The design of good neighborhood functions often takes advantage of the discrete structure of the problem under study and is typically problem dependent. To address issue (i), clearly, an appropriate discrete neighborhood must be large enough to include some discrete variants of the current solution and small enough to be surveyed within practical computation time. For notational simplicity, we will assume that all variables are subject to 0–1 constraints.
For variety, the neighborhood functions can be switched during the search process.
In this invention, we also use the following three definitions of neighborhood.
Immediate neighborhood: A point x′ is said to lie in the immediate neighborhood of x if and only if the difference of x′ and x can be expressed as e1=[u0, u1, . . . , un], where exactly one ui has the value one and the rest of the entries of e1 are zero. Thus the distance between a point x and its immediate neighbor x′ is 1.
One-Way Extended Neighborhood (OWEN): A point x′ is said to lie in the one-way extended neighborhood of x if and only if the difference between x′ and x, i.e., x′−x or x−x′, can be expressed as e2=[u0, u1, . . . , un], where each ui can be either 0 or 1.
Full neighborhood: A point x′ is said to lie in the full neighborhood of x if and only if the difference of x′ and x, i.e., x′−x or x−x′, can be expressed as e3=[u0, u1, . . . , un], where each ui can be 0, 1, or −1.
We say a point is a local optimal solution with respect to its immediate neighborhood in the discrete space S if the point has a minimal objective function value with respect to its immediate neighborhood. We say a point is a local optimal solution with respect to the one-way extended neighborhood in the discrete space S if the point has a minimal objective function value with respect to its one-way extended neighborhood. We say a point is a local optimal solution with respect to its full neighborhood in the discrete space S if the point has a minimal objective function value with respect to its full neighborhood. The following facts are consequences of the above definitions.
For the discrete optimization problem (4-3), by convention, a neighborhood refers to an immediate neighborhood, and a local optimal solution refers to a local optimal solution with respect to its immediate neighborhood.
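As an illustration of these neighborhood definitions and of local optimality with respect to them (the objective below is an assumed example over 0–1 variables):

```python
from itertools import product

def is_local_opt(uc, x, neighborhood):
    # x is locally optimal w.r.t. N if no point of N(x) is strictly better.
    return all(uc(y) >= uc(x) for y in neighborhood(x))

def immediate(x):
    # Immediate neighborhood: flip exactly one bit.
    return [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]

def full(x):
    # For 0-1 variables, the full neighborhood (difference entries in
    # {0, 1, -1}) contains every other point of {0, 1}^n.
    return [y for y in product((0, 1), repeat=len(x)) if y != x]

# Assumed objective; since 2*sum(x) - 3 is odd for 0-1 vectors of
# length 3, uc >= 1 everywhere, and uc((1,0,0)) = 1 attains that bound.
uc = lambda x: (2 * sum(x) - 3) ** 2
x = (1, 0, 0)
# x is locally optimal w.r.t. both neighborhoods; local optimality
# w.r.t. the full neighborhood here means global optimality over S.
```

This also illustrates one of the facts implied by the definitions: the full neighborhood of a 0–1 point is the whole state space, so local optimality with respect to it coincides with global optimality.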
For resolving issue (ii) related to local search algorithms, there are two general iterative improvement local search methods to determine xk+1 (i.e., the choice rule in step 2 of local search):
Best Neighborhood Search Method
The best neighborhood search method always chooses the point in a neighborhood with the greatest descent in objective function as xk+1. This method is deterministic in obtaining a local optimal solution.
Better Neighborhood Search Method
Instead of choosing the point in a neighborhood with the greatest descent as xk+1, the better neighborhood search method takes the first point in the neighborhood that leads to a descent in the objective function as xk+1. Clearly, the better neighborhood search method is fast; however, depending on the order in which the points in a neighborhood are examined, this local search method may not reach unique results, especially when xk is far away from a local optimal solution. It is obvious that this local search method has a stochastic nature.
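The two choice rules can be contrasted in a short sketch; the neighborhood enumeration and the linear objective below are illustrative assumptions:

```python
def best_neighborhood_step(uc, x, neighbors):
    """Deterministic: take the neighbor with the greatest descent."""
    best = min(neighbors(x), key=uc)
    return best if uc(best) < uc(x) else x

def better_neighborhood_step(uc, x, neighbors):
    """Take the first improving neighbor found; order-dependent."""
    for y in neighbors(x):
        if uc(y) < uc(x):
            return y
    return x

def flips(x):
    # Immediate neighborhood of a 0-1 vector, enumerated by index.
    return [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]

# Assumed linear objective: flipping bit 0 gives the greatest descent.
uc = lambda x: -(3 * x[0] + 2 * x[1] + x[2])
x0 = (0, 0, 0)
step_best = best_neighborhood_step(uc, x0, flips)       # flips bit 0
step_better = better_neighborhood_step(uc, x0, flips)   # same here, but...
# ...under a reversed enumeration the better rule takes a different,
# smaller descent, illustrating its order dependence:
step_rev = better_neighborhood_step(uc, x0, lambda x: list(reversed(flips(x))))
```

With the forward enumeration both rules flip bit 0; with the reversed enumeration the better rule flips bit 2 instead, which is why only the best-neighborhood rule is deterministic.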
Over the past three decades, a great number of local search algorithms have been developed and applied to a wide range of discrete optimization problems. Many of these algorithms are rather specific and tailored to certain types of discrete optimization problems. Moreover, a majority of them only effectively find local optimal solutions, but not the global one. During their search process, these algorithms usually get trapped at a local optimal solution and cannot move to another local optimal solution. To remedy this drawback and yet maintain the paradigm of neighborhood search, several modifications have been proposed but without much success. A straightforward extension of local search is to run a local search algorithm a number of times using different starting solutions and to keep the best solution found as the final solution. No major successes have been reported on this multi-start approach.
The drawback of iterative improvement local search algorithms has motivated the development of a number of more sophisticated local search algorithms designed to find better solutions by introducing mechanisms that allow the search process to escape from local optimal solutions. The underlying 'escaping' mechanisms use certain search strategies that accept cost-deteriorating neighborhood moves to make an escape from local optimal solutions possible. Among these sophisticated local search algorithms, we mention simulated annealing, genetic algorithms, Tabu search, and neural networks.
However, it has been found in many studies that these sophisticated local search algorithms, among other problems, require intensive computational efforts and usually cannot find the globally optimal solution.
In the present invention, we develop a new systematic methodology, which is deterministic in nature, to find all the local optimal solutions of general discrete optimization problems. In addition, two dynamical methods for solving unconstrained discrete optimization problems (4-3) are developed in this invention:
We develop in this invention several dynamical methods based on two different approaches, namely a discrete-based dynamical approach and a continuous-based dynamical approach. In addition, several effective methods are developed in this invention for addressing the following two important and challenging issues in the course of searching for the globally optimal solution: (i) how to move away from a local optimal solution and find another local optimal solution; and (ii) how to avoid revisiting local optimal solutions that are already known.
To solve the unconstrained discrete optimization problem (4-3), we consider the following associated nonlinear discrete dynamical system which is described by a set of autonomous ordinary difference equations of the form
xk+1=f(xk) (6-5)
where xk∈S, a discrete state space. Equation (6-5) is known as a map or iteration. We refer to such a map as explicit, since xk+1 is given by an explicit formula in terms of xk. The solution of (6-5) starting from x0∈S is called a system trajectory. A point x*∈S is called a fixed point of the discrete dynamical system (6-5) if x*=f(x*). A fixed point is stable if the system trajectory can be forced to remain in any neighborhood of the fixed point by choosing the initial point x0 sufficiently close to that fixed point.
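For illustration only, one admissible choice of the map f in (6-5) is a best-neighborhood move, under which iterating the map reaches a fixed point x*=f(x*) that is a local optimal solution; the objective and the 0–1 state space below are assumptions:

```python
def f(x, uc):
    """One best-neighborhood move over the immediate neighborhood;
    the current point is included so a local optimum maps to itself."""
    candidates = [x] + [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]
    return min(candidates, key=uc)

def iterate_to_fixed_point(x, uc):
    # Iterate the map (6-5) until x = f(x), i.e. a fixed point.
    while True:
        y = f(x, uc)
        if y == x:
            return x
        x = y

# Assumed toy objective: minimized when exactly one bit is set.
uc = lambda x: (sum(x) - 1) ** 2
x_star = iterate_to_fixed_point((1, 1, 1), uc)   # a fixed point with sum 1
```

The trajectory (1,1,1) → ... → x* decreases the objective at every step and stops exactly at a fixed point of the map, matching the correspondence between fixed points and local optimal solutions.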
We propose the following sufficient conditions under which certain trajectories of such nonlinear discrete dynamical systems can be employed to locate all the local optimal solutions.
All local optimal solutions of discrete optimization problems (4-3) can be systematically computed via certain dynamical trajectories of the associated discrete dynamical system (6-5). From these local optimal solutions, the global optimal solution can be easily obtained.
6.1.1 Hybrid Local Search Method
A discrete-based hybrid local search method for computing a local optimal solution of a discrete optimization problem described by (4-3) starting from an initial point is presented below:
We next present methods for computing the exit point of a search path starting from a local optimal solution xs. We note that a search path ψt(xs) passing through xs is a curve parameterized by the variable t, with ψ0(xs)=xs, and ψt(xs) lies in the search space (i.e., the solution space) for all t.
A method for computing the exit point of the discrete dynamical system associated with the optimization problem (4-3), starting from a known local optimal solution, termed herein the discrete-based method for computing the exit point, is presented below:
Starting from a known local optimal solution, say xs, move along the pre-defined search path to detect said exit point, which is the first local maximum of the objective function of the optimization problem (4-3) along the pre-defined search path.
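The exit-point detection just described can be sketched as follows: march along a discretized search path and return the first point after which the objective decreases, i.e. the first local maximum along the path. The one-dimensional double-well objective and the path below are illustrative assumptions.

```python
def exit_point(uc, path):
    """Return the first local maximum of uc along the ordered points of a
    pre-defined search path, or None if the objective never falls."""
    for t in range(1, len(path)):
        if uc(path[t]) < uc(path[t - 1]):
            return path[t - 1]      # first local maximum = exit point
    return None

# hypothetical double-well objective with local minima at x = -2 and x = 2
uc = lambda x: (x * x - 4) ** 2 / 16.0
# search path marching right from the local optimal solution xs = -2
path = [-2 + 0.5 * k for k in range(9)]    # -2.0, -1.5, ..., 2.0
```

Along this path, the objective rises until the barrier at x = 0 and then falls toward the neighboring local minimum, so x = 0 is detected as the exit point.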
6.1.3. Discrete-based Method for Computing DDP
Given a local optimal solution, say xs, of the optimization problem (4-3), we develop a discrete-based dynamical method for computing the dynamical decomposition point (DDP) with respect to the corresponding stable fixed point, which is xs, of system (6-5) and with respect to a pre-defined search path as follows:
Given a local optimal solution, say xs, of the optimization problem (4-3), we develop a discrete-based exit-point-based method for finding a new local optimal solution starting from a known local optimal solution as follows:
For the optimization problem (4-3), this invention develops a DDP-based method for finding a local optimal solution starting from a known local optimal solution. This DDP-based method, deterministic in nature, is presented as follows:
For the optimization problem (4-3), we develop in the following a dynamical method for finding all the tier-one local optimal solutions, starting from a known local optimal solution.
We develop in the following a dynamical discrete-based method for finding all the local optimal solutions of a discrete optimization problem (4-3), starting from a given initial point.
is empty, continue with step (6); otherwise set j=j+1 and go to step (3).
In this embodiment of the invention, we develop dynamical methods using a continuous-based dynamical approach to solve the unconstrained discrete optimization problem (4-3) with an analytical closed-form objective function. The basic idea of the dynamical method is to solve an equivalent optimization problem first in the continuous state space and then, if necessary, solve it in the discrete state space. For the optimization problem (4-3) with a black-box objective function, the applicability of the continuous-based approach is conditional and is presented in the following section.
Equivalent Approach
We propose to find transformations to convert the unconstrained discrete optimization problem (4-3) into the following equivalent continuous optimization problem defined in the Euclidean space
min UCeq(x), x∈Rn (6-6)
where UCeq(x) is a continuous function defined in the Euclidean space Rn, and n is the number of degrees of freedom of the original state space S. We propose that the transformation must satisfy the following guidelines:
(G1) Both the original optimization problem (4-3) and the equivalent optimization problem (6-6) share the same global optimal solution (i.e. x∈S is the global optimal solution of UC(x) if and only if x∈Rn is the global optimal solution of UCeq(x)).
(G2) Both the original optimization problem (4-3) and the equivalent optimization problem (6-6) share as many of the same local optimal solutions as possible.
With the equivalent approach, we find the global optimal solution of (4-3) via the global optimal solution of the equivalent nonlinear optimization problem (6-6). Hence, the techniques developed for finding the global optimal solution and all the local optimal solutions of an optimization problem of the form (6-6) can thus be applied to solving discrete optimization problem (4-3).
We first develop techniques to construct transformations that satisfy the above two guidelines as described below.
Transformation by Appending a Modulator
An equivalent objective function UCeq(x) can be constructed as follows.
where the coefficient
and L is the Lipschitz constant of the original objective function UC(x).
The transformation technique is instrumental in transforming a discrete optimization problem (4-3) into a continuous optimization problem (6-7). This transformation has the advantage of being differentiable all over the state space if UC(x) is differentiable. The computation procedure is easy and straightforward, and a theoretical analysis of the transformation is possible. It can be shown that UCeq(x) is a continuous function, and that UCeq(x) has the same global optimal point as the original problem (4-3). However, the transformation may introduce new local optimal solutions that do not belong to the original problem. This disadvantage causes considerable extra computation burden in finding the global optimal solution via computing local optimal solutions of the transformed problem (6-7). In addition, this transformation is applicable only for the class of discrete optimization problems (4-3) which has an analytical closed-form objective function.
In this embodiment of the invention, we develop another new transformation that meets the guidelines (G1) and (G2) proposed above.
Piecewise Linear Transformation
Given the original problem (4-3), this invention develops the following transformation:
where: dist(x, y) is the distance between x and y. In most cases, the Euclidean distance is used. x=(x1,x2, . . . ,xn)∈Rn, ┌x┐=(┌x1┐, ┌x2┐, . . . ,┌xn┐) and ┌xi┐ denotes the smallest integer that is greater than or equal to xi; └x┘=(└x1┘,└x2┘, . . . ,└xn┘) and └xi┘ denotes the largest integer that is less than or equal to xi.
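The componentwise ceiling and floor vectors and the Euclidean distance used in this transformation can be sketched as follows (the transformation formula itself is not reproduced here):

```python
import math

def ceil_vec(x):
    # the vector of smallest integers >= each component of x, i.e. the ceiling vector
    return tuple(math.ceil(xi) for xi in x)

def floor_vec(x):
    # the vector of largest integers <= each component of x, i.e. the floor vector
    return tuple(math.floor(xi) for xi in x)

def dist(x, y):
    # Euclidean distance, the usual choice noted in the text
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
```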
This transformation has several desired properties described as follows.
The three properties derived above assert that our developed transformation meets the guidelines (G1) and (G2). Property 2 states that the transformed problem has no more local optimal solutions than the original problem. In fact, the local optimal solutions of the transformed problem are those of the original problem with respect to a one-way extended neighborhood instead of the immediate neighborhood. The transformed problem generally has fewer local optimal solutions than the original problem (4-3). This is a desirable property, which can reduce the computational effort required in the search for the global optimal solution by skipping shallow local optimal solutions. This transformation gives us the leverage to apply our DDP-based method to effectively solve discrete optimization problems. Property 3 states that both the transformed problem and the original problem share the same global optimal solution. This transformation applies to the optimization problem (4-3) which has an analytical closed-form objective function. It also applies to the optimization problem (4-3) which has a black-box objective function, provided the distance between any two points in the state space is well defined.
Extended Approach
We have so far focused on the continuous-based dynamical approach using the transformation. Another continuous-based approach is developed by employing a direct extension of the discrete solution space of the optimization problem (4-3) into a continuous solution space while maintaining the same form of the objective function. The resulting problem, termed the extended continuous optimization problem, becomes
Minimize UC(x) (6-9)
where the objective function UC: Rn→R has a closed-form expression, and n is the number of degrees of freedom of the original solution space S. It is obvious that the optimal solutions of the original problem (4-3) and those of the extended optimization problem (6-9) are likely to be different, but may not be far apart from each other.
One can solve the original optimization problem (4-3) via solving an approximated optimization problem, such as the extended optimization problem (6-9), in the first stage, and then in the second stage solving the original optimization problem (4-3) using the results obtained from the first stage as initial conditions. The solutions obtained from this two-stage approach are local optimal solutions of the original optimization problem (4-3).
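The two-stage approach above can be sketched as follows; the crude continuous descent used for stage 1, the rounding step bridging the two stages, and the test objective are all illustrative assumptions, not the solvers of the invention.

```python
def two_stage_solve(uc, x0, step=0.1, sweeps=200):
    # stage 1: coordinate-wise descent with a fixed step on the extended
    # (continuous) problem (6-9); an intentionally crude stand-in solver
    x = list(x0)
    for _ in range(sweeps):
        for i in range(len(x)):
            for d in (step, -step):
                y = x[:i] + [x[i] + d] + x[i + 1:]
                if uc(tuple(y)) < uc(tuple(x)):
                    x = y
    # stage 2: round the stage-1 result to the nearest discrete point and
    # refine it with a best-neighborhood search on the original problem (4-3)
    x = [round(xi) for xi in x]
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            for d in (1, -1):
                y = x[:i] + [x[i] + d] + x[i + 1:]
                if uc(tuple(y)) < uc(tuple(x)):
                    x, improved = y, True
    return tuple(x)
```

With the hypothetical objective (x1−2)² + (x2+1)², the stage-1 result is rounded near (2, −1) and stage 2 confirms it as a discrete local optimal solution.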
We note that with the introduction of a transformation on the objective function, the equivalent approach transforms the discrete optimization problem (4-3) into the form of (6-6) that has the same solution space as that of the original problem (4-3). With the introduction of the extended solution space (i.e. a continuous solution space) with the same objective function, the extended approach transforms the discrete optimization problem (4-3) into the form of (6-9). In addition, for clarity of explanation, we put the equivalent continuous optimization problem (6-6) as well as the extended continuous optimization problem (6-9) into the following general form:
Minimize UC(x) (6-10)
where the objective function UC: Rn→R has a closed-form expression, and n is the number of degrees of freedom of the original solution space S. The solution space becomes continuous.
The transformed optimization problem (6-10) belongs to the class of unconstrained continuous optimization problems described by (4-1). A nonlinear dynamical system associated with the optimization problem (6-10) satisfying conditions (LC1)–(LC3) of section 5 can be constructed and put into the following general form:
{dot over (x)}=FU(x) (6-11)
We will develop dynamical methods to systematically compute all the local optimal solutions of the discrete optimization problem (4-3) by using the continuous approach to solve either the equivalent optimization problem (6-6) or the extended optimization problem (6-9), which is put into the general form (6-10).
6.2.1 Hybrid Local Search Method
A hybrid local search method for obtaining a local optimal solution for an unconstrained discrete optimization problem (4-3) in the continuous solution space starting from an initial point is presented below:
A continuous-based method for computing the exit point of the continuous dynamical system (6-11) associated with the optimization problem (4-3) starting from a known local optimal solution, termed herein the method for computing exit point, is presented below:
Starting from a known local optimal solution, say xs, move along the pre-defined search path to detect said exit point, which is the first local maximum of the continuous objective function (6-10) along the pre-defined search path.
Another method for computing the exit point: move along the search path starting from xs, and at each time-step, compute the inner product of the derivative of the search path and the vector field of the nonlinear dynamical system (6-11). When the sign of the inner product changes from positive to negative, say over the interval [t1,t2], then either the point on the search path at time t1 or the point on the search path at time t2 can be used to approximate the exit point.
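This inner-product test can be sketched as follows; the search path, its derivative, and the vector field below are synthetic assumptions chosen only so that the sign change is visible, not the system (6-11) itself.

```python
def exit_point_by_inner_product(path, dpath, field, t_grid):
    """Scan t_grid; when <path'(t), F(path(t))> flips from positive to
    negative between t1 and t2, return path(t1) (path(t2) would serve
    equally well) as the approximate exit point."""
    def inner(t):
        return sum(a * b for a, b in zip(dpath(t), field(path(t))))
    for t1, t2 in zip(t_grid, t_grid[1:]):
        if inner(t1) > 0 and inner(t2) < 0:
            return path(t1)
    return None

# synthetic 1-D setup: straight path psi_t = (t,) with derivative (1,),
# and a field whose component along the path is 1 - t, so the inner
# product changes sign at t = 1
path = lambda t: (t,)
dpath = lambda t: (1.0,)
field = lambda x: (1.0 - x[0],)
```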
6.2.3 Method for Computing DDP
Given a local optimal solution of the optimization problem (4-3), say xs, we develop a continuous-based method for computing the DDP of the associated nonlinear dynamical system (6-11) with respect to the stable equilibrium point xs of system (6-11) and with respect to a pre-defined search path, as follows:
For the optimization problem (4-3), this invention develops an exit-point-based (EP-based) method for finding a local optimal solution starting from a known local optimal solution.
For the optimization problem (4-3), this invention develops a DDP-based method for finding a local optimal solution starting from a known local optimal solution.
For the discrete optimization problem (4-3), this invention develops two groups of continuous-based methods for finding all tier-one local optimal solutions. The first group is continuous-EP-based while the second group is continuous-DDP-based.
Continuous-EP-based Method
For the discrete optimization problem (4-3), this invention develops two dynamical methods for finding all the local optimal solutions. The first method is a continuous-EP-based method while the second method is a continuous-DDP-based method.
Continuous-EP-based Method
has been found before, i.e.
If not found, set
is empty, continue with step (6); otherwise set j=j+1 and go to step (3).
and the set of found dynamic decomposition points
say xsJ, find all its tier-one local optimal solutions by the following steps.
check whether it belongs to the set
If it does, go to step (viii); otherwise set
and proceed to the step (iii).
where ε is a small number.
at which an effective hybrid local search method outperforms the traditional search.
to a discrete point, say
and apply the best neighborhood search method designed for problem (4-3) to obtain an interface point
where an effective hybrid local search method outperforms the best neighborhood search method.
apply the effective hybrid local search method, chosen in step (v), to the discrete optimization problem (4-3) for finding the corresponding local optimal solution, denoted as
has been found before, i.e.
If not found, set
is empty, continue with step (6); otherwise set j=j+1 and go to step (3).
We present, in this section, dynamical methods for solving the constrained discrete optimization problem (4-4).
The level of difficulty in solving the constrained discrete optimization problem (4-4) depends on the structure of the objective function C(x), the solution space S and the feasible set FS. Specifically, it depends on the difficulty to express the element in FS, the size of FS, and the difficulty to systematically enumerate the elements of FS.
For a discrete optimization problem defined by (4-4), a local search algorithm proceeds from a current solution, xk∈FS, and attempts to improve xk by looking for a superior solution in an appropriate neighborhood N(xk) around xk using the following steps.
The first two issues are similar to those of the unconstrained discrete optimization problem (4-3). To handle constraints, the best neighborhood search method is modified into a constrained best neighborhood search method, which chooses as xk+1 only from among the set of feasible neighbors (the neighbors that satisfy the constraints) the one with the greatest descent in the objective function. Likewise, the better neighborhood local search method is modified into a constrained better neighborhood local search method, which chooses as xk+1 the first feasible neighbor that leads to a descent in the objective function. Hence, by applying either the constrained best neighborhood search or the constrained better neighborhood search, one can obtain xk+1∈FS.
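One step of the constrained best-neighborhood search described above can be sketched as follows; the neighborhood (unit moves in one coordinate), the objective, and the constraint in the example are assumptions for illustration.

```python
def neighbors(x):
    # immediate neighbors: change one coordinate by +/-1 (an assumption)
    for i in range(len(x)):
        for d in (1, -1):
            yield x[:i] + (x[i] + d,) + x[i + 1:]

def constrained_best_neighbor(c, feasible, x):
    """Among feasible neighbors of x, return the one with the greatest
    descent in c; return x itself if no feasible neighbor improves on it."""
    cands = [y for y in neighbors(x) if feasible(y) and c(y) < c(x)]
    return min(cands, key=c) if cands else x

# hypothetical objective and constraint
c = lambda x: (x[0] - 3) ** 2 + x[1] ** 2
feasible = lambda x: x[0] + x[1] <= 3
```

Iterating the step from (0, 0) moves through feasible neighbors of steepest descent until the constrained local optimal solution (3, 0) is reached, where every improving neighbor would violate the constraint.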
To address the issue (iii), when no feasible solution at all is known, we may adopt the following approach to find a feasible point.
For solving constrained discrete optimization problem (4-4), we develop dynamical methods based on the following constrained solution-space approach, which is composed of three stages:
We term this approach the constrained solution-space approach because the search for all the local optimal solutions is performed only in the feasible components of the solution space.
In this section, we develop discrete-based dynamical methods to solve constrained discrete optimization problems that satisfy equation (4-5), in which the objective function C: S→R can be either a closed-form expression or a black-box expression, and the constraint function vector G: S→Rm has a closed-form expression. In Stage 2 of the discrete-based dynamical methods, we develop a discrete-based approach to find all the local optimal solutions located in each feasible component. In Stage 3 of the dynamical methods, we develop a continuous-based approach to find all the connected feasible components.
Without loss of generality, we consider the following optimization problem with equality constraints:
Minimize C(x) (7-1)
Subject to
hi(x)=0, i∈I={1,2, . . . ,l} (7-2)
Under some generic conditions, it can be shown that the following feasible set, also termed a feasible component,
M={x∈Rn; H(x){circumflex over (=)}(h1(x),h2(x), . . . ,hl(x))T=0}
is a smooth manifold. In general, the feasible set M can be very complicated with several isolated (path-connected) feasible components; in other words, the feasible set M can be expressed as a collection of several isolated feasible components
where each Mk is a path-connected feasible component which may contain multiple local optimal solutions of the constrained optimization problem (7-1).
We develop a discrete-based approach to find all the local optimal solutions located in each feasible component. Similar to the discrete-based approach developed in this invention for solving unconstrained discrete optimization problem (4-3), we consider the following associated nonlinear discrete dynamical system:
xk+1=f(xk) (7-3)
Here xk∈S, so the solution space of (7-3) is discrete. The discrete dynamical system satisfies the three conditions (LD1)–(LD3) stated in Section 6. We will develop dynamical methods to systematically compute all local optimal solutions of the discrete optimization problem (4-5) inside a feasible component via certain dynamical trajectories of the associated discrete dynamical system (7-3).
7.1 Methods for Finding a Feasible Point
In this patent, we will use a dynamical trajectory-based method described in copending application Ser. No. 09/849,213, filed May 4, 2001, and entitled “DYNAMICAL METHOD FOR OBTAINING GLOBAL OPTIMAL SOLUTION OF GENERAL NONLINEAR PROGRAMMING PROBLEMS”, Inventor Dr. Hsiao-Dong Chiang, which is incorporated herein by reference, to perform the task of systematically locating all the feasible components of (7-2) based on some of the trajectories of the nonlinear dynamical system described by the following general form:
{dot over (x)}=F(x) (7-4)
To construct nonlinear dynamical systems for performing such task, we propose the following guidelines:
One example of nonlinear dynamical systems satisfying conditions (g1) and (g2) for locating all the feasible components of (7-2) is the following:
{dot over (x)}=−DH(x)TH(x) (7-5)
Two methods have been developed for finding a feasible point. The first is a trajectory-based method, while the second is a hybrid local search method.
Trajectory-based Method for Finding a Feasible Point
We next present the algorithm for locating a feasible component of (7-2) starting from an infeasible point as follows.
Integrate the dynamical system (7-4) starting from x0; the resulting trajectory will converge to a point, located in a stable equilibrium manifold, which is a feasible point of (7-2).
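The integration step can be sketched with a forward-Euler scheme applied to system (7-5); the step size, iteration cap, tolerance, and the single unit-circle constraint below are illustrative assumptions.

```python
def find_feasible_point(H, DH, x0, step=0.01, iters=20000, tol=1e-6):
    """Forward-Euler integration of (7-5), x' = -DH(x)^T H(x): the
    trajectory descends ||H(x)||^2 and converges to a point on a stable
    equilibrium manifold, i.e. a feasible point of H(x) = 0."""
    x = list(x0)
    for _ in range(iters):
        h = H(x)
        if max(abs(v) for v in h) < tol:
            break
        J = DH(x)                    # l-by-n Jacobian of H at x
        for j in range(len(x)):      # Euler step: x <- x - step * J^T h
            x[j] -= step * sum(J[i][j] * h[i] for i in range(len(h)))
    return x

# hypothetical single constraint h1(x) = x1^2 + x2^2 - 1 = 0 (unit circle)
H = lambda x: [x[0] ** 2 + x[1] ** 2 - 1.0]
DH = lambda x: [[2.0 * x[0], 2.0 * x[1]]]
```

Starting from the infeasible point (2, 0), the trajectory slides down to a point on the unit circle, i.e. a feasible point of the constraint equations.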
Hybrid Method for Finding a Feasible Point
We next present a hybrid method, incorporating the global convergence property of the trajectory-based method and the speed of a local search method, for finding a feasible point of the constraint equations (7-2).
A Lagrange-based method for obtaining a local optimal solution of a discrete optimization problem (4-5) starting from an infeasible point is developed.
By introducing the slack variable y to convert the inequality constraints to the equality constraints, it follows:
g(x, y)=G(x)+y²=0 (7-6)
A Lagrangian function can be constructed as follows.
L(x,λ)=C(x)+λTg(x, y) (7-7)
It is known that if (x*,λ*) constitutes a saddle point of the Lagrangian function (7-7), then x* is a local optimal solution of the discrete optimization problem (4-5). In this case, a gradient operator ΔxL(x,λ) is defined as follows.
Definition. A difference gradient operator ΔxL(x,λ) is defined as follows:
i.e. at most one δi is non-zero, and L(x+ΔxL(x,λ),λ)≦L(x,λ) is satisfied. Furthermore, define ΔxL(x,λ)=0 if for any x′ that differs from x by at most value 1 of one variable, L(x,λ)≦L(x′,λ).
Hence, ΔxL(x,λ) always points in a direction in which the objective function descends, and ΔxL(x,λ) moves x to one of its neighbors, say x′, that has a lower value of L(x,λ). If no such x′ exists, then ΔxL(x,λ)=0.
With this definition, the difference gradient operator ΔxL(x,λ) can be represented as:
We note that ΔxL(x,λ) is not unique, as long as ΔxL(x,λ) points in a better-neighborhood direction. One may use either the greedy search direction (best neighborhood search direction) or the gradient descent direction as the direction for ΔxL(x,λ).
We then form a discrete Lagrange-based update rule:
xk+1=xk+ΔxL(xk,λk)
λk+1=λk+U(k) (7-9)
U(k) is defined as follows.
where: λi(k) is a non-negative adjustment factor that determines the force driving the search process away from previously encountered violations. λi(k) can be set, for example, to 1.
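The update rule (7-9) can be sketched as follows. Here the x-update uses a best-neighbor realization of ΔxL, and, as an assumption following standard discrete Lagrangian methods, the multiplier term penalizes the magnitude of the constraint violations; the source's exact formulas are summarized, not reproduced.

```python
def discrete_lagrange_search(c, g, x0, lam0, max_iters=500):
    # L(x, lam) = c(x) + sum_i lam_i * |g_i(x)|  (the |.| penalty form is
    # an assumption borrowed from standard discrete Lagrangian methods)
    def L(x, lam):
        return c(x) + sum(l * abs(gi) for l, gi in zip(lam, g(x)))

    x, lam = tuple(x0), list(lam0)
    for _ in range(max_iters):
        # Delta_x L: best improving unit-move neighbor (zero if none improves)
        nbrs = [x[:i] + (x[i] + d,) + x[i + 1:]
                for i in range(len(x)) for d in (1, -1)]
        best = min(nbrs, key=lambda y: L(y, lam))
        if L(best, lam) < L(x, lam):
            x = best                               # x_{k+1} = x_k + Delta_x L
        else:
            viol = [abs(gi) for gi in g(x)]
            if max(viol) == 0:
                break                              # feasible local optimum
            # lam_{k+1} = lam_k + U(k): raise lam_i on violated constraints
            lam = [l + (1 if v > 0 else 0) for l, v in zip(lam, viol)]
    return x, lam

# hypothetical problem: minimize x1^2 + x2^2 subject to x1 + x2 - 4 = 0
c = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: [x[0] + x[1] - 4]
```

From (0, 0) with λ = 0, the multiplier grows until the penalty outweighs the objective descent, and the search settles at the feasible point (2, 2).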
The slack variable y is determined as follows.
We next present a discrete Lagrange-based method for finding a local optimal solution of the constrained discrete optimization problem (4-4).
A hybrid constrained local search method for obtaining a local optimal solution of the constrained discrete optimization problem (4-5) starting from a feasible initial point is presented below:
A method for computing the feasible exit point of a discrete dynamical system (7-3) associated with a constrained discrete optimization problem (4-5) along a search path starting from a known local optimal solution, termed herein the discrete-based method for computing a feasible exit point (i.e. an exit point satisfying all the constraints), is presented below:
Starting from a known local optimal solution, say xs, move along the pre-defined search path to detect said exit point, which is the first local maximum of the objective function (4-5) along the pre-defined search path. Check constraint violations during the process. If a constraint is violated, then stop the search process.
7.5 Methods for Computing Feasible DDP
For the constrained discrete optimization problem (4-5), we develop a dynamical method for computing the feasible dynamical decomposition point (DDP) with respect to a local optimal solution and with respect to a pre-defined search path as follows:
For the constrained discrete optimization problem (4-5), we develop an exit-point-based method for finding a new local optimal solution starting from a known local optimal solution as follows:
For the constrained discrete optimization problem (4-5), a DDP-based method for finding a local optimal solution starting from a known local optimal solution, say xs, is developed as follows:
We develop in the following a dynamical discrete-based method for finding all the tier-one local optimal solutions, lying within a feasible component, of the constrained discrete optimization problem, starting from a known local optimal solution.
We develop in the following a discrete-based dynamical method for finding all the local optimal solutions lying within a feasible component of the constrained discrete optimization problem (4-5), starting from a given initial point.
say, xsJ, find all its tier-one local optimal solutions.
and proceed to the next step; otherwise, go to the step (iv).
has been found before, i.e.
If not found, set
is empty, continue with step (6); otherwise set j=j+1 and go to step (3).
We develop in the following a dynamical method for finding an adjacent isolated feasible component, starting from a known feasible component, of the constrained discrete optimization problem (4-5).
We develop in the following dynamical discrete-based methods for finding all the local optimal solutions of the constrained discrete optimization problem (4-5), starting from a given initial point. If all the local optimal solutions have been found, then the global optimal solution is, thus, obtained; otherwise, the best solution among all the found local optimal solutions is obtained.
located in an adjacent feasible component;
apply an effective (hybrid) local search method to find a local optimal solution, say
Check if
has been found before, i.e.
If not found, set
and
A VLSI system is partitioned at several levels due to its complexity: (1) system level partitioning, in which a system is partitioned into a set of sub-systems whereby each sub-system can be designed and fabricated independently on a single PCB; (2) board level partitioning, in which a PCB is partitioned into VLSI chips; (3) chip level partitioning, where a chip is divided into smaller sub-circuits. At each level, the constraints and objectives of the partitioning are different. The discussion here concentrates on chip level partitioning; the results can be extended to other partitioning levels with some effort.
The partitioning problem can be expressed more naturally in graph theoretic terms. A hypergraph G=(V, E) representing a partitioning problem can be constructed as follows. Let V={v1,v2, . . . ,vn} be a set of vertices and E={e1,e2, . . . ,em} be a set of hyperedges. Each vertex represents a component. There is a hyperedge joining the vertices whenever the components corresponding to these vertices are to be connected. Thus, each hyperedge is a subset of the vertex set i.e., ei⊂V,i=1,2, . . . ,m. In other words, each net is represented by a set of hyperedges. The area of each component is denoted as a(vi),1≦i≦n. The partitioning problem is to partition V into V1,V2, . . . ,Vk where
Vi∩Vj=∅, i≠j
∪i=1kVi=V
A partition is also referred to as a cut. The cost of a partition is called the cut-size, which is the number of hyperedges crossing the cut. Let Cij be the cut-size between partitions Vi and Vj. Each partition Vi has an area
and a terminal count Count(Vi). The maximum and the minimum areas, that a partition Vi can occupy, are denoted as
respectively. The maximum number of terminals that a partition Vi can have is denoted as Ti. Let P={p1,p2, . . . ,pm} be a set of hyperpaths. Let H(pi) be the number of times a hyperpath pi is cut.
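The cut-size just defined can be computed directly from the hypergraph; the four-vertex instance below is a hypothetical illustration, not a benchmark from the text.

```python
def cut_size(hyperedges, part):
    """Number of hyperedges crossing the cut: a hyperedge is cut when its
    vertices do not all lie in the same partition."""
    return sum(1 for e in hyperedges if len({part[v] for v in e}) > 1)

# hypothetical instance: V = {v1..v4}, three hyperedges, bipartition V1/V2
hyperedges = [{"v1", "v2"}, {"v2", "v3", "v4"}, {"v3", "v4"}]
part = {"v1": 0, "v2": 0, "v3": 1, "v4": 1}
```

For this bipartition only the middle hyperedge spans both sides, so the cut-size is 1; interleaving the vertices instead would cut all three hyperedges.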
The constraints and the objective functions for the problem at chip level partitioning are described as follows.
The partitioning problem belongs to the category in which no closed-form objective function is available, and it appears that no distance can be defined to fill the continuous space between two partition instances. Hence, only the discrete version of the solution algorithm can be developed.
We have applied the embodiment of our invention in section 6 to develop a discrete-based, DDP-based method to solve the partitioning problem. For the hybrid local search, the industry-standard FM method is used. We have implemented the developed methods and evaluated them on several benchmark problems. The test results shown in Table 4 are obtained using the minimum cutsets of 50 runs from different initial points under a 45–55% area balance criterion and real cell sizes. The simulation results using the standard FM method are also shown for comparison.
By incorporating the DDP-based method with the traditional FM method, the quality of the solution is significantly improved, as shown in Table 4. The improvements range from 9.5% to 268%, with larger circuits achieving larger improvements. Overall, the DDP-based method is significantly better than the FM method. In some cases, the DDP-based method outperforms the results of the current leading-edge multi-level partitioning approach. Our recent work shows that, by incorporating the multi-level partitioning approach as the hybrid local search method, the DDP-based method is also able to achieve a significant improvement over the results of the multi-level partitioning approach.
Another improvement of the DDP-based method is that its solution deviation is significantly smaller than that of the FM method. This means that the DDP-based method needs far fewer runs than the FM method to obtain a good solution.
References:
Number | Name | Date | Kind |
---|---|---|---|
5471408 | Takamoto et al. | Nov 1995 | A |
6076030 | Rowe | Jun 2000 | A |
6490572 | Akkiraju et al. | Dec 2002 | B1 |
6606529 | Crowder et al. | Aug 2003 | B1 |
6826549 | Marks et al. | Nov 2004 | B1 |
20020111780 | Sy | Aug 2002 | A1 |
20020183987 | Chiang | Dec 2002 | A1 |
Number | Date | Country | |
---|---|---|---|
20030220772 A1 | Nov 2003 | US |