1. Field of the Invention
The invention pertains to the field of modeling and optimization. More particularly, the invention pertains to methods for solving nonlinear optimization problems. Practical applications include finding optimal power flow in smart grids and short-term load forecasting systems.
2. Description of Related Art
Optimization technology has practical applications in almost every branch of science, business, and technology. Indeed, a large variety of quantitative issues such as decision, design, operation, planning, and scheduling can be perceived and modeled as either continuous or discrete nonlinear optimization problems. These problems abound in practical systems arising in the sciences, engineering, and economics. Typically, the overall performance (or measure) of a system can be described by a multivariate function, called the objective function. According to this generic description, one seeks the best solution of a nonlinear optimization problem, often expressed by a real vector, in the solution space that satisfies all stated feasibility constraints and minimizes (or maximizes) the value of the objective function. The vector, if it exists, is termed the global optimal solution.
The process of finding the global optimal solution, namely, the process of global optimization, has many industrial applications in different areas. The optimal power flow (OPF) problem in electric power systems is one example, where the target is to minimize the system total production cost or the system total power losses, and the decision variables are quantities associated with the devices of the power network that can be adjusted, such as the power outputs of generators, the voltage settings at system nodes, the amount of shunt capacitors deployed, and the tap positions of transformers. A tank design for a multi-product plant in chemical engineering is another example, where the target is to minimize the sum of the production costs per ton of each product produced, and the decision variables are the quantities of the products. Yet another example, from the power industry, is training artificial neural networks (ANN) to forecast system power demands, inter-area interchanged energy, and renewable energy (wind, solar, biomass, etc.) generation, where the target is to minimize the differences between the outputs produced by the ANN and the actual quantities, and the decision variables are the structure of the ANN (i.e., the number of layers and the number of nodes at different layers) and its connection weights.
For practical applications, the underlying objective functions are often nonlinear and depend on a large number of variables. This makes the task of searching the solution space for the global optimal solution very challenging. The primary challenge is that, in addition to the high dimensionality of the solution space, there are many local optimal solutions in the solution space where a local optimal solution is optimal in a local region of the solution space, but not the global solution space. The global optimal solution is just one solution and yet, both the global optimal solution and local optimal solutions share the same local properties. In general, the number of local optimal solutions is unknown and can be quite large. Furthermore, the objective function values at the local optimal solutions and the global optimal solution may differ significantly. Hence, there are strong motivations to develop effective methods for finding the global optimal solution.
One popular method for solving nonlinear optimization problems is to use an iterative local improvement search procedure, which can be described as follows: start from an initial vector and search for a better solution in its neighborhood. If an improved solution is found, repeat the search procedure using the new solution as the initial point; otherwise, the search procedure will be terminated. However, such local improvement search methods usually get trapped at local optimal solutions and are unable to escape from these local optimal solutions. In fact, a great majority of existing nonlinear optimization methods for solving optimization problems produce only local optimal solutions but not the global optimal solution. Some popular local methods include Newton's method, the Quasi-Newton method, the trust-region search method, the quadratic programming method, and the interior point method.
The drawback of iterative local improvement search methods has motivated the development of more sophisticated local search methods designed to find better solutions via introducing special mechanisms that allow the search process to escape from local optimal solutions. The underlying “escaping” mechanisms use certain search strategies, accepting a cost-deteriorating neighborhood to make an escape from a local optimal solution possible. These sophisticated global search methods, which are also called metaheuristic methods, include simulated annealing, genetic algorithm, Tabu search, evolutionary programming, and particle swarm operator methods. However, these sophisticated global search methods require intensive computational effort and usually, still cannot find the globally optimal solution.
In the present invention, two popular metaheuristic methods, namely, the particle swarm optimization (PSO) method and the genetic algorithm (GA), are of special interest. It needs to be mentioned that the methods presented in this invention are also applicable to other metaheuristic methods, such as simulated annealing, Tabu search, evolutionary programming, and differential evolution.
Particle swarm optimization (PSO) is a metaheuristic evolutionary computation technique developed by Kennedy and Eberhart (“Particle swarm optimization”, Proceedings IEEE International Conference on Neural Networks, Piscataway, N.J., pp. 1942-1948, 1995). This technique is a form of swarm intelligence in which the behavior of a biological social system, like a flock of birds, is simulated. PSO methods play an important role in solving nonlinear optimization problems. Significant R&D efforts have been spent on PSO, and several variations of PSO have been developed. However, PSO has several drawbacks in searching for the global optimal solution. One drawback, which is common to other stochastic search methods, is that PSO is not guaranteed to converge to the global optimal solution and can easily converge to a local optimal solution. Another drawback is that PSO is computationally demanding and has slow convergence rates.
The genetic algorithm (GA) is another search metaheuristic that mimics the process of natural evolution and is used to generate useful solutions to optimization and search problems (see, for example, Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge, Mass., 1996). The algorithm repeatedly modifies a population of individual solutions. At each step, the genetic algorithm randomly selects individuals, or search instances, from the current population and uses them as parents to produce the offspring for the next generation. Over successive generations, the population evolves toward an optimal solution. GA exploits historical information to direct the search into the region of better performance (better fitness) within the search space. It follows the principle of “survival of the fittest” in nature: competition among individuals, or search instances, for scarce resources results in the fittest individuals dominating over the weaker ones.
The term TRUST-TECH used herein is an acronym for “TRansformation Under STability-reTaining Equilibria Characterization”. The TRUST-TECH methodology is a dynamical method for obtaining a set of local optimal solutions of general optimization problems, including the steps of first finding, in a deterministic manner, one local optimal solution starting from an initial point, and then finding another local optimal solution starting from the previously found one until all of the local optimal solutions are found, and then finding the global optimal solution from the local optimal solutions.
Wang and Chiang (“ELITE: Ensemble of Optimal Input-Pruned Neural Networks Using TRUST-TECH”, IEEE Transactions on Neural Networks, Vol. 22, pp. 96-109, 2011) disclose an ensemble of optimal input-pruned neural networks using a TRUST-TECH (ELITE) method for constructing a high-quality ensemble through an optimal linear combination of accurate and diverse neural networks.
Lee and Chiang (“A dynamical trajectory-based methodology for systematically computing multiple optimal solutions of general nonlinear programming problems”, IEEE Transactions on Automatic Control, Vol. 49, pp. 888-899, 2004) disclose a dynamical trajectory-based methodology for systematically computing multiple local optimal solutions of general nonlinear programming problems with disconnected feasible components satisfying nonlinear equality/inequality constraints.
In the above-cited 2004 Lee and Chiang paper, the TRUST-TECH method for finding, starting from a local optimal solution, a set of local optimal solutions is described as follows, which is shown in the flowchart of
Another version of the TRUST-TECH method for finding, starting from a local optimal solution, a set of local optimal solutions, also set out in the 2004 paper, is described as follows, which is shown in the flowchart of
Note: Given a local optimal solution of a general unconstrained continuous optimization problem (i.e., a stable equilibrium point (SEP) of the associated nonlinear dynamical system) and a predefined search path starting from the SEP, we describe a method for computing the exit point of the nonlinear dynamical system associated with the optimization problem.
The method is as follows: starting from a known local optimal solution, say xs, move along a predefined search path to compute said exit point, which is the first local maximum of the objective function of the optimization problem along the predefined search path.
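For illustration only, the exit-point computation described above can be sketched as follows. The double-well objective, the step size, and the sampling scheme are illustrative assumptions, not part of the cited method, which operates on general objective functions.

```python
# Hedged sketch: find the exit point, i.e., the first local maximum of the
# objective along a predefined search path starting from a known local
# optimal solution x_s.  Objective and step size are illustrative.

def find_exit_point(objective, x_s, direction, step=0.01, max_t=20.0):
    """Walk along x_s + t*direction; return the first local maximum of the
    objective along the ray (the exit point), or None if none is found."""
    prev_point, prev_val = list(x_s), objective(list(x_s))
    increased = False
    t = step
    while t <= max_t:
        point = [x + t * d for x, d in zip(x_s, direction)]
        val = objective(point)
        if increased and val < prev_val:   # value just started decreasing:
            return prev_point              # the previous point is the exit point
        if val > prev_val:
            increased = True
        prev_point, prev_val = point, val
        t += step
    return None

# Example: one-dimensional double-well objective C(x) = (x^2 - 1)^2, with a
# local minimum at x = -1; the exit point toward +x lies near x = 0.
exit_pt = find_exit_point(lambda p: (p[0]**2 - 1)**2, [-1.0], [1.0])
```

The returned point is accurate only to within one step along the path; a practical implementation would refine it, e.g., by bisection on the directional derivative.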
Chiang and Chu (“Systematic search method for obtaining multiple local optimal solutions of nonlinear programming problems”, IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, Vol. 13, pp. 99-109, 1996) disclose systematic methods to find several local optimal solutions for general nonlinear optimization problems.
All of the above-mentioned references are hereby incorporated by reference herein.
A method determines a global optimal solution of a system defined by a plurality of nonlinear equations. The method includes the first stage of applying a metaheuristic method to cluster a plurality of search instances into at least one group or “promising region” for the plurality of nonlinear equations. The method also includes the second stage of selecting a center point and a plurality of top points from the search instances in each promising region and applying a local method, starting from the center point and top points for each group, to find a local optimal solution for each group in a tier-by-tier manner. The method further includes the third stage of applying a TRUST-TECH methodology to each local optimal solution to find a set of tier-1 optimal solutions and identifying a best solution among the local optimal solutions and the tier-1 optimal solutions as the global optimal solution. The method further includes applying a TRUST-TECH methodology to each tier-1 optimal solution to find a set of tier-2 optimal solutions and identifying a best solution among the local optimal solutions and the tier-1 and tier-2 optimal solutions as the global optimal solution. In some embodiments, the metaheuristic method is a particle swarm optimization methodology. In other embodiments, the metaheuristic method is a genetic algorithm methodology.
In some embodiments, to overcome the limitations of metaheuristic methods, the present methodology uses a metaheuristic-guided TRUST-TECH methodology, which is highly efficient and robust, to solve global unconstrained optimization problems. The methodology preferably has the following goals in mind:
In some embodiments, the present methods are automated. At least one computation of the present methods is performed by a computer. Preferably all of the computations in the present methods are performed by a computer. A computer, as used herein, may refer to any apparatus capable of automatically carrying out computations based on predetermined instructions in a predetermined code, including, but not limited to, a computer program.
In some embodiments, the present methods are executed by one or more computers following the program instructions of a computer program product on at least one computer-readable, tangible storage device. The computer-readable, tangible storage device may be any device readable by a computer within the spirit of the present invention.
Referring to
The present methods are efficient and robust methods for solving global unconstrained optimization problems. In one embodiment, the present methods are termed herein as metaheuristic-guided TRUST-TECH methods. Referring to
The premises for the present methodology to find high-quality local optimal solutions preferably include the following:
1) All of the search instances of the metaheuristic method have reached a high level of consensus by forming several groups. Each group contains a number of instances (large or small) that lie close to each other in the search space.
2) Each group of instances reveals that high-quality local optimal solutions, even the global optimal solution, are located in the region ‘covered’ by the search instances and are close to the search instances.
3) From the high-quality local optimal solutions obtained by the metaheuristic method, the TRUST-TECH methodology effectively finds all of the tier-1 and tier-2 local optimal solutions located in the covered region of the search space.
4) The set of all the tier-0, tier-1, and tier-2 local optimal solutions obtained by the TRUST-TECH methodology contains a set of high-quality local optimal solutions or even the global optimal solution.
The only reliable way to find the global optimal solution of an unconstrained optimization problem is to first find all the high-quality local optimal solutions and then, from them, find the global optimal solution. The TRUST-TECH methodology is a dynamical method for obtaining a set of local optimal solutions of general optimization problems that includes the steps of first finding, in a deterministic manner, one local optimal solution, starting from an initial point, and then finding another local optimal solution, starting from the previously found one until all the local optimal solutions are found, and then finding the global optimal solution from the local optimal solutions. The TRUST-TECH methodology framework is illustrated in solving the following unconstrained nonlinear programming problem.
Without loss of generality, an n-dimensional optimization problem can be formulated:
min_{x∈R^n} C(x),  (1)
where C(x): R^n → R is a function that is bounded below and possesses only finitely many local optimal solutions. It is noted that maximization problems are also readily covered by (1), since
max_{x∈R^n} C(x)
is equivalent to
min_{x∈R^n} −C(x).
Therefore, only minimization will be considered in the following description of the optimization problem. A focus of solving this problem is to locate all or multiple local optimal solutions of C(x). The TRUST-TECH methodology solves this optimization problem by first defining a dynamical system:
ẋ(t) = −∇C(x),  x ∈ R^n.  (2)
Moreover, the stable equilibrium points (SEPs) of the dynamical system (2) are in one-to-one correspondence with the local optimal solutions of the optimization problem (1). Because of this transformation and correspondence, we have the following results.
First, a local optimal solution of the optimization problem (1) corresponds to a stable equilibrium point of the gradient system (2).
Second, the task of computing multiple local optimal solutions of the optimization problem (1) is transformed into the task of searching the union of the stability regions of the defined dynamical system, each of which contains one distinct SEP.
Third, an SEP can be computed using a trajectory method or using a local method, with a trajectory point lying in its stability region as the initial point.
Finally, this transformation allows each local optimal solution of the problem (1) to be located via each stable equilibrium point of the gradient system (2).
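As a simplified sketch (not the claimed implementation), the gradient system (2) can be integrated numerically with forward-Euler steps; a trajectory started from any point inside a stability region converges to that region's SEP, i.e., a local minimum of C. The objective, step size, and tolerance below are illustrative assumptions.

```python
# Hedged sketch: forward-Euler integration of the gradient system
# x_dot = -grad C(x).  A trajectory started inside a stability region
# converges to that region's stable equilibrium point (a local minimum).

def integrate_gradient_system(grad, x0, dt=0.01, steps=20000, tol=1e-8):
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        if max(abs(gi) for gi in g) < tol:   # near an equilibrium: gradient ~ 0
            break
        x = [xi - dt * gi for xi, gi in zip(x, g)]
    return x

# Example: C(x) = (x^2 - 1)^2 with grad C = 4x(x^2 - 1); starting at
# x = 0.3, inside the stability region of x = 1, the trajectory settles
# at the SEP x = 1.
sep = integrate_gradient_system(lambda p: [4 * p[0] * (p[0]**2 - 1)], [0.3])
```

This is the "trajectory method" option mentioned above; the "local method" option would instead run, e.g., a quasi-Newton iteration from the same initial point.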
The task of selecting proper search directions for locating another local optimal solution from a known local optimal solution of the unconstrained optimization problem in an efficient way is very challenging. Starting from a local optimal solution (i.e., an SEP), there are several possible search directions that may be chosen as a subset of dominant eigenvectors of the objective Hessian at the SEP. However, computing Hessian eigenvectors, even dominant ones, is computationally demanding, especially for large-scale problems. Another choice is to use random search directions, but they need to be orthogonal to each other to span the search space and maintain a diverse search. It appears that effective directions in general have a close relationship with the structure of the objective function (and the feasible set for constrained problems). Hence, exploitation of the structure of the objective under study proves fruitful in selecting search directions.
By exploring the TRUST-TECH methodology's capability of escaping from local optimal solutions in a systematic and deterministic way, it becomes feasible to locate multiple local optimal solutions in a tier-by-tier manner. As a result, multiple high-quality local optimal solutions are obtainable.
According to the characteristics of the TRUST-TECH method and metaheuristic methods, the present methods are developed as a metaheuristic-guided TRUST-TECH methodology for solving general nonlinear optimization problems of the form (1). Referring to
The metaheuristic method preferably guides each search instance to promising regions that may contain the global optimal solution. However, since each search instance has different information regarding the location of the global optimal solution, the search instances hold different views of that location; therefore, the search instances may gather at several different regions of the search space. In other words, these search instances tend to form groups of instances as they progress. They preferably reach an “equilibrium state” of consensus that meets both of the following conditions: 1) the number of groups of instances is not changing, and 2) the members in each group are not changing.
Different search instances will settle down in different locations, forming several different groups in the search space; therefore, the instances do not form only one group. In addition, it should be noted that the largest group, i.e., the group containing the greatest number of search instances, does not necessarily indicate the region with members of search instances that will settle down to the global optimal solution. In some cases, distinct search instances with outstanding performance move towards the region containing the global optimal solution.
In addition, the number of search instances in each group and the quality of the fitness value of each instance do not necessarily reveal information regarding the quality of local optimal solutions lying in the region. Consequently, the region in which each group of instances settles down is preferably exploited by the TRUST-TECH method in a tier-by-tier manner to obtain high-quality local optimal solutions. Therefore, all groups are preferably explored to make sure the global optimal solution is obtained.
To make the assistance more efficient, stage I clusters all of the search instances, using effective supervised or unsupervised grouping schemes such as the Iterative Self-Organizing Data Analysis Technique (ISODATA), to identify the groups after a certain number of iterations. It should be noted that ISODATA is an unsupervised clustering method, and the user needs to provide threshold values that determine the number of groups and the members in each group. In view of the results of clustering, the stopping criterion (i.e., the consensus condition) of stage I is met when all search instances have reached a consensus. If not, the metaheuristic process continues the exploration stage until the stopping criterion is met.
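For illustration only, the stage-I consensus test can be sketched as follows. A greedy distance-threshold grouping stands in for ISODATA here (ISODATA would additionally require user-supplied thresholds on group count and membership); the radius and the sample points are illustrative assumptions.

```python
# Hedged sketch of the stage-I consensus check.  Search instances are
# grouped by a simple distance threshold (a stand-in for ISODATA);
# consensus holds when neither the number of groups nor the group
# membership changes between successive iterations.

def group_instances(instances, radius=0.5):
    """Greedy grouping: an instance joins the first group all of whose
    members lie within `radius` of it (illustrative scheme)."""
    groups = []
    for idx, x in enumerate(instances):
        placed = False
        for g in groups:
            if all(sum((a - b)**2 for a, b in zip(x, instances[j]))**0.5 <= radius
                   for j in g):
                g.append(idx)
                placed = True
                break
        if not placed:
            groups.append([idx])
    return [tuple(sorted(g)) for g in groups]

def consensus_reached(prev_groups, curr_groups):
    """Stopping criterion: same number of groups and same members."""
    return prev_groups is not None and set(prev_groups) == set(curr_groups)

# Example: four instances forming two tight clusters.
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
g1 = group_instances(pts)
g2 = group_instances(pts)   # unchanged swarm -> consensus reached
```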
Referring to
Referring to
After stage I, the methodology preferably enters stage II, which is the guiding stage. This stage serves as the interface between the metaheuristic method and the TRUST-TECH method. Referring to
1) The groups or clusters of search instances formed in stage I are the input (block 501).
2) The top few search instances and the center search instance in each group are selected as initial points for an effective local method (block 502). A search instance is designated a top instance if it yields one of the best objective function values. The center instance is the one closest to the centroid of the group.
3) Starting from these initial points, the local method is applied to search for the corresponding local optimal solutions (block 503). The local method can be, but is not limited to, Newton's method, the quasi-Newton method, the trust-region search method, the quadratic programming method, or the interior point method.
The outputs 504 of this stage are the local optimal solutions obtained from each group. The number of local optimal solutions from each group is no more than the number of initial points.
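Stage II can be sketched, for illustration only, as below: the top few instances and the centroid-nearest instance of each group are selected, and a local method is run from each. Plain gradient descent stands in for the Newton-type and interior-point methods named above; the group, objective, and learning rate are illustrative assumptions.

```python
# Hedged sketch of stage II (the guiding stage): for each group, pick the
# best few instances plus the one nearest the group centroid, then apply a
# local method from each.  Plain gradient descent is a stand-in for the
# Newton / quasi-Newton / interior-point methods named in the text.

def select_initial_points(group, objective, n_top=3):
    centroid = [sum(c) / len(group) for c in zip(*group)]
    center = min(group,
                 key=lambda x: sum((a - b)**2 for a, b in zip(x, centroid)))
    top = sorted(group, key=objective)[:n_top]   # best objective values first
    return top + [center]

def local_method(grad, x0, lr=0.01, steps=5000):
    x = list(x0)
    for _ in range(steps):
        x = [xi - lr * gi for xi, gi in zip(x, grad(x))]
    return x

# Example: one group of instances inside the basin of x = 1 for the
# objective C(x) = (x^2 - 1)^2; all initial points descend to that SEP.
group = [(0.8,), (1.3,), (0.9,), (1.1,)]
starts = select_initial_points(group, lambda p: (p[0]**2 - 1)**2)
locals_found = [local_method(lambda p: [4 * p[0] * (p[0]**2 - 1)], s)
                for s in starts]
```

As the text notes, duplicate initial points may converge to the same solution, so the number of distinct local optimal solutions per group is at most the number of initial points.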
Stage II is shown schematically in
The TRUST-TECH method plays an important role in stage III, helping the local optimization method to escape from one local optimal solution and move toward another local optimal solution. The TRUST-TECH method preferably exploits all of the local optimal solutions in each “covered” region in a tier-by-tier manner.
Referring to
It is interesting to note that the search space of stage III is the union of the stability regions of the seed local optimal solutions from stage II, the stability region of each tier-one local optimal solution from stage III, and the stability region of each tier-two local optimal solution from stage III. The exploitation procedure starts from the local optimal solutions obtained at stage II in each group, i.e., the seed local optimal solutions. The top few local optimal solutions among all of the tier-one local optimal solutions, and possibly some of the tier-two local optimal solutions, are the outputs of this stage.
Referring to
Referring to
Theoretically speaking, the TRUST-TECH methodology may continue to find the set of tier-3 local optimal solutions at the expense of considerable computational efforts. From experience, however, in the set of tier-1 local optimal solutions, there usually exists a very high-quality local optimal solution, if not the global optimal solution. Hence, the exploitation process is terminated after finding all the first-tier local optimal solutions. If necessary, the tier-2 local optimal solutions may be obtained in stage III.
The TRUST-TECH methodology may search all of the local optimal solutions in a tier-by-tier manner and then search for the high-quality optimal solution among them. If the initial point is not close to the high-quality optimal solution, then the task of finding high-quality optimal solutions may take several tiers of local optimal solution computations. Hence, an important aim of stage I is to reduce the number of tiers required to be computed at stage III. All of the search instances of the metaheuristic stage are preferably grouped into no more than a few groups of search instances when all the search instances have reached a consensus. More preferably, all of the search instances of the metaheuristic method are grouped into no more than three groups. It is likely that local optimal solutions in these regions contain the high-quality optimal solution.
There is no theoretical proof that the locations of the top few selected local optimal solutions are close to the high-quality optimal solution. From experience, however, the high-quality optimal solutions were obtained in all numerical studies. Selecting the top-performing search instances from each group as initial points in the guiding stage allows the scheme embedded in stage III to be effective.
In summary, a three-stage metaheuristic-guided TRUST-TECH methodology preferably proceeds in the following manner:
Use a metaheuristic method to solve the optimization problem. After a certain number of iterations, apply a grouping scheme (e.g., ISODATA) to all search instances to form the groups. In some embodiments, the number of iterations is predetermined. In other embodiments, the number of iterations is based on meeting a predetermined criterion. When the search instances in each group and the number of groups do not change with further iterations, this implies that all search instances have reached a consensus. Then, the stopping condition is met and stage I is completed.
Select the top few search instances in terms of their objective function value and the center instance from each group. In a preferred embodiment, the top three search instances are selected. Starting from each selected search instance, apply a local optimization method to find the corresponding local optimal solution. These local optimal solutions are then used as guidance for the TRUST-TECH methodology to search for the corresponding tier-one local optimal solutions during stage III.
Starting with each obtained (tier-0) local optimal solution, apply the TRUST-TECH methodology to intelligently move away from this local optimal solution and find the corresponding set of tier-1 local optimal solutions. After finding the set of tier-1 local optimal solutions, the TRUST-TECH method continues to find the set of tier-2 local optimal solutions, if necessary. Finally, identify the best local optimal solution among tier-0, tier-1, and tier-2 local optimal solutions.
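For illustration only, the tier-by-tier exploitation of stage III can be sketched as follows. The exit-point march, the overshoot past the exit point, and plain gradient descent are simplified stand-ins for the TRUST-TECH procedures described above; the double-well objective and all parameters are illustrative assumptions.

```python
# Hedged sketch of stage III: from a tier-0 solution (seed SEP), probe a
# few search directions, cross the exit point, and descend to the
# neighboring local minimum, collecting it as a tier-1 solution.

def descend(grad, x0, lr=0.01, steps=20000):
    x = list(x0)
    for _ in range(steps):
        x = [xi - lr * gi for xi, gi in zip(x, grad(x))]
    return x

def tier1_solutions(objective, grad, seed, directions, step=0.01, overshoot=0.05):
    found = []
    for d in directions:
        # march along the ray until the objective first turns downward
        prev = objective(list(seed))
        increased, exit_t, t = False, None, step
        while t < 20.0:
            val = objective([x + t * di for x, di in zip(seed, d)])
            if increased and val < prev:
                exit_t = t          # just past the exit point
                break
            increased = increased or val > prev
            prev, t = val, t + step
        if exit_t is None:
            continue                # no exit point along this direction
        # step slightly past the exit point, then descend to the neighbor SEP
        start = [x + (exit_t + overshoot) * di for x, di in zip(seed, d)]
        found.append(descend(grad, start))
    return found

# Example: double well C(x) = (x^2 - 1)^2 with seed SEP x = -1; the +x
# direction yields the tier-1 solution x = +1, while the -x direction has
# no exit point and is skipped.
tier1 = tier1_solutions(lambda p: (p[0]**2 - 1)**2,
                        lambda p: [4 * p[0] * (p[0]**2 - 1)],
                        [-1.0], [[1.0], [-1.0]])
```

Repeating the same procedure with each tier-1 solution as the seed would produce the tier-2 solutions, and the best solution over all tiers is retained.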
In one embodiment, the following Particle Swarm Optimization (PSO)-guided TRUST-TECH methodology is used for solving the general unconstrained optimization problem of the form (1).
There are several variants of PSO methods to which the present methodology is applicable. As an illustration, the traditional PSO methodology is used in the following presentation. A search instance is also called a particle of the PSO method. In the initialization phase of PSO, the positions and velocities of all particles are randomly initialized. The fitness value, which is the objective function value, is calculated at each initialized position. These fitness values are, respectively, the pbest of each particle, i.e., the best fitness each particle has achieved thus far. Among these fitness values, the best one is the initial gbest, which is the best fitness value among all of the particles thus far.
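The initialization phase just described can be sketched, for illustration only, as follows; the bounds, zero initial velocities, and the sample objective are illustrative assumptions.

```python
# Hedged sketch of PSO initialization: random positions, initial
# velocities, per-particle pbest, and the swarm-wide gbest taken from the
# best initial fitness value.

import random

def init_swarm(objective, n_particles, dim, lo=-2.0, hi=2.0):
    xs = [[random.uniform(lo, hi) for _ in range(dim)]
          for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]   # illustrative choice
    pbest = [list(x) for x in xs]          # each particle's best so far
    gbest = list(min(xs, key=objective))   # best over the whole swarm
    return xs, vs, pbest, gbest

# Example: five one-dimensional particles on C(x) = (x^2 - 1)^2.
xs, vs, pbest, gbest = init_swarm(lambda p: (p[0]**2 - 1)**2, 5, 1)
```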
In each step, PSO relies on the exchange of information between particles of the swarm. This process includes updating the velocity of a particle and then its position. The former is accomplished by the following equation:
v_i^{k+1} = w·v_i^k + c1·r1·(pbest_i − x_i^k) + c2·r2·(gbest − x_i^k),  (3)
where v_i^k is the velocity of the i-th particle at the k-th step, x_i^k denotes the position of the i-th particle at the k-th step, w is the inertia weight that is used to seek a balance between the exploitation and exploration abilities of the particles, c1 and c2 are constants that determine how strongly the particle is drawn toward good positions and are both typically set to a value of 2.0, and r1 and r2 are elements drawn from two uniform random sequences in the range (0, 1).
The velocity updating equation (3) indicates that the PSO search procedure preferably consists of three parts. The first part represents the inertia of a particle itself. The second represents the next search direction in which each particle should move: its own previous best position. The third part indicates that each particle should move towards the best position of all particles thus far.
The new position of each particle is calculated using:
x_i^{k+1} = x_i^k + v_i^{k+1}.  (4)
After each particle's position is updated, the new fitness value is preferably calculated at the new position and replaces the previous pbest or gbest if a better fitness value is obtained. This procedure is repeated until the stopping criterion is met.
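For illustration only, one iteration of the update equations (3) and (4) can be sketched as follows; the inertia weight w = 0.7 and the example particle are illustrative assumptions, while c1 = c2 = 2.0 follow the typical values stated above.

```python
# Hedged sketch of the PSO update equations (3) and (4): the new velocity
# blends inertia, attraction to the particle's own best position (pbest),
# and attraction to the swarm's best position (gbest).

import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    r1, r2 = random.random(), random.random()   # uniform in [0, 1)
    v_new = [w * vi + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)   # eq. (3)
             for vi, xi, pb, gb in zip(v, x, pbest, gbest)]
    x_new = [xi + vi for xi, vi in zip(x, v_new)]                 # eq. (4)
    return x_new, v_new

# Example: a single particle at (1, 1) with zero velocity is drawn toward
# gbest = (0, 0); its own pbest coincides with its position, so only the
# gbest term acts.
x_next, v_next = pso_step([1.0, 1.0], [0.0, 0.0], [1.0, 1.0], [0.0, 0.0])
```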
There are also several improved variants of the PSO method, such as variants that redesign the mathematical model of PSO or combine it with different mutation strategies to enhance search performance. Despite these improvements, PSO-based methods still suffer from several disadvantages. First, these methods usually do not converge to the global optimal solution and can easily be entrapped at a local optimal solution, which degrades the convergence precision or even results in divergence and calculation failure. Additionally, their computational speed can be very slow. Furthermore, they lack scalability: they are far less able to find the global optimal solution of large-scale optimization problems than of small-scale problems with a similar topological structure.
According to the characteristics of the TRUST-TECH method and the PSO method mentioned above, the present method is developed as a PSO-guided TRUST-TECH method for solving general nonlinear optimization problems of the form (1). Referring to
Referring to
Referring to
After stage I, the methodology preferably enters stage II, which is the guiding stage. This stage serves as the interface between the PSO method and the TRUST-TECH method. Referring to
The outputs 504 of this stage are the local optimal solutions obtained from each group. The number of local optimal solutions from each group is no more than the number of initial points.
Stage II is shown schematically in
The TRUST-TECH method plays an important role in stage III, helping the local optimization method to escape from one local optimal solution and move toward another local optimal solution. The TRUST-TECH method preferably exploits all of the local optimal solutions in each “covered” region in a tier-by-tier manner.
In summary, a three-stage PSO-guided TRUST-TECH methodology preferably proceeds in the following manner:
Use a PSO or an improved PSO method to solve the optimization problem. After a certain number of iterations, apply a grouping scheme (e.g., ISODATA) to all the particles to form the groups. In some embodiments, the number of iterations is predetermined. In other embodiments, the number of iterations is based on meeting a predetermined criterion. When the members in each group and the number of groups do not change with further iterations, this implies that all the particles have reached a consensus. Then, the stopping condition is met and stage I is completed.
Select the top few particles in terms of their objective function value and the center particle from each group. In a preferred embodiment, the top three particles are selected. Starting from each selected particle, apply a local optimization method to find the corresponding local optimal solution. These local optimal solutions are then used as guidance for the TRUST-TECH methodology to search for the corresponding tier-one local optimal solutions during stage III.
Starting with each obtained (tier-0) local optimal solution, apply the TRUST-TECH methodology to intelligently move away from this local optimal solution and find the corresponding set of tier-1 local optimal solutions. After finding the set of tier-1 local optimal solutions, the TRUST-TECH method continues to find the set of tier-2 local optimal solutions, if necessary. Finally, identify the best local optimal solution among tier-0, tier-1, and tier-2 local optimal solutions.
In an alternative embodiment, the following Genetic Algorithm (GA)-guided TRUST-TECH methodology is used for solving general unconstrained optimization problems.
The genetic algorithm preferably contains the following steps.
According to the characteristics of the TRUST-TECH method and the GA method mentioned above, the present method is developed as a GA-guided TRUST-TECH method for solving general nonlinear optimization problems of the form (1). Referring to
Referring to
Referring to
After stage I, the methodology preferably enters stage II, which is the guiding stage. This stage serves as the interface between the GA method and the TRUST-TECH method. Referring to
The outputs 504 of this stage are the local optimal solutions obtained from each group. The number of local optimal solutions from each group is no more than the number of initial points.
Stage II is shown schematically in
The TRUST-TECH method plays an important role in stage III, which helps the local optimal method to escape from one local optimal solution and move toward another local optimal solution. The TRUST-TECH method preferably exploits all of the local optimal solutions in each “covered” region in a tier-by-tier manner.
In summary, a three-stage GA-guided TRUST-TECH methodology preferably proceeds in the following manner:
Use a GA or an improved GA method to solve the optimization problem. After a certain number of iterations, apply a grouping scheme (e.g., ISODATA) to all the individuals to form the groups. In some embodiments, the number of iterations is predetermined. In other embodiments, the number of iterations is based on meeting a predetermined criterion. When the members in each group and the number of groups do not change with further iterations, this implies that all the individuals have reached a consensus. Then, the stopping condition is met and stage I is completed.
Select the top few individuals in terms of their objective function value and the center individual from each group. In a preferred embodiment, the top three individuals are selected. Starting from each selected individual, apply a local optimization method to find the corresponding local optimal solution. These local optimal solutions are then used as guidance for the TRUST-TECH methodology to search for the corresponding tier-1 local optimal solutions during stage III.
Starting with each obtained (tier-0) local optimal solution, apply the TRUST-TECH methodology to intelligently move away from this local optimal solution and find the corresponding set of tier-1 local optimal solutions. After finding the set of tier-1 local optimal solutions, the TRUST-TECH method continues to find the set of tier-2 local optimal solutions, if necessary. Finally, identify the best local optimal solution among tier-0, tier-1, and tier-2 local optimal solutions.
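For the GA variant, stage I can likewise be sketched with a minimal real-coded genetic algorithm; the operators below (tournament selection, blend crossover, Gaussian mutation, elitism) are illustrative choices, not the particular improved GA contemplated by the invention:

```python
import numpy as np

def ga(f, dim, pop_size=40, gens=150, seed=0):
    """A minimal real-coded GA for stage I: tournament selection,
    blend crossover, Gaussian mutation, and elitism."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, (pop_size, dim))
    fit = np.array([f(p) for p in pop])
    for _ in range(gens):
        new = [pop[fit.argmin()].copy()]          # elitism: keep the best
        while len(new) < pop_size:
            # Tournament selection of two parents (tournament size 2).
            i = rng.choice(pop_size, 2, replace=False)
            j = rng.choice(pop_size, 2, replace=False)
            p1 = pop[i[fit[i].argmin()]]
            p2 = pop[j[fit[j].argmin()]]
            # Blend crossover followed by sparse Gaussian mutation.
            a = rng.random(dim)
            child = a * p1 + (1 - a) * p2
            child = child + rng.normal(0.0, 0.1, dim) * (rng.random(dim) < 0.1)
            new.append(child)
        pop = np.array(new)
        fit = np.array([f(p) for p in pop])
    order = np.argsort(fit)
    return pop[order], fit[order]
```

The sorted population returned here plays the role of the stage I output: the top few individuals and the group centers would then seed the same stage II local optimization and stage III TRUST-TECH search described for the PSO variant.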
The methods of the present invention are first evaluated on five 1000-dimensional benchmark functions. These benchmark functions include
The advantages of using this methodology are clearly manifested, as illustrated by the results in the following five cases. Stage I uses a traditional PSO method. The number of particles is set to 30, and the maximum number of iterations is set to 1000.
Stage I provides the covered search region and the locations of optimal solutions after the particles have reached a consensus, while Stage II provides the corresponding tier-0 local optimal solutions from the three best particles and the center point of each region. Stage III searches for the tier-1 or tier-2 local optimal solutions, starting from these tier-0 local optimal solutions, and obtains a set of high-quality optimal solutions, preferably including the global optimal solution.
Numerical results on these benchmark functions show that, at stage I, the objective function value of the best particle stops declining sharply after a certain number of iterations. This indicates that all particles have reached a consensus, at which point the number of groups of particles and the members in each group do not change upon further iterations. At stage II, all particles were congregated into three groups according to their positions in the search space, and the regions they cover may contain the global optimal solution. The three best particles and the center point of each group were subjected to a local optimization method. Starting from these points, the local optimization method obtained a few local optimal solutions in each group, which formed the tier-0 local optimal solutions of that group. At stage III, the TRUST-TECH method led the local method to exploit all the local optimal solutions lying within each region in a tier-by-tier manner. The top local optimal solutions were then identified. It is observed that the average degree of improvement of stage III over the stage II result in each group ranges from 11% to 71%.
To further compare the performance of the present methodology with a PSO method, the five test functions were solved by the PSO method for a total of 20,000 iterations. The present methodology outperforms the PSO with 20,000 iterations in solving general high-dimensional optimization problems: the PSO-guided TRUST-TECH method of the present invention obtains better local optimal solutions than the PSO in much shorter computation time. In summary, the PSO-guided TRUST-TECH method of the present invention can significantly improve the performance of PSO in solving large-scale optimization problems.
The method of the present invention is then applied to a practical application, namely, short-term load forecasting (STLF) in power systems.
Load forecasting is a key component of the daily operation and planning of an electric utility, supporting tasks such as generation scheduling, fuel purchase scheduling, maintenance scheduling, and security analysis. Short-term load forecasting, which aims to produce forecasts a few minutes, hours, or days ahead, has become especially important with the rise of competitive energy markets and the increasing penetration of renewable energies. Despite its importance, accurate load forecasting is a difficult task. First, the load series is complex and exhibits several levels of seasonality. Second, there are many important factors, especially weather-related ones, that must be considered in the forecasts, and the relationship between these factors and the load has been found to be highly nonlinear. Researchers have shown that it is relatively easy to construct a forecaster whose mean absolute percent error (MAPE) is about 10%; however, the cost of errors at that level is too high to be acceptable. Much tighter load forecast accuracy is required for practical use by electric utilities.
Referring to
During training of the ANN 1003, the comparator 1005 compares the ANN outputs 1004 with the historical (actual) outputs 1002. In one embodiment of the present method, when training the ANN for load forecasting, the optimization problem (1) can be expressed as finding the best weights to minimize the mean squared error (MSE) between the ANN outputs and the actual loads, which is defined as
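The equation itself does not survive in this text; a standard mean-squared-error form consistent with the definitions that follow would be (the 1/N normalization is an assumption):

```latex
C(w) = \frac{1}{N}\sum_{i=1}^{N}\bigl\| Y_i - F(X_i; w) \bigr\|^2 \tag{10}
```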
where Xi=(x1, x2, . . . , xn) is the i-th historical input data vector, Yi=(y1, y2, . . . , ym) is the i-th historical output data vector, w is the vector of weights connecting the nodes of the ANN, N is the number of samples in the historical dataset, and F(Xi;w) is the output of the ANN given the i-th input vector Xi, i.e., the forecast for Yi. The objective function C(w) of training an ANN is usually a nonlinear and nonconvex function of the parameter values w and can have many local optimal solutions. Considering that there are 4324 weights in the ANN, the optimization problem (10) of training the ANN for load forecasting is a 4324-dimensional optimization problem.
To solve the optimization problem (10) to find the global optimal solution, that is, the global optimal parameters for the ANN 1003, the present PSO-guided TRUST-TECH method 100 of this invention is applied. Referring to
Referring to
After stage I, the method preferably enters stage II, which is the guiding stage. Referring to
The outputs 504 of this stage are the local optimal solutions obtained from each group. Each local optimal solution corresponds to a local optimal set of weights of the ANN. The number of local optimal solutions from each group is no more than the number of initial points.
In this stage, the TRUST-TECH method preferably exploits all of the local optimal solutions in each “covered” region in a tier-by-tier manner.
After applying the present PSO-guided TRUST-TECH method, the global optimal solution, which is the global optimal parameters for the ANN, is obtained. The ANN realized with the global optimal parameters is termed a trained ANN.
Once the ANN has been trained, it can be used in a real-time environment to produce load forecasts for a future time, for example the next day, using currently available data. More specifically, real-time input data 1006 is organized as an input vector with the same components and ordering as in the training stage and fed to the trained ANN 1003. The ANN then outputs the 24 hourly load forecasts 1007 for the next day. This process can be carried out repeatedly, for instance, once a day.
The present load forecaster is applied to a utility-provided dataset. The dataset covers a four-year time period, from Mar. 1, 2003 to Dec. 31, 2006. The data for the first three years is used for training, and the data for the remaining one year is used for testing. Performance of ANNs trained with the method of the present invention is compared with that of several other methods, including the naïve ANN, the similar day-based wavelet neural network (SIWNN), the strategic seasonality-adjusted support vector regression model (SSA-SVR), and the Gaussian process (GP) method. The results show that the forecaster built with the method of the present invention produces the closest match between forecasts and the actual loads.
Numerically, the forecasting performance is represented by the mean absolute percent error (MAPE), which is evaluated as follows:
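The formula itself does not survive in this text; the standard MAPE definition consistent with the variables defined below (24 hourly values over N days) would be:

```latex
\mathrm{MAPE} = \frac{1}{24N}\sum_{i=1}^{N}\sum_{j=1}^{24}
\frac{\bigl| L_{ij} - \hat{L}_{ij} \bigr|}{L_{ij}} \times 100\%
```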
where N is the total number of days in the dataset, and Lij and L̂ij are the actual and forecasted loads at the j-th hour on the i-th day, respectively. The results show that the MAPE of the forecaster built with the method of the present invention is 1.28%. In contrast, the MAPE of the naïve ANN is 2.03%, the MAPE of SIWNN is 1.71%, the MAPE of GP is 1.37%, and the MAPE of SSA-SVR is 1.31%. In other words, the method of the present invention improves the forecasting performance of the naïve ANN by a significant 36.95%, of SIWNN by 25.15%, of GP by 6.57%, and of SSA-SVR by 2.29%.
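As a sketch, the MAPE and the quoted relative improvement rates can be computed as follows; the function and variable names are illustrative:

```python
import numpy as np

def mape(actual, forecast):
    """MAPE over an (N_days, 24) array of hourly loads, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return float(np.mean(np.abs(actual - forecast) / actual) * 100.0)

def improvement(baseline_mape, new_mape):
    """Relative MAPE reduction versus a baseline forecaster, in percent."""
    return (baseline_mape - new_mape) / baseline_mape * 100.0

# Reproducing the improvement rates quoted in the text (MAPE values in %):
rates = {name: round(improvement(m, 1.28), 2)
         for name, m in {"naive ANN": 2.03, "SIWNN": 1.71,
                         "GP": 1.37, "SSA-SVR": 1.31}.items()}
```

Evaluating `rates` reproduces the percentages quoted above (36.95, 25.15, 6.57, and 2.29).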
Embodiments of the techniques disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. In one embodiment, the methods described herein may be performed by a processing system. A processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application-specific integrated circuit (ASIC), or a microprocessor. One example of a processing system is a computer system.
Referring back to
In one embodiment, the processor device 1104 is coupled, via one or more buses or interconnects 1108, to one or more memory devices such as: a main memory 1105 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM)), a secondary memory 1106 (e.g., a magnetic data storage device, an optical magnetic data storage device, etc.), and other forms of computer-readable media, which communicate with each other via a bus or interconnect. The memory devices may also include different forms of read-only memories (ROMs), different forms of random access memories (RAMs), static random access memory (SRAM), or any type of media suitable for storing electronic instructions. In one embodiment, the memory devices may store the code and data of the load forecasting function unit 1000. In the embodiment of
The computer system 1100 may further include a network interface device 1107. A part or all of the data and code of the load forecasting function unit 1000 may be transmitted or received over a network 1102 via the network interface device 1107. Although not shown in
In one embodiment, the load forecasting function unit 1000 can be implemented using code and data stored and executed on one or more computer systems (e.g., the computer system 1100). Such computer systems store and transmit (internally and/or with other electronic devices over a network) code (composed of software instructions) and data using computer-readable media, such as non-transitory tangible computer-readable media (e.g., computer-readable storage media such as magnetic disks; optical disks; read only memory; flash memory devices as shown in
The operations of the methods and/or processes of
While the invention has been described in terms of several embodiments, those skilled in the art will recognize that the invention is not limited to the embodiments described, and can be practiced with modification and alteration within the spirit and scope of the appended claims. The description is thus to be regarded as illustrative instead of limiting.
This is a continuation-in-part of co-pending patent application Ser. No. 13/791,982, entitled “PSO-GUIDED TRUST-TECH METHODS FOR GLOBAL UNCONSTRAINED OPTIMIZATION”, which was filed Mar. 9, 2013. The aforementioned application is hereby incorporated herein by reference.
Relation | Number | Date | Country
---|---|---|---
Parent | 13791982 | Mar 2013 | US
Child | 15081027 | | US