The present invention relates generally to electric power networks, and more particularly to optimizing the power flows in the networks.
An electric power network includes buses connected to transmission lines. The buses are locally connected to generators and loads. Optimal power flow (OPF) analysis is often used for monitoring and controlling the operation of the network. The power flow depends, in part, on voltage magnitudes and phase angles. Power flows and voltage levels on the buses are optimized by minimizing an objective function subject to constraints, such as the magnitudes, phases, power transferred, generator capacity, thermal losses, and the like.
Most conventional OPF optimizations:
Some conventional methods for distributing the optimization problem:
Thus, there remains a need to optimize power flows in electric power networks in an efficient and expedient manner by appropriately distributing the computations.
U.S. Pat. No. 6,625,520 describes a system and method for operating an electric power system that determines optimal power flow and available transfer capability of the electric power system based on the optimal power flow. The system derives data associated with an initial phase angle and maximum electric power value of a generator by determining mechanical output and electrical output of a generator, including a generator phase angle defined by a time function with a constraint condition that the generator phase angle does not exceed a preset value.
The embodiments of the invention provide methods for optimizing power flows in electric power networks using a decomposition and coordination procedure. The decomposition procedure distributes the optimization problem into a set of smaller disjoint parameterized optimization problems that are independent of each other. The coordination procedure modifies the parameter associated with the individual problems to ensure that a solution of the entire problem is attained.
The methods are based on dualizing coupled constraints to obtain the set of smaller decoupled optimization problems. In one embodiment of the method, the theory of semismooth equations is used in the coordination procedure. The semismooth equation theory ensures that superlinear convergence can be theoretically guaranteed in a neighborhood of the solution. Further, the theory allows for using a merit function to ensure global convergence to a solution from initial parameters that are not near the solution.
In one embodiment, the theory of smoothing based methods is used to solve the decomposed problems. A monotonic decrease of the smoothing parameter is used to ensure that superlinear convergence can be theoretically guaranteed in the neighborhood of a solution. Further, the theory allows for using a merit function to ensure global convergence to a solution even when the initial parameters are far from optimal.
Electrical Power Network Topology and Representative Graph
The network includes buses 10 locally connected to loads (L) 12 and generators (G) 14. The buses are interconnected by transmission lines 20, also known as branches (B). Some of the transmission lines can be connected to transformers (T) 22.
The generators supply active power (measured in, e.g., megawatts (MW)) and reactive power (measured in megavolt ampere reactive (MVar)). The loads consume the power. The power is defined by voltage magnitude and phase angle.
The parameters for the optimization include, but are not limited to, an admittance matrix based on the branch impedance and bus fixed shunt admittance, and the flow capacity ratings, i.e., the maximal total power flow constrained by thermal ratings.
The topology of the network can be represented by a graph G of nodes (generators and connected loads) 30 connected by edges (transmission lines) 31.
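As an illustration only, the sketch below builds such a graph as a plain adjacency map in Python; the bus and line data are hypothetical.

```python
# Minimal sketch (hypothetical data): the network graph G as an adjacency map,
# with buses as nodes and transmission lines (branches) as edges.
buses = {1: {"load": 0.9, "gen": True},   # bus 1: generator and local load
         2: {"load": 1.2, "gen": False},
         3: {"load": 0.0, "gen": True}}

lines = [(1, 2), (2, 3), (1, 3)]          # branches, possibly with transformers

adjacency = {i: [] for i in buses}
for i, j in lines:
    adjacency[i].append(j)
    adjacency[j].append(i)

print(adjacency)   # {1: [2, 3], 2: [1, 3], 3: [2, 1]}
```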
Input
Input to the optimization method includes the following:
Output
Output of the method includes the complex valued voltages $V_i\ \forall i \in N$ at the buses, and the active and reactive power levels $P_i^G, Q_i^G\ \forall i \in N$ of the generators.
The optimization uses a decision function $f(P^G, Q^G, V)$ that depends on the active power generation variables $P^G = (P_1^G, \ldots, P_{|N|}^G)$, the reactive power generation variables $Q^G = (Q_1^G, \ldots, Q_{|N|}^G)$, and the complex valued voltages $V = (V_1, \ldots, V_{|N|})$ at the buses.
Optimal Power Flow
The form of the decision function f is quadratic and strictly increasing:
where the $c$'s indicate constants, with $c_{2i}, c_{1i} \geq 0\ \forall i \in N$.
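The cost expression itself is not reproduced above; as a sketch, the code below implements the standard quadratic generation cost that is consistent with the coefficients $c_{2i}$, $c_{1i}$ named in the text (the constant term $c_{0i}$ is an assumption).

```python
import numpy as np

def generation_cost(P_G, c2, c1, c0):
    """Standard quadratic generation cost: sum_i c2_i*P_i^2 + c1_i*P_i + c0_i.
    Strictly increasing in P_i when c2_i, c1_i >= 0 and P_i >= 0."""
    P_G, c2, c1, c0 = map(np.asarray, (P_G, c2, c1, c0))
    return float(np.sum(c2 * P_G**2 + c1 * P_G + c0))

# Example with two generators (hypothetical coefficients)
print(generation_cost([1.0, 2.0], c2=[0.1, 0.2], c1=[5.0, 4.0], c0=[0.0, 0.0]))
# 0.1 + 5.0 + 0.8 + 8.0 = 13.9
```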
To model the limits of feasible operation of the network, equality constraints, inequality constraints, and bounds on the decision variables are used. The operation of the electrical network can be modeled by the equality constraints
$h_n(P^G, Q^G, V) = 0 \quad \forall n = 1, \ldots, N_e,$

where $N_e$ indicates the number of equality constraints.
The limits on power generated by the generators, the limits on voltage magnitudes at the buses, the power transferred on the lines, and the thermal losses ensuring feasible operation are modeled as inequality constraints
$g_n(P^G, Q^G, V) \leq 0 \quad \forall n = 1, \ldots, N_i,$

where $N_i$ indicates the number of inequality constraints.
To determine the voltages at the buses and the powers produced by the generators, the following optimization problem is solved for global optimality:
$$\begin{aligned}
\text{minimize}\quad & f(P^G, Q^G, V) \\
\text{subject to}\quad & h_n(P^G, Q^G, V) = 0 \quad \forall n = 1, \ldots, N_e \\
& g_n(P^G, Q^G, V) \leq 0 \quad \forall n = 1, \ldots, N_i \qquad (1)
\end{aligned}$$
where $h_n$ represents the equality constraints, and $g_n$ represents the inequality constraints.
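For illustration, the sketch below casts a problem of the form (1) as a generic nonlinear program and hands it to a local solver; the functions f, h, and g are hypothetical stand-ins, and a local solver does not by itself certify the global optimality the text calls for.

```python
# Illustrative sketch only: problem (1) as a generic NLP solved locally.
import numpy as np
from scipy.optimize import minimize

def f(z):            # decision function (e.g., quadratic generation cost)
    return z[0]**2 + 2.0 * z[1]**2

def h(z):            # equality constraints h_n(z) = 0, stacked as a vector
    return np.array([z[0] + z[1] - 1.0])

def g(z):            # inequality constraints g_n(z) <= 0, stacked as a vector
    return np.array([z[0] - 0.8])

z0 = np.array([0.5, 0.5])
res = minimize(f, z0, method="SLSQP",
               constraints=[{"type": "eq",   "fun": h},
                            # SLSQP expects g(z) >= 0, so negate g_n(z) <= 0
                            {"type": "ineq", "fun": lambda z: -g(z)}])
print(res.x, res.fun)
```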
Optimal Power Flow-Constraints
In the preferred embodiment, the equality constraints
$h_n(P^G, Q^G, V) = 0 \quad \forall n = 1, \ldots, N_e$
are represented as
where $S_{ij} = P_{ij} + jQ_{ij}$ denotes the complex valued power transferred from bus i to bus j, $S_{ji} = P_{ji} + jQ_{ji}$ denotes the complex valued power transferred from bus j to bus i, $(V_i)^*$ denotes the complex conjugate of the complex valued variable, $S_i^G = P_i^G + jQ_i^G$ denotes the complex valued power produced by the generators, and $S_i^D = P_i^D + jQ_i^D$ denotes the complex valued power demands. The variables representing power flow on the lines are used for convenience.
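The per-line balance equations are not reproduced above; as a sketch, the equivalent bus-injection form of the AC power balance, $S_i^G - S_i^D = \sum_j S_{ij}$ with injections computed from the admittance matrix, can be written as follows. The admittance matrix, voltages, generation, and demands below are hypothetical per-unit data.

```python
# Sketch of the nodal AC power balance in the equivalent bus-injection form
# S^G - S^D = diag(V) conj(Y V); the text above states the same balance per
# line through S_ij and S_ji.
import numpy as np

def power_balance_residual(V, Y, S_G, S_D):
    """Complex residual of S_i^G - S_i^D - sum_j S_ij at every bus."""
    S_injected = V * np.conj(Y @ V)     # net complex power injected at each bus
    return S_G - S_D - S_injected       # zero when the equality constraints hold

# Two-bus example with a single line of series admittance y = 1 - 5j
y = 1.0 - 5.0j
Y = np.array([[ y, -y],
              [-y,  y]])
V   = np.array([1.00 + 0.00j, 0.98 - 0.02j])
S_G = np.array([0.15 + 0.08j, 0.00 + 0.00j])
S_D = np.array([0.00 + 0.00j, 0.12 + 0.05j])
print(power_balance_residual(V, Y, S_G, S_D))
```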
In the preferred embodiment, the inequality constraints
$g_n(P^G, Q^G, V) \leq 0 \quad \forall n = 1, \ldots, N_i$
are represented as
Conventional Dual Decomposition Based Optimization
$$\begin{aligned}
\min\quad & f_1(x_1) + f_2(x_2) \\
\text{subject to}\quad & h_1(x_1) = 0 \\
& h_2(x_2) = 0 \\
& A_1 x_1 + A_2 x_2 = b \\
& x_1, x_2 \geq 0. \qquad (2)
\end{aligned}$$
Observe that the optimization problem can be decomposed to separate $x_1$ and $x_2$, except for the coupling equality constraint $A_1 x_1 + A_2 x_2 = b$. These coupling constraints can be removed by dualizing 210 the constraints in the objective function using multipliers ξ as in equation (3) below:
$$\begin{aligned}
\min\quad & f_1(x_1) + f_2(x_2) + \xi^T (A_1 x_1 + A_2 x_2 - b) \\
\text{subject to}\quad & h_1(x_1) = 0 \\
& h_2(x_2) = 0 \\
& x_1, x_2 \geq 0. \qquad (3)
\end{aligned}$$
The optimization problem in equation (3) can be decomposed 220 to separate the variables $x_1$ 221 and $x_2$ 222, together with their corresponding constraints, into a set of disjoint parameterized optimization problems, which are then solved independently. In the examples used herein the set includes two disjoint problems in $x_1$ and $x_2$. However, it is understood that each problem can be further decomposed to a finer granularity as necessary.
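A minimal sketch of this decomposition step follows, assuming a toy instance with two subproblems and a single coupling constraint; the functions and data are hypothetical, and scipy's SLSQP stands in for whichever subproblem solver is actually used.

```python
# Decomposition step 220 (sketch): for a fixed multiplier vector xi, the two
# parameterized subproblems in x1 and x2 are solved independently.
import numpy as np
from scipy.optimize import minimize

A1 = np.array([[1.0, 0.0]])
A2 = np.array([[0.0, 1.0]])
b  = np.array([1.0])

def solve_subproblem(f, h, A, xi, x0):
    """min f(x) + xi^T A x  subject to  h(x) = 0, x >= 0."""
    obj = lambda x: f(x) + xi @ (A @ x)
    res = minimize(obj, x0, method="SLSQP",
                   constraints=[{"type": "eq", "fun": h}],
                   bounds=[(0.0, None)] * len(x0))
    return res.x

f1 = lambda x: x[0]**2 + x[1]**2
f2 = lambda x: (x[0] - 1.0)**2 + x[1]**2
h1 = lambda x: np.array([x[0] - x[1]])          # toy equality constraint
h2 = lambda x: np.array([x[0] + x[1] - 1.0])

xi = np.array([0.1])
x1_star = solve_subproblem(f1, h1, A1, xi, np.array([0.5, 0.5]))
x2_star = solve_subproblem(f2, h2, A2, xi, np.array([0.5, 0.5]))
print(x1_star, x2_star, A1 @ x1_star + A2 @ x2_star - b)
```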
Denote by $x_1^*(\xi)$ 231 and $x_2^*(\xi)$ 232 the optimal solutions to each of the decomposed problems. The correct choice of multipliers ξ is rarely known perfectly. As a consequence, the method iterates 270 until convergence 250 using a convergence test 240, and otherwise updates 260 the multipliers ξ using only $x_1^*(\xi)$ and $x_2^*(\xi)$, until the constraints $A_1 x_1^*(\xi) + A_2 x_2^*(\xi) = b$ are approximately satisfied.
The updating can be performed using a line search procedure to find a local minimum. The line search finds a descent direction along which the objective function is reduced, and a step size. The descent direction can be computed by various methods, such as gradient descent, Newton's method and the quasi-Newton method.
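As a sketch, a backtracking (Armijo) line search along a gradient-descent direction can be written as follows; the objective and gradient here are hypothetical stand-ins.

```python
# Backtracking line search: given a descent direction d at point x, shrink the
# step alpha until a sufficient-decrease condition holds.
import numpy as np

def backtracking_line_search(phi, grad_phi, x, d, beta=1e-4, shrink=0.5):
    alpha = 1.0
    slope = grad_phi(x) @ d                   # directional derivative, < 0 for descent
    while phi(x + alpha * d) > phi(x) + beta * alpha * slope:
        alpha *= shrink
    return alpha

phi      = lambda x: float(x @ x)             # toy objective
grad_phi = lambda x: 2.0 * x
x = np.array([2.0, -1.0])
d = -grad_phi(x)                              # gradient-descent direction
print(backtracking_line_search(phi, grad_phi, x, d))
```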
Alternatively, a trust-region method can be used, in which a (quadratic) model function is optimized over a subset of the region. If the model adequately represents the objective function within the trust region, then the region is expanded, otherwise the region is contracted.
Fast Dual Decomposition Based Optimization
The key difference is as follows. When solving each of the optimization problems, the method obtains the optimal solutions $x_1^*(\xi)$, $x_2^*(\xi)$ 231-232 as before, but also the sensitivities of the solutions to the choice of ξ, namely $\nabla_\xi x_1^*(\xi)$ and $\nabla_\xi x_2^*(\xi)$ 331-332. The sensitivity measures the variation in the solutions $x_1^*(\xi)$, $x_2^*(\xi)$ with respect to changes in the multiplier ξ.
The updating is also different, in that now the updating 360 uses
The sensitivities can be obtained from the solution of the following linear complementarity problem. In the subproblem for $x_1$ the sensitivities are obtained as:

$$\begin{aligned}
&\nabla^2_{x_1 x_1} L_1\big(x_1^*(\xi), \lambda_1^*(\xi), \nu_1^*(\xi); \xi\big)\, \nabla_\xi x_1^*(\xi) + \nabla_{x_1} h_1\big(x_1^*(\xi)\big)\, \nabla_\xi \lambda_1^*(\xi) - \nabla_\xi \nu_1^*(\xi) = -A_1^T \\
&h_1\big(x_1^*(\xi)\big) + \nabla_{x_1} h_1\big(x_1^*(\xi)\big)^T \nabla_\xi x_1^*(\xi) = 0 \\
&\nu_1^*(\xi) + \nabla_\xi \nu_1^*(\xi) \geq 0 \;\perp\; x_1^*(\xi) + \nabla_\xi x_1^*(\xi) \geq 0, \qquad (4)
\end{aligned}$$
where λ and ν are respectively the multipliers corresponding to the equality constraints and the bounds, superscript T is the transpose operator, and $L_1(x_1, \lambda_1, \nu_1; \xi)$ is a Lagrangian function defined as

$$L_1(x_1, \lambda_1, \nu_1; \xi) = f_1(x_1) + \lambda_1^T h_1(x_1) - \nu_1^T x_1 + \xi^T A_1 x_1.$$
The sensitivity for $x_2$ can be obtained in a similar manner. The system in equation (4) reduces to a system of linear equations when the solution satisfies $x_1^*(\xi) + \nu_1^*(\xi) > 0$, a condition called strict complementarity.
Using the sensitivity to the multiplier ξ, a search direction dξ can be determined using the linearization of the dualized constraints as:

$$A_1 \nabla_\xi x_1^*(\xi)\, d\xi + A_2 \nabla_\xi x_2^*(\xi)\, d\xi = b - A_1 x_1^*(\xi) - A_2 x_2^*(\xi). \qquad (5)$$
This constitutes taking a Newton-like direction for the dualized constraints, and this is precisely why the convergence is accelerated. The Newton step is known to converge locally superlinearly when in the neighborhood of the solution. The conventional dual decomposition method does not have this rapid local convergence property.
Further, this approach allows one to define a merit function

$$\Phi(\xi) = \left\| A_1 x_1^*(\xi) + A_2 x_2^*(\xi) - b \right\|_2^2,$$

to measure the progress of the method towards solving the optimization problem 201 of equation (2), where $\|\cdot\|_2$ denotes the Euclidean vector norm.
The merit function influences the choice of the next multipliers through a sufficient decrease requirement, where the step length $\alpha \in (0,1]$ is selected to satisfy:

$$\Phi(\xi + \alpha\, d\xi) \leq \Phi(\xi) + \beta \alpha\, \Phi'(\xi; d\xi), \qquad (6)$$

where $\beta > 0$ is usually a small constant, and $\Phi'(\xi; d\xi)$ is the directional derivative of the merit function at the point ξ along the direction dξ. The directional derivative is mathematically defined by the following limit:

$$\Phi'(\xi; d\xi) = \lim_{\alpha \downarrow 0} \frac{\Phi(\xi + \alpha\, d\xi) - \Phi(\xi)}{\alpha}.$$
Fast Dual Decomposition Based Optimization for Optimal Power Flow
Consider a partition of the edges in the network into $N_g$ smaller sets of edges $(E_1, E_2, \ldots, E_{N_g})$, where the sets are disjoint and their union is the set of all edges E. The set of buses in the edge set $E_k$ is denoted by $N_k$. Utilizing this set of smaller networks $G(N_k, E_k)$, the optimal power flow problem can be equivalently formulated as described in the following. Further, denote by $K_i$ the set of sub-networks to which node i belongs.
The objective function for each sub-network can be posed as:
where $P_i^{G,k}, Q_i^{G,k}, V_i^k$ denote the real power, reactive power, and voltage at node i in the node set $N_k$. The constants are selected as:
where ni is the number of sub-networks k in which the node i occurs.
The operation of the electrical network is modeled by the equality constraints:
$h_n^k(P^{G,k}, Q^{G,k}, V^k) = 0 \quad \forall n = 1, \ldots, N_e^k,\ k = 1, \ldots, N_g,$

where $N_e^k$ indicates the number of equality constraints in the sub-network k.
We model the limits on power generated by generators, limits on voltage magnitudes at the buses, constraints on the power transferred on the lines and thermal losses ensuring feasible operation as inequality constraints
$g_n^k(P^{G,k}, Q^{G,k}, V^k) \leq 0 \quad \forall n = 1, \ldots, N_i^k,\ k = 1, \ldots, N_g,$

where $N_i^k$ indicates the number of inequality constraints in the sub-network k.
Constraints are also imposed on power generation and voltage magnitudes at the buses.
To determine the voltages at the buses and the powers produced by the generators, the following optimization problem is solved to global optimality:
where the last set of constraints equates the generator power and voltages for nodes that are shared by different sub-networks k. The above formulation is equivalent to the optimal power flow formulation.
For ease of exposition, the notation $x^k = (P^{G,k}, Q^{G,k}, V^k)$ is used in the following, and $x_i^k = (P_i^{G,k}, Q_i^{G,k}, V_i^k)$ for $i \in N_k$. With this notation, the problem in equation (2) can be reformulated as:
The above problem is decomposed into smaller optimization problems by removing the coupling equality constraints from the constraint set and placing them in the objective function as:
where $\xi_i^{kl}$, with $k = \min(K_i)$ and $k \neq l$, are the multipliers for the equality constraints that equate the copies of the power generation and voltage variables for nodes shared across sub-networks. This procedure is called dualization of the coupling constraints. It renders the optimization problem decoupled by sub-network.
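The dualized per-sub-network objectives are given next in the text; as a rough sketch only, with the sign convention assumed rather than taken from the formulas (which are not reproduced here), the coupling terms can be attached to the local cost as follows. The variable copies and multipliers are assumed to be numpy vectors, and local_cost, x, xi, and the index sets are hypothetical.

```python
# Dualizing x_i^{min(K_i)} - x_i^l = 0 splits (under the assumed sign
# convention) into +xi^T x_i for the owning copy k = min(K_i) and -xi^T x_i
# for every other copy l.
def dualized_objective(k, x, xi, local_cost, K_of):
    """Objective of sub-network k after dualization.

    x[k][i]  : variable copy (P_i^{G,k}, Q_i^{G,k}, V_i^k) of node i in sub-network k
    xi[i, l] : multiplier for the constraint x_i^{min(K_i)} = x_i^l
    K_of[i]  : set of sub-networks containing node i
    """
    val = local_cost(k, x[k])
    for i, nets in K_of.items():
        if k not in nets or len(nets) < 2:
            continue
        owner = min(nets)
        if k == owner:
            # owning copy collects +xi^T x_i for every duplicated copy l
            val += sum(xi[i, l] @ x[k][i] for l in nets if l != owner)
        else:
            # duplicated copy contributes -xi^T x_i
            val -= xi[i, k] @ x[k][i]
    return val
```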
The optimization for each sub-network is:
where the objective function for k=min(Ki) is:
and for l≠min(Ki) is:
The decomposition step solves each of the optimization problems for the sub-networks $k = 1, \ldots, N_g$ for a given choice of the multipliers ξ. Denote by $x^{k,*}(\xi)$ the optimal solution to the problem corresponding to the sub-network k. Further, denote by $\nabla_\xi x^{k,*}(\xi)$ the sensitivity of the optimal solution of the sub-network k for the given choice of multipliers. The sensitivity can be obtained as follows. The first order optimality conditions for the sub-network k are:

$$\begin{aligned}
&\nabla_{x^k} L^k(x^k, \lambda^{h,k}, \lambda^{g,k}; \xi) = 0 \\
&h_n^k(x^k) = 0 \quad \forall n = 1, \ldots, N_e^k \\
&g_n^k(x^k) \leq 0 \;\perp\; \lambda_n^{g,k} \geq 0 \quad \forall n = 1, \ldots, N_i^k, \qquad (9)
\end{aligned}$$
where the Lagrangian function is defined as:
where $\lambda_n^{h,k}$ denote the multipliers for the equality constraints $h_n^k$, and $\lambda_n^{g,k}$ denote the multipliers for the inequality constraints $g_n^k$.
A solution $x^{k,*}(\xi)$ to the problem corresponding to the sub-network k will necessarily satisfy the first order optimality conditions listed above. The sensitivity of the optimal solution to the multipliers ξ can be obtained by solving the following linear complementarity problem, obtained by differentiating the first order conditions with respect to the variables $x^k$ and the multipliers $(\lambda^{h,k}, \lambda^{g,k})$:
The sensitivity computations are used to compute a search direction dξ for the multipliers by solving the following equations:

$$x_i^{k,*}(\xi) + \nabla_\xi x_i^{k,*}(\xi)^T d\xi = x_i^{l,*}(\xi) + \nabla_\xi x_i^{l,*}(\xi)^T d\xi, \quad k = \min(K_i),\ l \in K_i,\ k \neq l,\ i \in N. \qquad (11)$$
Using this search direction, a new set of multipliers $\xi^+ = \xi + \alpha\, d\xi$ is computed, where $0 < \alpha \leq 1$ is selected as described below. Define the function Φ(ξ), termed the merit function, as
The merit function is the Euclidean norm of the residual of the dualized constraints that couple the different sub-networks. It measures the degree to which the original problem in equation (1) has been solved. When Φ(ξ)≈0, the method terminates with $x^{k,*}(\xi)$, $k = 1, \ldots, N_g$, as the solution for equation (1).
The step for the multipliers is selected so that the merit function is decreased, as prescribed by the following condition:

$$\Phi(\xi + \alpha\, d\xi) \leq \Phi(\xi) + \beta\alpha\, \Phi'(\xi; d\xi), \qquad (13)$$
where $\Phi'(\xi; d\xi)$ is the directional derivative of the merit function at the point ξ along the direction dξ, mathematically defined by the following limit:

$$\Phi'(\xi; d\xi) = \lim_{\alpha \downarrow 0} \frac{\Phi(\xi + \alpha\, d\xi) - \Phi(\xi)}{\alpha}.$$
Using the new multiplier ξ+, the decomposed optimization in equation (3), sensitivity computation (4) and multiplier step computation (5) are repeated until the merit function Φ(ξ) is close to zero.
A description of the method steps is provided in
Fast Dual Decomposition with Smoothing for Optimal Power Flow
The fast decomposition approach is based on the computation of sensitivities with respect to the multipliers ξ for the dualized constraints. However, the sensitivity may not exist under some conditions, for instance in optimization problem (3) when $x_1^*(\xi) + \nu_1^*(\xi) > 0$ does not hold, that is, when strict complementarity fails. In such instances only a directional derivative can be obtained.
To rectify this situation, consider modifying the problem solved for each of the sub-networks k as follows. The stationarity conditions (9) are modified as

$$\begin{aligned}
&\nabla_{x^k} L^k(x^k, \lambda^{h,k}, \lambda^{g,k}; \xi) = 0 \\
&h_n^k(x^k) = 0 \quad \forall n = 1, \ldots, N_e^k \\
&\psi(-g_n^k(x^k), \lambda_n^{g,k}; \tau) = 0 \quad \forall n = 1, \ldots, N_i^k, \qquad (14)
\end{aligned}$$
where the function ψ(a,b;τ) is a smoothing function for the complementarity constraints satisfying the property that:
The second property ensures that for all τ>0 the optimization problems are smooth and continuously differentiable, while the first property ensures that one can recover a solution to equation (9) by solving a sequence of smoothed problems as τ→0. There exist several choices for such a function:
In the third choice, the nonnegativity of a and b has to be enforced explicitly. This is precisely what interior point methods do.
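The list of candidate functions is not reproduced above; the sketch below shows three smoothing functions that are common in the literature and that match the surrounding description (in particular, a product form as a third choice, which requires a ≥ 0, b ≥ 0 to be enforced explicitly). They are offered as plausible examples, not necessarily the patented choices.

```python
# Candidate smoothing functions psi(a, b; tau). Each satisfies
# psi(a, b; 0) = 0  <=>  a >= 0, b >= 0, a*b = 0, and is smooth for tau > 0.
import numpy as np

def psi_fischer_burmeister(a, b, tau):
    """Smoothed Fischer-Burmeister function."""
    return a + b - np.sqrt(a**2 + b**2 + 2.0 * tau)

def psi_min(a, b, tau):
    """Smoothed min(a, b) (Chen-Harker-Kanzow-Smale smoothing)."""
    return 0.5 * (a + b - np.sqrt((a - b)**2 + 4.0 * tau))

def psi_product(a, b, tau):
    """Perturbed complementarity a*b = tau; here a >= 0 and b >= 0 must be
    enforced explicitly, as interior point methods do."""
    return a * b - tau

print(psi_fischer_burmeister(0.3, 0.0, 0.0))   # ~0: (a, b) is complementary
```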
Using this modification, the method for obtaining a solution to the original problem (1) can be restated as first solving, for a fixed τ>0, the problem in equation (14) to a certain tolerance, and then decreasing τ→0 to obtain a solution to equation (1) in the limit.
Denote by $x^{k,*}(\xi;\tau)$ the solutions to equation (14). In the case of smoothing, the sensitivity of the solutions to the multipliers ξ can be computed as the solution of the following linear equations:
Observe these are linear equations as opposed to linear complementarity equations as in equation (10).
The sensitivity computations are used to compute a search direction dξ for the multipliers by solving the following equations:

$$x_i^{k,*}(\xi,\tau) + \nabla_\xi x_i^{k,*}(\xi,\tau)^T d\xi = x_i^{l,*}(\xi,\tau) + \nabla_\xi x_i^{l,*}(\xi,\tau)^T d\xi, \quad k = \min(K_i),\ l \in K_i,\ k \neq l,\ i \in N. \qquad (16)$$
Using this search direction, a new set of multipliers $\xi^+ = \xi + \alpha\, d\xi$ is computed, where $0 < \alpha \leq 1$ is selected as described below. Define the function Φ(ξ;τ), termed the merit function, as
The merit function is the Euclidean norm of the residual of the dualized constraints that couple the different sub-networks. It measures the degree to which the original problem (1) has been solved. When Φ(ξ;τ)≈τ, the iterations for the current smoothing parameter value can be terminated and the parameter decreased. When Φ(ξ;0)≈0, the method terminates with $x^{k,*}(\xi)$ as the solution for equation (1).
The step for the multipliers is selected so that the merit function decreases, as prescribed by the following condition:

$$\Phi(\xi + \alpha\, d\xi; \tau) \leq \Phi(\xi; \tau) + \beta\alpha\, \nabla_\xi\Phi(\xi; \tau)^T d\xi, \qquad (18)$$
where $\nabla_\xi\Phi(\xi;\tau)$ is the gradient of the merit function at the given point ξ. A full gradient can be used here, instead of the directional derivative in equation (13). If the smoothing parameter is decreased superlinearly, then fast local convergence in the neighborhood of the solution to equation (1) can be obtained.
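As a final sketch, the outer smoothing loop can be organized as follows, assuming a callback solve_smoothed that runs the inner iteration on equations (14) through (16) for a fixed τ; the superlinear decrease of τ is modeled here as τ ← τ^1.5, which is an assumption, not the patented schedule.

```python
# Outer smoothing loop: solve the smoothed coordination problem for a fixed
# tau, shrink tau superlinearly, and warm-start from the previous multipliers.
def smoothing_homotopy(solve_smoothed, xi0, tau0=0.1, tau_min=1e-10):
    xi, tau = xi0, tau0
    while tau > tau_min:
        # inner loop: iterate on xi until the merit satisfies Phi(xi; tau) ~ tau
        xi = solve_smoothed(xi, tau)
        tau = tau ** 1.5          # superlinear decrease of the smoothing parameter
    return xi
```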
Although the invention has been described by way of examples of preferred embodiments, it is to be understood that various other adaptations and modifications can be made within the spirit and scope of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.
Number | Name | Date | Kind |
---|---|---|---|
6625520 | Chen et al. | Sep 2003 | B1 |
7489989 | Sukhanov et al. | Feb 2009 | B2 |
7660649 | Hope et al. | Feb 2010 | B1 |
7979255 | Hanke et al. | Jul 2011 | B2 |
8126685 | Nasle | Feb 2012 | B2 |
8359124 | Zhou et al. | Jan 2013 | B2 |
20130238148 | Legbedji et al. | Sep 2013 | A1 |
Entry |
---|
Almeida et al., "Optimal Power Flow Solutions Under Variable Load Conditions," IEEE Transactions on Power Systems, vol. 15, no. 4, pp. 1204-1211, Nov. 2000. |
Moyano et al., "Adjusted Optimal Power Flow Solutions via Parameterized Formulation," Electric Power Systems Research, vol. 80, pp. 1018-1023, Elsevier, 2010. |