METHOD FOR INTELLIGENTLY ADJUSTING POWER FLOW BASED ON Q-LEARNING ALGORITHM

Information

  • Patent Application
  • Publication Number
    20210367426
  • Date Filed
    October 11, 2020
  • Date Published
    November 25, 2021
Abstract
A method for intelligently adjusting a power flow based on a Q-learning algorithm includes: converting a variable, an action, and a goal in a power grid to a state, an action, and a reward in the algorithm, respectively; selecting an action from an action space, giving an immediate reward based on a result of power flow calculation, and correcting a next state; forwardly observing a next exploration action based on a strategy in the Q-learning algorithm; updating a Q value in a corresponding position in a Q-value table based on the obtained reward; if a final state is not reached, going back to step 2; otherwise, increasing the number of iterations by 1; if the number of iterations does not reach predetermined value K, that is, Episode<K, going back to step 2; otherwise, outputting the Q-value table; and outputting an optimal unit combination.
Description
TECHNICAL FIELD

The present invention relates to the field of power adjustment, and in particular, to a method for intelligently adjusting a power flow based on a Q-learning algorithm.


BACKGROUND

At present, in order to adjust a power flow, an operator usually adjusts the adjustable load power output based on experience to turn a non-convergent power flow into a convergent one. In this method, adjustment is conducted entirely manually based on the operator's experience. The method is therefore random, inefficient, and ineffective, and it imposes extremely high theoretical and practical requirements on the operator.


SUMMARY

In view of this, the present invention provides a strategy for intelligently adjusting a power flow based on a Q-learning algorithm. Through continuous trying and rule learning, a best unit combination is selected, and a non-convergent power flow is adjusted to a convergent power flow. This minimizes the loss of a power grid, overcomes blindness of a conventional method relying on human experience, and improves efficiency and accuracy of power flow adjustment.


The present invention provides a method for intelligently adjusting a power flow based on a Q-learning algorithm. The method includes:


step 1: converting a variable, an action, and a goal in a power grid to a state, an action, and a reward in the algorithm, respectively;


step 2: selecting an action from an action space, giving an immediate reward based on a result of power flow calculation, and correcting a next state;


step 3: forwardly observing a next exploration action based on a strategy in the Q-learning algorithm;


step 4: updating a Q value in a corresponding position in a Q-value table based on the obtained reward;


step 5: if a final state is not reached, going back to step 2, or if a final state is reached, increasing the number of iterations by 1;


step 6: if the number of iterations does not reach predetermined value K, that is, Episode<K, going back to step 2, or if the number of iterations reaches predetermined value K, that is, Episode=K, outputting the Q-value table; and


step 7: outputting an optimal unit combination.


Optionally, the intelligent adjustment method includes:


establishing an expression of a state space representing a combination of states of all units, as shown in formula (1):






S={Power output of unit 1,Power output of unit 2, . . . ,Power output of unit N}  (1),


where


S denotes the state space, and a value of N is a positive integer;


establishing an expression of an action space in which powering-on or powering-off of a unit is used as an action, as shown in formula (2):






A={Powering on of unit 1,Powering off of unit 1, . . . ,Powering off of unit N}  (2),


where


A denotes the action space, and a value of N is a positive integer; and


reward design:


establishing expression R of a reward designed to adjust the power flow from non-convergent to convergent while minimizing the network loss of the power grid, as shown in formula (3):









R = {a large penalty value (such as -999),   non-convergent power flow;
     a small penalty value (such as -1),     power output of a balancing machine exceeds an upper or a lower limit;
     a large reward value (such as 100),     the goal is achieved;
     a small reward value (such as 1),       a network loss load percentage decreases;
     0,                                      the network loss load percentage increases}  (3)







Optionally, the intelligent adjustment method further includes:


limiting values of parameters ε, α, and γ in the Q-learning algorithm, where:


ε denotes a probability of an action selection in an ε-greedy strategy; ε is a positive number that is initially set to maximum value 1 and is gradually decreased as the number of iterations increases; α denotes a learning rate, and 0<α<1; and


γ denotes a discount factor of an attenuation value of a future reward, and is set to a value close to 1.


The beneficial effects of the present invention include:


1) In a conventional method, manual adjustment is performed repeatedly based on human experience, which leads to poor results and low efficiency in power flow adjustment. The present invention resolves this problem and improves the accuracy and efficiency of power flow adjustment.


2) In actual engineering, unit power output is usually fixed. A non-convergent power flow is adjusted to a convergent power flow by adjusting a unit combination, which delivers better practicability.


3) It is only necessary to convert performance indexes in the power grid to evaluation indexes in reinforcement learning based on actual needs. A reward in reinforcement learning may be given based on loss of the power grid, power output of a balancing machine, and balance of unit power output, etc. This allows the indexes to meet requirements, and improves flexibility of power flow adjustment.





BRIEF DESCRIPTION OF THE DRAWINGS

To describe the technical solutions in the embodiments of this application more clearly, the accompanying drawings required to describe the embodiments are briefly described below.


Apparently, the accompanying drawings described below are only some embodiments of the present invention. A person of ordinary skill in the art may further obtain other accompanying drawings based on these accompanying drawings without creative effort.



FIG. 1 is a schematic flowchart of a method for adjusting a power flow based on a Q-learning algorithm according to an embodiment of this application.





DETAILED DESCRIPTION

The present invention is further described below with reference to the accompanying drawings.


A method for adjusting a power flow based on a Q-learning algorithm mainly includes the following seven steps:


1) State space design: In this specification, it is assumed that unit power output remains unchanged and that each unit has only two states, on and off; that is, the power output of a unit is either its initial power output or 0. Therefore, the state space is the combination of the states of all units:






S={Power output of unit 1,Power output of unit 2, . . . ,Power output of unit N}  (1),


where


S denotes the state space.
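
As an illustration only (not part of the patent text), the following Python sketch shows one way to encode such a state, assuming each unit is either at its initial power output or switched off; the values in INITIAL_OUTPUT_MW and the helper make_state are hypothetical names chosen for this sketch.

# Hypothetical initial outputs for N = 3 units (illustrative values only).
INITIAL_OUTPUT_MW = [300.0, 250.0, 400.0]

def make_state(on_flags):
    """Map a tuple of on/off flags to the state S = (output of unit 1, ..., output of unit N)."""
    return tuple(p if on else 0.0 for p, on in zip(INITIAL_OUTPUT_MW, on_flags))

initial_state = make_state((True, True, True))  # all units on: (300.0, 250.0, 400.0)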


2) Action space design: Using the powering-on or powering-off of a unit as an action, the action space is:






A={Powering on of unit 1,Powering off of unit 1, . . . ,Powering off of unit N}  (2),


where


A denotes the action space.


When a certain state is reached, selecting a certain action may not change the state. For example, if unit 1 is on and the action "powering on unit 1" is selected, the state remains unchanged, and the action is meaningless. To reduce unnecessary repetition and save running time, each time a state is reached, the actions corresponding to the current unit states are removed from the action space. The remaining actions form an action subspace, and an action is selected only from this subspace.
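
A minimal sketch of the action space and action subspace described above, again as an illustration rather than the patent's own code; representing an action as a (unit index, power-on flag) pair is an assumption of this sketch.

N_UNITS = 3  # assumed number of units for the example

# (0, True) stands for "powering on of unit 1", (0, False) for "powering off of unit 1", and so on.
ACTIONS = [(unit, flag) for unit in range(N_UNITS) for flag in (True, False)]

def action_subspace(on_flags):
    """Keep only the actions that actually change the current on/off pattern of the units."""
    return [(unit, flag) for unit, flag in ACTIONS if on_flags[unit] != flag]

# Example: with unit 1 already on, "powering on of unit 1" is excluded from the subspace.
print(action_subspace((True, False, True)))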


3) Reward design: The reward function, which depends on the current state, the action just conducted, and the next state, is particularly important in the Q-learning algorithm. Properly setting the reward helps make a better decision faster. The optimization objective of this specification is to adjust a non-convergent power flow to a convergent power flow while minimizing the loss of the power grid. Therefore, the reward may be set as follows:









R = {a large penalty value (such as -999),   non-convergent power flow;
     a small penalty value (such as -1),     power output of a balancing machine exceeds an upper or a lower limit;
     a large reward value (such as 100),     the goal is achieved;
     a small reward value (such as 1),       a network loss load percentage decreases;
     0,                                      the network loss load percentage increases}  (3)
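
The reward of formula (3) can be written as a small function. The sketch below is only one possible reading: the numeric values are the example values from the text, while the fields converged, slack_within_limits, and loss_pct of the power flow result are assumed names, since the patent does not specify a data structure.

def reward(flow_result, prev_loss_pct, goal_reached):
    """Return the immediate reward of formula (3) for one power flow calculation."""
    if not flow_result.converged:
        return -999   # large penalty: non-convergent power flow
    if not flow_result.slack_within_limits:
        return -1     # small penalty: balancing machine exceeds an upper or a lower limit
    if goal_reached:
        return 100    # large reward: the adjustment goal is achieved
    if flow_result.loss_pct < prev_loss_pct:
        return 1      # small reward: network loss load percentage decreases
    return 0          # no reward: network loss load percentage increases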







4) Parameter design: In the Q-learning algorithm, the following three parameters have a great impact on the performance of the algorithm: ε, α, and γ. The functions and settings of the three parameters are as follows:


(1) ε (0<ε<1): It denotes a probability of an action selection in an ε-greedy strategy and is a positive number. An agent has a probability of 1−ε to select an action based on an optimal value in a Q-value table, and has a probability of ε to randomly select an action, so that the agent can jump out of the local optimum. In addition, all values in the Q-value table are initially 0, which requires a lot of exploration through random action selection. Therefore, ε is initially set to maximum value 1, and is gradually decreased as the number of iterations increases.


(2) Learning rate α (0<α<1): A larger value of α indicates that less of the previous training effect is retained. Therefore, α is usually set to a small value.


(3) Discount factor γ (0<γ<1): It denotes an attenuation value of a future reward. A larger value of γ indicates that future rewards are more valued, while a smaller value of γ indicates that only the reward from the current action is considered. Therefore, γ is usually set to a value close to 1.
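
One possible parameter schedule consistent with this description is sketched below; the specific numbers (α=0.1, γ=0.95, K=500) and the linear decay of ε to a small floor are assumptions made for illustration, not values given in the patent.

ALPHA = 0.1   # learning rate, 0 < α < 1, kept small
GAMMA = 0.95  # discount factor, close to 1
K = 500       # predetermined number of episodes

def epsilon(episode):
    """Start at the maximum value 1 and decay toward a small floor as episodes accumulate."""
    return max(0.05, 1.0 - episode / K)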


For a power system including N units, a procedure of adjusting a power flow based on a Q-learning algorithm is shown in FIG. 1.


Step 1: Convert a variable, an action, and a goal in the power grid to a state, an action, and a reward in the algorithm, respectively. A reward function may be given based on a result of power flow calculation and formula (3) by using active power output of a unit as the state and powering-on or powering-off of the unit as the action. An initial state may be read.


Step 2: Select an action from the action space based on the ε-greedy strategy. Number r may be randomly selected from range [0,1]. If r<ε, an action may be randomly selected from the action space. Otherwise, an action may be selected based on an optimal value in a current state in the Q-value table.
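
A sketch of this ε-greedy selection, assuming the Q-value table is stored as a Python dictionary keyed by (state, action) pairs; that representation is an assumption of the sketch, not something specified in the patent.

import random

def select_action(q_table, state, actions, eps):
    """With probability eps select a random action; otherwise select the action with the best Q value."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table.get((state, a), 0.0))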


Step 3: Conduct power flow calculation, obtain an immediate reward based on formula (3), and correct the state S=S′.


Step 4: Update a Q value based on the obtained immediate reward:









Q(S,A)=(1-α)*Q(S,A)+α[R+γ*max_a Q(S′,a)]  (4)







where


R denotes the immediate reward, max_a Q(S′, a) denotes an optimal value over all actions a in the next state S′ from the previous learning process, α denotes the learning rate, γ denotes the discount factor, and Q(S, A) denotes the Q value for the current state in the Q-value table.
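
Formula (4) translates directly into an update of the dictionary-based Q-value table assumed in the earlier sketches; the code below is an illustrative reading of that update, not the patent's own implementation.

def update_q(q_table, state, action, reward_value, next_state, next_actions, alpha, gamma):
    """Apply Q(S,A) <- (1-α)*Q(S,A) + α*[R + γ*max_a Q(S′,a)] to the table."""
    best_next = max((q_table.get((next_state, a), 0.0) for a in next_actions), default=0.0)
    old_q = q_table.get((state, action), 0.0)
    q_table[(state, action)] = (1 - alpha) * old_q + alpha * (reward_value + gamma * best_next)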


Step 5: Determine whether S′ is a final state. If not, go back to step 2; if so, increase the number of iterations by 1: Episode=Episode+1.


Step 6: Determine whether the number of iterations reaches predetermined value K. If yes (Episode=K), output the Q-value table; if no (Episode<K), go back to step 2 for a next round of learning.


Step 7: Obtain an optimal unit combination, which is the current state.
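
Putting steps 1 to 7 together, a training loop might look like the sketch below. It assumes the helpers from the earlier sketches (make_state, action_subspace, select_action, reward, update_q, epsilon, ALPHA, GAMMA, K) are in scope, treats run_power_flow as a placeholder for the user's power flow solver, and uses a simplified final-state test (power flow convergent and balancing machine within limits) plus a per-episode step cap; all of these are assumptions for illustration.

def train(initial_flags, run_power_flow, episodes=K, max_steps=100):
    """Learn a unit combination that makes the power flow converge; initial_flags is a tuple of on/off flags."""
    q_table = {}
    best_flags = initial_flags
    for episode in range(episodes):                                  # step 6: repeat for K episodes
        flags, prev_loss = initial_flags, float("inf")
        for _ in range(max_steps):                                   # cap the steps per episode (assumption)
            state = make_state(flags)
            action = select_action(q_table, state, action_subspace(flags), epsilon(episode))  # step 2
            unit, on = action
            next_flags = flags[:unit] + (on,) + flags[unit + 1:]
            result = run_power_flow(make_state(next_flags))          # step 3: power flow calculation
            goal = result.converged and result.slack_within_limits   # simplified final-state test
            r = reward(result, prev_loss, goal)
            update_q(q_table, state, action, r, make_state(next_flags),
                     action_subspace(next_flags), ALPHA, GAMMA)      # step 4: formula (4)
            flags = next_flags
            prev_loss = getattr(result, "loss_pct", prev_loss)
            if goal:                                                 # step 5: final state reached
                best_flags = flags
                break
    return q_table, best_flags                                       # step 7: optimal unit combination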


Through continuous trying and rule learning, the best unit combination may be selected, and a non-convergent power flow may be adjusted to a convergent power flow. This may minimize the loss of the power grid, overcome blindness of a conventional method relying on human experience, and improve efficiency and accuracy of power flow adjustment.


In the strategy, the variable, action, and goal in the power grid are converted to the state, action, and reward in the algorithm by using the Q-learning method based on a reinforcement learning theory. The unit output remains unchanged. The best unit combination is selected. The non-convergent power flow is adjusted to the convergent power flow. In addition, the loss of the power grid is minimized, and the efficiency and accuracy of power flow adjustment are improved.









TABLE 1
Comparison of indexes before and after adjustment

Index                                   Before adjustment    After adjustment
Whether a power flow is convergent      No                   Yes
Network loss (MW)                       /                    51.1344
Network loss load percentage (%)        /                    0.49










First, in the strategy, the variable, action, and goal in the power grid are converted to the state, action, and reward in the algorithm by using the Q-learning algorithm based on the reinforcement learning theory, which delivers great flexibility.


Secondly, unit power output is usually fixed in actual projects, and only a combination of powered-on units is adjusted. In this specification, the best combination of powered-on units is selected by powering on or off units without adjusting the unit power output. This delivers strong practicability.


Finally, power flow adjustment is conducted for the IEEE 39-bus standard test system and an actual operating system; Table 1 compares the indexes before and after adjustment.


The content described in the embodiments of this specification is merely an enumeration of the implementations of the inventive concept, and the claimed scope of the present invention should not be construed as being limited to the specific forms stated in the embodiments. Equivalent technical means that come into the minds of a person skilled in the art in accordance with the inventive concept also fall within the claimed scope of the present invention.

Claims
  • 1. A method for intelligently adjusting a power flow based on a Q-learning algorithm, comprising: step 1: converting a variable, an action, and a goal in a power grid to a state, an action, and a reward in the algorithm, respectively; step 2: selecting an action from an action space, giving an immediate reward based on a result of power flow calculation, and correcting a next state; step 3: forwardly observing a next exploration action based on a strategy in the Q-learning algorithm; step 4: updating a Q value in a corresponding position in a Q-value table based on the obtained reward; step 5: if a final state is not reached, going back to step 2, or if a final state is reached, increasing the number of iterations by 1; step 6: if the number of iterations does not reach predetermined value K, that is, Episode<K, going back to step 2, or if the number of iterations reaches predetermined value K, that is, Episode=K, outputting the Q-value table; and step 7: outputting an optimal unit combination.
  • 2. The method for intelligently adjusting a power flow based on a Q-learning algorithm according to claim 1, comprising: establishing an expression of a state space representing a combination of states of all units, as shown in formula (1): S={Power output of unit 1,Power output of unit 2, . . . ,Power output of unit N}  (1),
  • 3. The method for intelligently adjusting a power flow based on a Q-learning algorithm according to claim 1, further comprising: limiting values of parameters ε, α, and γ in the Q-learning algorithm, wherein: ε denotes a probability of an action selection in an ε-greedy strategy, and ε is a positive number initially set to maximum value 1 and gradually decreased as the number of iterations increases; α denotes a learning rate, and 0<α<1; and γ denotes a discount factor of an attenuation value of a future reward, and is set to a value close to 1.
  • 4. The method for intelligently adjusting a power flow based on a Q-learning algorithm according to claim 2, further comprising: limiting values of parameters ε, α, and γ in the Q-learning algorithm, wherein: ε denotes a probability of an action selection in an ε-greedy strategy, and ε is a positive number initially set to maximum value 1 and gradually decreased as the number of iterations increases; α denotes a learning rate, and 0<α<1; and γ denotes a discount factor of an attenuation value of a future reward, and is set to a value close to 1.
Priority Claims (1)
Number Date Country Kind
201911123269.8 Nov 2019 CN national
PCT Information
Filing Document Filing Date Country Kind
PCT/CN2020/120259 10/11/2020 WO 00