HIERARCHICAL COMBINATION OF ARTIFICIAL INTELLIGENCE AND OPTIMIZATION FOR THE OPERATION OF POWER SYSTEMS

Information

  • Patent Application
  • Publication Number
    20240029093
  • Date Filed
    December 09, 2022
  • Date Published
    January 25, 2024
Abstract
A computer system for real-time coordinated operation of power distribution systems and electric vehicles identifies a set of integrated hybrid resources (IHRs), wherein each IHR within the set of IHRs comprises one or more of: energy storage (ES) systems, solar generating units, electric vehicles (EVs), and/or inflexible loads. The computer system executes, at an IHR selected from the set of IHRs, a deep deterministic policy gradient (DDPG) algorithm, the DDPG algorithm utilizing a critic deep neural network and an actor deep neural network. The critic deep neural network estimates a Q-value of an action for a given state, and the actor deep neural network estimates a best action for the given state. Based upon an output of the DDPG algorithm, the computer system generates a charging schedule for the ES systems and the EVs within the IHR.
Description
BACKGROUND

The increasing inclusion of batteries, solar, wind, and various other relatively new green energy sources has introduced several challenges to modern power grid management. A multitude of problems must be addressed in smart grid power management.


The subject matter claimed herein is not limited to embodiments that solve any disadvantages or that operate only in environments such as those described above. Rather, this background is only provided to illustrate one exemplary technology area where some embodiments described herein may be practiced.


BRIEF SUMMARY

Disclosed embodiments include computer systems, methods, and apparatus for hierarchical combination of artificial intelligence and optimization for the operation of power systems. In at least one embodiment, a computer system for real-time coordinated operation of power distribution systems and electric vehicles identifies a set of integrated hybrid resources (IHRs), wherein each IHR within the set of IHRs comprises one or more of: energy storage (ES) systems, solar generating units, electric vehicles (EVs), and/or inflexible loads. The computer system may then execute, at an IHR selected from the set of IHRs, a deep deterministic policy gradient (DDPG) algorithm, the DDPG algorithm utilizing a critic deep neural network and an actor deep neural network. The critic deep neural network estimates a Q-value of an action for a given state, and the actor deep neural network estimates a best action for the given state. Based upon an output of the DDPG algorithm, the computer system may generate a charging schedule for the ES systems and the EVs within the IHR.


This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.


Additional features and advantages will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the teachings herein. Features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. Features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.





BRIEF DESCRIPTION OF THE DRAWINGS

In order to describe the manner in which the above-recited and other advantages and features can be obtained, a more particular description of the subject matter briefly described above will be rendered by reference to specific embodiments which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments and are not therefore to be considered to be limiting in scope, embodiments will be described and explained with additional specificity and detail through the use of the accompanying drawings described below.



FIG. 1 depicts a schematic diagram of an example system for real-time coordinated operation of power distribution systems.



FIG. 2 depicts another schematic diagram of an example system for real-time coordinated operation of power distribution systems.



FIG. 3 depicts another schematic diagram of an example system for real-time coordinated operation of power distribution systems.



FIG. 4 depicts a map of an example power distribution system divided into IHRs.



FIG. 5 illustrates a flow chart of steps in a method for real-time coordinated operation of power distribution systems.





DETAILED DESCRIPTION

Disclosed embodiments include computer systems, methods, and apparatus for hierarchical combination of artificial intelligence and optimization for the operation of power systems. In at least one embodiment, a computer system for real-time coordinated operation of power distribution systems and electric vehicles identifies a set of integrated hybrid resources (IHRs), wherein each IHR within the set of IHRs comprises one or more of: energy storage (ES) systems, solar generating units, electric vehicles (EVs), and/or inflexible loads. The computer system may then execute, at an IHR selected from the set of IHRs, a deep deterministic policy gradient (DDPG) algorithm, the DDPG algorithm utilizing a critic deep neural network and an actor deep neural network. The critic deep neural network estimates a Q-value of an action for a given state, and the actor deep neural network estimates a best action for the given state. Based upon an output of the DDPG algorithm, the computer system may generate a charging schedule for the ES systems and the EVs within the IHR.



FIG. 1 depicts a schematic diagram of an example computer system 100 for real-time coordinated operation of power distribution systems. The depicted computer system 100 comprises one or more processors 110 and computer-storage media 120. The one or more processors 110 execute instructions that are stored on the computer-storage media 120. The one or more processors 110 and the computer-storage media 120 may be located locally, remotely, or distributed between local and remote systems.


The computer-executable instructions stored on the computer-storage media 120 comprise a power systems optimization software application 130. The power systems optimization software 130 includes a DDPG algorithm 140 that is trained to optimize power systems. The DDPG algorithm 140 utilizes a critic deep neural network 142 and an actor deep neural network 144, as explained in greater detail below.


The power systems optimization software application 130 may further include both IHR module(s) 150 and a central controller module 160. The IHR module(s) 150 and the central controller module 160 are configured to interface and/or manage execution of instructions at the central controller and/or IHRs. In at least one embodiment, the instructions for the IHRs and central controller are executed locally. In additional or alternative embodiments, at least a portion of the executable instructions are located at a remote IHR and/or at a remote central controller.


High penetration of distributed energy resources (DERs) and electric vehicles (EVs) is a key factor in decarbonizing the power grid and addressing climate change. Supporting policies and regulatory drivers encourage power system operators to utilize responsive DERs instead of centralized bulk generation. For instance, Federal Energy Regulatory Commission (FERC) Order 2222 enables DERs to participate in wholesale energy and ancillary service markets. Further, FERC defines a combination of DERs that are modeled and controlled like a single source as an integrated hybrid resource (IHR). Therefore, the emergent trend of replacing passive load and generation with active and responsive DERs (i.e., energy storage (ES) systems and solar generating units) and electric vehicles restructures the electric power grid paradigm and highlights existing opportunities to deploy distributed energy flexibility via different energy management schemes.


Conventional models for integrating DERs and EVs in power system operation become computationally expensive and intractable as the number of DERs and EVs increases. More specifically, the real-time control and operation of a large number of DERs and EVs must integrate the corresponding physical and operational constraints, which makes existing energy management optimization models complex and impractical. Therefore, adopting artificial intelligence engines to control the DERs and EVs in a decentralized manner is beneficial. However, existing data-driven models, e.g., deep reinforcement learning, fall short of considering the physical constraints of the power distribution system to ensure the deliverability of energy in real-time operation.


Disclosed embodiments include a hierarchical energy flexibility model to control the dispatch of IHRs in the real-time operation of power distribution systems. A schematic overview of the proposed real-time hierarchical energy flexibility model 200 is illustrated in FIG. 2. FIG. 2 depicts a central controller 210 in communication with multiple IHRs 220a, 220b. The depicted IHRs 220a, 220b comprise inflexible loads 222, EV chargers 224, and distributed energy resources 226. In at least one embodiment, the disclosed system defines a set of ES systems, solar generating units, EVs, and inflexible load as an IHR, which can be controlled locally. The power distribution system is divided into I IHRs 220a, 220b, in which each IHR controller makes decisions on the charging and discharging of ES systems, as well as the charging of EV batteries, based on the electricity price, solar generation, and inflexible load demand. The IHR controller calculates and sends the net active power, as well as the maximum and minimum reactive power of the IHR (based on the energy dispatches of the DERs), to the central controller. The power distribution system central controller performs a high-level power flow analysis to determine the adjusted active power and reactive power setpoints while ensuring their deliverability in real-time operation. The adjusted active power and reactive power setpoints are finally redistributed among the DERs in each IHR 220a, 220b.


Disclosed embodiments include a hierarchical energy flexibility model for IHRs 220a, 220b to determine the active and reactive dispatch of DERs and EVs in real-time operation of power distribution systems. Additional embodiments include a detailed model for IHR controller that adopts a deep reinforcement learning approach to enable scalable and integrated control of DERs and EVs locally, which eliminates the need for complicated and computationally expensive centralized models. Further embodiments include a computationally efficient central controller to ensure the feasibility and deliverability of the dispatched energy in the local controllers. More specifically, the proposed central controller incorporates physical constraints of the power distribution system and sends adjusted active and reactive power setpoints to IHR controllers. Disclosed embodiments also include a proposed data-driven model for the IHR controller that can be trained online to be adaptive to the changing factors in the real-time operation of power distribution systems.


In at least one embodiment, a real-time hierarchical energy flexibility model is composed of one central controller and multiple IHR controllers. The power distribution system is divided into multiple IHR zones, where each IHR 220a, 220b contains inflexible loads, DERs, and EV chargers that are geographically close and connected to a set of power distribution buses. The IHR zones are determined such that the voltage deviation between different power distribution buses within an IHR does not exceed a threshold δ. Each IHR zone may contain an IHR controller that minimizes the local operation cost considering the energy price, inflexible load, quality-of-service constraints of EVs, and operational limits of DERs. The central controller, on the other hand, conducts an efficient power flow analysis, in which each IHR 220a, 220b is modeled as a single bus, to ensure the deliverability of power and the reliability of the power distribution system operation in real time.
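
As an illustration only, the following is a minimal sketch of one possible zone-formation rule under the voltage-deviation threshold δ described above; the greedy grouping strategy and all names are assumptions for illustration, not the source's method:

```python
# Hypothetical sketch: group buses into IHR zones such that the voltage
# spread within each zone stays within a threshold delta (per unit).

def partition_buses(bus_voltages, delta=0.01):
    zones, current = [], []
    for bus, v in sorted(bus_voltages.items(), key=lambda kv: kv[1]):
        # Start a new zone when adding this bus would exceed the spread delta.
        if current and v - min(bus_voltages[b] for b in current) > delta:
            zones.append(current)
            current = []
        current.append(bus)
    if current:
        zones.append(current)
    return zones

print(partition_buses({"b1": 1.00, "b2": 1.005, "b3": 1.02, "b4": 1.022}))
# [['b1', 'b2'], ['b3', 'b4']]
```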


In at least one embodiment, the IHR controller determines the dispatch of DERs and EVs such that the operation cost of the IHR in Equation 1 is minimized. In Equation 1, the terms $P_t^z$ and $\lambda_t$ denote the net active power of the IHR and the locational marginal price of electricity, respectively. More specifically, the IHR controller determines the delivered charging power of the EVs, as well as the charging and discharging dispatches of the ES systems, with respect to the operational constraints in Equations 2 through 5.









$$\min \; \sum_{t} P_t^{z} \lambda_t \qquad \text{(Equation 1)}$$







The net active power of the IHR is calculated in Equation 2. The terms $P_t^D$, $P_{m,t}^M$, $P_{v,t}^V$, $P_{e,t}^{E,c}$, and $P_{e,t}^{E,d}$ are, respectively, the inflexible load, the active power generation of the solar unit, the delivered charging power of the EV, and the charging and discharging power dispatches of the ES system.










$$P_t^{z} = P_t^{D} - \sum_{m \in \mathcal{M}_i} P_{m,t}^{M} + \sum_{v \in \mathcal{V}_i} P_{v,t}^{V} + \sum_{e \in \varepsilon_i} \left( P_{e,t}^{E,c} - P_{e,t}^{E,d} \right) \qquad \text{(Equation 2)}$$







A queuing model is adopted to shift the charging demand of EVs temporally, capturing the energy flexibility and minimizing the operation cost of EVs such that the EV owner's quality of service is maintained. Let $A_{v,t}$ denote the requested charging power of EV $v \in \mathcal{V}$ at time t. The state equation of the queuing model is delineated in Equation 3, where the accumulation of EV power requests not yet served by time t forms the queue backlog. Thus, the queue backlog at time t, $O_{v,t}$, is equal to the queue backlog at time t−1, plus the requested power, minus the delivered power at time t. A deadline-based constraint is proposed in Equation 4 to ensure the EV owners' quality of service, in which $t_v^D$ is the deadline to meet the charging request of EV $v \in \mathcal{V}$.






$$O_{v,t} = O_{v,t-1} + A_{v,t} - P_{v,t}^{V}, \quad \forall t \qquad \text{(Equation 3)}$$

$$O_{v,t} = 0, \quad t = t_v^{D} \qquad \text{(Equation 4)}$$
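
A minimal sketch of the queue-backlog bookkeeping in Equations 3 and 4 follows; the function and variable names are illustrative assumptions:

```python
# Sketch of the EV charging queue of Equations 3 and 4 (illustrative names).

def update_backlog(backlog_prev: float, requested: float, delivered: float) -> float:
    """Equation 3: O_{v,t} = O_{v,t-1} + A_{v,t} - P_{v,t}^V."""
    return backlog_prev + requested - delivered

def deadline_met(backlog: float, t: int, deadline: int, tol: float = 1e-9) -> bool:
    """Equation 4: the backlog must be zero at the EV's deadline t_v^D."""
    return t != deadline or abs(backlog) <= tol

# Example: an EV requests 7 kW for two steps; only 5 kW is delivered first.
O = 0.0
O = update_backlog(O, requested=7.0, delivered=5.0)   # backlog = 2 kW
O = update_backlog(O, requested=7.0, delivered=9.0)   # backlog = 0 kW
assert deadline_met(O, t=2, deadline=2)
```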


Solar generating units equipped with smart inverters can adjust their active and reactive power such that voltage and frequency issues in the power distribution system are alleviated and resolved. In Equation 5, the apparent power of the smart inverter of solar generating unit $m \in \mathcal{M}$ at time t is capped by the maximum apparent power of the inverter, $S_m^M$, where $P_{m,t}^M$ and $Q_{m,t}^M$ are the active and reactive power dispatches, respectively. In Equation 6, the active power of solar generating unit $m \in \mathcal{M}$ at time t is bounded below by zero and above by the forecasted solar generation, $\bar{P}_{m,t}^M$. Equation 7 ensures that the power factor of solar generating unit m at time t is greater than the minimum acceptable power factor, $pf_m^M$.












$$\left( P_{m,t}^{M} \right)^2 + \left( Q_{m,t}^{M} \right)^2 \leq \left( S_m^{M} \right)^2, \quad \forall m, t \qquad \text{(Equation 5)}$$

$$0 \leq P_{m,t}^{M} \leq \bar{P}_{m,t}^{M}, \quad \forall m, t \qquad \text{(Equation 6)}$$

$$pf_m^{M} \leq \frac{P_{m,t}^{M}}{\sqrt{\left( P_{m,t}^{M} \right)^2 + \left( Q_{m,t}^{M} \right)^2}}, \quad \forall m, t \qquad \text{(Equation 7)}$$
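
A minimal sketch of the smart-inverter feasibility checks in Equations 5 through 7 follows; the names and per-unit values are illustrative assumptions:

```python
import math

# Sketch of the inverter constraints for one solar unit m at time t.

def inverter_dispatch_feasible(p: float, q: float, s_max: float,
                               p_forecast: float, pf_min: float) -> bool:
    apparent_ok = p**2 + q**2 <= s_max**2          # Equation 5
    active_ok = 0.0 <= p <= p_forecast             # Equation 6
    pf_ok = p >= pf_min * math.hypot(p, q)         # Equation 7, rearranged
    return apparent_ok and active_ok and pf_ok

print(inverter_dispatch_feasible(p=0.8, q=0.3, s_max=1.0,
                                 p_forecast=0.9, pf_min=0.9))  # True
```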







The state equation of the ES system is given in Equation 8, where the state of charge (SOC) of ES system $e \in \varepsilon$ at time t is equal to the SOC of the ES system at time t−1, plus the charged energy, minus the discharged energy at time t. The terms $\eta^c$ and $\eta^d$ are the charging and discharging efficiencies of the ES system, respectively.











$$E_{e,t} = E_{e,t-1} + \eta^{c} P_{e,t}^{E,c} - \frac{1}{\eta^{d}} P_{e,t}^{E,d}, \quad \forall e, t \qquad \text{(Equation 8)}$$
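
A minimal sketch of the SOC update in Equation 8 follows, assuming a unit-length time step; the names and efficiency values are illustrative:

```python
# Sketch of the ES state-of-charge update of Equation 8 (illustrative names).

def update_soc(soc_prev: float, p_charge: float, p_discharge: float,
               eta_c: float = 0.95, eta_d: float = 0.95) -> float:
    """E_{e,t} = E_{e,t-1} + eta_c * P^{E,c} - (1/eta_d) * P^{E,d}."""
    return soc_prev + eta_c * p_charge - p_discharge / eta_d

soc = update_soc(soc_prev=10.0, p_charge=2.0, p_discharge=0.0)  # 11.9 kWh
```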







The reactive power at each IHR i is defined as the reactive power required by the inflexible load and EVs minus the reactive power provided by the ES systems and solar generating units. The upper bound on the active power of an ES system is determined by its maximum charging and discharging capabilities:








$$\bar{P}_e^{E} = \max \left( \bar{P}_e^{E,c}, \; \bar{P}_e^{E,d} \right)$$






In order to ensure the deliverability of reactive power when the active power reaches its maximum threshold in inverter-based resources, the apparent power capacity is taken to be larger than the maximum active power capacity. Hence, the system calculates the maximum and minimum reactive power thresholds of the ES systems and solar generating units in Equation 9 and Equation 10, where the terms $\underline{Q}_e^E$, $\bar{Q}_e^E$, $\underline{Q}_m^M$, and $\bar{Q}_m^M$ respectively denote the minimum and maximum reactive power of the ES systems and solar generating units.












$$\underline{Q}_e^{E} = -\sqrt{\left( S_e^{E} \right)^2 - \left( \bar{P}_e^{E} \right)^2} \;\leq\; Q_{e,t}^{E} \;\leq\; \sqrt{\left( S_e^{E} \right)^2 - \left( \bar{P}_e^{E} \right)^2} = \bar{Q}_e^{E}, \quad \forall e, t \qquad \text{(Equation 9)}$$

$$\underline{Q}_m^{M} = -\sqrt{\left( S_m^{M} \right)^2 - \left( \bar{P}_m^{M} \right)^2} \;\leq\; Q_{m,t}^{M} \;\leq\; \sqrt{\left( S_m^{M} \right)^2 - \left( \bar{P}_m^{M} \right)^2} = \bar{Q}_m^{M}, \quad \forall m, t \qquad \text{(Equation 10)}$$
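
A minimal sketch of the reactive-power envelope in Equations 9 and 10 follows; the same formula applies to ES systems and solar units, and the names are illustrative:

```python
import math

# Sketch of the symmetric reactive-power bounds of Equations 9 and 10.

def reactive_power_bounds(s_max: float, p_max: float) -> tuple[float, float]:
    """Return (Q_min, Q_max) = (-sqrt(S^2 - P^2), +sqrt(S^2 - P^2))."""
    headroom = math.sqrt(max(s_max**2 - p_max**2, 0.0))
    return -headroom, headroom

q_min, q_max = reactive_power_bounds(s_max=1.1, p_max=1.0)  # ~(-0.458, 0.458)
```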




The active power dispatches of the ES systems and EVs, as well as the minimum and maximum thresholds of the reactive power at each IHR zone, are calculated above. To ensure the deliverability of the scheduled dispatches for ES systems and EVs, the central controller of the power distribution system performs a power flow analysis, in which the index $i \in \mathcal{I}$ is added to represent the different IHR zones. The central controller can reduce the requested active power of each IHR $i \in \mathcal{I}$ by $P_{i,t}^C$ such that $\tilde{P}_{i,t}^z = P_{i,t}^z - P_{i,t}^C$, where $\tilde{P}_{i,t}^z$ is the adjusted active power. The central controller sends the adjusted active power and reactive power setpoints of the IHR, $\tilde{P}_{i,t}^z$ and $Q_{i,t}^z$, to the IHR controller. The IHR controller then tailors the charging and discharging dispatches of the ES systems and solar generating units, and the delivered charging power to EVs, in response to the active and reactive power signals of the central controller.


The central controller of the power distribution system conducts an optimal power flow to ensure the deliverability of the requested power and determines the required reactive power at each IHR. The IHRs and connecting lines in the power distribution system are respectively denoted by $\mathcal{I}$ and $\mathcal{L}$, where $(k, i, j) \in \mathcal{L}$ are three consecutive IHRs. The objective function of the central controller in Equation 11 minimizes the cost of power drawn from the upstream transmission system in the first term, while penalizing the curtailed power at each IHR by a large factor, $\lambda^p$, in the second term.










$$\min \; \sum_{t \in T} P_t^{G} \lambda_t + \sum_{t \in T} \sum_{i \in \mathcal{I}} P_{i,t}^{C} \lambda^{p} \qquad \text{(Equation 11)}$$







While the central controller solves a high-level and efficient optimization problem, the IHR controllers must solve the problem for multiple DERs and numerous EVs, which would make a conventional optimization solution computationally expensive and slow. As such, a reinforcement learning (RL) framework may be adopted to articulate the IHR controllers' decision-making process. To this end, the operation of the IHR controller may be modeled by a Markov Decision Process (MDP), which represents the state evolution of the system at the local level.


In at least one embodiment, the MDP is modeled by a tuple $(\mathcal{S}, \mathcal{A}, \mathcal{P}, r, \gamma)$, which is defined for the proposed problem as follows:


State space: The state representation at time t comprises the state spaces of the ES systems and EVs, denoted $s_t \in \mathcal{S}$, and is defined as:






$$s_t = \left( P_t^{D}, P_t^{G}, \lambda_t, I_t, (1 - I_t) E_t, I_t T^{D}, I_t E_t^{r} \right) \qquad \text{(Equation 12)}$$


where $P_t^D$ and $P_t^G$ represent the vectors of inflexible load and solar generation power, respectively, and $\lambda_t$ is the real-time electricity price. To form an inclusive state space containing both ES systems and EVs, an identification $(1 \times X)$-vector $I_t = [I_{x,t}]$ is formed, where X is the total number of ES systems and EVs. A component of the identification vector, $I_{x,t}$, is equal to 0 if it represents an ES system and 1 otherwise. The identification vector modifies the state space such that the agent can differentiate between the ES systems and EVs and make decisions accordingly. The vector $E_t = [E_{x,t}]$ represents the SOC of the ES systems and is multiplied element-wise by $(1 - I_t)$ so that it becomes zero for EVs. The departure times and remaining requested energy of the EVs are respectively denoted by $T^D = [T_x^D]$ and $E_t^r = [E_{x,t}^r]$, and are multiplied element-wise by $I_t$, which sets these last two state parameters to zero for ES systems.
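
A minimal sketch of how the masked state vector of Equation 12 could be assembled follows (NumPy, with illustrative names and values):

```python
import numpy as np

# Sketch of the combined ES/EV state of Equation 12. The identification
# vector I_t masks the fields that do not apply to a resource type.

def build_state(p_load, p_solar, price, is_ev, soc, departure, remaining):
    """is_ev[x] = 1 for an EV, 0 for an ES system (the vector I_t)."""
    is_ev = np.asarray(is_ev, dtype=float)
    soc_masked = (1.0 - is_ev) * np.asarray(soc)     # zeroed for EVs
    dep_masked = is_ev * np.asarray(departure)       # zeroed for ES systems
    rem_masked = is_ev * np.asarray(remaining)       # zeroed for ES systems
    return np.concatenate([[p_load, p_solar, price], is_ev,
                           soc_masked, dep_masked, rem_masked])

# One ES system followed by two EVs:
s_t = build_state(5.0, 2.0, 30.0, is_ev=[0, 1, 1],
                  soc=[8.0, 0, 0], departure=[0, 18, 20],
                  remaining=[0, 6.0, 9.0])
```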


Action space: The collective action of the ES systems and EVs in system state $s_t$ forms the action $a_t = [a_t^1, a_t^2, \ldots, a_t^X] \in \mathcal{A}$, where $a_t^x$ is a continuous variable. The first action, $a_t^1$, represents the output of the ES system and is limited on both ends by the maximum discharging and charging capacities. The remaining actions, $a_t^x, \forall x > 1$, are continuous actions for charging EVs and are limited by the maximum charging capacity of the charging plug at the station.


Reward: The actions taken by the IHR controller for the ES systems and EVs, $a_t \in \mathcal{A}$, reshape the system state from $s_t$ to $s_{t+1} \in \mathcal{S}$ and consequently allocate a reward, $r_t$, to the controller. The structure of the reward function may be designed such that actions are better guided toward the optimal direction. The reward function is presented in Equation 13, in which $\omega_i$ are weighting coefficients and $\tilde{\lambda}$ is the predicted average price of electricity for the next 24 hours. The first line of Equation 13 delineates the reward for ES systems, which receive a positive reward for discharging when the energy price is above average and a negative reward otherwise; however, no negative reward is allocated when the ES system is charged by local solar generation. In the second line, the EVs receive a positive reward if the batteries charge when the electricity price is lower than average, and a negative reward otherwise. Further, the EV agent receives a negative reward proportional to the remaining requested energy before the deadline, and a large positive reward if the agent meets all the requested charging demand by the deadline.










$$r_t(s_t, a_t) = \sum_{e \in \varepsilon} \omega_1 \left( \max \left\{ 0, P_{e,t}^{E,c} - P_{m,t}^{M} \right\} - P_{e,t}^{E,d} \right) \left( \tilde{\lambda} - \lambda_t \right) + \sum_{v \in \mathcal{V}} \left[ \omega_2 P_{v,t}^{V} \left( \tilde{\lambda} - \lambda_t \right) - \omega_3 E_{v,t}^{r} \Big|_{t \neq t_v^{D}} + \omega_4 \Big|_{t = t_v^{D},\, E_{v,t}^{r} = 0} \right] \qquad \text{(Equation 13)}$$








Transition: The transition probability characterizes the stochastic dynamics of the system, in which the probability of going from state $s_t \in \mathcal{S}$ to $s_{t+1} \in \mathcal{S}$ through action $a_t$ is defined by $\mathcal{P}: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [0, 1]$. The unknown transition probability is obtained by observing voluminous transitions in the reinforcement learning framework.


In at least one embodiment, the model maximizes the reward of all agents (i.e., ES systems and EVs), which minimizes the operation cost and maximizes the self-sufficiency of the IHR in the power distribution system while maintaining the operational constraints of the ES systems and the EV owners' quality of service. To this end, Equation 14 is maximized as follows:










$$\max_{a_t} \; \mathbb{E} \left[ \left. \sum_{t=1}^{\infty} \gamma^{t} \, r_t(s_t, a_t) \; \right| \; s = s_0 \right] \qquad \text{(Equation 14)}$$







The term γ∈[0, 1] represents the discount factor of the model: lower values encourage myopic behavior that maximizes short-term rewards, while higher values enable the agents to take a more forward-looking approach.
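
A minimal sketch of the discounted return being maximized in Equation 14 follows; the reward sequence and γ are illustrative:

```python
# Sketch of the discounted return of Equation 14 (sum starts at t = 1).

def discounted_return(rewards, gamma=0.95):
    return sum(gamma**t * r for t, r in enumerate(rewards, start=1))

print(discounted_return([1.0, 0.5, 2.0]))
# 0.95*1.0 + 0.95**2*0.5 + 0.95**3*2.0 ≈ 3.12
```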


In additional or alternative embodiments, a deep deterministic policy gradient (DDPG) method may be used. DDPG is a model-free actor-critic algorithm in which actions are taken and evaluated by the actor and critic networks, respectively, to establish optimal action policies in a continuous action space. The Bellman equation is utilized to recursively estimate the long-term value, or Q-value, $Q(s_t, a_t)$, of action $a_t$ at state $s_t$:










$$Q(s_t, a_t) = r(s_t, a_t) + \gamma \, \mathbb{E} \left[ \max_{a_{t+1}} Q(s_{t+1}, a_{t+1}) \right] \qquad \text{(Equation 15)}$$







Since the total number of states and actions in the continuous space is infinite, the DDPG algorithm utilizes two deep neural networks, known as the critic and actor networks, to evaluate the long-term values and find the best action. The actor network, $\mu(s_t; \theta^\mu)$, is trained to capture a deterministic policy for estimating the best action in state $s_t$, while the critic network, $Q(s_t, a_t; \theta^Q)$, estimates the Q-value of action $a_t$ given state $s_t$. Given the deterministic policy of the actor network, $\mu(s_t; \theta^\mu)$, and the instantaneous state $s_t$, the Q-value of the trained networks is estimated in Equation 16, where $\theta^\mu$ and $\theta^Q$ are the weight vectors of the two networks.






$$Q(s_t, a_t) \approx r(s_t, a_t) + \gamma \, \mathbb{E} \left[ Q \left( s_{t+1}, \mu(s_{t+1}; \theta^\mu); \theta^Q \right) \right] \qquad \text{(Equation 16)}$$


In the simultaneous training process of the actor and critic networks, the actor network selects an action based on the sampled system state, $s_t$, while the critic network evaluates the given samples, $s_t$ and $a_t = \mu(s_t; \theta^\mu)$, calculates the reward, $r_t$, and consequently improves the actor network.
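
A minimal actor-critic sketch of the DDPG update in Equations 15 and 16 follows, assuming PyTorch; the layer sizes, optimizers, and the soft target-update constant τ are illustrative assumptions, not values specified by the source:

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """mu(s; theta_mu): maps a state to a bounded continuous action."""
    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, action_dim), nn.Tanh())

    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    """Q(s, a; theta_Q): scores a state-action pair."""
    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + action_dim, 64),
                                 nn.ReLU(), nn.Linear(64, 1))

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def ddpg_update(actor, critic, target_actor, target_critic,
                actor_opt, critic_opt, batch, gamma=0.99, tau=0.005):
    s, a, r, s_next = batch  # tensors of shape [B, dim]; r is [B, 1]
    # Critic: regress Q(s, a) toward r + gamma * Q'(s', mu'(s'))  (Eq. 16)
    with torch.no_grad():
        y = r + gamma * target_critic(s_next, target_actor(s_next))
    critic_loss = nn.functional.mse_loss(critic(s, a), y)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor: ascend the critic's estimate of Q(s, mu(s))
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    # Soft-update the target networks toward the online networks
    for net, target in ((actor, target_actor), (critic, target_critic)):
        for p, p_t in zip(net.parameters(), target.parameters()):
            p_t.data.mul_(1 - tau).add_(tau * p.data)
```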


In at least one embodiment of a deep reinforcement learning (DRL) model, the agent needs to make a collective decision $a_t = [a_t^1, a_t^2, \ldots, a_t^X]$ for all EVs and ES systems within an IHR. Assuming each action $a_t^x$ belongs to the action space $\mathcal{A}^x$, the size of the collective action space is $\prod_{x=1}^{X} |\mathcal{A}^x|$, which determines the complexity of the Q-value computation; hence, a large collective action space can make the training slow and inefficient. To overcome this hurdle, in at least one embodiment, the system reformulates the state evolution by breaking down the collective action of all ES systems and EVs into X single actions (X being the total number of EVs and ES systems) and creating X−1 intermediate states: $(s_t, a_t^1), (s_t, a_t^1, a_t^2), \ldots, (s_t, a_t^1, \ldots, a_t^{X-1})$. In this way, actions are taken sequentially rather than collectively, and each action is taken after its predecessors' actions are known (see the sketch after Equation 17). This reformulation reduces the complexity of the Q-value computation from $\prod_{x=1}^{X} |\mathcal{A}^x|$ to $\sum_{x=1}^{X} |\mathcal{A}^x|$. Accordingly, the reward function in Equation 13 is modified to include the reward of taking an action in an intermediate state; however, the total reward remains the same as in the original problem. The new reward function is:






$$r_t(s_t, a_t) = \omega_1 \left( \max \left\{ 0, P_{e,t}^{E,c} - P_{m,t}^{M} \right\} - P_{e,t}^{E,d} \right) \left( \tilde{\lambda} - \lambda_t \right) + \omega_2 P_{v,t}^{V} \left( \tilde{\lambda} - \lambda_t \right) - \omega_3 E_{v,t}^{r} \Big|_{t \neq t^{D}} + \omega_4 \Big|_{t = t^{D},\, E_{v,t}^{r} = 0} \qquad \text{(Equation 17)}$$
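
A minimal sketch of the sequential decomposition referenced above follows, in which each resource's action is chosen after its predecessors' actions are known; the policy interface is an illustrative assumption:

```python
# Sketch of sequential (rather than collective) action selection: visit the
# intermediate states (s_t), (s_t, a^1), ..., (s_t, a^1..a^{X-1}).

def act_sequentially(state, per_resource_policies):
    actions = []
    for policy in per_resource_policies:   # one decision per ES system / EV
        actions.append(policy(state, tuple(actions)))
    return actions

# Toy example with X = 3 resources: each "policy" echoes how many
# predecessor actions it observed before deciding.
policies = [lambda s, prev: len(prev) for _ in range(3)]
print(act_sequentially(state=None, per_resource_policies=policies))  # [0, 1, 2]
```

With X resources and $|\mathcal{A}^x|$ candidate actions each, this sequencing costs on the order of $\sum_x |\mathcal{A}^x|$ evaluations instead of $\prod_x |\mathcal{A}^x|$, as stated above.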


The DDPG agent, once trained, makes the initial decisions for the charging schedule of the ES systems and EVs in its IHR. The interaction of the IHR and central controllers is shown in the schematic 300 of FIG. 3, where the trained IHR controller is directly applied to make the initial decision on the active power of the DERs and EVs and, when the decisions are adjusted by the central controller, redistributes them among the resources within its zone.


In at least one embodiment, the proposed DRL model is responsive to the electricity price, which allows the system to take advantage of the flexibility offered by ES systems, solar generating units, and EVs to reduce the operation cost of the IHRs and, consequently, of the power distribution system. The optimization model integrates all the physical constraints of the DERs and EVs to minimize the operation cost of the power distribution system in a centralized manner, which results in the lowest operation cost.


Additionally, in at least one embodiment, the proposed DRL-trained controller defers the requested charging demand of EVs in response to the electricity price to reduce the charging cost. Further, the controller discharges the ES systems when the electricity price is higher than average in pursuit of higher profit, while charging the batteries when the electricity price is lower than average or solar generation is available. The positive and negative components of the delivered power minus the requested power of an EV, $F_{v,t}^{+} - F_{v,t}^{-} = P_{v,t}^{V} - A_{v,t}$, as well as the charging minus the discharging power of an ES system, $F_{e,t}^{+} - F_{e,t}^{-} = P_{e,t}^{E,c} - P_{e,t}^{E,d}$, are defined as the positive and negative flexibility offered by EVs and ES systems, respectively. Positive flexibility refers to meeting the charging demand of EVs and charging the ES system batteries, while negative flexibility denotes deferring the charging demand of EVs and discharging the ES system batteries. The EVs and ES systems charge the batteries when the electricity price is low and solar generation is available, while the batteries are discharged later at night when the electricity price is high and there is no solar generation. Characterizing the positive and negative flexibility of the EVs and DERs at the IHR controller enables the power distribution operator to use the offered flexibility to ensure reliable operation and to participate in the wholesale electricity market for profit.
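
A minimal sketch of the positive/negative flexibility sign convention defined above follows; the names and values are illustrative:

```python
# Sketch of the flexibility metrics: F+ - F- is delivered-minus-requested
# power for an EV, and charging-minus-discharging power for an ES system.

def ev_flexibility(p_delivered: float, p_requested: float) -> float:
    return p_delivered - p_requested      # > 0: positive flexibility

def es_flexibility(p_charge: float, p_discharge: float) -> float:
    return p_charge - p_discharge         # < 0: negative flexibility

print(ev_flexibility(3.0, 7.0))   # -4.0 -> charging demand deferred
print(es_flexibility(0.0, 2.5))   # -2.5 -> ES discharging into the grid
```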



FIG. 4 depicts a map of an example power distribution system 400 divided into IHRs 220(a-f). As depicted the IHRs 220(a-f) may be linked to each other and to a substation 410. The central controller 210 may be positioned at the substation 410, distributed among processors 110 within the IHRs 220(a-f), or located remotely from the power distribution system 400.


The following discussion now refers to a number of methods and method acts that may be performed. Although the method acts may be discussed in a certain order or illustrated in a flow chart as occurring in a particular order, no particular ordering is required unless specifically stated, or required because an act is dependent on another act being completed prior to the act being performed.


Referring now to FIG. 5, a method 500 is illustrated. Method 500 includes various steps within a computer-implemented method, executed on one or more processors, for real-time coordinated operation of power distribution systems. For example, step 510 comprises identifying a set of IHRs. Step 510 further includes identifying a set of integrated hybrid resources (IHRs), wherein each IHR within the set of IHRs comprises one or more of: energy storage (ES) systems, solar generating units, electric vehicles (EVs), and/or inflexible loads. For example, FIG. 4 depicts an example power distribution system 400 divided into IHRs 220(a-f).


Additionally, method 500 comprises an act 520 of executing a DDPG algorithm. Act 520 further includes executing, at an IHR selected from the set of IHRs, a deep deterministic policy gradient (DDPG) algorithm, the DDPG algorithm utilizing a critic deep neural network and an actor deep neural network, wherein: the critic deep neural network estimates a Q-value of an action for a given state, and the actor deep neural network estimates a best action for the given state. For example, the computer system 100 of FIG. 1 comprises a DDPG algorithm 140 that utilizes a critic deep neural network 142 and an actor deep neural network 144 for real-time coordinated operation of power distribution systems.


Further, method 500 comprises an act 530 of generating a charging schedule. Act 530 further includes, based upon an output of the DDPG algorithm, generating a charging schedule for the ES systems and the EVs within the IHR. For example, the system may create a queue for EV charging in order to optimize costs within the power distribution systems.


Further, the methods may be practiced by a computer system including one or more processors and computer-readable media such as computer memory. In particular, the computer memory may store computer-executable instructions that when executed by one or more processors cause various functions to be performed, such as the acts recited in the embodiments.


Computing system functionality can be enhanced by a computing system's ability to be interconnected to other computing systems via network connections. Network connections may include, but are not limited to, connections via wired or wireless Ethernet, cellular connections, or even computer to computer connections through serial, parallel, USB, or other connections. The connections allow a computing system to access services at other computing systems and to quickly and efficiently receive application data from other computing systems.


Interconnection of computing systems has facilitated distributed computing systems, such as so-called “cloud” computing systems. In this description, “cloud computing” may be systems or resources for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, services, etc.) that can be provisioned and released with reduced management effort or service provider interaction. A cloud model can be composed of various characteristics (e.g., on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, etc.), service models (e.g., Software as a Service (“SaaS”), Platform as a Service (“PaaS”), Infrastructure as a Service (“IaaS”)), and deployment models (e.g., private cloud, community cloud, public cloud, hybrid cloud, etc.).


Cloud and remote based service applications are prevalent. Such applications are hosted on public and private remote systems such as clouds and usually offer a set of web based services for communicating back and forth with clients.


Many computers are intended to be used by direct user interaction with the computer. As such, computers have input hardware and software user interfaces to facilitate user interaction. For example, a modern general purpose computer may include a keyboard, mouse, touchpad, camera, etc. for allowing a user to input data into the computer. In addition, various software user interfaces may be available.


Examples of software user interfaces include graphical user interfaces, text command line based user interface, function key or hot key user interfaces, and the like.


Disclosed embodiments may comprise or utilize a special purpose or general-purpose computer including computer hardware, as discussed in greater detail below. Disclosed embodiments also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are physical storage media. Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, embodiments of the invention can comprise at least two distinctly different kinds of computer-readable media: physical computer-readable storage media and transmission computer-readable media.


Physical computer-readable storage media includes RAM, ROM, EEPROM, CD-ROM or other optical disk storage (such as CDs, DVDs, etc.), magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.


A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmissions media can include a network and/or data links which can be used to carry program code in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above are also included within the scope of computer-readable media.


Further, upon reaching various computer system components, program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission computer-readable media to physical computer-readable storage media (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a “NIC”), and then eventually transferred to computer system RAM and/or to less volatile computer-readable physical storage media at a computer system. Thus, computer-readable physical storage media can be included in computer system components that also (or even primarily) utilize transmission media.


Computer-executable instructions comprise, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer-executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.


Those skilled in the art will appreciate that the invention may be practiced in network computing environments with many types of computer system configurations, including, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, pagers, routers, switches, and the like. The invention may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.


Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc.


The present invention may be embodied in other specific forms without departing from its spirit or characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims
  • 1. A computer system for real-time coordinated operation of power distribution systems, comprising: one or more processors; and one or more computer-readable media having stored thereon executable instructions that when executed by the one or more processors configure the computer system to: identify a set of integrated hybrid resources (IHRs), wherein each IHR within the set of IHRs comprises one or more of: energy storage (ES) systems, solar generating units, electric vehicles (EVs), and/or inflexible loads; execute, at an IHR selected from the set of IHRs, a deep deterministic policy gradient (DDPG) algorithm, the DDPG algorithm utilizing a critic deep neural network and an actor deep neural network, wherein: the critic deep neural network estimates a Q-value of an action for a given state, and the actor deep neural network estimates a best action for the given state; and based upon an output of the DDPG algorithm, generate a charging schedule for the ES systems and the EVs within the IHR.
  • 2. The computer system as recited in claim 1, wherein the executable instructions include instructions that are executable to configure the computer system to: receive, from a central controller, an adjusted active power set point.
  • 3. The computer system as recited in claim 1, wherein the executable instructions include instructions that are executable to configure the computer system to: receive, from a central controller, an adjusted reactive power set point.
  • 4. The computer system as recited in claim 1, further comprising a central controller, wherein the central controller is configured to ensure a feasibility and deliverability of dispatched energy in the IHRs.
  • 5. The computer system as recited in claim 1, wherein the executable instructions include instructions that are executable to configure the computer system to: create, at the IHR, a queue to alter charging of the EVs, wherein the queue is created to minimize an operation cost of EVs.
  • 6. The computer system as recited in claim 1, wherein the executable instructions include instructions that are executable to configure the computer system to: generate, at the IHR, actions of the ES systems and EVs sequentially such that an action is taken after its predecessors' actions are known.
  • 7. The computer system as recited in claim 1, wherein the DDPG algorithm is responsive to electricity price.
  • 8. The computer system as recited in claim 7, wherein the DDPG algorithm defers a requested charging demand of the EVs in response to the electricity price to reduce a charging cost.
  • 9. The computer system as recited in claim 7, wherein the DDPG algorithm discharges the ES systems when the electricity price is higher than average.
  • 10. A computer-implemented method, executed on one or more processors, for real-time coordinated operation of power distribution systems, comprising: identifying a set of integrated hybrid resources (IHRs), wherein each IHR within the set of IHRs comprises one or more of: energy storage (ES) systems, solar generating units, electric vehicles (EVs), and/or inflexible loads; executing, at an IHR selected from the set of IHRs, a deep deterministic policy gradient (DDPG) algorithm, the DDPG algorithm utilizing a critic deep neural network and an actor deep neural network, wherein: the critic deep neural network estimates a Q-value of an action for a given state, and the actor deep neural network estimates a best action for the given state; and based upon an output of the DDPG algorithm, generating a charging schedule for the ES systems and the EVs within the IHR.
  • 11. The computer-implemented method as recited in claim 10, further comprising: receiving, from a central controller, an adjusted active power set point.
  • 12. The computer-implemented method as recited in claim 10, further comprising: receiving, from a central controller, an adjusted reactive power set point.
  • 13. The computer-implemented method as recited in claim 10, further comprising providing a central controller, wherein the central controller is configured to ensure a feasibility and deliverability of dispatched energy in the IHRs.
  • 14. The computer-implemented method as recited in claim 10, further comprising: creating, at the IHR, a queue to alter charging of the EVs, wherein the queue is created to minimize an operation cost of EVs.
  • 15. The computer-implemented method as recited in claim 10, further comprising: generating, at the IHR, actions of the ES systems and EVs sequentially such that an action is taken after its predecessors' actions are known.
  • 16. The computer-implemented method as recited in claim 10, wherein the DDPG algorithm is responsive to electricity price.
  • 17. The computer-implemented method as recited in claim 16, wherein the DDPG algorithm defers a requested charging demand of the EVs in response to the electricity price to reduce a charging cost.
  • 18. The computer-implemented method as recited in claim 17, wherein the DDPG algorithm discharges the ES systems when the electricity price is higher than average.
  • 19. A computer system for real-time coordinated operation of power distribution systems, comprising: a central controller, wherein the central controller is configured to ensure a feasibility and deliverability of dispatched energy in a set of integrated hybrid resources (IHRs); the set of integrated hybrid resources (IHRs), wherein each IHR within the set of IHRs comprises one or more of: energy storage (ES) systems, solar generating units, electric vehicles (EVs), and/or inflexible loads; one or more processors; and one or more computer-readable media having stored thereon executable instructions that when executed by the one or more processors configure the computer system to: identify a set of integrated hybrid resources (IHRs), wherein each IHR within the set of IHRs comprises one or more of: energy storage (ES) systems, solar generating units, electric vehicles (EVs), and/or inflexible loads; execute, at an IHR selected from the set of IHRs, a deep deterministic policy gradient (DDPG) algorithm, the DDPG algorithm utilizing a critic deep neural network and an actor deep neural network, wherein: the critic deep neural network estimates a Q-value of an action for a given state, and the actor deep neural network estimates a best action for the given state; and based upon an output of the DDPG algorithm, generate a charging schedule for the ES systems and the EVs within the IHR.
  • 20. The computer system as recited in claim 19, wherein the executable instructions include instructions that are executable to configure the computer system to receive, at the IHR and from the central controller, an adjusted active power set point and an adjusted reactive power set point.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of and priority to 1) U.S. Provisional Patent Application Ser. No. 63/389,594 filed on Jul. 15, 2022 and entitled “REAL-TIME COORDINATED OPERATION OF POWER AND AUTONOMOUS ELECTRIC RIDE-HAILING SYSTEMS,” and 2) U.S. Provisional Patent Application Ser. No. 63/394,818 filed on Aug. 3, 2022 and entitled “HIERARCHICAL COMBINATION OF ARTIFICIAL INTELLIGENCE AND OPTIMIZATION FOR THE OPERATION OF POWER SYSTEMS.” The entire contents of each of the aforementioned applications and/or patents are incorporated by reference herein in their entirety.

GOVERNMENT RIGHTS

This invention was made with government support under grant DE-EE0008775 awarded by the Department of Energy. The government has certain rights in this invention.

Provisional Applications (2)
Number Date Country
63389594 Jul 2022 US
63394818 Aug 2022 US