AC electrical machines are used in a large number of applications including, but not limited to, factory automation, wind turbines and electric drive vehicles. Typical AC electric machines include induction machines and synchronous machines.
The performance of an AC electric machine depends on how it is controlled. Conventionally, vector control technologies have been used to control AC electric machines based on proportional-integral-derivative (“PID”) control technology. Recent studies, however, indicate that such control strategies have limitations, particularly when facing uncertainties.
Referring now to
Therefore, what are needed are improved control systems for controlling PMSMs. In particular, systems, methods and devices are desired for controlling PMSMs under unstable and uncertain system conditions.
Methods, systems and devices are described herein that use artificial neural networks to control AC electric machines and motor drives, which enhances the performance, reliability and efficiency of the AC electric machines and motor drives.
An example method for controlling an AC electrical machine can include providing a PWM converter operably connected between an electrical power source and the AC electrical machine and providing a neural network vector control system operably connected to the PWM converter. The neural network vector control system can include a current-loop neural network configured to receive a plurality of inputs. The current-loop neural network can be configured to optimize a compensating dq-control voltage based on the plurality of inputs. The plurality of inputs can be a d-axis current, isd, a q-axis current, isq, a d-axis error signal, a q-axis error signal, a predicted d-axis current signal, a predicted q-axis current signal and a feedback compensating dq-control voltage. The d-axis error signal can be a difference between isd and a reference d-axis current, isd*, and the q-axis error signal can be a difference between isq and a reference q-axis current, isq*. The method can further include outputting a compensating dq-control voltage from the current-loop neural network and controlling the PWM converter using the compensating dq-control voltage.
Optionally, a predicted d-axis current signal can be a difference between isd and a predicted d-axis current, isd′, and a predicted q-axis current signal can be a difference between isq and a predicted q-axis current, isq′. The predicted d- and q-axis current signals, isd′ and isq′, can be computed using a current prediction model. For example, the current prediction model can be based on isd, isq and the compensating dq-control voltage at a previous time step and default parameters for the AC electrical machine.
Additionally, the compensating dq-control voltage can optionally be adjusted by a stabilization matrix that is based on default parameters for the AC electrical machine.
Alternatively or additionally, the plurality of inputs at the current-loop neural network can further include an integral of the d-axis error signal and an integral of the q-axis error signal.
Optionally, the neural network vector control system can further include a speed-loop neural network configured to receive a plurality of inputs. The speed-loop neural network can be configured to optimize a drive torque signal based on the plurality of inputs. The plurality of inputs can be a speed of the AC electrical machine, ωm, a speed error signal, a predicted speed signal and a feedback drive torque signal. The speed error signal can be a difference between ωm and a reference speed, ωm*. The method can further include outputting a drive torque signal, τem, from the speed-loop neural network. Additionally, the drive torque signal, τem, can be converted into the reference q-axis current, isq*.
The predicted speed signal can optionally be a difference between ωm and a predicted speed, ωm′, where ωm′ is computed using a speed prediction model. For example, the speed prediction model can be based on ωm and τem at a previous time step and default parameters for the AC electrical machine.
Alternatively or additionally, the drive torque signal, τem, can optionally be adjusted by a drive-torque stabilization matrix that is based on default parameters for the AC electrical machine.
Optionally, the plurality of inputs at the speed-loop neural network can further include an integral of the speed error signal.
Optionally, at least one of the current-loop neural network and the speed-loop neural network can be configured to implement a dynamic programming (“DP”) algorithm.
Additionally, at least one of the current-loop neural network and the speed-loop neural network can be trained to minimize a cost function of a dynamic programming (“DP”) algorithm using a backpropagation through time (“BPTT”) algorithm. For example, at least one of the current-loop neural network and the speed-loop neural network can be trained by randomly generating an initial state, randomly generating a sample reference state, unrolling a trajectory of the neural network vector control system from the initial state and training the current-loop neural network or the speed-loop neural network based on the cost function of the DP algorithm and the BPTT algorithm.
Additionally, at least one of the current-loop neural network and the speed-loop neural network can optionally be a multi-layer perceptron including a plurality of input nodes, a plurality of hidden layer nodes and a plurality of output nodes. Alternatively or additionally, each of the nodes can be configured to implement a hyperbolic tangent function.
Optionally, the AC electrical machine is a permanent magnet synchronous machine or an induction machine.
An example system for controlling an AC electrical machine can include a PWM converter operably connected between an electrical power source and the AC electrical machine and a neural network vector control system operably connected to the PWM converter. The neural network vector control system can include a current-loop neural network configured to receive a plurality of inputs. The current-loop neural network can be configured to optimize a compensating dq-control voltage based on the plurality of inputs. The plurality of inputs can be a d-axis current, isd, a q-axis current, isq, a d-axis error signal, a q-axis error signal, a predicted d-axis current signal, a predicted q-axis current signal and a feedback compensating dq-control voltage. The d-axis error signal can be a difference between isd and a reference d-axis current, isd*, and the q-axis error signal can be a difference between isq and a reference q-axis current, isq*. The current-loop neural network can output a compensating dq-control voltage. The neural network vector control system can control the PWM converter using the compensating dq-control voltage.
Optionally, a predicted d-axis current signal can be a difference between isd and a predicted d-axis current, isd′, and a predicted q-axis current signal can be a difference between isq and a predicted q-axis current, isq′. The predicted d- and q-axis current signals, isd′ and isq′, can be computed using a current prediction model. For example, the current prediction model can be based on isd, isq and the compensating dq-control voltage at a previous time step and default parameters for the AC electrical machine.
Additionally, the compensating dq-control voltage can optionally be adjusted by a stabilization matrix that is based on default parameters for the AC electrical machine.
Alternatively or additionally, the plurality of inputs at the current-loop neural network can further include an integral of the d-axis error signal and an integral of the q-axis error signal.
Optionally, the neural network vector control system can further include a speed-loop neural network configured to receive a plurality of inputs. The speed-loop neural network can be configured to optimize a drive torque signal based on the plurality of inputs. The plurality of inputs can be a speed of the AC electrical machine, ωm, a speed error signal, a predicted speed signal and a feedback drive torque signal. The speed error signal can be a difference between ωm and a reference speed, ωm*. The speed-loop neural network can output a drive torque signal, τem. Additionally, the drive torque signal, τem, can be converted into the reference q-axis current, isq*.
The predicted speed signal can optionally be a difference between ωm and a predicted speed, ωm′, where ωm′ is computed using a speed prediction model. For example, the speed prediction model can be based on ωm and τem at a previous time step and default parameters for the AC electrical machine.
Alternatively or additionally, the drive torque signal, τem, can optionally be adjusted by a drive-torque stabilization matrix that is based on default parameters for the AC electrical machine.
Optionally, the plurality of inputs at the speed-loop neural network can further include an integral of the speed error signal.
Optionally, at least one of the current-loop neural network and the speed-loop neural network can be configured to implement a DP algorithm.
Additionally, at least one of the current-loop neural network and the speed-loop neural network can be trained to minimize a cost function of the DP algorithm using a BPTT algorithm. For example, at least one of the current-loop neural network and the speed-loop neural network can be trained by randomly generating an initial state, randomly generating a sample reference state, unrolling a trajectory of the neural network vector control system from the initial state and training the current-loop neural network or the speed-loop neural network based on the cost function of the DP algorithm and the BPTT algorithm.
Additionally, at least one of the current-loop neural network and the speed-loop neural network can optionally be a multi-layer perceptron including a plurality of input nodes, a plurality of hidden layer nodes and a plurality of output nodes. Alternatively or additionally, each of the nodes can be configured to implement a hyperbolic tangent function.
Optionally, the AC electrical machine is a permanent magnet synchronous machine or an induction machine.
It should be understood that the above-described subject matter may also be implemented as a computer-controlled apparatus, a computer process, a computing system, or an article of manufacture, such as a computer-readable storage medium.
Other systems, methods, features and/or advantages will be or may become apparent to one with skill in the art upon examination of the following drawings and detailed description. It is intended that all such additional systems, methods, features and/or advantages be included within this description and be protected by the accompanying claims.
The components in the drawings are not necessarily to scale relative to each other. Like reference numerals designate corresponding parts throughout the several views.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art. Methods and materials similar or equivalent to those described herein can be used in the practice or testing of the present disclosure. As used in the specification, and in the appended claims, the singular forms “a,” “an,” “the” include plural referents unless the context clearly dictates otherwise. The term “comprising” and variations thereof as used herein is used synonymously with the term “including” and variations thereof and are open, non-limiting terms. The terms “optional” or “optionally” used herein mean that the subsequently described feature, event or circumstance may or may not occur, and that the description includes instances where said feature, event or circumstance occurs and instances where it does not. While implementations will be described for controlling PMSMs used in electric drive vehicles, it will become evident to those skilled in the art that the implementations are not limited thereto, but are applicable for controlling other types of AC electrical machines, including but not limited to, PMSMs used in other environments, induction machines used for factory automation and wind turbines connected to the power grid.
Referring now to
In
Referring now to
A neural network implements the optimal control principle through a dynamic programming (“DP”) algorithm. Therefore, using a neural network is completely different from using the conventional vector control techniques described above. Compared to conventional vector control techniques, the neural network vector control approach produces a faster response time, lower overshoot, and, in general, better performance. In addition, since a neural network is trained under variable system parameters, the nested-loop neural network vector control system of
A commonly used PMSM transient model is described by Eqn. (1). Using the motor sign convention, space vector theory yields the stator voltage equation in the form:
where Rs is the resistance of the stator winding, ωe is the rotational speed of the PMSM, and vsd, vsq, isd, isq, ψsd, and ψsq are the d- and q-axis components of the instantaneous stator voltage, current, and flux. If the d-axis is aligned along the rotor flux position, the stator flux linkages are defined by Eqn. (2).
where Lls is the leakage inductance, Ldm and Lqm are the d- and q-axis mutual inductances between the stator and rotor, and ψf is the flux linkage produced by the permanent magnet. Under the steady-state condition, Eqn. (1) is expressed as Eqn. (3).
If stator winding resistance is neglected, the stator d and q-axis currents are defined by Eqn. (4).
Isq=−Vsd/(ωeLq), Isd=(Vsq−ωeψf)/(ωeLd) (4)
The magnets can be placed in two different ways on the rotor of a permanent magnet (“PM”) motor (e.g., a PMSM). Depending on the placement, PM motors are called either Surface Permanent Magnet (“SPM”) motors or Interior Permanent Magnet (“IPM”) motors. An IPM motor is considered to have saliency, with the q-axis inductance greater than the d-axis inductance (Lq>Ld), while an SPM motor is considered to have small saliency, and thus practically equal inductances in both the d- and q-axes (Lq=Ld). The torque of the PM motor is calculated by Eqn. (5) for an SPM motor and by Eqn. (6) for an IPM motor.
τem=pψfisq SPM motor (5)
τem=p(ψfisq+(Ld−Lq)isdisq) IPM motor (6)
where p is the number of pole pairs. If the torque computed from Eqn. (5) or (6) is positive, the motor operates in the drive mode. If the torque computed from Eqn. (5) or (6) is negative, the motor operates in the regenerate mode.
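The torque relations of Eqns. (5) and (6) and the drive/regenerate mode test can be illustrated numerically (illustrative Python; all parameter values are placeholders, not from the disclosure):

```python
# Torque of a PM motor per Eqns (5) and (6); parameter values are
# illustrative placeholders, not from the disclosure.
p = 4                     # pole pairs
psi_f = 0.175             # permanent-magnet flux linkage
Ld, Lq = 0.0085, 0.0110   # IPM saliency: Lq > Ld

def torque_spm(isq):
    """Eqn (5): SPM torque, magnet term only (Lq = Ld)."""
    return p * psi_f * isq

def torque_ipm(isd, isq):
    """Eqn (6): IPM torque adds a reluctance term from the (Ld - Lq) saliency."""
    return p * (psi_f * isq + (Ld - Lq) * isd * isq)

# A negative isd with Lq > Ld makes the reluctance term add to the magnet torque.
tau = torque_ipm(-10.0, 20.0)
mode = "drive" if tau > 0 else "regenerate"   # sign convention from the text
```

With isd=0 (or Lq=Ld), Eqn. (6) reduces to Eqn. (5), which is the SPM case.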
In an electric drive vehicle (“EDV”), the motor produces an electromagnetic torque. The bearing friction and wind resistance (e.g., drag) can be combined with the load torque opposing the rotation of the PM motor. The net torque, i.e., the difference between the electromagnetic torque τem developed by the motor and the load torque TL, causes the combined inertias Jeq of the motor and the load to accelerate. Thus, the rotational speed of the PM motor is defined by Eqn. (7).
τem=Jeqdωm/dt+Baωm+TL (7)
where ωm is the motor rotational speed, and Ba is the active damping coefficient. The relation between ωm and ωe is defined below, where p is motor pole pairs.
ωe=p·ωm
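The mechanical dynamics of Eqn. (7) can be simulated directly. A minimal forward-Euler sketch (illustrative Python; all parameter values are placeholders) integrates the net torque and converts the mechanical speed to the electrical speed via ωe=p·ωm:

```python
# Forward-Euler integration of Eqn (7), rearranged as
# dwm/dt = (tau_em - Ba*wm - TL) / Jeq.  All values are illustrative.
Jeq, Ba, p = 0.089, 0.005, 4   # inertia, damping coefficient, pole pairs
tau_em, TL = 5.0, 2.0          # the net torque tau_em - TL accelerates the rotor
Ts = 0.001                     # 1 ms sampling period

w_m = 0.0
for _ in range(1000):          # simulate 1 s of acceleration from standstill
    w_m += Ts * (tau_em - Ba * w_m - TL) / Jeq
w_e = p * w_m                  # electrical speed from mechanical speed
```

The speed rises toward, but stays below, the steady-state value (τem−TL)/Ba.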
PMSM Nested-Loop Vector Control Using Artificial Neural Networks
Current-Loop Neural Network Vector Control
Referring again to
For digital control implementations, the discrete equivalent of the continuous system state-space model can be obtained as shown by Eqn. (9).
where Ts represents the sampling period, A is the system matrix, and B is the input matrix. A zero-order-hold discrete equivalent mechanism is used herein to convert the continuous state-space model of the system shown by Eqn. (8) to the discrete state-space model of the system as shown by Eqn. (9). Ts=1 ms has been used in all examples provided herein. This disclosure contemplates using other values for Ts.
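The zero-order-hold conversion from the continuous model of Eqn. (8) to the discrete model of Eqn. (9) can be sketched as follows. The PMSM parameter values are illustrative placeholders, and the matrix exponential is evaluated by a truncated Taylor series rather than a library routine:

```python
import numpy as np

def zoh_discretize(Ac, Bc, Ts, terms=20):
    """Zero-order-hold discretization: A = e^(Ac*Ts), B = (integral of
    e^(Ac*t) over [0, Ts]) @ Bc.  Both series are truncated Taylor sums,
    which is adequate when the entries of Ac*Ts are small."""
    n = Ac.shape[0]
    A = np.eye(n)              # accumulates e^(Ac*Ts)
    S = np.eye(n) * Ts         # accumulates the integral of e^(Ac*t)
    M = np.eye(n)              # current term (Ac*Ts)^k / k!
    for k in range(1, terms):
        M = M @ (Ac * Ts) / k
        A = A + M
        S = S + M * Ts / (k + 1)
    return A, S @ Bc

# Illustrative SPM current model (placeholder values, not from the disclosure):
Rs, Ld, Lq, we = 2.875, 0.0085, 0.0085, 100.0
Ac = np.array([[-Rs / Ld,   we * Lq / Ld],
               [-we * Ld / Lq, -Rs / Lq]])
Bc = np.array([[1 / Ld, 0.0], [0.0, 1 / Lq]])
A, B = zoh_discretize(Ac, Bc, Ts=0.001)   # Ts = 1 ms, as in the examples
```

For a scalar system the result matches the closed forms e^(a·Ts) and (e^(a·Ts)−1)/a exactly, which is a convenient sanity check.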
The output of the current-loop neural network 410A, also referred to herein as the current-loop action network, is applied to the DC/AC converter 402 through a PWM mechanism to regulate the inverter output voltage vsa,sb,sc applied to the PMSM stator. The current-loop action network, which can be denoted by the function A(x(k), w), is a fully connected multi-layer perceptron with weight vector w, an input layer with a plurality of input nodes, a plurality of hidden layers with a plurality of hidden layer nodes and an output layer with a plurality of output nodes. The multi-layer perceptron can have shortcut connections between all pairs of layers. Optionally, the multi-layer perceptron includes at least six input nodes, two hidden layers of six hidden layer nodes each and two output nodes. Alternatively or additionally, each of the nodes can be configured to implement a hyperbolic tangent function. It should be understood that the multi-layer perceptron can include the number of input nodes, hidden layer nodes and output nodes needed to implement the control techniques described herein.
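A minimal sketch of such a multi-layer perceptron, assuming the optional six-input sizing and the Gaussian weight initialization with zero mean and 0.1 variance described later for training:

```python
import numpy as np

def init_mlp(sizes, rng):
    """Weights for a fully connected multi-layer perceptron with shortcut
    connections between all pairs of layers; sizes = [inputs, hidden..., outputs].
    Gaussian initialization, zero mean and 0.1 variance."""
    W = {}
    for j in range(1, len(sizes)):
        for i in range(j):   # layer j receives every earlier layer i (shortcuts)
            W[(i, j)] = rng.normal(0.0, np.sqrt(0.1), (sizes[j], sizes[i]))
        W[("b", j)] = np.zeros(sizes[j])
    return W

def mlp_forward(W, x, sizes):
    acts = [np.asarray(x, dtype=float)]
    for j in range(1, len(sizes)):
        z = W[("b", j)].copy()
        for i in range(j):
            z = z + W[(i, j)] @ acts[i]
        acts.append(np.tanh(z))   # every node implements a hyperbolic tangent
    return acts[-1]

# Six inputs, two hidden layers of six nodes each, two outputs.
sizes = [6, 6, 6, 2]
W = init_mlp(sizes, np.random.default_rng(0))
y = mlp_forward(W, np.zeros(6), sizes)
```

Because every node is a tanh unit, the network output is bounded in (−1, 1), which pairs naturally with the PWM saturation limits discussed later.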
The input vector to the current-loop action network is denoted by x(k)=(isdq(k), isdq*(k)−isdq(k), isdq(k)−îsdq(k), A(x(k−1), w)). The current-loop action network can be configured to optimize a compensating dq-control voltage based on the plurality of inputs. The four components or inputs of x(k) correspond, respectively, to (1) the presently measured PMSM stator d- and q-axis currents (e.g., d-axis current, isd, and q-axis current, isq), (2) error signals of the d- and q-axis currents (e.g., d-axis error signal and q-axis error signal), (3) predictive input signals (e.g., predicted d-axis current signal and predicted q-axis current signal), and (4) the history of the current-loop action network output from a previous time step (e.g., feedback compensating dq-control voltage). The d-axis error signal can be a difference between isd and a reference d-axis current, isd*, and the q-axis error signal can be a difference between isq and a reference q-axis current, isq*.
In the above input vector, îsdq(k) is the predicted current state vector (e.g., the predicted d- and q-axis current signals, or predictive input signals), which can be calculated with a fixed model shown in Eqn. (10).
îsdq(k)=A0·isdq(k−1)+B0·(vsdq(k−1)−edq) (10)
where A0 and B0 are constant matrices of Eqn. (9) chosen for the default nominal parameters of the AC electrical machine (e.g., PMSM 401). In other words, the predicted d- and q-axis current signals, isd′ and isq′, can be computed using a current prediction model. The current prediction model can be based on isd, isq and the compensating dq-control voltage at a previous time step and default parameters for the AC electrical machine, as shown in Eqn. (10). In addition, the predicted d-axis current signal can be a difference between isd and a predicted d-axis current, isd′, and a predicted q-axis current signal can be a difference between isq and a predicted q-axis current, isq′. Hence the third component of x(k), i.e., isdq(k)−îsdq(k), gives the current-loop action network information on how much the actual system matrices A and B differ from the default matrices A0 and B0. This information allows the current-loop action network to adapt in real time to changing A and B matrices. With the predictive input signals, the current-loop action network is more powerful than a conventional model-based predictive controller because of the advantage obtained through learning. In addition, A(x(k−1), w) is the output of the current-loop action network at a previous time step (e.g., the feedback compensating dq-control voltage). This input helps the current-loop action network adapt in real time to changing A and B matrices since it gives feedback on what relative adjustments need to be made to the previous action that was attempted.
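The prediction model of Eqn. (10) and the resulting predictive input signal can be sketched as follows (illustrative Python; A0, B0 and edq are placeholder values, not computed from Eqn. (9)). When the actual machine matches the nominal model, the predictive input is exactly zero:

```python
import numpy as np

# Illustrative nominal discrete matrices and back-EMF vector (placeholders).
A0 = np.array([[0.96, 0.07], [-0.07, 0.96]])
B0 = np.array([[0.11, 0.0], [0.0, 0.11]])
e_dq = np.array([0.0, 1.2])    # (0, we*psi_f)^T with illustrative values

def predict_current(i_prev, v_prev):
    """Eqn (10): one-step current prediction from the default nominal model."""
    return A0 @ i_prev + B0 @ (v_prev - e_dq)

# Third input component isdq(k) - ihat(k): it vanishes when the real machine
# matches the nominal model, and grows as A and B drift away from A0 and B0.
i_prev, v_prev = np.array([1.0, 2.0]), np.array([0.5, 3.0])
i_meas = A0 @ i_prev + B0 @ (v_prev - e_dq)   # machine equals nominal model here
predictive_input = i_meas - predict_current(i_prev, v_prev)
```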
Optionally, the input vector, x(k), to the current-loop neural network can further include an integral of the d-axis error signal and an integral of the q-axis error signal. The integrals of the d-axis and q-axis error signals can provide the current-loop action network with a history of the d-axis and q-axis error signals, respectively. The integral terms provide a history of all past errors by summing the errors together. For example, if there is an error in a given time step, the error is added to the integral term for the next time step. Thus, the integral term will only stay the same as it was at a previous time step if there is no error in the current time step, which prevents the action network from stabilizing at a non-target value. This helps to minimize steady-state errors.
To simplify the expressions, the discrete system model of Eqn. (9) is represented by Eqn. (11).
isdq(k+1)=A·isdq(k)+B·(vsdq(k)−edq) (11)
where edq=(0 ωeψf)T. vsdq(k) is the control vector, which is determined from the output of the current-loop action network, A(x(k), w), as shown by Eqn. (12).
vsdq(k)=kPWM·A(x(k), w)+W0·isdq(k)+edq (12)
where W0=−B0−1(A0−I) is a constant matrix referred to herein as a stabilization matrix; this sign choice makes the zero-output action of Eqn. (12) coincide with the steady-state control derived below. As discussed herein, the stabilization matrix refers to both the W0 and edq terms in Eqn. (12), which are added to the network output A(x(k), w). The stabilization matrix acts like an extra weight matrix in the current-loop action network that connects the input layer directly to the output layer. In other words, the compensating dq-control voltage (e.g., the output of the current-loop action network) can optionally be adjusted by the stabilization matrix, which is based on default parameters for the AC electrical machine (e.g., PMSM 401). This provides the current-loop action network with a basic default behavior of being able to hold the system steady more easily. The stabilization matrix also removes many of the local minima from the search space that are classically associated with gradient-descent algorithms applied to recurrent neural networks.
It should be understood that training the current-loop action network (discussed in detail below) can be difficult because every time a component of the weight vector w changes, the actions chosen by Eqn. (12) change at every time step. Each changed action will consequently change the next state that the system passes through, as shown by Eqn. (11). And each changed state will further change the next action chosen by Eqn. (12). This creates an ongoing cascade of changes. Hence, changing even one component of w by the tiniest finite amount can completely scramble the trajectory generated by Eqns. (11) and (12). Thus, the cost function (Eqn. (17)) can be overly sensitive to changes in w. In other words, the surface of the cost function in the w-space as shown by
The stabilization matrix is, in effect, a hand-picked weight matrix that helps the current-loop action network do its job more effectively. It works partly by smoothing out the crinkliness of the cost function, which makes the surface more like
Referring now to
To solve the tracking problem, the task of the current-loop action network can be split into two stages. First, to fight against moving with the arrows in
isdq = A·isdq + B·(usdq − edq)
0 = (A − I)·isdq + B·(usdq − edq)
usdq − edq = −B−1(A − I)·isdq
usdq = −B−1(A − I)·isdq + edq
where I is the identity matrix. Choosing this action will help keep the AC electrical machine in exactly the same state.
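The derivation above can be checked numerically. The sketch below uses illustrative nominal matrices (placeholders, not from Eqn. (9)) and takes the stabilization matrix as W0=−B0−1(A0−I), the sign for which a zero network output in Eqn. (12) reproduces the steady-state action usdq just derived and holds the state exactly fixed:

```python
import numpy as np

# Illustrative nominal discrete model (placeholder values).
A0 = np.array([[0.96, 0.07], [-0.07, 0.96]])
B0 = np.array([[0.11, 0.0], [0.0, 0.11]])
e_dq = np.array([0.0, 1.2])
k_pwm = 1.0

# Stabilization matrix: zero network output then yields the steady-state
# action u = -B0^{-1}(A0 - I) i + e_dq derived above.
W0 = -np.linalg.inv(B0) @ (A0 - np.eye(2))

def control_voltage(net_out, i_sdq):
    """Eqn (12): scaled network output plus the stabilization terms."""
    return k_pwm * net_out + W0 @ i_sdq + e_dq

# With the network output at zero, the next state equals the current state.
i = np.array([0.8, -0.3])
i_next = A0 @ i + B0 @ (control_voltage(np.zeros(2), i) - e_dq)
```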
The stabilization matrix is a very useful addition to the neural network vector control system because with the feedback present the current-loop action network is effectively a recurrent neural network, which is challenging to train correctly and consistently. Furthermore, according to the techniques discussed herein, the current-loop action network learns to overcome the challenge of coping with rapidly changing target states and random variation of parameters of the AC electrical machine. Hence the stabilization matrix helps to make the current-loop action network training achieve consistently good results. For example, the stabilization matrix helps prevent the current-loop action network training from getting trapped in suboptimal local minima.
Speed-Loop Neural Network Vector Control
Referring again to
dωm/dt=−Baωm/Jeq+(τem−TL)/Jeq (13)
where the system state is ωm and the drive torque τem is proportional to the output of the speed-loop action network. The conversion from the torque to the q-axis current (e.g., the reference q-axis current, isq*) is obtained from Eqn. (5). For digital control implementations, the discrete equivalent of the continuous state-space model can be obtained as shown by Eqn. (14).
ωm(kTs+Ts)=a·ωm(kTs)+b·[τem(kTs)−TL] (14)
The output of the speed-loop neural network 420A, also referred to herein as the speed-loop action network, is applied to the input of the current-loop action network as the reference q-axis current, isq*. Similar to the current-loop action network, the speed-loop action network is a fully connected multi-layer perceptron with weight vector wω, an input layer with a plurality of input nodes, a plurality of hidden layers with a plurality of hidden layer nodes and an output layer with a plurality of output nodes. The multi-layer perceptron can have shortcut connections between all pairs of layers. Optionally, the multi-layer perceptron includes at least four input nodes, two hidden layers of six hidden layer nodes each and two output nodes. Alternatively or additionally, each of the nodes can be configured to implement a hyperbolic tangent function. It should be understood that the multi-layer perceptron can include the number of input nodes, hidden layer nodes and output nodes needed to implement the control techniques described herein.
The control signal generated by the speed-loop action network is shown by Eqn. (15).
τem(k)=kτ·Aω(xω(k), wω)+W0ω·ωm(k)+TL=kτ·τAem(k)+W0ω·ωm(k)+TL (15)
where xω(k)=(ωm(k), ωm*(k)−ωm(k), ωm(k)−ω̂m(k), τAem(k−1)) contains all the network inputs, and wω is the weight vector of the speed-loop action network. The speed-loop action network can be configured to optimize a drive torque signal based on the plurality of inputs. Similar to the current-loop action network, the speed-loop action network can use predictive inputs, as well as previous speed-loop control actions. As shown by Eqn. (15), the plurality of inputs can be a speed of the AC electrical machine, ωm, a speed error signal, a predicted speed signal and a feedback drive torque signal (e.g., the output of the speed-loop action network at a previous time step). The speed error signal can be a difference between ωm and a reference speed, ωm*. Additionally, ω̂m(k) is the predicted speed calculated with a fixed model shown by Eqn. (16).
ω̂m(k)=a0·ωm(k−1)+b0·[τem(k−1)−TL] (16)
where a0 and b0 are the constant values of Eqn. (14) chosen for the default inertia and damping coefficient of the AC electrical machine (e.g., PMSM 401). In other words, the predicted speed signal can optionally be a difference between ωm and a predicted speed, ωm′, where ωm′ is computed using a speed prediction model. As shown in Eqn. (16), the speed prediction model can be based on ωm and τem at a previous time step and default parameters for the AC electrical machine.
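The speed prediction of Eqn. (16) can be sketched as follows (illustrative Python; a0, b0 and TL are placeholder values, not derived from Eqn. (14)):

```python
# Eqn (16): one-step speed prediction from the nominal discrete model of
# Eqn (14).  a0, b0 and the load torque TL are illustrative placeholders.
a0, b0, TL = 0.999, 0.011, 2.0

def predict_speed(w_prev, tau_prev):
    """Predicted speed w_hat(k) from the previous speed and drive torque."""
    return a0 * w_prev + b0 * (tau_prev - TL)

# The predicted speed signal fed to the speed-loop action network is the
# difference between the measured speed wm(k) and the prediction w_hat(k).
w_hat = predict_speed(100.0, 5.0)
predicted_speed_signal = 100.2 - w_hat
```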
Optionally, the inputs to the speed-loop neural network can further include an integral of the speed error signal. The integral of the speed error signal can provide the speed-loop action network with a history of the speed error signals. The integral term provides a history of all past errors by summing the errors together. For example, if there is an error in a given time step, the error is added to the integral term for the next time step. Thus, the integral term will only stay the same as it was at a previous time step if there is no error in the current time step, which prevents the action network from stabilizing at a non-target value. This helps to minimize steady-state errors.
Also similar to the current-loop action network, the drive torque signal, τem, can optionally be adjusted by a drive-torque stabilization matrix that is based on default parameters for the AC electrical machine (e.g., PMSM 401). The use of a stabilization matrix is discussed in detail above and is therefore not discussed in further detail below.
Training Neural Networks Based Upon Dynamic Programming
DP employs the principle of optimality and is a very useful tool for solving optimization and optimal control problems. Action neural networks (e.g., current-loop neural network 410A and/or speed-loop neural network 420A of
Training the Current-Loop Neural Network
The objective of the current-loop control (e.g., using current-loop neural network 410A of
where m is some constant power (e.g., m=0.5 in the examples), |•| denotes the modulus of a vector, and γ∈[0, 1] is a constant “discount factor.” The current-loop action network was trained separately to minimize the DP cost in Eqn. (17) by using the BPTT algorithm. The BPTT algorithm was chosen because it is particularly suited to situations where the model functions are known and differentiable, and also because BPTT has proven stability and convergence properties since it is a gradient descent algorithm, provided the learning rate is sufficiently small. In general, the BPTT algorithm consists of two steps: a forward pass, which unrolls a trajectory, followed by a backward pass along the whole trajectory, which accumulates the gradient descent derivative. For the termination condition of a trajectory, a fixed trajectory length corresponding to a real time of 1 second is used (e.g., a trajectory had 1/Ts=1000 time steps in it). γ=1 is used for the discount factor in Eqn. (17).
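The DP cost described for Eqn. (17) can be sketched as follows (illustrative Python; the exact expression of Eqn. (17) is not reproduced in this excerpt, so the form below follows the description of the power m, the modulus and the discount factor γ):

```python
import numpy as np

def dp_cost(i_traj, ref_traj, m=0.5, gamma=1.0):
    """Discounted DP cost in the form described for Eqn (17): the sum over
    the trajectory of the current-tracking error modulus raised to the
    power m, discounted by gamma per time step."""
    cost = 0.0
    for k, (i, ref) in enumerate(zip(i_traj, ref_traj)):
        err = np.linalg.norm(np.asarray(ref) - np.asarray(i))
        cost += gamma**k * err**m
    return cost

# A trajectory of 1/Ts = 1000 time steps with gamma = 1, as in the training
# setup; a constant unit tracking error contributes |error|^0.5 = 1 per step.
traj = [[0.0, 0.0]] * 1000
ref = [[1.0, 0.0]] * 1000
c = dp_cost(traj, ref)
```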
To train the current-loop action network, the system data associated with Eqn. (8) are specified. The training procedure for the current-loop action network includes: (1) randomly generating a sample initial state isdq(j), (2) randomly generating a changing sample reference dq current time sequence, (3) unrolling the trajectory of the neural network vector control system from the initial state, (4) training the current-loop action network based on the DP cost function in Eqn. (17) and the BPTT training algorithm, and (5) repeating the process for all the sample initial states and reference dq currents until a stop criterion associated with the DP cost is reached. The weights were initially all randomized using a Gaussian distribution with zero mean and 0.1 variance. The training also considers the variable nature of the AC electrical machine (e.g., PMSM) resistance and inductance. Training used resilient backpropagation (“RPROP”) to accelerate learning. RPROP was allowed to act on multiple trajectories simultaneously (each with a different start point and reference current isdq*).
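The five-step training procedure can be outlined in code. The sketch below substitutes a simple linear policy and a normalized finite-difference gradient for the multi-layer perceptron and the analytic BPTT/RPROP machinery of the disclosure, so it illustrates only the loop structure (random initial state and reference, unroll, evaluate the DP cost, descend) under illustrative plant matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.96, 0.07], [-0.07, 0.96]])   # illustrative discrete plant
B = np.array([[0.11, 0.0], [0.0, 0.11]])

def unroll_cost(K, i0, i_ref, steps=30):
    """Forward pass: unroll the closed-loop trajectory under the linear
    policy u = K @ [i; i_ref - i] and accumulate the DP cost with m = 0.5."""
    i, cost = i0.copy(), 0.0
    for _ in range(steps):
        u = K @ np.concatenate([i, i_ref - i])
        i = A @ i + B @ u
        cost += np.linalg.norm(i_ref - i) ** 0.5
    return cost

K = rng.normal(0.0, np.sqrt(0.1), (2, 4))     # zero-mean, 0.1-variance init
for epoch in range(100):
    i0 = rng.uniform(-1.0, 1.0, 2)            # (1) random sample initial state
    i_ref = rng.uniform(-1.0, 1.0, 2)         # (2) random sample reference
    base = unroll_cost(K, i0, i_ref)          # (3) unroll the trajectory
    grad, eps = np.zeros_like(K), 1e-5        # (4) descend on the DP cost
    for idx in np.ndindex(K.shape):           #     (finite differences stand in
        Kp = K.copy()                         #      for the analytic BPTT pass)
        Kp[idx] += eps
        grad[idx] = (unroll_cost(Kp, i0, i_ref) - base) / eps
    K -= 0.01 * grad / (np.linalg.norm(grad) + 1e-12)
    # (5) repeat until a stop criterion on the DP cost is reached
```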
Example algorithms for BPTT for PMSM vector control with and without the stabilization matrix are provided below in Tables 1 and 2, respectively.
Generation of the reference current can consider the physical constraints of a practical PMSM. These include the rated current and converter PWM saturation constraints. From the power converter standpoint, the PWM saturation constraint represents the maximum voltage that can be generated and applied to the PWM circuit. From the current-loop action network standpoint, the PWM saturation constraint stands for the maximum positive or negative voltage that the current-loop neural network can output. Therefore, if a reference dq current requires a control voltage that is beyond the acceptable voltage range of the current-loop neural network, it is impossible to reduce the cost (e.g., Eqn. (17)) during the training of the action network.
The following two strategies are used to adjust randomly generated reference currents. If the rated current constraint is exceeded, the reference dq current is modified by keeping the q-axis current reference isq* unchanged to maintain torque control effectiveness (e.g., Eq. (5)) while modifying the d-axis current reference isd* to satisfy the d-axis control demand as much as possible as shown by Eqn. (18).
isd_new* = sign(isd*)·√((isd_max*)^2−(isq*)^2) (18)
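The rated-current adjustment of Eqn. (18) can be sketched as follows (Python; the function and parameter names are hypothetical, and the check for whether the rated limit is exceeded is an assumption based on the surrounding description):

```python
import math

def adjust_for_rated_current(isd_ref, isq_ref, i_max):
    """Eqn. (18): keep isq* unchanged (torque control effectiveness)
    and shrink isd* so the dq current magnitude respects the limit.

    i_max corresponds to isd_max* in Eqn. (18).
    """
    if math.hypot(isd_ref, isq_ref) <= i_max:
        return isd_ref                       # constraint not violated
    # preserve the sign of isd* while reducing its magnitude
    return math.copysign(math.sqrt(i_max ** 2 - isq_ref ** 2), isd_ref)
```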
If the PWM saturation limit is exceeded, the reference dq current is modified by Eqn. (19).
vsd* = −isq*·ωe·Lq
vsq* = √((vsdq_max*)^2−(vsd*)^2)
isd* = (vsq*−ωeψf)/(ωeLd) (19)
which represents a condition of keeping the d-axis voltage reference vsd* unchanged so as to maintain the torque control effectiveness (e.g., Eqns. (4) and (5)) while modifying the q-axis voltage reference vsq* to meet the d-axis control demand as much as possible.
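The PWM saturation adjustment of Eqn. (19) can be sketched as follows (Python; the function and parameter names are hypothetical):

```python
import math

def adjust_for_pwm_saturation(isq_ref, omega_e, Lq, Ld, psi_f, vsdq_max):
    """Eqn. (19): keep the d-axis voltage reference vsd* unchanged,
    saturate the q-axis voltage reference vsq*, then map it back to a
    feasible d-axis current reference isd*."""
    vsd = -isq_ref * omega_e * Lq                 # vsd* held fixed
    vsq = math.sqrt(vsdq_max ** 2 - vsd ** 2)     # saturated vsq*
    isd = (vsq - omega_e * psi_f) / (omega_e * Ld)
    return isd
```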
Referring now to
Training the Speed-Loop Neural Network
The objective of the speed-loop control (e.g., using speed-loop neural network 420A of
To train the speed-loop action network, the system data associated with Eq. (14) are specified. The training procedure includes: (1) randomly generating a sample initial state ωm, (2) randomly generating a changing sample reference speed time sequence, (3) unrolling the motor speed trajectory from the initial state, (4) training the speed-loop action network based on the DP cost function of Eqn. (20) and the BPTT training algorithm, and (5) repeating the process for all the sample initial states and reference speeds until a stop criterion associated with the DP cost is reached. Speed-loop training also used RPROP. The generation of the reference speed considers the speed changing range from 0 rad/s to the maximum possible motor rotating speed. The training also considers the variable nature of the inertia and the damping coefficient, as well as the limitation on the maximum acceptable torque. Example algorithms for BPTT for PMSM vector control with and without the stabilization matrix are provided below in Tables 1 and 2, respectively.
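Step (2) of this procedure, generating a changing reference-speed time sequence over the range from 0 rad/s to the maximum motor speed, might look like the following sketch (Python; the piecewise-constant form and all names are assumptions for illustration):

```python
import random

def sample_speed_reference(n_segments, seg_len, w_max):
    """Hypothetical generator for a changing reference-speed sequence.

    Draws a piecewise-constant sequence of n_segments random speed
    levels in [0, w_max] rad/s, each held for seg_len time steps.
    """
    seq = []
    for _ in range(n_segments):
        w_ref = random.uniform(0.0, w_max)
        seq.extend([w_ref] * seg_len)
    return seq
```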
Performance Evaluation of Nested-Loop Neural Network Controller
An integrated transient simulation of a complete PMSM system is developed by using power converter average and detailed switching models in SIMPOWERSYSTEMS made by MATHWORKS of NATICK, Mass. A block diagram illustrating a neural network vector control system for a PMSM used for the simulations is shown in
Two approaches are used to prevent high motor current. First, the speed reference applied to the speed-loop controller is processed through a ramp limit, which is very effective at preventing a rapidly-changing high current from being applied to the motor. Second, if an increase of the speed reference causes the current reference generated by the speed-loop controller to exceed the rated current, any further speed reference increment is blocked.
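The two protections above can be sketched as a single reference-processing step (Python; the names and the exact blocking condition are assumptions based on the description):

```python
def process_speed_reference(w_ref_req, w_ref_prev, i_ref, i_rated,
                            ramp, Ts):
    """Apply the two current protections to the requested speed reference.

    (1) Ramp limit: the reference may change by at most ramp*Ts per
        control period.
    (2) Blocking: while the speed loop's current reference exceeds the
        rated current, any further increase of the reference is blocked.
    """
    max_step = ramp * Ts
    dw = w_ref_req - w_ref_prev
    dw = max(-max_step, min(max_step, dw))    # ramp limit
    if dw > 0 and abs(i_ref) > i_rated:       # block further increase
        dw = 0.0
    return w_ref_prev + dw
```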
Ability of the Neural Network Controllers in Current and Speed Tracking
Comparison of Neural Network Controller with Conventional Vector Control Method
vsd=(Rsisd+Lddisd/dt)−ωeLqisq
vsq=(Rsisq+Lqdisq/dt)+ωeLdisd+ωeψf
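These decoupled voltage equations can be evaluated directly; the following sketch computes both control voltages from the measured currents and their derivatives (Python; the function name and argument names are illustrative):

```python
def decoupled_voltages(isd, isq, d_isd, d_isq, Rs, Ld, Lq, omega_e, psi_f):
    """Conventional decoupled dq voltage equations. The PI controllers
    produce the terms in parentheses; the cross-coupling and back-EMF
    terms are added as feedforward compensation."""
    vsd = (Rs * isd + Ld * d_isd) - omega_e * Lq * isq
    vsq = (Rs * isq + Lq * d_isq) + omega_e * Ld * isd + omega_e * psi_f
    return vsd, vsq
```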
The gains of the speed-loop PI controller are designed based on the transfer function of Eqn. (7). Then, for digital control implementation of the PI controllers at the sampling rate of Ts=1 ms, the controller gains for both the speed and current loops are retuned until the controller performance is acceptable. Tuning of the PI controllers is a challenging task, particularly for a low sampling rate, such as Ts=1 ms. The comparison shown by
Performance Evaluation Under Variable Parameters of a PMSM
PMSM stability is an issue to consider. In general, studies primarily focus on the motor performance under uncertain system parameter variations. These include changes of motor resistance and inductance from their nominal values, or changes of the friction coefficient and combined inertia. Those changes affect the performance of the current- or speed-loop controller.
The stability of the nested-loop neural control technique is evaluated for two variable system parameter conditions, namely, 1) variation of motor resistance and inductance, and 2) deviation of motor drive parameters associated with the torque-speed of Eqn. (7).
Performance Evaluation in Power Converter Switching Environment
PMSM control is achieved through power electronic converters, which operate in a highly dynamic switching environment. This causes high-order harmonics in the three-phase PMSM stator voltage and current, which means that in the dq reference frame, large oscillations appear in the stator voltage and current. Since these oscillation effects are not considered during the training stage of the neural networks, the behavior of the neural network controller is investigated in the power converter switching environment.
The figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various implementations of the present invention. In this regard, each block of a flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The implementation was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various implementations with various modifications as are suited to the particular use contemplated.
Any combination of one or more computer readable medium(s) may be used to implement the systems and methods described hereinabove. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to implementations of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
This application claims priority to U.S. Provisional Patent Application No. 61/862,277 filed on Aug. 5, 2013, which is fully incorporated by reference and made a part hereof.
This invention was made with Government support under Grant Nos. ECCS 1102038 and ECCS 1102159 awarded by the National Science Foundation. The Government has certain rights in the invention.
Number | Name | Date | Kind |
---|---|---|---|
20030218444 | Marcinkiewicz | Nov 2003 | A1 |
20050184689 | Maslov et al. | Aug 2005 | A1 |
20080315811 | Hudson | Dec 2008 | A1 |
20110006711 | Imura | Jan 2011 | A1 |
20110031907 | Takahashi | Feb 2011 | A1 |
20140362617 | Li | Dec 2014 | A1 |
Entry |
---|
Barnard, E., “Temporal-Difference Methods and Markov Models,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 23, No. 2, 1993, pp. 357-365. |
Bishop, C.M., “Neural Networks for Pattern Recognition,” Oxford University Press, 1995, 495 pages (submitted as two documents—Part 1 and Part 2). |
Carrasco, J.M., et al., “Power-Electronic Systems for the Grid Integration of Renewable Energy Sources: A Survey,” IEEE Transactions on Industrial Electronics, vol. 53, No. 4, 2006, pp. 1002-1016. |
Chan, C.C., “The State of the Art of Electric and Hybrid Vehicles,” Proceedings of the IEEE, vol. 90, No. 2, 2002, pp. 224-279. |
Dannehl, J., et al., “Limitations of Voltage-Oriented PI Current Control of Grid-Connected PWM Rectifiers with LCL Filters,” IEEE Transactions on Industrial Electronics, vol. 56, No. 2, 2009, pp. 380-388. |
Fairbank, M., et al., “An Adaptive Recurrent Neural Network Controller using a Stabilization Matrix and Predictive Inputs to Solve the Tracking Problem under Disturbances,” Neural Networks, vol. 49, 2013, 35 pages. |
Fairbank, M., et al., “The Divergence of Reinforcement Learning Algorithms with Value-Iteration and Function Approximation,” Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN'12), IEEE Press, 2012, pp. 3070-3077. |
Feldkamp, L.A., et al., “A Signal Processing Framework Based on Dynamic Neural Networks with Application to Problems in Adaptation, Filtering, and Classification,” Proceedings of the IEEE, vol. 86, No. 11, 1998, pp. 2259-2277. |
Figueres, E., et al., “Sensitivity Study of the Dynamics of Three-Phase Photovoltaic Inverters with an LCL Grid Filter,” IEEE Transactions on Industrial Electronics, vol. 56, No. 3, 2009, pp. 706-717. |
Hochreiter, S., et al., “Long Short-Term Memory,” Neural Computation, vol. 9, No. 8, 1997, pp. 1735-1780. |
Kirk, D.E., “Optimal Control Theory: An Introduction,” Chapters 1-3, Prentice-Hall, Englewood Cliffs, NJ, 1970, 471 pages. |
Li, S., et al., “Control of HVDC Light System Using Conventional and Direct Current Vector Control Approaches,” IEEE Transactions on Power Electronics, vol. 25, No. 12, 2010, pp. 3106-3118. |
Li, S., et al., “Conventional and Novel Control Designs for Direct Driven PMSG Wind Turbines,” Electric Power System Research, vol. 80, Issue 3, 2010, pp. 328-338. |
Li, S., et al., “Direct-current Vector Control of Three-Phase Grid-Connected Rectifier-Inverter,” Electric Power Systems Research, vol. 81, No. 2, 2011, pp. 357-366. |
Li, S., et al., “Nested-Loop Neural Network Vector Control of Permanent Magnet Synchronous Motors,” The 2013 International Joint Conference on Neural Network, Dallas, Texas, 2013, 8 pages. |
Li, Y., et al., “The Comparison of Control Strategies for the Interior PMSM Drive used in the Electric Vehicle,” The 25th World Battery, Hybrid and Fuel Cell Electric Vehicle Symposium & Exhibition, Shenzhen, China, 2010, 6 pages. |
Li, S., et al., “Vector Control of a Grid-Connected Rectifier/Inverter Using an Artificial Neural Network,” Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN'12), IEEE World Congress on Computational Intelligence, Brisbane, Australia, 2012, pp. 1783-1789. |
Luo, A., et al., “Fuzzy-PI-Based Direct-Output-Voltage Control Strategy for the STATCOM Used in Utility Distribution Systems,” IEEE Transactions on Industrial Electronics, vol. 56, No. 7, 2009, pp. 2401-2411. |
Mullane, A., et al., “Wind-Turbine Fault Ride-Through Enhancement,” IEEE Transactions on Power Systems, vol. 20, No. 4, 2005, pp. 1929-1937. |
Park, J-W., et al., “New External Neuro-Controller for Series Capacitive Reactance Compensator in a Power Network,” IEEE Transactions on Power Systems, vol. 19, No. 3, 2004, pp. 1462-1472. |
Pena, R., et al., “Doubly fed induction generator using back-to-back PWM converters and its application to variable-speed wind-energy generation,” Electric Power Applications, IEEE Proceedings, vol. 143, Issue 3, 1996, pp. 231-241. |
Prokhorov, D.V., et al., “Adaptive Behavior with Fixed Weights in RNNs: An Overview,” Proceedings of the 2002 International Joint Conference on Neural Networks, (IJCNN'02), vol. 3, IEEE Press, 2002, pp. 2018-2022. |
Prokhorov, D., et al., “Adaptive Critic Designs,” IEEE Transactions on Neural Networks, vol. 8, No. 5, 1997, pp. 997-1007. |
Qiao, W., et al., “Coordinated Reactive Power Control of a Large Wind Farm and a STATCOM Using Heuristic Dynamic Programming,” IEEE Transactions on Energy Conversion, vol. 24, No. 2, 2009, pp. 493-503. |
Qiao, W., et al., “Fault-Tolerant Optimal Neurocontrol for a Static Synchronous Series Compensator Connected to a Power Network,” IEEE Transactions on Industry Applications, vol. 44, No. 1, 2008, pp. 74-84. |
Qiao, W., et al., “Optimal Wide-Area Monitoring and Nonlinear Adaptive Coordinating Neurocontrol of a Power System with Wind Power Integration and Multiple FACTS Devices,” Neural Networks, vol. 21, No. 2, 2008, pp. 466-475. |
Qiao, W., et al., “Real-Time implementation of a STATCOM on a Wind Farm Equipped With Doubly Fed Induction Generators,” IEEE Transactions on Industry Applications, vol. 45, No. 1, 2009, pp. 98-107. |
Rabelo, B.C., et al., “Reactive Power Control Design in Double Fed Induction Generators for Wind Turbines,” IEEE Transactions on Industrial Electronics, vol. 56, No. 10, 2009, pp. 4154-4162. |
Riedmiller, M., “A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm,” Proceedings of the IEEE International Conference on Neural Networks, San Francisco, CA, 1993, pp. 586-591. |
Venayagamoorthy, G.K., et al., “Comparison of Heuristic Dynamic Programming and Dual Heuristic Programming Adaptive Critics for Neurocontrol of a Turbogenerator,” IEEE Transactions on Neural Networks, vol. 13, No. 3, 2002, pp. 764-773. |
Venayagamoorthy, G.K., et al., “Implementation of Adaptive Critic-Based Neurocontrollers for Turbogenerators in a Multimachine Power System,” IEEE Transactions on Neural Networks, vol. 14, No. 5, 2003, pp. 1047-1064. |
Wang, C., et al., “Short-Time Overloading Capability and Distributed Generation Applications of Solid Oxide Fuel Cells,” IEEE Transactions on Energy Conversion, vol. 22, No. 4, 2007, pp. 898-906. |
Wang, F-Y., et al., “Adaptive Dynamic Programming: An Introduction,” IEEE Computational Intelligence Magazine, vol. 43, No. 2, 2009, pp. 39-47. |
Werbos, P.J., “Backpropagation Through Time: What it Does and How to Do it,” Proceedings of the IEEE, vol. 78, No. 10, 1990, pp. 1550-1560. |
Werbos, P.J., “Backwards Differentiation in AD and Neural Nets: Past Links and New Opportunities,” Automatic Differentiation: Applications, Theory and Implementations, Bücker, H., et al., Lecture Notes in Computational Science and Engineering, Springer, 2005, pp. 15-34. |
Werbos, P.J., “Neural Networks, System Identification, and Control in the Chemical Process Industries,” Handbook of Intelligent Control, Chapter 10, Sections 10.6.1-10.6.2, White, Sofge, eds., Van Nostrand Reinhold, New York, 1992, pp. 283-356, www.werbos.com. |
Werbos, P.J., “Stable Adaptive Control Using New Critic Designs,” eprint arXiv:adap-org/9810001, Sections 77-78, 1998. |
Werbos, P.J., “Approximate Dynamic Programming for Real-Time Control and Neural Modeling,” Handbook of Intelligent Control, Chapter 13, White, Sofge, eds., Van Nostrand Reinhold, New York, 1992, pp. 493-525. |
Xu, L., et al., “Dynamic Modeling and Control of DFIG-Based Wind Turbines Under Unbalanced Network Conditions,” IEEE Transactions on Power Systems, vol. 22, No. 1, 2007, pp. 314-323. |
International Search Report, dated Nov. 6, 2014, in corresponding International Application No. PCT/US2014/049724. |
Number | Date | Country | |
---|---|---|---|
20150039545 A1 | Feb 2015 | US |
Number | Date | Country | |
---|---|---|---|
61862277 | Aug 2013 | US |