This disclosure relates generally to air handling systems for engines, and more specifically to systems with real-time self-learning air handling controls.
Recently, there has been an increased demand for engine systems with internal combustion engines to meet criteria such as improved fuel economy and reduced emissions while maintaining optimal performance for the user. This demand led to the development of technologies such as fuel injection systems, turbocharging, and exhaust gas recirculation, which made engines far more environmentally friendly without sacrificing a satisfactory user experience. As a result, more emphasis is now placed on the optimization of multiple criteria, that is, balancing fuel economy, emissions, and engine performance to achieve as much as possible in all criteria simultaneously by controlling variables within the engine system in a stochastic environment, a process generally referred to as engine tuning.
Specifically, it is desirable to control an air handling system of an internal combustion engine, particularly during transient events, to provide for a responsive air handling system capable of responding appropriately to transient operating conditions. As such, the internal combustion engine, which uses a turbocharger and an exhaust gas recirculation (EGR) system to control the air flow inside the cylinder, requires efficient engine tuning to fully utilize the available components and achieve optimal performance.
Prior art techniques of engine tuning include model-based air handling controllers which employ model predictive controllers (MPC). A block diagram of such an MPC system is illustrated in
Other prior art techniques include, for example, engine mapping. This technique conducts a series of tests on the engine and the program which controls it, and uses steady-state engine responses as control variables to determine the inputs to the engine; this establishes the operating limits of the engine and sets the control input bias with respect to the operating point, a process known as steady-state calibration. These input settings are then represented graphically in the form of a characteristic map, which shows performance curves representing the performance of the engine as certain parameters change, such as speed, load, air-fuel ratio, and engine/ambient temperature. Most calibration techniques rely on a person to perform off-line calibration and optimization and subsequently plug the values into the engine control module (ECM) for engine operation. These techniques apply post-processing to data collected in a controlled environment for calibration and optimization. However, off-line calibration requires a large amount of statistics and data to prepare the engine for actual use, during which the engine will likely encounter situations and states that are not covered by the initial static dataset used for off-line calibration. Because real operating conditions can be drastically different from the conditions during calibration, such techniques are not adequate for adapting the engine to real conditions as it operates. Similar maps are designed for transient states and are tuned via trial-and-error processes in which the calibrator runs different duty cycles and calibrates to meet the expected performance. Because it is not possible to run all the duty cycles in practice, such processes may lead to suboptimal performance for some cycles.
Furthermore, because these calibration techniques model the engine behavior only in steady state, during transient states the engine is controlled to meet a specific objective such as smoke or torque response, and other variables such as fuel consumption are typically given less weight during engine operation.
Therefore, there is a need for a more computationally efficient, real-time engine tuning technique which allows for a more accurate, dynamic prediction of the actual engine behavior to enable optimization of the air handling control within the engine, while reducing the dependency on the accuracy of the initial prediction model and on frequent calibrations.
Various embodiments of the present disclosure relate to deep reinforcement learning for air handling control of an engine system, particularly an internal combustion engine system. In one embodiment, the engine system includes an air handling control unit which controls a plurality of air handling actuators responsible for maintaining air and exhaust gas flow within the engine system. The engine system has a plurality of sensors coupled to it such that the sensor signals from these sensors at least partially define a current state of the engine system. The air handling control unit includes a controller which controls the air handling actuators of the engine system, as well as a processing unit coupled to the sensors and the controller. The processing unit includes an agent which learns a policy function trained to process the current state, determines a control signal by applying the policy function to the current state received as an input, and outputs the control signal to the controller. The agent then receives a next state and a reward value from the processing unit and updates the policy function using a policy evaluation algorithm and a policy improvement algorithm based on the received reward value. The controller controls the air handling actuators in response to receiving the control signal. In one aspect of the embodiment, the control signal is a command signal for the air handling actuators.
For example, the current state can be defined by one or more of the following parameters: speed value, load value or torque demand of the engine, and air handling states of the engine system. In another example, the current state can also include past values of these parameters, such as the speed value at a previous time step, the load value at a previous time step, and so on. The air handling states can be determined by a charge flow value and an exhaust gas recirculation (EGR) fraction value. The current state can also include one or more of the following commands: charge flow command, EGR fraction command, EGR flow command, fresh air flow command, and intake manifold pressure command. Reward values are a weighted summation of the tracking accuracy, overshoot, and response time for the EGR and charge flows in the engine.
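For illustration, such a weighted-summation reward could be computed as in the following sketch. The weights, metric names, and error values here are purely hypothetical assumptions; the disclosure does not specify them.

```python
# Hypothetical reward as a weighted summation of tracking accuracy,
# overshoot, and response time for the EGR flow and the charge flow.
# Weights and metric names are illustrative assumptions only.
def reward(metrics, weights=(1.0, 0.5, 0.25)):
    w_track, w_over, w_resp = weights
    total = 0.0
    for flow in ("egr_flow", "charge_flow"):
        m = metrics[flow]
        # Smaller tracking error, overshoot, and response time yield a
        # higher (less negative) reward.
        total -= (w_track * m["tracking_error"]
                  + w_over * m["overshoot"]
                  + w_resp * m["response_time"])
    return total

r = reward({
    "egr_flow": {"tracking_error": 0.1, "overshoot": 0.0, "response_time": 0.2},
    "charge_flow": {"tracking_error": 0.05, "overshoot": 0.1, "response_time": 0.1},
})  # -> -0.275
```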
In some embodiments, the agent comprises a plurality of function approximators. The function approximators used in these cases can be deep neural networks (DNN), support vector machines (SVM), regression-based methods, and decision trees. The DNNs can include long short-term memory (LSTM) networks and convolutional neural networks. Furthermore, the function approximators are trained to learn the initial policy function and are improved using an optimization technique; examples of such optimization techniques are q-learning and policy gradients. In some embodiments, the function approximators imitate one or more preexisting air handling controllers using an imitation learning technique such as a dataset aggregation (DAGGER) algorithm.
While multiple embodiments are disclosed, still other embodiments of the present disclosure will become apparent to those skilled in the art from the following detailed description, which shows and describes illustrative embodiments of the disclosure. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not restrictive.
The embodiments will be more readily understood in view of the following description when accompanied by the below figures and wherein like reference numerals represent like elements. These depicted embodiments are to be understood as illustrative of the disclosure and not as limiting in any way.
While the present disclosure is amenable to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and are described in detail below. The intention, however, is not to limit the present disclosure to the particular embodiments described. On the contrary, the present disclosure is intended to cover all modifications, equivalents, and alternatives falling within the scope of the present disclosure as defined by the appended claims.
In the following detailed description, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration specific embodiments in which the present disclosure is practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the present disclosure, and it is to be understood that other embodiments can be utilized and that structural changes can be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of the present disclosure is defined by the appended claims and their equivalents.
Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Appearances of the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment. Similarly, the use of the term “implementation” means an implementation having a particular feature, structure, or characteristic described in connection with one or more embodiments of the present disclosure, however, absent an express correlation to indicate otherwise, an implementation may be associated with one or more embodiments. Furthermore, the described features, structures, or characteristics of the subject matter described herein may be combined in any suitable manner in one or more embodiments.
Furthermore, the engine system 300 incorporates a high-pressure EGR system in which the EGR actuator 306 recirculates exhaust gas between the two high-pressure points, i.e. the exhaust manifold and the inlet manifold. In another embodiment shown in
Activation of the EGR actuator 306 and the turbine 320 helps to increase the speed of the engine, but these components must be controlled to achieve optimal efficiency within the system. In other words, it is desirable for the engine to keep some of these components in a deactivated state when there is no need for an increase in engine speed; for example, if the engine system is incorporated into a car, when the user is driving on a road with a lower speed limit than a freeway, or when the driving style of the user indicates that he or she tends to drive at a more moderate speed. As such, a current state of the engine system may be used in determining whether such activation is necessary. In the reinforcement learning technique of
Measurements from the sensors 336 and 338 are sent as sensor signals 337 and 339, respectively, to the processing unit 324 which uses these data to determine the next actions to be taken by the controller 326. The processing unit 324 includes the agent 202 from
The function approximators act to approximate how the engine behaves under different conditions using a reinforcement learning technique as explained in
In one example, the states of the internal combustion engine system can include one or more of: the engine speed, the engine load (torque output of the engine), torque demand for the engine, and the air handling states. The air handling states can include one or more of: the charge flow of the engine (the sum of air flow into the intake manifold of the engine) and the EGR fraction (the fraction of charge flow attributable to recirculated exhaust gas from the engine). Additionally, the air handling states can also include one or more of: prior EGR flow commands, fresh air flow command, and intake manifold pressure command as previously sent by the controller. Furthermore, although
A detailed explanation of the reinforcement learning technique is described below in view of the engine systems illustrated in
The agent 202 has a policy π which is the starting function for the learning technique. A policy π is a function which considers the current state xt to output a corresponding action ut, expressed as ut=π(xt). As such, the policy π determines the initial action u0 and sends a command signal 340 to the controller 326. The controller 326 then sends the appropriate command signals 342 and 344 to the EGR actuator 306 and the turbine 320, respectively, based on the command signal 340. The command signal 340 can include the target EGR value (such as how much of the exhaust gas should be recirculated back into the engine and how much pressure should be in the intake manifold), the target turbocharger value (such as a target boost value that the turbocharger needs to provide to the engine), and the target intake temperature (the temperature of the intake air such that the incoming air is not too hot for the engine). In one example, the turbocharger is a variable geometry turbocharger.
After the command signals are applied, the engine system (i.e. the environment) enters the next state, after which the sensors provide new measurements to the processing unit 324, which uses these updated sensor signals to calculate the new current state x1 of the environment and sends the data to the agent 202, along with a first reward r0 which is a scalar value. The processing unit 324 stores a program which calculates the reward, i.e. a reward function R such that rt=R(ut, xt, xt+1), to send to the agent 202. For example, the reward is a weighted summation of the tracking accuracy, overshoot, and response time of EGR flows and a charge flow as outputted by the engine system. Once the agent 202 receives the first reward r0, the agent 202 determines the next action u1 by using the policy π based on the current state x1, i.e. u1=π(x1).
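The interaction described above (apply the action, observe the next state, receive a reward, choose the next action) can be sketched as follows. The policy, environment step, and reward function below are toy stand-ins, not the actual engine model, controller 326, or reward function R of the disclosure.

```python
# Toy agent/environment loop: u_t = pi(x_t), x_{t+1} = env(x_t, u_t),
# r_t = R(u_t, x_t, x_{t+1}). All three callables are hypothetical.
def run_episode(policy, env_step, reward_fn, x0, steps=3):
    x, history = x0, []
    for _ in range(steps):
        u = policy(x)                 # agent picks action from current state
        x_next = env_step(x, u)       # environment enters the next state
        r = reward_fn(u, x, x_next)   # scalar reward, as computed by the processing unit
        history.append((x, u, r))
        x = x_next
    return history

# Stand-ins: the state is a single tracking error, the action halves it,
# and the reward penalizes the remaining error.
hist = run_episode(policy=lambda x: -0.5 * x,
                   env_step=lambda x, u: x + u,
                   reward_fn=lambda u, x, x_next: -abs(x_next),
                   x0=1.0)
```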
To evaluate the quality of the policy π, a value function V is calculated such that
V(π(x_N)) = Σ_{t=0}^{N} γ^t r_t    (1)
for a time horizon from t = 0 to t = N. When N approaches infinity (i.e. the system runs for a prolonged period of time), the value function V can represent a receding horizon problem, which is useful in understanding the global stability properties of any local optimization determined by the policy. In the function, γ is the discount factor between 0 and 1, which denotes how much weight is placed on future rewards in comparison with immediate rewards. The discount factor γ is necessary to make the sum of rewards converge, and it denotes that future rewards are valued at a discounted rate with respect to the immediate reward. The policy π must act so as to gain as much reward as possible; therefore, the goal of the agent 202 is to find a policy π that maximizes the sum of rewards over the time horizon, i.e. max Σ_{t=0}^{N} γ^t r_t.
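As a worked example of equation (1), the discounted sum of rewards can be computed directly; the reward sequence and discount factor below are arbitrary illustrations.

```python
# Discounted return from equation (1): V = sum over t of gamma^t * r_t.
def discounted_return(rewards, gamma):
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

v = discounted_return([1.0, 1.0, 1.0], gamma=0.5)  # 1 + 0.5 + 0.25 = 1.75
```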
The policy π is also constantly improved using policy evaluation and policy improvement algorithms. During a policy evaluation process, the value function V(π) is calculated for some, or all, of the states x based on a fixed policy π. Then, during a policy improvement process which follows, the policy π is improved by using the value function V(π) obtained in the policy evaluation step such that a value function V(π′) calculated using the new policy π′ is greater than or equal to the value function V(π) calculated using the original policy π. These two processes are repeated one after another until either (a) the policy π remains unchanged, (b) the processes continue for more than a predetermined period of time, or (c) the change to the value function V is less than a predetermined threshold.
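The alternation of evaluation and improvement described above can be sketched on a toy problem; the two-state, two-action transitions below are hypothetical stand-ins, not the engine system.

```python
# Policy iteration on a hypothetical two-state, two-action system:
# alternate policy evaluation and greedy policy improvement until the
# policy remains unchanged (stopping condition (a) above).
GAMMA = 0.9
# Deterministic toy transitions: P[(state, action)] = (next_state, reward).
P = {(0, 0): (0, 0.0), (0, 1): (1, 1.0),
     (1, 0): (0, 0.0), (1, 1): (1, 1.0)}

def evaluate(policy, sweeps=100):
    """Policy evaluation: iterate V(x) = r + gamma * V(x') under a fixed policy."""
    V = {0: 0.0, 1: 0.0}
    for _ in range(sweeps):
        for x in V:
            x_next, r = P[(x, policy[x])]
            V[x] = r + GAMMA * V[x_next]
    return V

def improve(V):
    """Policy improvement: choose the action maximizing r + gamma * V(x')."""
    return {x: max((0, 1), key=lambda u: P[(x, u)][1] + GAMMA * V[P[(x, u)][0]])
            for x in (0, 1)}

policy = {0: 0, 1: 0}
while True:
    V = evaluate(policy)
    new_policy = improve(V)
    if new_policy == policy:  # policy unchanged: the iteration has converged
        break
    policy = new_policy
# Converges to always taking action 1, which yields reward 1 every step.
```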
Numerous different approaches can be taken to achieve the goal of maximizing the sum of rewards. Some approaches are model-based (an explicit model of the environment is estimated and an optimal policy is computed for the estimated model), while others are model-free (the optimal policy is learned without first learning an explicit model, such as value-function based learning related to dynamic programming principles). One example of a model-free approach is an optimization technique known as "q-learning". The q-learning technique develops and updates a map Q(x_t, u_t), similar to a value function, which gives an estimated sum of rewards r_t for a given pair of state x_t and action u_t. This map is initialized with a starting value and successively updated by observing the reward using an update function, as explained below. The map function is described by the following equation:
Q(x_t, u_t) ← (1 − α) Q(x_t, u_t) + α (r_t + γ max_u Q(x_{t+1}, u))    (2)
where Q(x_t, u_t) is the old value, α is a learning rate between 0 and 1, max_u Q(x_{t+1}, u) is the estimate of the optimal future value, and (r_t + γ max_u Q(x_{t+1}, u)) is the learned value. As such, the old value is replaced by a new value, which is the old value blended with the learned value according to the learning rate, as shown in equation (2). Q-learning is an off-policy value-based learning technique, in which the value of the optimal policy is learned independently of the agent's actions chosen for the next state, in contrast to on-policy learning techniques like "policy gradient", which can also be used as a learning technique for the engine system as described herein. Advantages of the q-learning technique include being more successful at finding a global optimum solution rather than just a local maximum.
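A single application of the update in equation (2) can be sketched in tabular form; the states, actions, and reward below are toy illustrations, not engine quantities.

```python
# One tabular q-learning update implementing equation (2).
def q_update(Q, x, u, r, x_next, actions, alpha=0.5, gamma=0.9):
    """Q(x,u) <- (1 - a) * Q(x,u) + a * (r + gamma * max_u' Q(x', u'))."""
    best_next = max(Q[(x_next, a)] for a in actions)  # optimal future value
    Q[(x, u)] = (1 - alpha) * Q[(x, u)] + alpha * (r + gamma * best_next)

# Map initialized with a starting value of zero for every state/action pair.
Q = {(x, u): 0.0 for x in (0, 1) for u in (0, 1)}
q_update(Q, x=0, u=1, r=1.0, x_next=1, actions=(0, 1))
# Q[(0, 1)] moves halfway (alpha = 0.5) toward 1.0 + 0.9 * 0 = 1.0, i.e. to 0.5.
```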
A policy gradient technique is a direct policy method which starts with learning a map from state to action, and adjusts the weights of each action by gradient ascent using feedback from the environment. For any expected return function J(θ), the policy gradient technique searches for a local maximum in J(θ) so that the expected return function
J(θ) = E{Σ_{k=0}^{H} a_k r_k}    (3)
is optimized, where a_k denotes time-step dependent weighting factors, often set to a_k = γ^k for discounted reinforcement learning, by ascending the gradient of the policy with respect to the parameter θ, i.e.
Δθ = α ∇_θ J(θ)    (4)
where ∇_θ J(θ) is the policy gradient and α is a step-size parameter.
The policy gradient can then be calculated or approximated using methods such as finite difference methods and likelihood ratio methods. As such, the policy gradient technique guarantees that the system will converge to reach a local maximum for the expected returns. Furthermore, other model-free algorithms, such as SARSA (state-action-reward-state-action) algorithm, deep Q network (DQN) algorithm, deep deterministic policy gradient (DDPG) algorithm, trust region policy optimization (TRPO) algorithm, and proximal policy optimization (PPO) algorithm, can also be used.
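The finite difference approach mentioned above can be sketched as follows: perturb the parameter θ, measure the change in the return, and take the ascent step of equation (4). The return function J below is a toy stand-in with a known maximum, not an engine rollout.

```python
# Finite-difference approximation of the policy gradient, followed by
# repeated ascent steps per equation (4): delta-theta = alpha * grad J.
def J(theta):
    return -(theta - 2.0) ** 2  # toy expected return, maximized at theta = 2

def finite_diff_grad(J, theta, eps=1e-4):
    # Central difference: (J(theta + eps) - J(theta - eps)) / (2 * eps)
    return (J(theta + eps) - J(theta - eps)) / (2 * eps)

theta, alpha = 0.0, 0.1
for _ in range(200):
    theta += alpha * finite_diff_grad(J, theta)
# theta climbs toward the local maximum of J at 2.0.
```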
In another example, a neural network model that is initially trained to imitate a known engine control can be used to achieve the goal of maximizing the sum of rewards, such that the model and the reward function are provided and the objective is to identify the best policy for the system. One approach to training the model is imitation learning, which imitates an expert (or teacher) that provides a set of demonstration trajectories, i.e. sequences of states and actions. The agent (or learner) then needs to determine a policy whose resulting state/action trajectory distribution matches that of the expert. One example of imitation learning is the dataset aggregation (DAGGER) algorithm, which proceeds in the following steps: the learner's current policy is rolled out to collect visited states; the expert is queried for the correct action at each visited state; the resulting state/action pairs are aggregated into the training dataset; and the policy is retrained on the aggregated dataset, with these steps repeating over a number of iterations.
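The DAGGER loop can be sketched as below. The "expert" and the 1-nearest-neighbor "function approximator" are hypothetical stand-ins for a preexisting air handling controller and a neural network, and the visited states are supplied directly rather than generated by an engine rollout.

```python
# Sketch of DAGGER: roll out the learner, query the expert for labels on
# the visited states, aggregate them, and retrain. All components are toys.
def expert(x):
    return 1 if x >= 0 else 0  # stand-in expert controller

def train(dataset):
    """Fit a trivial 1-nearest-neighbor policy on the aggregated data."""
    def policy(x):
        return min(dataset, key=lambda pair: abs(pair[0] - x))[1]
    return policy

def dagger(state_stream):
    dataset = [(0.0, expert(0.0))]   # seed with one expert demonstration
    policy = train(dataset)
    for states in state_stream:      # states visited while running the learner
        dataset += [(x, expert(x)) for x in states]  # expert labels, aggregated
        policy = train(dataset)      # retrain on the aggregated dataset
    return policy

pi = dagger([[-2.0, -1.0], [1.0, 2.0], [0.5, -0.5]])
```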
Because reinforcement learning shares a structure similar to a traditional control system, advantages of using such a technique include the ability to capture the non-linearity of the model to a high precision, resulting in improved performance. Current calibration techniques model engine behaviors in steady state and involve having technicians perform the calibration and optimization off-line and plug the values in for engine operation. Reinforcement learning can reduce such calibration effort because the reinforcement learning technique can optimally meet all performance indexes for the engine system. Furthermore, reinforcement learning utilizes on-line optimization with data collected in real conditions (such as when the engine is in operation) to calibrate and optimize the parameters within the engine, which allows the engine to adapt to changes in operating conditions without needing to be recalibrated. As such, due to the adaptive nature of the reinforcement learning technique, even when the engine is running in a non-calibrated condition, the engine can learn or calibrate the relevant parameters on its own to deliver similar levels of performance.
Furthermore, the aforementioned air handling control can be used for other types of engines besides the internal combustion engines described above. For example, the engine of a plug-in hybrid electric vehicle combines a gasoline or diesel engine with an electric motor and a rechargeable battery, such that the battery initially drives the car and the conventional engine takes over when the battery runs out. In such an instance, the air handling control can be programmed to activate after the vehicle switches its power source from the battery to the conventional engine. Also, electric vehicles which use only electric motors or traction motors for propulsion can include such air handling controls. In this case, the air handling control can be replaced by a climate control system for the interior of the car, such as a power heating and air conditioning system, for example.
The present subject matter may be embodied in other specific forms without departing from the scope of the present disclosure. The described embodiments are to be considered in all respects only as illustrative and not restrictive. Those skilled in the art will recognize that other implementations consistent with the disclosed embodiments are possible.
Number | Date | Country | |
---|---|---|---|
20200063676 A1 | Feb 2020 | US |