The invention relates to an agent learning apparatus, method and program. More specifically, the invention relates to an agent learning apparatus, method and program for implementing rapid and highly adaptive control of non-linear or non-stationary targets, or of physical systems such as industrial robots, automobiles and airplanes, with a high-order cognitive control mechanism.
Examples of conventional learning schemes include a supervised learning scheme that minimizes the error between a model control path, given as a time-series representation by an operator, and a predicted path (Gomi, H. and Kawato, M., Neural Network Control for a Closed-Loop System Using Feedback-Error-Learning, Neural Networks, Vol. 6, pp. 933-946, 1993). Another example is a reinforcement learning scheme, in which the optimal path is acquired by iterating a trial-and-error process in a given environment for the control system without a model control path (Doya, K., Reinforcement Learning in Continuous Time and Space, Neural Computation, 2000).
However, since the environment surrounding the control system changes constantly in the real world, it is difficult for the operator to keep giving a model control path to the control system, and therefore such a supervised learning scheme cannot be applied. The latter learning scheme has the problem that it takes much time for the control system to acquire the optimal path by iterating the trial-and-error process. Thus, it is difficult to employ the aforementioned learning schemes for controlling an object (e.g. a helicopter) that must be controlled rapidly and precisely in response to the environment.
On the other hand, recent research on the human control mechanism shows that it focuses on the time-series "smoothness" of behavior outputs, determined by non-linear approximations of the control system based on sensory inputs, and on the symmetric nature of behavior outputs in a statistical normal distribution, and that it acquires a control path statistically and very rapidly by selecting the sensory inputs to be attended to so as to minimize the variance of the behavior outputs (Harris, C. M. and Wolpert, D. M., Signal-dependent noise determines motor planning, Nature, Vol. 394, 20 August, 1998).
In the field of cognitive science, it is considered that human beings have a mechanism for realizing rapid and efficient control by consciously selecting necessary information out of massive sensory information. It has been suggested that this mechanism should be applied to engineering, but no concrete model for doing so has been proposed.
Therefore, it is an objective of the invention to provide an agent learning apparatus, method and program for acquiring an optimal control path rapidly.
According to the invention, a selective attention mechanism is devised that creates non-observable information (attention classes) by learning and associates sensory inputs with the attention classes. With this mechanism, the optimal control path for minimizing the variance of the behavior outputs may be acquired rapidly.
An agent learning apparatus according to the invention comprises a sensor for capturing external environmental information and converting it into sensory inputs, and a behavior controller for supplying behavior outputs to a controlled object based on results of learning performed on said sensory inputs. The apparatus further comprises a behavior status evaluator for evaluating the behavior of the controlled object caused by said behavior outputs. The apparatus further comprises a selective attention mechanism for storing said behavior outputs in one of a plurality of columns in association with corresponding sensory inputs based on the evaluation, computing probabilistic models based on the behavior outputs stored in said columns, calculating a confidence for each column by applying newly given sensory inputs to said probabilistic models, and outputting, as said results of learning, the behavior outputs associated with said newly given sensory inputs in the column having the largest confidence. The probabilistic model is the probabilistic relationship expressing the probability that a sensory input belongs to each column.
With such a configuration, the agent learning apparatus may be applied to start controlling an object without advance learning. In this case, the instability of the controlled object is large before the probabilistic models are computed, and the object may be damaged by its own unexpected motion. Therefore, the range of behavior outputs given to the object by the behavior controller is preferably forcibly limited for a predetermined period.
Instead of selecting the column having the largest confidence for a given sensory input, the column containing the behavior outputs with the largest evaluation by the behavior status evaluator may always be selected, and a behavior output associated with the newly given sensory inputs in that column may be output.
Computing the probabilistic models comprises representing the behavior outputs stored in the columns as normal distributions using the Expectation-Maximization (EM) algorithm, using said normal distributions to compute the a priori probability that a behavior output is contained in each column, and using said a priori probability to compute said probabilistic model by supervised learning with a neural network. The probabilistic model is the probabilistic relationship between any sensory input and each column. Specifically, the probabilistic model may be the conditional probability density function p(Ii(t)|Ωl).
The confidence may be calculated by applying the a priori probability and the probabilistic model to Bayes' rule. The confidence is the probability that a sensory input belongs to each attention class (column).
As described above, control of the object may be started without advance learning. However, it is preferable to prepare data sets of the relationship between sensory inputs and behavior outputs and to compute the probabilistic models in advance by performing advance learning with those data sets. After the probabilistic models are computed, the confidence is calculated using them for newly given sensory inputs. In this case, the same probabilistic models computed in the advance learning stage continue to be used. Therefore, the object may be stabilized more rapidly. When performing advance learning, sensory inputs are converted into behavior outputs by a behavior output generator based on the data sets and supplied to the object.
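For illustration only, and not as part of the disclosed embodiments, the column storage summarized above may be sketched in Python roughly as follows; the class and variable names are assumptions introduced here.

```python
from dataclasses import dataclass, field

@dataclass
class Column:
    """One column of the selective attention mechanism: behavior outputs
    stored together with the sensory inputs that produced them."""
    sensory_inputs: list = field(default_factory=list)    # I_i(t)
    behavior_outputs: list = field(default_factory=list)  # Q_i(t)

    def store(self, sensory_input, behavior_output):
        self.sensory_inputs.append(sensory_input)
        self.behavior_outputs.append(behavior_output)

# e.g. column 1 collects pairs judged "stable", column 2 pairs judged "unstable"
columns = {1: Column(), 2: Column()}
columns[1].store([0.20, 0.81], 82.1)   # hypothetical sensory input / behavior output
```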
First, a preliminary experiment is described using a radio-controlled helicopter (hereinafter simply referred to as a "helicopter") shown in
To realize stable control of various controlled objects, attention should be paid to the symmetric nature of the normal distribution of the behavior outputs of the controlled objects. This is because the most frequent behavior outputs on the normal distribution may be expected to be the ones heavily used for stabilizing the controlled object. Therefore, by exploiting the symmetric nature of the normal distribution, the behavior outputs to be supplied to the controlled object under an ever-changing environment may be statistically predicted.
When selecting the behavior outputs supplied to the controlled object based on sensory inputs captured by a sensor or the like, the number of selectable behavior outputs may be unlimited. However, if learning is performed such that the variance of the behavior result of the controlled object caused by the supplied behavior outputs (hereinafter simply referred to as the "behavior result") decreases over time, the range of behavior outputs selectable from the captured sensory inputs becomes limited, and as a result the controlled object is stabilized. In other words, by minimizing the variance of the normal distribution of behavior outputs, stable control with the smallest width or rate of fluctuation is realized.
The agent learning apparatus according to the invention is characterized in that both a statistical learning scheme based on such a preliminary experiment and a conventional supervised learning scheme are employed synthetically. Preferred embodiments of the invention will now be described with reference to FIGS. 1 to 12.
The agent learning apparatus 100 according to the invention performs learning with prepared data sets, for example. Such process is referred to as “advance learning stage” herein.
For the sensory inputs captured by the sensor 301, the behavior output generator 302 generates behavior outputs based on the data sets and supplies them to a controlled object 308. The behavior status evaluator 303 evaluates the behavior result of the controlled object 308 and generates a reward for each behavior output. The selective attention mechanism 304 distributes the behavior outputs to one of the columns according to each reward, thereby creating the probabilistic models described later. Creating the probabilistic models in advance enables highly accurate control.
After the advance learning stage is completed, the agent learning apparatus 100 performs a process which is referred to as “behavior control stage” herein.
It should be noted that the advance learning stage is not mandatory. Operation of the agent learning apparatus in such a case, without advance learning, will be described later.
All or part of the behavior output generator 302, the behavior status evaluator 303, the selective attention mechanism 304 and the behavior controller 307 may be implemented by, for example, executing, on a general-purpose computer, a program configured to realize their functionality.
The features of each functional block and the operation of the agent learning apparatus 100 in the advance learning stage are described referring to flowcharts in
External environment information is captured by a sensor 301 at given intervals and converted into signals as sensory inputs Ii(t) (i=1, 2, . . . , m), which are supplied to a behavior output generator 302. The behavior output generator 302 generates behavior outputs Qi(t) corresponding to the supplied sensory inputs Ii(t) and supplies them to a behavior status evaluator 303 and a controlled object 308. The transformation from the sensory inputs Ii(t) to the behavior outputs Qi(t) is represented by the following mapping f.
f: Ii(t) → Qi(t)   (1)
The mapping f is, for example, a non-linear approximation transformation using well-known Fourier series or the like.
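As an informal illustration of one possible form of the mapping f (the text only requires a non-linear approximation such as a Fourier series), the following Python sketch uses a truncated Fourier series; the coefficients, number of harmonics and base frequency are placeholders.

```python
import numpy as np

def mapping_f(sensory_inputs, coeffs_a, coeffs_b, base_freq=1.0):
    """Maps scalar sensory inputs I_i(t) to behavior outputs Q_i(t) with a
    truncated Fourier series; coefficients and base frequency are placeholders."""
    x = np.asarray(sensory_inputs, dtype=float)
    k = np.arange(1, len(coeffs_a) + 1)            # harmonic indices 1..K
    phases = np.outer(x, k) * base_freq            # shape (inputs, harmonics)
    return (np.cos(phases) @ coeffs_a) + (np.sin(phases) @ coeffs_b)

# hypothetical usage: three harmonics with random coefficients,
# mirroring the random data sets used in the advance learning stage
a, b = np.random.randn(3), np.random.randn(3)
q = mapping_f([0.1, 0.5, 0.9], a, b)               # one behavior output per input
```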
In the advance learning according to the embodiment, the mapping f corresponds to preparing random data sets which include the mapping between sensory inputs Ii(t) and behavior outputs Qi(t). In other words, the behavior output generator 302 generates a behavior output Qi(t) corresponding to each sensory input Ii(t), one by one, based on these data sets (step S401 in
Generated behavior outputs Qi(t) are supplied to the behavior status evaluator 303 and the controlled object 308. The controlled object 308 will work in response to the supplied behavior outputs Qi(t). The result of this work is supplied to the behavior status evaluator 303 (step S402 in
The behavior status evaluator 303 then evaluates the result of this work (for example, whether the behavior of the controlled object becomes stable or not) with a predetermined evaluation function and generates a reward for every behavior output Qi(t) (step S403 in
The evaluation function herein is a function that yields reward "1" if the controlled object becomes stable under the supplied behavior output Qi(t), or yields reward "2" otherwise. The types of rewards may be selected in consideration of the behavior characteristics of the controlled object 308 or the required control accuracy. When using the helicopter noted above, reward "1" or reward "2" is yielded according to whether the helicopter is stable or not, which may be judged from, for example, its pitching angle detected with a gyro-sensor on the helicopter.
The evaluation function is used to minimize the variance σ of the behavior outputs Qi(t). In other words, by using the evaluation function, sensory inputs Ii(t) unnecessary for stable control may be removed and the necessary sensory inputs Ii(t) are reinforced. Finally, reinforcement learning satisfying σ(Q1)<σ(Q2) is attained, where Q1 is the group of behavior outputs Qi(t) given reward "1" and Q2 is the group of behavior outputs Qi(t) given reward "2".

On receiving the rewards from the behavior status evaluator 303, the selective attention mechanism 304 creates a plurality of columns 1, 2, 3, . . . , m corresponding to the types of rewards. The selective attention mechanism 304 then distributes the behavior outputs Qi(t) to the columns (step S404 in
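The reward assignment and column distribution of steps S403-S404 may be sketched, under the simplifying assumption of a boolean stability signal, roughly as follows; function and variable names are illustrative.

```python
import statistics

def evaluate_and_distribute(behavior_output, is_stable, columns):
    """Yields reward "1" when the controlled object is stable and "2" otherwise,
    and stores the behavior output in the corresponding column (steps S403-S404).
    The stability test itself (e.g. a pitch-angle check) is application specific."""
    reward = 1 if is_stable else 2
    columns[reward].append(behavior_output)
    return reward

columns = {1: [], 2: []}
# hypothetical stream of (behavior output, stability flag) pairs
for q, stable in [(82.3, True), (85.0, False), (81.8, True), (78.5, False)]:
    evaluate_and_distribute(q, stable, columns)

# the learning aims at sigma(Q1) < sigma(Q2)
print(statistics.pstdev(columns[1]), statistics.pstdev(columns[2]))
```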
The selective attention mechanism 304 performs the expectation-maximization (EM) algorithm and supervised learning with a neural network, both of which will be described later, to calculate the conditional probability distribution (that is, the probabilistic model) p(Ii(t)|Ωl) (steps S405-S408 in
The attention class Ωl is used to select noticeable sensory inputs Ii(t) from among the massive sensory inputs Ii(t). More specifically, the attention class Ωl is a parameter used for modeling the behavior outputs Qi(t) stored in each column using the probability density function of the normal distribution of the behavior outputs Qi(t). As many attention classes Ωl are created as there are columns storing these behavior outputs Qi(t). Calculating the attention class Ωl corresponding to the behavior outputs Qi(t) stored in each column is represented by the following mapping h.
h: Qi(t) → Ωl(t)   (2)
Processes in steps S405-S408 in
First, the Expectation-Maximization algorithm (EM algorithm) in step S405 is described.
The EM algorithm is an iterative algorithm for estimating the parameter θ that maximizes the likelihood when the observed data is viewed as incomplete data. As noted above, since the behavior outputs Qi(t) stored in each column are considered to follow a normal distribution, the parameter θ may be represented as θ(μl, Σl) with mean μl and covariance Σl. The EM algorithm is initiated with appropriate initial values of θ(μl, Σl). The parameter θ(μl, Σl) is then updated iteratively by alternating the Expectation (E) step and the Maximization (M) step.
On the E step, the conditional expected value φ(θ|θ(k)), that is, the expectation of the complete-data log likelihood taken under the current estimate θ(k), is calculated by the following equation.

φ(θ|θ(k)) = E[log p(Qi(t), Ωl(t)|θ) | Qi(t), θ(k)]   (3)
Then, on the M step, the parameters μl and Σl that maximize φ(θ|θ(k)) are calculated by the following equation and set as the new estimate θ(k+1).
θ(k+1) = arg maxθ φ(θ|θ(k))   (4)
By partially differentiating the calculated φ(θ|θ(k)) with respect to θ and setting the result equal to zero, the parameters μl and Σl may finally be calculated. A more detailed explanation is omitted because the EM algorithm is well known in the art.
Thus, the behavior outputs Qi(t) stored in each column may be represented by a normal distribution (step S405 in
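Step S405 may be illustrated by the following Python sketch of the EM iteration for one-dimensional Gaussian components fitted to one column; the initialization and iteration count are assumptions, and with a single component the result is simply the sample mean and variance.

```python
import numpy as np

def em_gaussian_mixture(q, n_components=1, n_iter=50):
    """EM iteration for one-dimensional Gaussian components fitted to the
    behavior outputs of a column (step S405). With n_components=1 the result
    is the sample mean and variance; a mixture of Gaussians may also be used."""
    q = np.asarray(q, dtype=float)
    # crude initialisation of theta = (weights, means, variances)
    w = np.full(n_components, 1.0 / n_components)
    mu = np.random.choice(q, n_components)
    var = np.full(n_components, q.var() + 1e-6)
    for _ in range(n_iter):
        # E step: responsibilities gamma[n, l] under the current estimate theta^(k)
        dens = np.exp(-0.5 * (q[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        gamma = w * dens
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M step: theta^(k+1) maximising the expected complete-data log likelihood
        nk = gamma.sum(axis=0)
        w = nk / len(q)
        mu = (gamma * q[:, None]).sum(axis=0) / nk
        var = (gamma * (q[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return w, mu, var

# hypothetical behavior outputs stored in the "stable" column
weights, means, variances = em_gaussian_mixture([82.3, 81.8, 82.1, 82.6])
```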
The a priori probability p̄(Qil(t)|Ωl(t)) of the attention class Ωl is calculated by the following equation with the calculated parameters μl and Σl (step S406 in

p̄(Qil(t)|Ωl(t)) = (2π)^(−N/2) |Σl|^(−1/2) exp(−(1/2)(Qil(t)−μl)^T Σl^(−1) (Qil(t)−μl))   (5)

where N is the dimension of the behavior outputs Qi(t).
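The N-dimensional normal density used for the a priori probability in step S406 may be evaluated, for example, as in the following sketch; the helper name and input handling are illustrative.

```python
import numpy as np

def a_priori_probability(q, mu_l, sigma_l):
    """Evaluates the N-dimensional normal density with mean mu_l and covariance
    sigma_l at a behavior output q, as used for the a priori probability in
    step S406. Names and input handling are illustrative."""
    q = np.atleast_1d(np.asarray(q, dtype=float))
    mu_l = np.atleast_1d(np.asarray(mu_l, dtype=float))
    sigma_l = np.atleast_2d(np.asarray(sigma_l, dtype=float))
    n = q.size
    diff = q - mu_l
    norm = 1.0 / np.sqrt((2.0 * np.pi) ** n * np.linalg.det(sigma_l))
    return norm * np.exp(-0.5 * diff @ np.linalg.solve(sigma_l, diff))

# hypothetical one-dimensional example using the parameters found by EM
p = a_priori_probability(82.6, mu_l=82.2, sigma_l=0.09)
```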
The supervised learning with the neural network is described below. In this learning, the conditional probability density function p(Ii(t)|Ωl) is calculated using as the supervising signal the attention class Ωl, which has been obtained as an a posteriori probability (step S407 in
λ shown in
With such supervised learning using the neural network, the conditional probability density function p(Ii(t)|Ωl), which is the probabilistic relationship between the sensory inputs Ii(t) and the attention class Ωl, may be computed.
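Step S407 may be approximated, for instance, with an off-the-shelf multilayer network such as scikit-learn's MLPClassifier; the data shapes, labels and hyper-parameters below are placeholders, and the learned class probabilities stand in for the relationship between sensory inputs and attention classes described above.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Sensory inputs from the data sets and the attention class (column) of the
# behavior output each of them produced; shapes and values are placeholders.
X = np.random.rand(360, 8)            # 360 sensory input vectors I_i(t)
y = np.random.randint(0, 2, 360)      # attention class label per pair (0 or 1)

# A small hierarchical (multilayer) network trained with the attention classes
# as the supervising signal.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)
net.fit(X, y)

new_input = np.random.rand(1, 8)
class_probabilities = net.predict_proba(new_input)   # one value per attention class
```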
As noted above, learning within the selective attention mechanism 304 in the advance learning stage proceeds in a feedback fashion. After the conditional probability density function p(Ii(t)|Ωl) has been calculated, the probability of which attention class Ωl a new sensory input Ii(t) belongs to may be determined without calculating the mapping h·f each time for that sensory input.
The processes in steps S401 to S407 are performed on every pair of sensory input Ii(t) and behavior output Qi(t) in the given data sets (step S408 in
This concludes the explanation of how the agent learning apparatus 100 operates in the advance learning stage.
After the advance learning stage using the data sets is completed, the agent learning apparatus 100 starts to control the controlled object 308 based on the established learning result. How the agent learning apparatus 100 operates in the behavior control stage will now be described referring to
In the behavior control stage, the a priori probability p̄(Qil(t)|Ωl(t)) for each column and the conditional probability distribution p(Ii(t)|Ωl), both of which have been calculated in the advance learning stage, are used. New sensory inputs Ii(t) captured at the sensor 301 are provided to the attention class selector 306 in the selective attention mechanism 304 (step S410 in
The attention class selector 306 calculates the confidence p(Ωl(t)) for each attention class by applying the a priori probability p̄(Qil(t)|Ωl(t)) and the conditional probability distribution p(Ii(t)|Ωl) to Bayes' rule (step S411). The confidence p(Ωl(t)) is the probability that a sensory input Ii(t) belongs to each attention class Ωl(t). Calculating this probability with Bayes' rule means that one attention class may be identified selectively by increasing the confidence p(Ωl(t)) through the learning of the Bayes' rule weights. In other words, with the selective attention mechanism 304, the attention class Ωl, i.e. the hidden control parameter, may be identified directly from the observable sensory inputs Ii(t).
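The Bayes'-rule combination of step S411 may be sketched as follows; the numeric values are placeholders for two attention classes.

```python
import numpy as np

def confidence(likelihoods, priors):
    """Combines the conditional probabilities obtained for a new sensory input
    with the a priori probabilities of the columns via Bayes' rule, giving the
    confidence p(Omega_l(t)) for each attention class (step S411)."""
    likelihoods = np.asarray(likelihoods, dtype=float)
    priors = np.asarray(priors, dtype=float)
    joint = likelihoods * priors
    return joint / joint.sum()                 # normalised posterior per class

# hypothetical values for the two attention classes (stable / unstable)
p_omega = confidence(likelihoods=[0.7, 0.2], priors=[0.6, 0.4])
selected_class = int(np.argmax(p_omega)) + 1   # attention class with highest confidence
```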
The attention class selector 306 determines that the attention class Ωl with the highest confidence p(Ωl(t)) is the attention class corresponding to the new sensory input Ii(t). The determined attention class Ωl is notified to the behavior controller 307 (step S412 in
When the notified attention class is Ω1, corresponding to the "stable" column, the behavior controller 307 calculates a behavior output Qi(t) corresponding to the captured sensory input Ii(t) based on the behavior outputs Qi(t) stored in column 1 (step S413 in
When the notified attention class is Ω2, corresponding to the "unstable" column, the behavior controller 307 selects not column 2 but column 1, which has the smaller variance, calculates a behavior output Qi(t) corresponding to the captured sensory input Ii(t) based on the behavior outputs Qi(t) stored in column 1, and provides it to the controlled object 308 (S414). If no behavior output Qi(t) associated with the sensory input Ii(t) is stored in the column, the previous behavior output Qi(t) is selected and provided to the controlled object 308. By repeating this process, the relation σ(Q1)<σ(Q2) between the variances of the columns is accomplished (that is, the variance of the behavior outputs in column 1 rapidly becomes smaller and the stability of the controlled object 308 is attained).
Alternatively, when the supplied attention class is Ω2, which corresponds to "unstable", the behavior controller 307 may select column 2, calculate behavior outputs Qi(t) corresponding to the captured sensory inputs Ii(t) from among the behavior outputs Qi(t) stored in column 2, and supply them to the controlled object 308.
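Steps S412-S414 may be sketched as below for the main branch in which column 1 is consulted; the nearest-neighbour lookup used to pick the stored behavior output is an assumption, since the lookup rule is not fixed above.

```python
def control_step(new_input, stable_column, last_output):
    """Main branch of steps S412-S414: the behavior output associated with the
    new sensory input is taken from column 1 (the "stable", smaller-variance
    column); if nothing matching is stored, the previous behavior output is
    repeated. The nearest-neighbour lookup is an illustrative assumption."""
    if not stable_column:
        return last_output
    # stable_column: list of (sensory_input, behavior_output) pairs from column 1
    nearest_pair = min(stable_column, key=lambda pair: abs(pair[0] - new_input))
    return nearest_pair[1]

# hypothetical usage with scalar sensory inputs
column_1 = [(0.20, 82.1), (0.24, 81.9), (0.31, 82.4)]
q_next = control_step(0.22, stable_column=column_1, last_output=82.0)
```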
The controlled object 308 behaves according to the provided behavior output Qi(t). The result of this behavior is provided to the behavior status evaluator 303 again. When a new sensory input Ii(t) is captured by the sensor 301, an attention class is selected using the conditional probability density function p(Ii(t)|Ωl) based on learning by Bayes' rule. The processes described above are then repeated (S415).
This concludes the explanation of how the agent learning apparatus 100 operates in the behavior control stage.
In this embodiment, since the conditional probability density function p(Ii(t)|Ωl) is calculated in the advance learning stage, the attention class Ωl corresponding to a new sensory input Ii(t) may be selected directly using statistical learning in the behavior control stage, without computing the mappings f and h.
Generally, the amount of information in the sensory inputs Ii(t) from the sensor 301 is enormous. Therefore, if the mappings f and h were calculated for all sensory inputs Ii(t), the amount of computation would far exceed the processing capacity of a typical computer. Thus, according to the invention, appropriate filtering of the sensory inputs Ii(t) with the attention classes Ωl may improve the efficiency of the learning.
In addition, selecting the attention class Ωl with the highest confidence p(Ωl(t)) corresponds to selecting the column that includes the behavior output Qi(t) with the highest reward for a given sensory input Ii(t).
Three learning processes are performed in this embodiment: 1) reinforcement learning in the behavior status evaluator 303 (in other words, generation of cluster models by rewards), 2) learning of the relationship between the attention classes Ωl and the sensory inputs Ii(t) using the hierarchical neural network, and 3) selection of the attention class corresponding to a new sensory input Ii(t) with Bayes' rule. Thus, the agent learning apparatus 100 according to the invention is characterized in that the supervised learning scheme and the statistical learning scheme are synthetically applied.
In the conventional supervised learning scheme, optimal control given by an operator is learned by the control system, but this is not practical as noted above. In the conventional reinforcement learning scheme, optimal control is acquired through a trial-and-error process of the control system, but this takes much processing time.
In contrast, the agent learning apparatus 100 according to the invention can select the attention class with the selective attention mechanism 304 and learn the important sensory inputs Ii(t) selectively, which reduces the processing time and eliminates the need for supervising information given by an operator. Furthermore, if the motion of the controlled object 308 has non-linear characteristics, learning with the reinforcement learning scheme alone takes much time because a complicated non-linear function approximation is required. On the other hand, since the agent learning apparatus 100 of the invention learns the sensory inputs according to their importance with the selective attention mechanism 304, the processing speed is improved. The agent learning apparatus 100 is also characterized in that feedback control is performed in the advance learning stage and feedforward control is performed in the behavior control stage.
Now referring to
Mounted on a helicopter 601 is a vision sensor 602, which captures visual information every 30-90 milliseconds and sends it to a computer 603 as sensory inputs Ii(t). The computer 603 is programmed to implement the agent learning apparatus 100 shown in
In this example, the number of attention classes Ωl is set to two. A total of 360 data sets for advance learning were used for processing in the selective attention mechanism 304. After the advance learning was over, it was confirmed whether the system could select the correct attention class Ωl when another 150 test data (new sensory inputs Ii(t)) were provided to the system.
In the advance learning stage, two types of rewards (positive rewards and negative rewards) are assigned to the behavior outputs Qi(t). The selective attention mechanism 304 distributes the behavior outputs Qi(t) to column 1 or column 2 according to their rewards. This process is represented by the following evaluation function.
If |Q̃i − Qi| ≦ δ then Qi1 = Qi (Positive) else Qi2 = Qi (Negative)
where Q1 and Q2 represent the groups of behavior outputs Qi(t) distributed to column 1 and column 2, respectively. In this case, the positive reward corresponds to column 1 and the negative reward corresponds to column 2. Q̃i denotes the behavior output Qi(t) that keeps the helicopter stable; Q̃i is the mean value of the probability distribution p(Qi) and is set to "82" in this example. δ is a threshold representing the tolerance of stability and is set to "1.0" in this example. The evaluation function shown above acts as reinforcement learning satisfying the relation σ(Q1)<σ(Q2) between the variances of the columns.
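The evaluation function of this example, with Q̃i = 82 and δ = 1.0, may be written directly as the following sketch; the function name is illustrative.

```python
def assign_column(q_i, q_stable=82.0, delta=1.0):
    """Evaluation function of the helicopter example: behavior outputs within
    delta of the stabilising output (82 in this example) receive the positive
    reward and go to column 1; all others go to column 2."""
    return 1 if abs(q_stable - q_i) <= delta else 2

print(assign_column(82.6), assign_column(85.0))   # -> 1 2
```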
These results prove that the agent learning apparatus 100 according to the invention can learn the predictive relationship between the sensory inputs Ii(t) and the two attention classes Ωl. The results also suggest that the discriminative power of the prediction between the sensory inputs Ii(t) and the two attention classes is weak while the probabilistic distribution for the statistical columns is in an early stage of the EM algorithm. As the number of iterations of the EM algorithm increases, the accuracy of the prediction improves. The discriminative power of the prediction may also be affected by the number of normal distributions (Gaussian functions) used in the EM algorithm. Although a single Gaussian function is used in the aforementioned embodiments, a mixture of Gaussian functions may be used in the EM algorithm to improve the discriminative power of the prediction.
In contrast, the agent learning apparatus 100 according to the invention acquires the sensory inputs Ii(t) needed for stabilizing the helicopter 601 not through a trial-and-error process but through learning according to the importance of the sensory inputs Ii(t) using the selective attention mechanism 304. Therefore, minimization of the variance of the behavior outputs Qi(t) may be attained very rapidly.
Although the vision sensor 602 is used in this example, the sensory inputs Ii(t) are not limited to visual information; other inputs such as auditory information or tactile information may also be used. In addition, the example has been described for two columns and two rewards, but three or more columns and rewards may be used. A single column would not accelerate the convergence of the learning because, with only one column, it takes much time until the normal distribution curve of the behavior outputs Qi(t) contained in the column becomes sharp and the variance becomes small. One feature of the invention is that the normal distribution curve of the behavior outputs Qi(t) is sharpened rapidly by generating a plurality of columns. The more columns are used, the more complicated and varied the obtainable behavior outputs become.
In the embodiments described above, a priori learning using data sets is performed. Such a priori learning serves to stabilize the controlled object 308 more rapidly. However, control of the controlled object 308 (for example, a helicopter 601) may also be started by applying the agent learning apparatus 100 without a priori learning. In this case, the behavior controller 307 supplies behavior outputs Qi(t) to the controlled object 308 in a random fashion, irrespective of the sensory inputs Ii(t) captured by the sensor, because no probabilistic model described above has been created during the short period after control is started. The behavior status evaluator 303 supplies rewards for the behavior results of the controlled object 308, and the selective attention mechanism distributes the behavior outputs Qi(t) to the columns in association with the sensory inputs Ii(t) according to the rewards. As the relationship between sensory inputs Ii(t) and behavior outputs Qi(t) accumulates in the columns according to the rewards, normal distributions for the behavior outputs stored in the columns may be computed with the EM algorithm, and the a priori probability p̄(Qil(t)|Ωl(t)) and the conditional probability density function p(Ii(t)|Ωl) may be computed according to the processes described above. These values are applied to Bayes' rule to calculate the confidence p(Ωl(t)) for each attention class. The behavior controller 307 computes the behavior output Qi(t) corresponding to a newly captured sensory input Ii(t) based on the column where the confidence p(Ωl(t)) is maximum, or where the behavior outputs having the best reward are stored, and supplies it to the controlled object. Again, the behavior status evaluator supplies rewards for the behavior results of the controlled object 308, and the computed behavior outputs Qi(t) are stored in one of the columns. Based on this, the a priori probability p̄(Qil(t)|Ωl(t)) and the conditional probability density function p(Ii(t)|Ωl) may be updated. These updated probabilities are then applied to Bayes' rule and a new behavior output Qi(t) is output. In this way, even without advance learning, the a priori probability p̄(Qil(t)|Ωl(t)) and the conditional probability density function p(Ii(t)|Ωl) are updated one after another. In this case, the instability of the controlled object is large at the beginning of the control, and the controlled object may be damaged by its own unexpected motion. Therefore, the range of the behavior outputs Qi(t) given to the controlled object 308 by the behavior controller 307 is preferably forcibly limited until a predetermined number of relationships between sensory inputs and behavior outputs have been gained (alternatively, until a predetermined period has elapsed).
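The operation without advance learning described above may be sketched as the following loop; the sensor, evaluator and actuator callbacks, the warm-up length and the output range are all placeholders, and the re-estimation of the probabilistic models and Bayes'-rule selection is only indicated by a comment.

```python
import random

def online_control_loop(capture_input, evaluate, apply_output, steps=1000, warmup=100):
    """Control without advance learning: randomly generated, range-limited
    behavior outputs during a warm-up period, storage into columns by reward,
    then reuse of outputs from the best-rewarded column."""
    columns = {1: [], 2: []}          # 1 = positive reward, 2 = negative reward
    for t in range(steps):
        i_t = capture_input()
        if t < warmup or not columns[1]:
            q_t = random.uniform(-1.0, 1.0)        # forcibly limited output range
        else:
            # reuse the stored output of the stable column whose sensory input
            # is closest to the newly captured one
            q_t = min(columns[1], key=lambda pair: abs(pair[0] - i_t))[1]
        reward = evaluate(apply_output(q_t))       # 1 = stable, 2 = unstable
        columns[reward].append((i_t, q_t))
        # here the a priori probabilities and p(I|Omega) would be re-estimated
        # from the updated columns and applied to Bayes' rule, as described above
    return columns
```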
Furthermore, well-known competitive learning or learning with a self-organizing network may be employed instead of the EM algorithm in step S405, and a well-known belief network or graphical model may be employed instead of Bayes' rule in step S411.
As described above, according to the invention, the variance of the behavior outputs Qi(t) may be minimized rapidly to stabilize the controlled object by computing the behavior outputs based on a column that is estimated to be stable.
Number | Date | Country | Kind |
---|---|---|---|
2001-028758 | Feb 2001 | JP | national |
2001-028759 | Feb 2001 | JP | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/JP02/00878 | 2/4/2002 | WO | 5/13/2005 |