Agent learning apparatus, method and program

Information

  • Patent Application
  • Publication Number
    20060155660
  • Date Filed
    February 04, 2002
  • Date Published
    July 13, 2006
Abstract
An agent learning apparatus comprises a sensor (301) for acquiring a sense input, an action controller (307) for creating an action output in response to the sense input and giving the action output to a controlled object, an action state evaluator (303) for evaluating the behavior of the controlled object, and a selective attention mechanism (304) for storing the action output and the corresponding sense input in one of a plurality of columns according to the evaluation, calculating a probability model from the action outputs stored in the columns, and outputting, as a learning result, the action output related to a newly given sense input in the column having the highest confidence obtained by applying the newly given sense input to the probability model. Through this learning, the selective attention mechanism (304) obtains a probabilistic relationship between the sense input and the columns. An action output is calculated on the basis of a column evaluated as stable. As a result, the dispersion of the action outputs is quickly minimized, and the controlled object can thereby be stabilized.
Description
TECHNICAL FIELD

The invention relates to an agent learning apparatus, method and program. More specifically, the invention relates to an agent learning apparatus, method and program for implementing rapid and highly adaptive control of non-linear or non-stationary targets, or of physical systems such as industrial robots, automobiles and airplanes, by means of a high-order cognitive control mechanism.


BACKGROUND ART

Examples of conventional learning schemes include a supervised learning scheme that minimizes the error between a model control path, given as a time-series representation by an operator, and a predicted path (Gomi, H. and Kawato, M., Neural Network Control for a Closed-Loop System Using Feedback-Error-Learning, Neural Networks, Vol. 6, pp. 933-946, 1993). Another example is a reinforcement learning scheme, in which the optimal path is acquired by iterating a trial-and-error process in a given environment for a control system without a model control path (Doya, K., Reinforcement Learning in Continuous Time and Space, Neural Computation, 2000).


However, since the environment surrounding a control system changes constantly in the real world, it is difficult for an operator to keep giving a model control path to the control system, and therefore such a supervised learning scheme cannot be applied. The latter learning scheme has the problem that it takes much time for the control system to acquire the optimal path by iterating a trial-and-error process. Thus, it is difficult to employ the aforementioned learning schemes for controlling an object (e.g. a helicopter) which must be controlled rapidly and precisely in response to the environment.


On the other hand, recent research on the human control mechanism shows that it focuses on the time-series “smoothness” of behavior outputs, determined by non-linear approximations of the control system based on sensory inputs, and on the symmetric nature of behavior outputs following a statistical normal distribution, and that it acquires a control path statistically and very rapidly by selecting the sensory inputs to be attended to so as to minimize the variance of the behavior outputs (Harris, C. M. and Wolpert, D. M., Signal-dependent noise determines motor planning, Nature, Vol. 394, 20 August 1998).


In the field of cognitive science, it is considered that human beings have a mechanism for realizing rapid and efficient control by consciously selecting necessary information out of massive amounts of sensory information. It has been suggested that this mechanism should be applied to engineering, but no concrete model for doing so has been proposed.


Therefore, it is an objective of the invention to provide an agent learning apparatus, method and program for acquiring an optimal control path rapidly.


DISCLOSURE OF INVENTION

According to the invention, a selective attention mechanism is devised which creates non-observable information (attention classes) by learning and associates sensory inputs with the attention classes. With this mechanism, an optimal control path that minimizes the variance of the behavior outputs may be acquired rapidly.


An agent learning apparatus according to the invention comprises a sensor for capturing external environmental information and converting it to sensory inputs, and a behavior controller for supplying behavior outputs to a controlled object based on results of learning performed on said sensory inputs. The apparatus further comprises a behavior status evaluator for evaluating the behavior of the controlled object caused by said behavior outputs. The apparatus further comprises a selective attention mechanism for storing said behavior outputs in one of a plurality of columns, in association with the corresponding sensory inputs, based on the evaluation, computing probabilistic models based on the behavior outputs stored in said columns, calculating a confidence for each column by applying newly given sensory inputs to said probabilistic models, and outputting, as said results of learning, behavior outputs associated with said newly given sensory inputs in the column having the largest confidence. The probabilistic model is the probabilistic relationship indicating to which column a sensory input belongs.


With such a configuration, the agent learning apparatus may be applied so as to start controlling an object without advance learning. In this case, the instability of the controlled object is large before the probabilistic models are computed, and the object may be damaged by its own unexpected motion. Therefore, the range of behavior outputs given to the object by the behavior controller is preferably forcibly limited for a predetermined period.


Instead of selecting the column having the largest confidence for a given sensory input, the column containing the behavior outputs with the largest evaluation by the behavior status evaluator may always be selected, and a behavior output associated with the newly given sensory inputs in that column may be output.


Computing the probabilistic models comprises representing the behavior outputs stored in the columns as normal distributions by using the Expectation Maximization algorithm, using said normal distributions to compute an a priori probability that a behavior output is contained in each column, and using said a priori probability to compute said probabilistic model by supervised learning with a neural network. The probabilistic model is the probabilistic relationship between any sensory input and each column. Specifically, the probabilistic model may be the conditional probability density function p(Ii(t)|Ωl).


The confidence may be calculated by applying the a priori probability and the probabilistic model to Bayes' rule. The confidence is the probability that a sensory input belongs to each attention class (column).


As described above, control of the object may be initiated without advance learning. However, it is preferable that data sets describing the relationship between sensory inputs and behavior outputs are prepared and the probabilistic models are computed in advance by performing advance learning with these data sets. After the probabilistic models are computed, the confidence is calculated using them for newly given sensory inputs; that is, the same probabilistic models computed in the advance learning stage continue to be used. The object may therefore be stabilized more rapidly. When performing advance learning, sensory inputs are converted into behavior outputs by a behavior output generator based on the data sets and supplied to the object.




BRIEF DESCRIPTION OF DRAWINGS


FIG. 1 shows a graph of an exemplary time-series data of behavior outputs;



FIG. 2 shows a histogram of the time-series data in FIG. 1;



FIG. 3 shows a functional block diagram of an agent learning apparatus in advance learning stage according to the invention;



FIG. 4 shows a functional block diagram of an agent learning apparatus in behavior control stage according to the invention;



FIG. 5 is a flowchart illustrating operation of the agent learning apparatus in advance learning stage;



FIG. 6 is an example of a normal distribution curved surface showing relationship between sensory inputs and behavior outputs, which are stored in the column corresponding to “stable” reward;



FIG. 7 is an example of a normal distribution curved surface showing relationship between sensory inputs and behavior outputs, which are stored in the column corresponding to “unstable” reward;



FIG. 8 shows an example of hierarchical neural network for learning the relationship between sensory inputs and attention class;



FIG. 9 is a flowchart illustrating operation of the agent learning apparatus in behavior control stage;



FIG. 10 shows configuration of helicopter control system according to the invention;



FIG. 11 shows learning result of relationship between visual sensory inputs and attention class in the system shown in FIG. 10; and



FIG. 12 shows a graph of the variance of the behavior outputs for the controlled helicopter over time when the control is executed in FIG. 6.




BEST MODE FOR CARRYING OUT THE INVENTION

First, a preliminary experiment is described, using the radio-controlled helicopter (hereinafter simply referred to as a “helicopter”) shown in FIG. 10, which will be described later.



FIG. 1 is a graph of time-series data of the control motor outputs for the helicopter, acquired every 30 milliseconds while the helicopter was operated to maintain stability. FIG. 2 is a histogram of that data. As shown in FIG. 2, the control outputs for stabilizing the helicopter (hereinafter referred to as “behavior outputs”) may be represented by a normal distribution curve.
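As a minimal illustration of this observation, the following Python sketch fits a normal distribution to a series of motor outputs. The sample values are synthetic placeholders rather than data from the actual experiment, and the mean of 82 is borrowed from the worked example given later in this description.

```python
import numpy as np

# Synthetic stand-in for the 30 ms motor-output samples of FIG. 1.
rng = np.random.default_rng(0)
behavior_outputs = rng.normal(loc=82.0, scale=1.5, size=2000)

# Fitting a normal distribution to the series reduces to estimating
# its mean and variance; the histogram corresponds to FIG. 2.
mu = behavior_outputs.mean()
var = behavior_outputs.var()
counts, bin_edges = np.histogram(behavior_outputs, bins=40)

print(f"mean = {mu:.2f}, variance = {var:.2f}")
```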


To realize stable control of various controlled objects, attention should be paid to the symmetric nature of such a normal distribution of the behavior outputs. This is because the most frequent behavior outputs in the normal distribution may be expected to be the ones used most heavily for keeping the controlled object stable. Therefore, by exploiting the symmetric nature of the normal distribution, the behavior outputs to be supplied to the controlled object under an ever-changing environment may be statistically predicted.


When behavior outputs supplied to the controlled object are selected based on sensory inputs captured by a sensor or the like, the number of selectable behavior outputs may be unlimited. However, if learning is performed such that the variance of the behavior results of the controlled object caused by the supplied behavior outputs (hereinafter simply referred to as “behavior results”) decreases over time, the range of behavior outputs selectable from the captured sensory inputs becomes limited, and consequently the controlled object will be stabilized. In other words, by minimizing the variance of the normal distribution of behavior outputs, stable control with the smallest width or rate of fluctuation is realized.


The agent learning apparatus according to the invention is characterized in that a statistical learning scheme based on this preliminary experiment and a conventional supervised learning scheme are employed together. Preferred embodiments of the invention will now be described with reference to FIGS. 1 to 12.


The agent learning apparatus 100 according to the invention first performs learning with prepared data sets, for example. This process is referred to herein as the “advance learning stage”. FIG. 3 shows a functional block diagram of the agent learning apparatus 100 according to one embodiment of the invention in the advance learning stage. The agent learning apparatus 100 is illustrated in the area enclosed with dotted lines in FIG. 3 and includes one or more sensors 301, a behavior output generator 302, a behavior status evaluator 303 and a selective attention mechanism 304. The selective attention mechanism 304 includes a plurality of columns 1, 2, 3, . . . , m created according to the rewards produced by the behavior status evaluator 303, and an attention class selector 306.


For the sensory inputs captured by the sensor 301, the behavior output generator 302 generates behavior outputs based on the data sets and supplies them to a controlled object 308. The behavior status evaluator 303 evaluates the behavior results of the controlled object 308 and generates a reward for each behavior output. The selective attention mechanism 304 distributes the behavior outputs to the columns according to each reward to create the probabilistic models described later. Creating the probabilistic models in advance enables highly accurate control.


After the advance learning stage is completed, the agent learning apparatus 100 performs a process which is referred to as “behavior control stage” herein.



FIG. 4 shows a functional block diagram of the agent learning apparatus 100 according to one embodiment of the invention in the behavior control stage. In the behavior control stage, new sensory inputs captured by the sensor 301 are provided directly to the attention class selector 306. The attention class selector 306 processes the sensory inputs using the probabilistic models computed in advance. A behavior controller 307 then determines behavior outputs for stabilizing the controlled object 308 and supplies them to the controlled object 308. An example of the controlled object 308 is a helicopter, as noted above.


It should be noted that the advance learning stage is not indispensable. Operation of the agent learning apparatus without advance learning will be described later.


All or part of the behavior output generator 302, the behavior status evaluator 303, the selective attention mechanism 304 and the behavior controller 307 may be implemented by, for example, executing on a general-purpose computer a program configured to realize their functionality.


The features of each functional block and the operation of the agent learning apparatus 100 in the advance learning stage are described below with reference to FIG. 3 and the flowchart in FIG. 5.


External environmental information is captured by the sensor 301 at given intervals and converted into signals as sensory inputs Ii(t) (i=1, 2, . . . , m), which are supplied to the behavior output generator 302. The behavior output generator 302 generates behavior outputs Qi(t) corresponding to the supplied sensory inputs Ii(t) and supplies them to the behavior status evaluator 303 and the controlled object 308. The transformation between the sensory inputs Ii(t) and the behavior outputs Qi(t) is represented by the following mapping f.

f: Ii(t) → Qi(t)  (1)

The mapping f is, for example, a non-linear approximation transformation using well-known Fourier series or the like.


In the advance learning according to this embodiment, the mapping f corresponds to preparing random data sets which include pairs of sensory inputs Ii(t) and behavior outputs Qi(t). In other words, the behavior output generator 302 generates behavior outputs Qi(t) one by one, corresponding to each sensory input Ii(t), based on these data sets (step S401 in FIG. 5).
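A possible in-memory representation of such random data sets is sketched below; the dimensions are illustrative assumptions, and the number of samples (360) is taken from the worked example given later.

```python
import numpy as np

rng = np.random.default_rng(1)

NUM_SAMPLES = 360      # number of advance-learning pairs (as in the later example)
INPUT_DIM = 8          # assumed dimension of a sensory input I_i(t)

# Each record pairs a sensory input I_i(t) with a behavior output Q_i(t);
# together the pairs tabulate the mapping f used during advance learning.
data_sets = [
    {"I": rng.normal(size=INPUT_DIM),                  # sensory input I_i(t)
     "Q": float(rng.normal(loc=82.0, scale=3.0))}      # behavior output Q_i(t)
    for _ in range(NUM_SAMPLES)
]
```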


The generated behavior outputs Qi(t) are supplied to the behavior status evaluator 303 and the controlled object 308. The controlled object 308 moves in response to the supplied behavior outputs Qi(t). The result of this behavior is supplied to the behavior status evaluator 303 (step S402 in FIG. 5).


The behavior status evaluator 303 then evaluates the result of this behavior (for example, whether the behavior of the controlled object becomes stable or not) with a predetermined evaluation function and generates a reward for each behavior output Qi(t) (step S403 in FIG. 5). This process in the behavior status evaluator 303 can be regarded as reinforcement learning.


The evaluation function herein is a function that yields reward “1” if the controlled object becomes stable under the supplied behavior output Qi(t), and yields reward “2” otherwise. The types of rewards may be selected in consideration of the behavior characteristics of the controlled object 308 or the required control accuracy. With the helicopter noted above, reward “1” or reward “2” is yielded according to whether the helicopter is stable or not, which may be judged, for example, from its pitching angle detected with a gyro-sensor on the helicopter.
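A minimal sketch of such an evaluation function is shown below, assuming stability is judged from the gyro-detected pitching angle; the tolerance value is an illustrative assumption, not one given in the text.

```python
def evaluate_behavior(pitch_angle_deg: float, tolerance_deg: float = 5.0) -> int:
    """Yield reward 1 if the controlled object is judged stable from its
    pitching angle, and reward 2 otherwise (step S403)."""
    return 1 if abs(pitch_angle_deg) <= tolerance_deg else 2
```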


The evaluation function is used to minimize the variance σ of the behavior outputs Qi(t). In other words, by using the evaluation function, sensory inputs Ii(t) unnecessary for stable control may be removed and the necessary sensory inputs Ii(t) are reinforced. Finally, reinforcement learning satisfying σ(Q1)<σ(Q2) is attained, where Q1 is the group of behavior outputs Qi(t) given reward “1” and Q2 is the group of behavior outputs Qi(t) given reward “2”.

On receiving the rewards from the behavior status evaluator 303, the selective attention mechanism 304 creates a plurality of columns 1, 2, 3, . . . , m according to the types of rewards. The selective attention mechanism 304 then distributes the behavior outputs Qi(t) to the columns (step S404 in FIG. 5). In each column, behavior outputs Qi(t) are stored by reward, associated with the sensory inputs Ii(t) that caused those behavior outputs. More specifically, when the behavior status evaluator 303 generates either reward “1” or reward “2”, for example, the selective attention mechanism 304 creates column 1 and column 2. The behavior outputs Qi(t) given reward “1” are stored in column 1 (stable) and the behavior outputs Qi(t) given reward “2” are stored in column 2 (unstable). Thus the columns 1, 2, 3, . . . , m correspond to cluster models of the behavior outputs Qi(t) sorted by reward.
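The distribution of behavior outputs into reward-keyed columns (step S404) can be sketched as follows; the data structure is an assumption for illustration.

```python
from collections import defaultdict

# columns[reward] holds (sensory_input, behavior_output) pairs, so that
# column 1 accumulates the "stable" outputs and column 2 the "unstable" ones.
columns = defaultdict(list)

def store_in_column(reward: int, sensory_input, behavior_output) -> None:
    """Store a behavior output Q_i(t) in the column selected by its reward,
    together with the sensory input I_i(t) that caused it."""
    columns[reward].append((sensory_input, behavior_output))
```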


The selective attention mechanism 304 performs the expectation maximization (EM) algorithm and supervised learning with a neural network, both described later, to calculate the conditional probabilistic distribution (that is, the probabilistic model) p(Ii(t)|Ωl) for the sensory inputs Ii(t) (steps S405-S408 in FIG. 5). Ωl (l=1, 2, 3, . . . , n) is a parameter called an “attention class” and corresponds one-to-one with a column. It should be noted that such attention classes Ωl are created under the assumption that a true probabilistic distribution p(Ii(t)|Ωl) exists.


The attention class Ωl is used to select noticeable sensory inputs Ii(t) from among the massive sensory inputs Ii(t). More specifically, an attention class Ωl is a parameter used for modeling the behavior outputs Qi(t) stored in each column using the probability density function of the normal distribution of the behavior outputs Qi(t). As many attention classes Ωl are created as there are columns storing these behavior outputs Qi(t). Calculating the attention class Ωl corresponding to the behavior outputs Qi(t) stored in each column is represented by the following mapping h.

h: Qi(t) → Ωl(t)  (2)


The processes in steps S405-S408 in FIG. 5 will next be described in detail. It should be noted that each process in steps S405-S408 is performed on every column.


First, the Expectation Maximization algorithm (EM algorithm) in step S405 is described.


The EM algorithm is an iterative algorithm for estimating the parameter θ of maximum likelihood when the observed data is viewed as incomplete data. As noted above, since the behavior outputs Qi(t) stored in each column are considered to follow a normal distribution, the parameter θ may be represented as θ(μl, Σl) with mean μl and covariance Σl. The EM algorithm is initiated with appropriate initial values of θ(μl, Σl). The parameter θ(μl, Σl) is then updated iteratively by alternating the Expectation (E) step and the Maximization (M) step.


In the E step, the conditional expected value φ(θ|θ(k)) is calculated by the following equation:

$$\varphi(\theta\mid\theta^{(k)}) = \sum_{i}\sum_{l} p\bigl(Q_i^l(t)\mid\Omega_l;\theta^{(k)}\bigr)\,\log p\bigl(Q_i^l(t),\Omega_l;\theta^{(k)}\bigr) \qquad (3)$$


Then, in the M step, the parameters μl and Σl maximizing φ(θ|θ(k)) are calculated by the following equation and set as the new estimate θ(k+1).

$$\theta^{(k+1)} = \arg\max_{\theta}\,\varphi(\theta\mid\theta^{(k)}) \qquad (4)$$


By partially differentiating the calculated φ(θ|θ(k)) with respect to θ and setting the result equal to zero, the parameters μl and Σl may finally be calculated. A more detailed explanation is omitted because the EM algorithm is well known in the art.


Thus, the behavior outputs Qi(t) stored in each column may be represented by a normal distribution (step S405 in FIG. 5). Calculating μl and Σl for the behavior outputs Qi(t) corresponds to calculating the a posteriori probability for the attention class Ωl.
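A sketch of this step is given below for the single-Gaussian case, where the EM estimate reduces to the sample mean and covariance of each column; the second helper mirrors the density of equation (5) with J = 1 and α1 = 1. Function names are illustrative, not taken from the text.

```python
import numpy as np

def fit_column_gaussian(behavior_outputs):
    """Fit one normal distribution to the behavior outputs Q_i(t) of a column
    (step S405).  With a single component the EM iteration converges to the
    sample mean and covariance; a mixture (J > 1) would need explicit
    E-step / M-step updates."""
    Q = np.asarray(behavior_outputs, dtype=float)
    if Q.ndim == 1:                        # scalar outputs -> one column per sample
        Q = Q[:, None]
    mu = Q.mean(axis=0)
    sigma = np.cov(Q, rowvar=False).reshape(Q.shape[1], Q.shape[1])
    sigma += 1e-6 * np.eye(Q.shape[1])     # regularize for invertibility
    return mu, sigma

def prior_probability(q, mu, sigma):
    """A priori probability of equation (5) for J = 1: the Gaussian density
    of a behavior output under its column's distribution (step S406)."""
    q = np.atleast_1d(np.asarray(q, dtype=float))
    d = q - mu
    n = mu.shape[0]
    norm = np.sqrt((2.0 * np.pi) ** n * np.linalg.det(sigma))
    return float(np.exp(-0.5 * d @ np.linalg.inv(sigma) @ d) / norm)
```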



FIGS. 6 and 7 show examples of the normal distributions of the behavior outputs Qi(t) included in column 1 (stable) and column 2 (unstable), respectively. As is apparent from the figures, the normal distribution of column 1 has a sharper peak than that of column 2, and thus the variance of its behavior outputs Qi(t) is smaller (σ(Q1)<σ(Q2)).


The a priori probability p̄(Qil(t)|Ωl(t)) of the attention class Ωl is calculated by the following equation with the calculated parameters μl and Σl (step S406 in FIG. 5):

$$\bar{p}\bigl(Q_i^l(t)\mid\Omega_l(t)\bigr) = \sum_{j=1}^{J} \frac{\alpha_j^{Q_i^l(t)}}{(2\pi)^{N/2}\,\bigl|\Sigma_j^{Q_i^l(t)}\bigr|^{1/2}} \exp\!\Bigl(-\tfrac{1}{2}\bigl(Q_i^l(t)-\mu_j\bigr)^{T}\bigl(\Sigma_j^{Q_i^l(t)}\bigr)^{-1}\bigl(Q_i^l(t)-\mu_j\bigr)\Bigr) \qquad (5)$$

where N is the dimension of the behavior outputs Qi(t) and J is the number of normal distribution components.


The supervised learning with a neural network is described below. In this learning, the conditional probability density function p(Ii(t)|Ωl) is calculated using the attention classes Ωl, which have been obtained as a posteriori probabilities, as the supervising signal (step S407 in FIG. 5).



FIG. 8 shows an exemplary structure of the hierarchical neural network used for the supervised learning. This hierarchical neural network is composed of three layers of nodes: the input layer 501 corresponds to the sensory inputs Ii(t), the middle layer 502 to the behavior outputs Qi(t), and the output layer 503 to the attention classes Ωl. Although only three nodes are illustrated in the input layer 501 for simplicity, there are actually as many nodes as there are sensory inputs Ii(t) in the data sets. Likewise, the middle layer 502 contains as many behavior output nodes Qi(t) as there are nodes in the input layer 501, and the nodes in the middle layer 502 correspond one-to-one with the nodes in the input layer. The output layer 503 contains as many nodes as there are attention classes Ωl.


λ in FIG. 8 denotes the synaptic weight matrices of the hierarchical neural network. Since the probability that each behavior output Qi(t) belongs to each attention class Ωl is computed by the EM algorithm, and the behavior outputs Qi(t) are stored in the columns in one-to-one correspondence with the sensory inputs Ii(t), the probabilistic relationship (that is, λ in FIG. 8) between the sensory inputs Ii(t) and the attention classes Ωl is determined by repeating the supervised learning with the attention classes Ωl as teacher signals. This probabilistic relationship is the conditional probability density function p(Ii(t)|Ωl). It should be noted that the attention class Ωl may also be calculated from the sensory inputs Ii(t) through the composite mapping h·f. A more detailed explanation is omitted because such hierarchical neural networks are well known in the art.


With such supervised learning using the neural network, the conditional probability density function p(Ii(t)|Ωl), which is the probabilistic relationship between the sensory inputs Ii(t) and the attention classes Ωl, may be computed.
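The following is a minimal sketch of this supervised learning step, assuming a generic three-layer network trained on the EM-derived attention-class posteriors as soft teacher signals. Unlike FIG. 8, the middle layer here is an ordinary hidden layer rather than one tied node-for-node to the behavior outputs, and the layer sizes, learning rate and epoch count are illustrative assumptions.

```python
import numpy as np

def train_attention_network(I, targets, hidden=16, lr=0.1, epochs=500, seed=0):
    """Learn the probabilistic relationship between sensory inputs I_i(t)
    and attention classes Ω_l (step S407).  `I` has shape (num_samples,
    input_dim); `targets` holds one row of attention-class probabilities
    per sample, acting as the teacher signal."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.1, size=(I.shape[1], hidden))
    W2 = rng.normal(scale=0.1, size=(hidden, targets.shape[1]))
    for _ in range(epochs):
        h = np.tanh(I @ W1)                                   # middle layer
        z = h @ W2
        z -= z.max(axis=1, keepdims=True)                     # stable softmax
        p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)  # class probabilities
        grad_z = (p - targets) / len(I)                       # cross-entropy gradient
        grad_W2 = h.T @ grad_z
        grad_W1 = I.T @ ((grad_z @ W2.T) * (1.0 - h ** 2))
        W1 -= lr * grad_W1
        W2 -= lr * grad_W2
    return W1, W2

def attention_probabilities(I, W1, W2):
    """Evaluate the trained network: probabilities over attention classes
    for new sensory inputs, used in place of the mapping h·f."""
    z = np.tanh(I @ W1) @ W2
    z -= z.max(axis=1, keepdims=True)
    return np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
```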


As noted above, the learning within the selective attention mechanism 304 in the advance learning stage proceeds in a feedback fashion. Once the conditional probability density function p(Ii(t)|Ωl) has been calculated, the probability of which attention class Ωl a new sensory input Ii(t) belongs to may be determined without calculating the mapping h·f each time for that sensory input.


The processes in steps S401 to S407 are performed on every pair of sensory input Ii(t) and behavior output Qi(t) in the given data sets (step S408 in FIG. 5). During the advance learning stage, the conditional probabilistic distribution p(Ii(t)|Ωl) continues to be updated in response to the given behavior outputs Qi(t).


This concludes the explanation of how the agent learning apparatus 100 operates in the advance learning stage.


After the advance learning stage using the data sets is completed, the agent learning apparatus 100 starts to control the controlled object 308 based on the established learning results. How the agent learning apparatus 100 operates in the behavior control stage is described below with reference to FIGS. 4 and 9.


In the behavior control stage, the a priori probability p̄(Qil(t)|Ωl(t)) for each column and the conditional probabilistic distribution p(Ii(t)|Ωl), both of which have been calculated in the advance learning stage, are used. New sensory inputs Ii(t) captured by the sensor 301 are provided to the attention class selector 306 in the selective attention mechanism 304 (step S410 in FIG. 9). Then, using the a priori probability p̄(Qil(t)|Ωl(t)) and the conditional probabilistic distribution p(Ii(t)|Ωl), the confidence p(Ωl(t)) for each attention class Ωl is calculated by the following Bayes' rule (step S411 in FIG. 9):

$$p(\Omega_l(t)) = \frac{\bar{p}\bigl(Q_i^l(t)\mid\Omega_l(t)\bigr)\,p\bigl(I_i(t)\mid\Omega_l(t)\bigr)}{\sum_{k}\bar{p}\bigl(Q_i^k(t)\mid\Omega_k(t)\bigr)\,p\bigl(I_i(t)\mid\Omega_k(t)\bigr)} \qquad (6)$$


The confidence p(Ωl(t)) is the probability that a sensory input Ii(t) belongs to each attention class Ωl(t). Calculating this probability with Bayes' rule means that one attention class may be identified selectively by increasing its confidence p(Ωl(t)) through the learning of the Bayes weights. In other words, with the selective attention mechanism 304, the attention class Ωl, a hidden control parameter, may be identified directly from the observable sensory input Ii(t).
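A direct implementation of equation (6) can be sketched as follows; the argument names are illustrative, and the per-class quantities are assumed to come from the advance learning stage described above.

```python
import numpy as np

def confidence(column_priors, input_likelihoods):
    """Equation (6): confidence p(Ω_l(t)) for each attention class, given
    the per-column a priori probabilities and the learned conditional
    densities p(I_i(t)|Ω_l(t)) evaluated at the current sensory input."""
    joint = np.asarray(column_priors, dtype=float) * np.asarray(input_likelihoods, dtype=float)
    return joint / joint.sum()

def select_attention_class(column_priors, input_likelihoods) -> int:
    """Pick the attention class with the largest confidence (step S412)."""
    return int(np.argmax(confidence(column_priors, input_likelihoods)))
```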


The attention class selector 306 determines that the attention class Ωl with the highest confidence p(Ωl(t)) is the attention class corresponding to the new sensory input Ii(t). The behavior controller 307 is informed of the determined attention class Ωl (step S412 in FIG. 9).


When the informed attention class is Ω1, corresponding to the “stable” column, the behavior controller 307 calculates a behavior output Qi(t) corresponding to the captured sensory input Ii(t) based on the behavior outputs Qi(t) stored in column 1 (step S413 in FIG. 9), and then provides it to the controlled object 308 (step S414). It should be noted that these behavior outputs Qi(t) may be calculated from the probabilistic distribution obtained by the EM algorithm rather than from the behavior outputs Qi(t) themselves that were given as data sets in the advance learning stage.


When the informed attention class is Ω2, corresponding to the “unstable” column, the behavior controller 307 selects not column 2 but column 1, which has the smaller variance, calculates a behavior output Qi(t) corresponding to the captured sensory input Ii(t) based on the behavior outputs Qi(t) stored in column 1, and then provides it to the controlled object 308 (step S414). If no behavior output Qi(t) associated with the sensory input Ii(t) is stored in the column, the previous behavior output Qi(t) is selected and provided to the controlled object 308. By repeating this process, the relation between the column variances, σ(Q1)<σ(Q2), is achieved; that is, the variance of the behavior outputs in column 1 rapidly becomes smaller and the stability of the controlled object 308 is attained.
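One behavior-control-stage iteration under this strategy can be sketched as below; `select_class` and `predict_output` stand for the confidence-based selection and the column-based output calculation described above, and are hypothetical hooks rather than names used in the text.

```python
def control_step(sensory_input, columns, previous_output, select_class, predict_output):
    """Steps S410-S414: select the attention class for a new sensory input,
    then compute a behavior output from column 1 (the smaller-variance
    column) whichever class was selected, falling back to the previous
    output when nothing is stored for this input."""
    omega = select_class(sensory_input)              # Ω1 ("stable") or Ω2 ("unstable")
    output = predict_output(columns[1], sensory_input)
    if output is None:                               # no associated output stored
        output = previous_output                     # reuse the previous behavior output
    return omega, output
```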


Alternatively, when the supplied attention class Ωl is Ω2, which corresponds to “unstable”, the behavior controller 307 may select column 2, calculate behavior outputs Qi(t) of smaller variance corresponding to the captured sensory inputs Ii(t) from among the behavior outputs Qi(t) stored in column 2, and supply them to the controlled object 308.


The controlled object 308 exhibits behavior according to the provided behavior output Qi(t). The result of this behavior is provided to the behavior status evaluator 303 again. When a new sensory input Ii(t) is captured by the sensor 301, an attention class is again selected using the conditional probability density function p(Ii(t)|Ωl) based on Bayes' rule, and the processes described above are repeated (step S415).


This concludes the explanation of how the agent learning apparatus 100 operates in the behavior control stage.


In this embodiment, since the conditional probability density function p(Ii(t)|Ωl) is calculated in the advance learning stage, the attention class Ωl corresponding to a new sensory input Ii(t) may be selected directly by statistical learning in the behavior control stage, without computing the mappings f and h.


Generally, the amount of information in the sensory inputs Ii(t) from the sensor 301 is enormous. Therefore, if the mappings f and h were calculated for all sensory inputs Ii(t), the computational load would far exceed the processing capacity of a typical computer. Thus, according to the invention, appropriate filtering of the sensory inputs Ii(t) with the attention classes Ωl may improve the efficiency of the learning.


In addition, selecting the attention class Ωl with the highest confidence p(Ωl(t)) corresponds to selecting the column which includes the behavior outputs Qi(t) with the highest reward for a sensory input Ii(t).


Three learning processes are performed in this embodiment: 1) reinforcement learning in the behavior status evaluator 303 (in other words, generation of cluster models by rewards), 2) learning of the relationship between the attention classes Ωl and the sensory inputs Ii(t) using the hierarchical neural network, and 3) selection of the attention class corresponding to a new sensory input Ii(t) with Bayes' rule. Thus, the agent learning apparatus 100 according to the invention is characterized in that a supervised learning scheme and a statistical learning scheme are applied together.


In the conventional supervised learning scheme, optimal control given by an operator is learned by the control system, but this is not practical as noted above. In the conventional reinforcement learning scheme, optimal control is acquired through a trial-and-error process of the control system, but this takes much processing time.


In contrast, the agent learning apparatus 100 according to the invention can select the attention class with the selective attention mechanism 304 and learn the important sensory inputs Ii(t) selectively, which reduces the processing time and eliminates the need for supervising information given by an operator. Furthermore, if the motion of the controlled object 308 has non-linear characteristics, learning with the reinforcement learning scheme alone takes much time because complicated non-linear function approximation is required. Since the agent learning apparatus 100 of the invention learns the sensory inputs according to their importance with the selective attention mechanism 304, the processing speed is improved. The agent learning apparatus 100 is also characterized in that feedback control is performed in the advance learning stage and feed-forward control is performed in the behavior control stage.


Referring now to FIG. 10, one example of the invention is described in detail. FIG. 10 shows a schematic view of a control system for a radio-controlled helicopter 601 to which an agent learning apparatus 100 of the invention is applied.


Mounted on the helicopter 601 is a vision sensor 602, which captures visual information every 30-90 milliseconds and sends it to a computer 603 as sensory inputs Ii(t). The computer 603 is programmed to implement the agent learning apparatus 100 shown in FIG. 3 or FIG. 4 and generates behavior outputs Qi(t) for the sensory inputs Ii(t) according to the invention. The behavior outputs Qi(t) are sent to a motor control unit 605 mounted on the helicopter 601 via a radio transmitter 604, rotating the rotor of the helicopter.


In this example, the number of attention classes Ωl is set to two. A total of 360 data sets were used for the advance learning processing in the selective attention mechanism 304. After the advance learning was over, it was confirmed whether the system could select the correct attention class Ωl when another 150 test data items (new sensory inputs Ii(t)) were provided to the system.


In the advance learning stage, two types of rewards (positive rewards and negative rewards) are assigned to the behavior outputs Qi(t). The selective attention mechanism 304 distributes the behavior outputs Qi(t) to column 1 or column 2 according to their rewards. This process is represented by the following evaluation function.

If |Q̃i − Qi| ≤ δ then Q1 ← Qi (positive) else Q2 ← Qi (negative)

where Q1 and Q2 represent the groups of behavior outputs Qi(t) distributed to column 1 and column 2, respectively. In this case, the positive reward corresponds to column 1 and the negative reward corresponds to column 2. Q̃i denotes the behavior output Qi(t) that keeps the helicopter stable; it is the mean value of the probabilistic distribution p(Qi) and is set to “82” in this example. δ is a threshold representing the tolerance of stability and is set to “1.0” in this example. The evaluation function shown above acts as reinforcement learning satisfying the relation between the column variances, σ(Q1)<σ(Q2).
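With the concrete values of the example, this evaluation function reduces to the small sketch below; the constants are the ones stated in the text (Q̃ = 82, δ = 1.0), while the function name is illustrative.

```python
Q_TILDE = 82.0   # behavior output that keeps the helicopter stable (mean of p(Q_i))
DELTA = 1.0      # tolerance of stability used in the example

def assign_column(q: float) -> int:
    """Return 1 (positive reward, column 1) if |Q_tilde - Q_i| <= delta,
    otherwise 2 (negative reward, column 2)."""
    return 1 if abs(Q_TILDE - q) <= DELTA else 2
```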



FIGS. 11A to 11C show the relationship between the sensory inputs Ii(t) and the attention classes Ωl acquired from the experiment using the system in FIG. 10. It should be noted that the actual attention classes could be calculated from the data sets. FIG. 11A is a diagram of the correct attention classes Ωl corresponding to the sensory inputs Ii(t). FIG. 11B is a diagram of the experimental result when the number of iterations of the EM algorithm is small (early stage). FIG. 11C is a diagram of the experimental result when the number of iterations is sufficient (late stage). The solid lines in each figure indicate the time points at which the selected attention class Ωl shifts from one to the other. In other words, during time steps where no solid line appears, the same attention class Ωl, corresponding to either column 1 or column 2, keeps being selected. It is apparent that the diagram of the late stage (FIG. 11C) is more similar to the diagram of the actual attention classes (FIG. 11A) than that of the early stage (FIG. 11B).


These results prove that the agent learning apparatus 100 according to the invention can learn the predictive relationship between the sensory inputs Ii(t) and the two attention classes Ωl. In other words, the results suggest that the discriminative power of the prediction between the sensory inputs Ii(t) and the two attention classes is weak while the probabilistic distribution of the statistical columns is in the early stage of the EM algorithm. As the number of iterations of the EM algorithm increases, the accuracy of the prediction improves. The discriminative power of the prediction may also be affected by the number of normal distributions (Gaussian functions) used in the EM algorithm. Although a single Gaussian function is used in the aforementioned embodiments, a mixture of Gaussian functions may be used in the EM algorithm to improve the discriminative power of the prediction.



FIG. 12 shows a graph of the minimum variance value of the behavior outputs Qi(t) over time when the helicopter 601 in the example is controlled. A dashed line represents the result when the helicopter 601 is controlled by conventional feedback control, while a solid line represents the result when the helicopter 601 is controlled by the agent learning apparatus 100 according to the invention. The conventional method has no learning process performed by the selective attention mechanism 304. Since it must learn through trial and error which visual information (that is, which sensory inputs Ii(t)) captured by the helicopter 601 is needed to stabilize it, it takes much time until the variance becomes small, in other words, until the helicopter 601 is stabilized.


In contrast, the agent learning apparatus 100 according to the invention acquires the sensory inputs Ii(t) needed for stabilizing the helicopter 601 not through a trial-and-error process but by learning according to the importance of the sensory inputs Ii(t) using the selective attention mechanism 304. Therefore, the minimization of the variance of the behavior outputs Qi(t) may be attained very rapidly.


Although the vision sensor 602 is used in this example, the sensory inputs Ii(t) are not limited to visual information; other inputs such as auditory or tactile information may also be used. In addition, the example has been described for two columns and two rewards, but three or more columns and rewards may be used. A single column would not accelerate the convergence of the learning because, with only one column, it would take much time until the normal distribution curve of the behavior outputs Qi(t) contained in the column sharpens and the variance becomes small. One feature of the invention is that the normal distribution curve of the behavior outputs Qi(t) is sharpened rapidly by generating a plurality of columns. The more columns are used, the more complicated and varied the behavior outputs that may be obtained.


In the embodiments described above, advance learning using data sets is performed. Such advance learning serves to stabilize the controlled object 308 more rapidly. However, control of the controlled object 308 (for example, a helicopter 601) may also be started by applying the agent learning apparatus 100 without advance learning. In this case, the behavior controller 307 supplies behavior outputs Qi(t) to the controlled object 308 in a random fashion, irrespective of the sensory inputs Ii(t) captured by the sensor, because no probabilistic model has yet been created during the short period after control is started. The behavior status evaluator 303 gives rewards to the behavior results of the controlled object 308, and the selective attention mechanism distributes the behavior outputs Qi(t) to the columns, in association with the sensory inputs Ii(t), according to those rewards.

Once relationships between sensory inputs Ii(t) and behavior outputs Qi(t) have been stored in the columns according to the rewards, normal distributions for the behavior outputs stored in the columns may be computed with the EM algorithm, and the a priori probability p̄(Qil(t)|Ωl(t)) and the conditional probability density function p(Ii(t)|Ωl) may be computed according to the processes described above. These values are applied to Bayes' rule to calculate the confidence p(Ωl(t)) for each attention class. The behavior controller 307 computes a behavior output Qi(t) corresponding to a newly captured sensory input Ii(t) based on the column where the confidence p(Ωl(t)) is maximum, or where the behavior outputs with the best reward are stored, and supplies it to the controlled object. The behavior status evaluator again gives rewards to the behavior results of the controlled object 308, and the computed behavior outputs Qi(t) are stored in one of the columns. Based on this, the a priori probability p̄(Qil(t)|Ωl(t)) and the conditional probability density function p(Ii(t)|Ωl) may be updated, applied to Bayes' rule again, and a new behavior output Qi(t) output. In this way, without advance learning, the a priori probability p̄(Qil(t)|Ωl(t)) and the conditional probability density function p(Ii(t)|Ωl) are updated one after another.

In this case, the instability of the controlled object is large at the beginning of control, and the controlled object may be damaged by its own unexpected motion. Therefore, the range of the behavior outputs Qi(t) given to the controlled object 308 by the behavior controller 307 is preferably forcibly limited until a predetermined number of relationships between sensory inputs and behavior outputs have been gained (or, alternatively, until a predetermined period has elapsed).
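A rough sketch of such operation without advance learning follows. Here `read_sensor`, `actuate`, `evaluate` and `model` are hypothetical hooks onto the components described above, and the output bounds and warm-up length are illustrative assumptions rather than values from the text.

```python
import numpy as np

def clipped_random_output(rng, low=75.0, high=90.0) -> float:
    """Random behavior output with a forcibly limited range, used while
    no probabilistic model exists yet."""
    return float(rng.uniform(low, high))

def online_control(read_sensor, actuate, evaluate, model,
                   warmup_steps=200, total_steps=10_000, seed=0):
    """Control without advance learning: random (range-limited) outputs at
    first, then outputs computed from the continually updated columns,
    priors and conditional densities."""
    rng = np.random.default_rng(seed)
    previous_q = clipped_random_output(rng)
    for step in range(total_steps):
        sensory_input = read_sensor()
        if step < warmup_steps:
            q = clipped_random_output(rng)                    # limited range at start
        else:
            q = model.predict(sensory_input, default=previous_q)
        actuate(q)                                            # drive the controlled object
        reward = evaluate()                                   # reward from the behavior result
        model.store(sensory_input, q, reward)                 # update the priors and p(I|Ω)
        previous_q = q
```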


Furthermore, well-known competitive learning or learning with a self-organizing network may be employed instead of the EM algorithm in step S405. A well-known belief network or graphical model may be employed instead of Bayes' rule in step S411.


INDUSTRIAL APPLICABILITY

As described above, according to the invention, the variance of the behavior outputs Qi(t) may be minimized rapidly to stabilize a controlled object by computing the behavior output based on a column estimated to be stable.

Claims
  • 1. An agent learning apparatus (100) for performing optimal control for a controlled object, comprising: a sensor (301) for capturing external environmental information for conversion to sensory inputs; a behavior controller (302,307) for supplying behavior outputs to said controlled object based on results of learning performed on said sensory inputs; a behavior status evaluator (303) for evaluating behavior of the controlled object caused by said behavior outputs; and a selective attention mechanism (304) for storing said behavior outputs in one of a plurality of columns in association with corresponding sensory inputs based on the evaluation, computing probabilistic models based on the behavior outputs stored in said columns, calculating confidence for each column by applying newly given sensory inputs to said probabilistic models and outputting, as said results of learning, behavior outputs in association with newly given sensory inputs in the column having largest confidence; wherein said probabilistic model is probabilistic relationship that a sensory input belongs to each column.
  • 2. An agent learning apparatus (100) for performing optimal control for a controlled object, comprising: a sensor (301) for capturing external environmental information for conversion to sensory inputs; a behavior controller (302,307) for supplying behavior outputs to said controlled object based on results of learning performed on said sensory inputs; a behavior status evaluator (303) for evaluating behavior of the controlled object caused by said behavior outputs; and a selective attention mechanism (304) for storing said behavior outputs in one of a plurality of columns in association with corresponding sensory inputs based on the evaluation, computing probabilistic models based on the behavior outputs stored in said columns, and outputting, as said results of learning, behavior outputs in association with newly given sensory inputs in the column, said column containing the behavior outputs having largest evaluation; wherein said probabilistic model is probabilistic relationship that a sensory input belongs to each column.
  • 3. The agent learning apparatus (100) of claim 1 or 2, said computing probabilistic model comprising: representing behavior outputs stored in columns as normal distribution by using Expectation Maximization algorithm; using said normal distribution to compute a priori probability that a behavior output is contained in each column; and using said a priori probability to compute said probabilistic model by supervised learning with neural network, said probabilistic model being probabilistic relationship between any sensory input and each column.
  • 4. The agent learning apparatus (100) of claim 3, said confidence being calculated by applying said a priori probability and said probabilistic model to Bayes' rule.
  • 5. The agent learning apparatus (100) of claim 4, wherein said probabilistic model is computed in advance using data sets of relationship between sensory inputs and behavior outputs, wherein after computing said probabilistic model, said confidence is calculated using the probabilistic model for newly given sensory inputs.
  • 6. An agent learning method for performing optimal control for a controlled object, comprising: capturing external environmental information for conversion to sensory inputs; supplying behavior outputs to said controlled object based on results of learning performed on said sensory inputs; evaluating behavior of the controlled object caused by said behavior outputs; storing said behavior outputs in one of a plurality of columns in association with corresponding sensory inputs based on the evaluation; computing probabilistic models based on the behavior outputs stored in said columns, wherein said probabilistic model is probabilistic relationship that a sensory input belongs to each column; calculating confidence for each column by applying newly given sensory inputs to said probabilistic models, and outputting, as said results of learning, behavior outputs in association with newly given sensory inputs in the column having largest confidence.
  • 7. An agent learning method for performing optimal control for a controlled object, comprising: capturing external environmental information for conversion to sensory inputs; supplying behavior outputs to said controlled object based on results of learning performed on said sensory inputs; evaluating behavior of the controlled object caused by said behavior outputs; storing said behavior outputs in one of a plurality of columns in association with corresponding sensory inputs based on the evaluation; computing probabilistic models based on the behavior outputs stored in said columns, wherein said probabilistic model is probabilistic relationship that a sensory input belongs to each column; and outputting, as said results of learning, behavior outputs in association with newly given sensory inputs in the column, said column containing the behavior outputs having largest evaluation.
  • 8. The agent learning method of claim 6 or 7, said computing probabilistic model comprising: representing behavior outputs stored in columns as normal distribution by using Expectation Maximization algorithm; using said normal distribution to compute a priori probability that a behavior output is contained in each column; and using said a priori probability to compute said probabilistic model by supervised learning with neural network, said probabilistic model being probabilistic relationship between any sensory input and each column.
  • 9. The agent learning method of claim 8, said confidence being calculated by applying said a priori probability and said probabilistic model to Bayes' rule.
  • 10. The agent learning method of claim 9, wherein said probabilistic model is computed in advance using data sets of relationship between sensory inputs and behavior outputs, wherein after computing said probabilistic model, said confidence is calculated using the probabilistic model for newly given sensory inputs.
  • 11. An agent learning program for performing optimal control for a controlled object, comprising: capturing external environmental information for conversion to sensory inputs; supplying behavior outputs to said controlled object based on results of learning performed on said sensory inputs; evaluating behavior of the controlled object caused by said behavior outputs; storing said behavior outputs in one of a plurality of columns in association with corresponding sensory inputs based on the evaluation; computing probabilistic models based on the behavior outputs stored in said columns, wherein said probabilistic model is probabilistic relationship that a sensory input belongs to each column; calculating confidence for each column by applying newly given sensory inputs to said probabilistic models; and outputting, as said results of learning, behavior outputs in association with newly given sensory inputs in the column, having largest confidence.
  • 12. An agent learning program when executing on a computer to realize optimal control for a controlled object, comprising: capturing external environmental information for conversion to sensory inputs; supplying behavior outputs to said controlled object based on results of learning performed on said sensory inputs; evaluating behavior of the controlled object caused by said behavior outputs; storing said behavior outputs in one of a plurality of columns in association with corresponding sensory inputs based on the evaluation; computing probabilistic models based on the behavior outputs stored in said columns, wherein said probabilistic model is probabilistic relationship that a sensory input belongs to each column; and outputting, as said results of learning, behavior outputs in association with newly given sensory inputs in the column, said column containing the behavior outputs having largest evaluation.
  • 13. The agent learning program of claim 11 or 12, said computing probabilistic model comprising: representing behavior outputs stored in columns as normal distribution by using Expectation Maximization algorithm; using said normal distribution to compute a priori probability that a behavior output is contained in each column; and using said a priori probability to compute said probabilistic model by supervised learning with neural network, said probabilistic model being probabilistic relationship between any sensory input and each column.
  • 14. The agent learning program of claim 13, said confidence being calculated by applying said a priori probability and said probabilistic model to Bayes' rule.
  • 15. The agent learning program of claim 14, wherein said probabilistic model is computed in advance using data sets of relationship between sensory inputs and behavior outputs, wherein after computing said probabilistic model, said confidence is calculated using the probabilistic model for newly given sensory inputs.
Priority Claims (2)
Number Date Country Kind
2001-028758 Feb 2001 JP national
2001-028759 Feb 2001 JP national
PCT Information
Filing Document Filing Date Country Kind 371c Date
PCT/JP02/00878 2/4/2002 WO 5/13/2005