ROBOT CONTROL MODEL LEARNING METHOD, ROBOT CONTROL MODEL LEARNING DEVICE, RECORDING MEDIUM STORING ROBOT CONTROL MODEL LEARNING PROGRAM, ROBOT CONTROL METHOD, ROBOT CONTROL DEVICE, RECORDING MEDIUM STORING ROBOT CONTROL PROGRAM, AND ROBOT

Information

  • Publication Number
    20220397900
  • Date Filed
    October 21, 2020
  • Date Published
    December 15, 2022
Abstract
A robot control model learning device (10) performs, by using state information indicating the state of a robot which autonomously travels to a destination in a dynamic environment as an input, reinforcement learning to obtain a robot control model for selecting and outputting a behavior in accordance with the state of the robot from among a plurality of behaviors including an intervention behavior for intervening in the environment, while using the number of times the intervention behavior has been performed as a negative reward.
Description
TECHNICAL FIELD

The technique of the present disclosure relates to a robot control model learning method, a robot control model learning device, a robot control model learning program, a robot control method, a robot control device, a robot control program and a robot.


BACKGROUND ART

In path planning techniques that are exemplified by RRT (Rapidly-exploring Random Tree) and PRM (Probabilistic Road Map), a path from an initial position to a target position is derived by carrying out graph searching using respective geographical points in a sampled space as nodes.


The subjects of such techniques are static, known environments. In dynamic environments, “re-planning” must be carried out each time the environment changes.


Known “re-planning” techniques are based on updating the map in accordance with changes in the environment, and searching for another global path that can be transformed continuously. In a congested environment in which changes arise continuously, such as a crowd, a solution may not be found, and the robot may frequently stop during re-planning.


Further, in a complex environment such as a crowd or the like, the robot merely continues to intermittently clear away the obstacles immediately in front of it, and considerable stress is applied to the environment.


Non-Patent Document 1 (Decentralized Non-communicating Multiagent Collision Avoidance with Deep Reinforcement Learning https://arxiv.org/pdf/1609.07845) discloses a technique of acquiring a collision avoiding policy by deep reinforcement learning. In the technique disclosed in Non-Patent Document 1, a policy that minimizes the time until the destination is reached is acquired while avoiding collisions with agents at the periphery.


Non-Patent Document 2 (Socially Aware Motion Planning with Deep Reinforcement Learning https://arxiv.org/pdf/1703.08862.pdf) discloses a technique that improves on the technique disclosed in Non-Patent Document 1. In the technique disclosed in Non-Patent Document 2, socially natural avoidance behaviors are realized by adding, to the reward function, social norms in which features of the collision avoidance behaviors of humans are taken into consideration.


Non-Patent Document 3 (ZMP https://news.mynavi.jp/article/20180323-604926/) discloses a technique in which a robot autonomously travels without changing the path plan of the robot itself, by carrying out intervention behaviors that prompt obstacles (humans) on the planned path to yield the way.


SUMMARY OF INVENTION
Technical Problem

However, the techniques disclosed in above-described Non-Patent Documents 1, 2 both deal with only passive collision avoidance behaviors with respect to the environment, and neither deals with intervention behaviors.


Further, the techniques disclosed in above-described Non-Patent Documents 1, 2 assume interactions with a small number of agents, and interactions in crowd environments are not assumed.


Further, as in the technique disclosed in Non-Patent Document 3, although implementing an intervention by a simple policy is easy, a high frequency of interventions causes stress on the environment side, and there are cases in which the transport efficiency of the surrounding pedestrian group deteriorates.


The technique of the disclosure was made in view of the above-described points, and an object thereof is to provide a robot control model learning method, a robot control model learning device, a robot control model learning program, a robot control method, a robot control device, a robot control program and a robot that, in a case in which a robot is moved to a destination in a dynamic environment, can reduce the number of times of intervention behaviors in which the robot intervenes in the surrounding environment.


Solution to Problem

A first aspect of the disclosure is a robot control model learning method comprising: a learning step in which a computer reinforcement-learns a robot control model, whose input is state information expressing a state of a robot that travels autonomously to a destination in a dynamic environment, and that selects and outputs a behavior corresponding to the state of the robot from among a plurality of behaviors including an intervention behavior of intervening in the environment, with a number of times of intervention in which the intervention behavior is executed being a negative reward.


In the above-described first aspect, the behavior may include at least one of a moving direction of the robot, a moving velocity of the robot, and the intervention behavior, and the reward may be given such that at least one of an arrival time until the robot reaches the destination and the number of times of intervention decreases.


In the above-described first aspect, the behaviors may include an avoidance behavior in which the robot avoids a collision with another object, and the reward may be given such that a number of times of avoidance in which the collision is avoided decreases.


In the above-described first aspect, the learning step may reinforcement-learn by updating a state value function that expresses the state of the robot.


A second aspect of the disclosure is a robot control model learning device comprising: a learning section that reinforcement-learns a robot control model, whose input is state information expressing a state of a robot that travels autonomously to a destination in a dynamic environment, and that selects and outputs a behavior corresponding to the state of the robot from among a plurality of behaviors including an intervention behavior of intervening in the environment, with a number of times of intervention in which the intervention behavior is executed being a negative reward.


A third aspect of the disclosure is a robot control model learning program for causing a computer to execute processings comprising: a learning step of reinforcement-learning a robot control model, whose input is state information expressing a state of a robot that travels autonomously to a destination in a dynamic environment, and that selects and outputs a behavior corresponding to the state of the robot from among a plurality of behaviors including an intervention behavior of intervening in the environment, with a number of times of intervention in which the intervention behavior is executed being a negative reward.


A fourth aspect of the disclosure is a robot control method in which a computer executes processings comprising: an acquiring step of acquiring state information expressing a state of a robot that travels autonomously to a destination in a dynamic environment; and a control step of effecting control such that the robot moves to the destination, on the basis of the state information and a robot control model learned by a robot control model learning method.


A fifth aspect of the disclosure is a robot control device comprising: an acquiring section that acquires state information expressing a state of a robot that travels autonomously to a destination in a dynamic environment; and a control section that effects control such that the robot moves to the destination, on the basis of the state information and a robot control model learned by a robot control model learning device.


A sixth aspect of the disclosure is a robot control program for causing a computer to execute processings comprising: an acquiring step of acquiring state information expressing a state of a robot that travels autonomously to a destination in a dynamic environment; and a control step of effecting control such that the robot moves to the destination, on the basis of the state information and a robot control model learned by a robot control model learning method.


A seventh aspect of the disclosure is a robot comprising: an acquiring section that acquires state information expressing a state of the robot that travels autonomously to a destination in a dynamic environment; an autonomous traveling section that causes the robot to travel autonomously; and a robot control device that includes a control section effecting control such that the robot moves to the destination on the basis of the state information and a robot control model learned by a robot control model learning device.


Advantageous Effects of Invention

In accordance with the technique of the disclosure, in a case in which a robot is moved to a destination in a dynamic environment, the number of times of intervention behaviors, in which the robot intervenes in the surrounding environment, can be reduced.





BRIEF DESCRIPTION OF DRAWINGS


FIG. 1 is a drawing illustrating the schematic structure of a robot control model learning system.



FIG. 2 is a block drawing illustrating hardware structures of a robot control model learning device.



FIG. 3 is a block drawing illustrating functional structures of the robot control model learning device.



FIG. 4 is a drawing illustrating a situation in which a robot moves within a crowd to a destination.



FIG. 5 is a flowchart illustrating the flow of robot control model learning processing by the robot control model learning device.



FIG. 6 is a block drawing illustrating functional structures of a robot control device.



FIG. 7 is a block drawing illustrating hardware structures of the robot control device.



FIG. 8 is a flowchart illustrating the flow of robot controlling processing by the robot control device.





DESCRIPTION OF EMBODIMENTS

Examples of embodiments of the technique of the disclosure are described hereinafter with reference to the drawings. Note that structural elements and portions that are the same or equivalent are denoted by the same reference numerals in the respective drawings. Further, there are cases in which the dimensional proportions in the drawings are exaggerated for convenience of explanation, and they may differ from actual proportions.



FIG. 1 is a drawing illustrating the schematic structure of a robot control model learning system 1.


As illustrated in FIG. 1, the robot control model learning system 1 has a robot control model learning device 10 and a simulator 20. The simulator 20 is described later.


The robot control model learning device 10 is described next.



FIG. 2 is a block drawing illustrating hardware structures of the robot control model learning device 10.


As illustrated in FIG. 2, the robot control model learning device 10 has a CPU (Central Processing Unit) 11, a ROM (Read Only Memory) 12, a RAM (Random Access Memory) 13, a storage 14, an input portion 15, a monitor 16, an optical disk drive device 17 and a communication interface 18. These respective structures are connected so as to be able to communicate with one another via a bus 19.


In the present embodiment, a robot control model learning program is stored in the storage 14. The CPU 11 is a central computing processing unit, and executes various programs and controls the respective structures. Namely, the CPU 11 reads-out a program from the storage 14, and executes the program by using the RAM 13 as a workspace. The CPU 11 carries out control of the above-described respective structures, and various computing processings, in accordance with the programs recorded in the storage 14.


The ROM 12 stores various programs and various data. The RAM 13 temporarily stores programs and data as a workspace. The storage 14 is structured by an HDD (Hard Disk Drive) or an SSD (Solid State Drive), and stores various programs, including the operating system, and various data.


The input portion 15 includes a keyboard 151 and a pointing device such as a mouse 152 or the like, and is used in order to carry out various types of input. The monitor 16 is a liquid crystal display for example, and displays various information. The monitor 16 may function as the input portion 15 by employing a touch panel type therefor. The optical disk drive device 17 reads-in data that is stored on various recording media (a CD-ROM or a flexible disk or the like), and writes data to recording media, and the like.


The communication interface 18 is an interface for communicating with other equipment such as the simulator 20 and the like, and uses standards such as, for example, Ethernet®, FDDI, Wi-Fi®, or the like.


Functional structures of the robot control model learning device 10 are described next.



FIG. 3 is a block drawing illustrating an example of the functional structures of the robot control model learning device 10.


As illustrated in FIG. 3, the robot control model learning device 10 has a state value computing section 30 and a behavior selecting section 32 as functional structures thereof. The respective functional structures are realized by the CPU 11 reading-out a robot control program that is stored in the storage 14, and expanding and executing the program in the RAM 13. Note that the state value computing section 30 and the behavior selecting section 32 are examples of the learning section.


The present embodiment describes a case in which a state value function that is described later is learned by value-based deep reinforcement learning.


The state value computing section 30 acquires state information from the simulator 20. The simulator 20 has the function of, in a case in which an autonomously traveling robot RB moves to destination pg, simulating a dynamic environment that includes moving objects such as humans HB or the like that exist at the periphery of the robot RB, as shown in FIG. 4 for example. The simulator 20 outputs state information, which relates to the state of the robot RB and the peripheral environment of the robot RB, to the state value computing section 30.


Here, state information includes robot information relating to the state of the robot RB, environment information relating to the peripheral environment of the robot RB, and destination information relating to the destination that the robot RB is to reach.


The robot information includes information of the position and the velocity of the robot RB. In the present embodiment, velocity v of the robot RB is expressed by a vector in a two-dimensional coordinate system as follows.






v = {vx, vy}


Further, in the present embodiment, position p of the robot RB is expressed by coordinates in a two-dimensional coordinate system as follows.






p = {px, py}


In the present embodiment, state st of the robot RB at time t is expressed as follows.






st = {px, py, vx, vy, rb}


Here, rb expresses the radius of influence of the robot RB. As described later, the radius of influence rb is used at the time of judging whether or not the robot RB has collided with another object.


The environment information is information relating to the dynamic environment, and specifically includes, for example, information of the positions and velocities of the moving objects such as the humans HB and the like that exist at the periphery of the robot RB. The present embodiment describes a case in which the environment information is information relating to the humans HB.


In the present embodiment, as illustrated in FIG. 4, states ~st of the humans HB at the periphery of the robot RB are expressed as follows. Note that, in the present embodiment, for convenience, the symbols "~" (tilde) and "^" (hat), which appear above characters in the formulas, are written before the characters in the text.

~st = {~st1, ~st2, . . . , ~stN}

Here, N is the number of the humans HB existing at the periphery. Further, ~st1, ~st2, . . . , ~stN express the states of the respective humans HB at time t, i.e., the positions and velocities thereof.


Further, in the present embodiment, state st^in, which joins the state st of the robot RB and the states ~st of the humans HB existing at the periphery of the robot RB at time t, is expressed as follows.

st^in = [st, ~st]


The destination information includes position information of the destination pg. Position pg of the destination is expressed by coordinates in a two-dimensional coordinate system as follows.






pg = {pgx, pgy}
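
The following Python sketch illustrates how the quantities above might be assembled into the model input; the helper name build_joint_state, the dictionary-based inputs, the flat-vector layout, and the handling of the destination pg separately from st^in are assumptions made for illustration and are not specified in the disclosure.

```python
import numpy as np

def build_joint_state(robot_state, human_states):
    """Concatenate the robot state st and the surrounding human states ~st
    into a single flat input vector st^in = [st, ~st].

    robot_state : dict with px, py, vx, vy, rb
    human_states: list of dicts with px, py, vx, vy (one entry per human HB)
    """
    st = [robot_state["px"], robot_state["py"],
          robot_state["vx"], robot_state["vy"], robot_state["rb"]]
    st_tilde = []
    for h in human_states:                       # ~st1, ~st2, . . . , ~stN
        st_tilde += [h["px"], h["py"], h["vx"], h["vy"]]
    return np.array(st + st_tilde, dtype=np.float32)

# Example with N = 2 surrounding humans (values are arbitrary).
robot = {"px": 0.0, "py": 0.0, "vx": 0.5, "vy": 0.0, "rb": 0.3}
humans = [{"px": 1.0, "py": 0.5, "vx": -0.2, "vy": 0.0},
          {"px": 2.0, "py": -0.5, "vx": 0.0, "vy": 0.1}]
p_g = np.array([5.0, 0.0], dtype=np.float32)     # destination pg = {pgx, pgy}
s_in = build_joint_state(robot, humans)          # shape: (5 + 4*N,)
```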


On the basis of the acquired state information, the state value computing section 30 computes reward r by using reward function R(s^in, a). Here, a represents the behavior, and includes at least one of the moving direction, the moving velocity, an intervention behavior, and an avoidance behavior of the robot RB. Further, reward r is given such that the arrival time until the robot RB reaches the destination pg, the number of times of intervention (the number of times that an intervention behavior is executed), and the number of times of avoidance (the number of times that the robot RB avoids a collision) become smaller.


Here, an intervention behavior is the behavior of notifying the humans HB, who are at the periphery, of the existence of the robot RB in order for the robot RB to move without stopping. Specifically, an intervention behavior is a behavior such as voice output of a message such as “move out of the way” or the like, or the issuing of a warning noise, or the like, but intervention behaviors are not limited to these. Further, an avoidance behavior is a behavior by which the robot RB avoids a collision with another object, and means the behavior of moving in a direction and at a velocity by which other objects can be avoided.


In the present embodiment, the reward function R(s^in, a) is set as follows. Note that, hereinafter, there are cases in which the reward function R(s^in, a) is simply called the reward function R.

R(s^in, a) = α·re + β·rc

Here, re is the reward obtained from the environment, and rc is the influence reward due to the intervention. Further, α is the weight of the reward re, and β is the weight of the reward rc, and both are set to arbitrary values. The reward re and the reward rc are expressed as follows.

re = εc (in a case in which d < 0), εg (in a case in which the position p of the robot RB has reached the destination pg), 0 (otherwise)

rc = εb (in a case in which the intervention parameter bt is a value other than 0), 0 (in a case in which bt is 0)


Here, d is the distance that is used in order to judge whether or not the robot RB and the human HB will collide, and is expressed by the following formula.






d=D−(rb+rh)


D represents the distance between the robot RB and the human HB. rb is the aforementioned radius of influence of the robot RB, and rh is the radius of influence of the human HB. Note that rb and rh may be such that rb=rh, or may be such that rb≠rh. A case in which d is less than 0 expresses a state in which the region within the radius of influence rb of the robot RB and the region within the radius of influence rh of the human HB partially overlap, i.e., a state in which the robot RB and the human HB are close. In the present embodiment, in a case in which d is less than 0, it is considered that the robot RB and the human HB have collided.


Further, bt is an intervention parameter that expresses whether or not the robot RB has carried out an intervention behavior with respect to the peripheral environment at time t. A case in which the intervention parameter bt is “0” expresses that an intervention behavior has not been carried out. On the other hand, a case in which the intervention parameter bt is a value other than “0” expresses that the robot RB has carried out an intervention behavior.


In a case in which the distance d is less than 0, i.e., in a case in which it is considered that the robot RB and the human HB have collided, as described above, the reward re is “εc”.


Further, when the position p of the robot RB reaches the destination pg, the reward re is “εg”. Here, the range of values that εg can assume is 0≤εg≤1. Further, the reward εg is given such that, the later the arrival time until the destination pg is reached, the smaller the value thereof, i.e., the more the value thereof approaches “0”. Further, the reward εg is given such that, the earlier the arrival time until the destination pg is reached, the larger the value thereof, i.e., the more the value thereof approaches “1”.


Further, in cases other than those described above, the reward re is “0”.


Further, in cases in which the intervention parameter bt is a value other than “0”, the reward rc is “εb”. Namely, εb can be called a reward that relates to an intervention behavior. Further, in a case in which the intervention parameter bt is “0”, the reward rc is “0”.


Here, εc, εb are set to values that are less than 0, i.e., to negative values, as negative rewards. Namely, εc can be called a reward relating to collision avoidance, and εb can be called a reward relating to an intervention behavior. Note that εc may be expressed as a function of the distance d. Further, εb may be expressed as a function of the intervention parameter bt.
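
To make this reward structure concrete, the sketch below is one illustrative reading of R(s^in, a) = α·re + β·rc with the piecewise re and rc described above; the function name and the default values of α, β, εc and εb are assumptions chosen for illustration only (the disclosure sets the weights to arbitrary values).

```python
def reward(d, reached_goal, arrival_bonus, b_t,
           alpha=1.0, beta=1.0, eps_c=-0.25, eps_b=-0.1):
    """Illustrative reward R = alpha * re + beta * rc.

    d             : D - (rb + rh); d < 0 is treated as a collision
    reached_goal  : True when the position p of the robot coincides with pg
    arrival_bonus : eps_g in [0, 1]; larger when the destination is reached earlier
    b_t           : intervention parameter (non-zero when an intervention behavior was executed)
    eps_c, eps_b  : negative rewards relating to collision and intervention (assumed values)
    """
    if d < 0:                          # robot RB and human HB are considered to have collided
        r_e = eps_c
    elif reached_goal:
        r_e = arrival_bonus            # eps_g, approaches 1 for an early arrival
    else:
        r_e = 0.0

    r_c = eps_b if b_t != 0 else 0.0   # negative reward each time an intervention behavior is executed
    return alpha * r_e + beta * r_c

# Example: no collision, destination not yet reached, one intervention -> -0.1 (intervention penalty only).
print(reward(d=0.4, reached_goal=False, arrival_bonus=0.0, b_t=1))
```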


Further, the state value computing section 30 computes value yt of the state at time t by the following formula that uses state value function V(st^in).

yt = rt + γ^Δt · V(st+Δt^in)   (1)

Here, rt is the reward at time t that is computed by the reward function R. Further, Δt is the increase in time in one step. Further, γ is the discount factor of the reward, and is defined as follows.

0 ≤ γ ≤ 1


Namely, the discount factor γ can assume values that are greater than or equal to 0 and less than or equal to 1. Further, the discount factor γ is set such that, the further in the future a reward is obtained, the more heavily that reward is discounted.
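
As a compact illustration of formula (1) as reconstructed above, the following sketch computes the one-step target yt; the function name and the example numbers are assumptions for illustration.

```python
def value_target(r_t, gamma, dt, v_next):
    """One-step target yt = rt + gamma**dt * V(s_{t+dt}^in), with 0 <= gamma <= 1.
    Rewards obtained further in the future are discounted more heavily."""
    return r_t + (gamma ** dt) * v_next

# Example: reward 0.0 now, predicted value 0.8 for the next state, gamma = 0.9, dt = 0.25 s.
y_t = value_target(r_t=0.0, gamma=0.9, dt=0.25, v_next=0.8)
```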


The state value function V(st^in) is a function that expresses, under a policy selected by using the policy function π described later, the value of the robot RB and the humans HB at the periphery thereof being in the state st^in, and is expressed by the following formula. Here, “*” expresses optimality, V* expresses the optimal state value function, and π* expresses the optimal policy function. Note that, hereinafter, there are cases in which the state value function V(st^in) is simply called the state value function V.

V*(st^in) = Σ_{t'=t}^{T} γ^(t'−t) · R(st'^in, π*(st'^in))   (2)

Above formula (2) shows that the state value function V(st^in) is the sum of discounted cumulative rewards, which is the accumulation of rewards that will be obtained in the future and that have been discounted by the discount factor γ, i.e., that the state value function V(st^in) is the expected reward. In the present embodiment, the state value function V is approximated by using a deep neural network (value network). Hereinafter, the deep neural network that expresses the state value function V is called the V network.


The state value computing section 30 learns the V network. In the present embodiment, as an example, the V network is learned by a gradient descent method using an experience replay buffer. Namely, the states st^in and the values yt are stored in buffer E, and a pair of the state st^in and the value yt is read-out randomly from the buffer E, and the V network is learned by using the read-out pair as training data. Namely, the parameters of the V network are updated.
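
One possible (hypothetical) realization of this training scheme is sketched below using a small fully connected network and a replay buffer; the use of PyTorch, the network width, the buffer capacity, the batch size and the learning rate are all assumptions made purely for illustration and are not specified in the disclosure.

```python
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

STATE_DIM = 13                      # e.g. 5 + 4*N with N = 2 surrounding humans (assumed)

# V network: approximates the state value function V(st^in).
value_net = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(value_net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

buffer_E = deque(maxlen=100_000)    # experience replay buffer E of (st^in, yt) pairs

def store(s_in, y_t):
    """Store a (st^in, yt) pair in the experience replay buffer E."""
    buffer_E.append((np.asarray(s_in, dtype=np.float32), float(y_t)))

def update_v_network(batch_size=64):
    """Randomly sample (st^in, yt) pairs from buffer E and take one gradient-descent step."""
    if len(buffer_E) < batch_size:
        return
    batch = random.sample(buffer_E, batch_size)
    states = torch.as_tensor(np.stack([s for s, _ in batch]))
    targets = torch.as_tensor([[y] for _, y in batch], dtype=torch.float32)
    loss = loss_fn(value_net(states), targets)   # mean squared error against the targets yt
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```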


On the basis of the value of the state that is computed by the state value computing section 30, the behavior selecting section 32 selects behavior at that the robot RB is to perform. The behavior at is selected by using the policy function π(st^in) expressed by following formula (3). Note that, hereinafter, there are cases in which the policy function π(st^in) is simply called the policy function π.

π(st^in) = argmax_at [ R(st^in, at) + γ^Δt · ∫ P(st^in, st+Δt^in | at) · V(st+Δt^in) d st+Δt^in ]   (3)

Here, P(st^in, st+Δt^in | at) represents the transition from the state st^in to the state st+Δt^in in a case in which the behavior at is selected.


The behavior selecting section 32 outputs, to the simulator 20, the behavior at that is selected by the policy function π(st^in). Due thereto, the simulator 20 causes the robot RB to execute the behavior at in a simulation. For example, in a case in which the behavior at is movement in moving direction mt and at moving velocity vt, the simulator 20 makes the robot RB move in the moving direction mt and at the moving velocity vt in the simulation. Further, in a case in which the behavior at is an intervention behavior, the simulator 20 simulates the avoidance behaviors that the humans HB at the periphery can take in a case in which a message such as “move out of the way” or the like is outputted by voice, or a warning noise is issued, or the like. Due to the robot RB executing the behavior at in this way, the state st of the robot RB and the states ~st of the humans HB at the periphery change. Further, for the state after the change, the V network is learned in the same way as described above, by repeating the processings of computing the reward r, computing the state value, selecting and executing the behavior a, and updating the parameters of the V network.
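
As an illustration of how formula (3) might be evaluated in practice, the sketch below performs a one-step lookahead over a discrete set of candidate behaviors; the discrete action set, the deterministic predict_next_state stand-in for the expectation over P(st^in, st+Δt^in | at), and the function names are assumptions made for illustration.

```python
import numpy as np

def select_behavior(s_in, candidate_actions, reward_fn, predict_next_state,
                    value_fn, gamma=0.9, dt=0.25):
    """Pick at = argmax over a of [ R(st^in, a) + gamma**dt * V(st+dt^in) ].

    candidate_actions  : discrete set of behaviors, e.g. (direction, speed) pairs plus an
                         intervention behavior (assumed discretization)
    reward_fn          : callable returning R(st^in, a)
    predict_next_state : deterministic stand-in for the transition P(st^in, st+dt^in | a)
    value_fn           : callable returning the learned value V(st+dt^in) as a float
    """
    best_a, best_q = None, -np.inf
    for a in candidate_actions:
        s_next = predict_next_state(s_in, a, dt)   # propagate robot RB and humans HB one step
        q = reward_fn(s_in, a) + (gamma ** dt) * value_fn(s_next)
        if q > best_q:
            best_a, best_q = a, q
    return best_a
```

During learning, the simulator 20 would naturally play the role of predict_next_state; at run time a simple one-step motion prediction could be substituted, though the disclosure does not specify either choice.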


In this way, the robot control model learning device 10 learns a robot control model that, functionally, uses state information as an input, and selects and outputs a behavior corresponding to the inputted state information.


Operation of the robot control model learning device 10 is described next.



FIG. 5 is a flowchart illustrating the flow of robot control model learning processing by the robot control model learning device 10. The robot control model learning processing is carried out due to the CPU 11 reading-out the robot control model learning program from the storage 14, and expanding and executing the program in the RAM 13.


In step S100, as the state value computing section 30, the CPU 11 acquires position information of the destination pg from the simulator 20.


In step S102, as the state value computing section 30, the CPU 11 initializes the state value function V. Namely, the CPU 11 initializes the parameters of the V network.


In step S104, as the state value computing section 30, the CPU 11 initializes the state st of the robot RB.


In step S106, as the state value computing section 30, the CPU 11 initializes the states ~st of the humans HB at the periphery.


In step S108, as the behavior selecting section 32, the CPU 11 sets the behavior a that the robot RB will initially perform, and, by outputting the set behavior a to the simulator 20, causes the robot to perform behavior a. Due thereto, the simulator 20 executes behavior a in the simulation.


In step S110, as the state value computing section 30, the CPU 11 acquires the state st of the robot RB from the simulator 20.


In step S112, as the state value computing section 30, the CPU 11 acquires the states ~st of the humans HB at the periphery from the simulator 20.


In step S114, as the state value computing section 30, the CPU 11 computes the reward rt from the reward function R, on the basis of the state st^in of the robot RB and the humans HB at the periphery that was acquired from the simulator 20, and the behavior at.


In step S116, as the state value computing section 30, the CPU 11 computes value yt of the state from above formula (1).


In step S118, as the behavior selecting section 32, the CPU 11 selects behavior at from above formula (3), and outputs the selected behavior at to the simulator 20. Due thereto, the simulator 20 causes the robot RB to execute the behavior at in the simulation.


In step S120, as the state value computing section 30, the CPU 11 stores the state st^in of the robot RB and the state value yt as a pair in the buffer E.


In step S122, as the state value computing section 30, the CPU 11 updates the parameters of the V network. Namely, the CPU 11 learns the V network. At this time, a past state s^in and state value y that are stored in the buffer E are selected randomly, and the parameters of the V network are updated by using these as training data. Namely, the parameters of the V network are updated by using the gradient descent method in reinforcement learning. Note that the processing of step S122 may be executed once every plural iterations, rather than every time.


In step S124, as the state value computing section 30, the CPU 11 judges whether or not the robot RB has arrived at the destination pg. Namely, the CPU 11 judges whether or not the position p of the robot RB coincides with the destination pg. Further, in a case in which it is judged that the robot RB has reached the destination pg, the routine moves on to step S126. On the other hand, in a case in which it is judged that the robot RB has not reached the destination pg, the routine moves on to step S110, and the processings of steps S110˜S124 are repeated until it is judged that the robot RB has reached the destination pg. Namely, the V network is learned. Note that the processings of steps S110˜S124 are an example of the learning step.


In step S126, as the state value computing section 30, the CPU 11 judges whether or not an end condition that ends the learning is satisfied. In the present embodiment, the end condition is a case in which a predetermined number of (e.g., 100) episodes has ended, with one episode being, for example, the robot RB having arrived at the destination pg from the starting point. In a case in which it is judged that the end condition is satisfied, the CPU 11 ends the present routine. On the other hand, in a case in which the end condition is not satisfied, the routine moves on to step S100, and the destination pg is changed, and the processings of steps S100˜S126 are repeated until the end condition is satisfied.
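
Putting steps S100 to S126 together, the overall learning loop might look like the following skeleton; the simulator interface (sim.sample_destination, sim.reset and so on), the episode count and the helper names are hypothetical and merely mirror the flow of FIG. 5.

```python
def train(sim, select_behavior, compute_reward, compute_target, store_pair,
          update_v_network, num_episodes=100):
    """Skeleton of the robot control model learning processing of FIG. 5.
    (S102: initialization of the V-network parameters is assumed to happen before this call.)"""
    for episode in range(num_episodes):            # S126: end after a predetermined number of episodes
        p_g = sim.sample_destination()             # S100: destination pg from the simulator 20
        s_in = sim.reset()                         # S104/S106: initialize robot state st and human states ~st
        a_t = sim.initial_behavior()               # S108: initial behavior a
        while not sim.reached(p_g):                # S124: repeat until the robot RB reaches pg
            s_in = sim.observe()                   # S110/S112: acquire st and ~st
            r_t = compute_reward(s_in, a_t)        # S114: reward rt from reward function R
            y_t = compute_target(r_t, s_in)        # S116: value yt from formula (1)
            a_t = select_behavior(s_in)            # S118: behavior at from formula (3)
            sim.execute(a_t)                       #        the simulator 20 executes at
            store_pair(s_in, y_t)                  # S120: store (st^in, yt) in buffer E
            update_v_network()                     # S122: update the V-network parameters
```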


As described above, in the present embodiment, the reward rt that is computed by the reward function R includes reward εc relating to collision avoidance and reward εb relating to intervention behavior, and these assume negative values as negative rewards. By learning the V network by using such a reward function R, the number of times of intervention, in which the robot RB performs an intervention behavior, and the number of times of collision avoidance can be reduced. Due thereto, the time until the robot RB reaches the destination pg can be shortened, while the stress that is applied to the peripheral environment is reduced.


The robot RB, which is controlled by the robot control model learned by the robot control model learning device 10, is described next.


The schematic structure of the robot RB is illustrated in FIG. 6. As illustrated in FIG. 6, the robot RB has a robot control device 40, a camera 42, a robot information acquiring section 44, a notification section 46 and an autonomous traveling section 48. The robot control device 40 has a state information acquiring section 50 and a control section 52.


The camera 42 captures images of the periphery of the robot RB at a predetermined interval while the robot RB moves from the starting point to the destination pg, and outputs the captured local images to the state information acquiring section 50 of the robot control device 40.


The robot information acquiring section 44 acquires the state st of the robot RB. Namely, the position and the velocity of the robot RB are acquired. Specifically, for example, the position p of the robot RB may be acquired by using a GPS (Global Positioning System) device, or may be acquired by using a known self-position estimating technique such as SLAM (Simultaneous Localization and Mapping) or the like. Further, the velocity of the robot RB is acquired by using a speed sensor for example.


The robot information acquiring section 44 outputs the acquired state st of the robot RB to the state information acquiring section 50.


On the basis of captured images that are captured by the camera 42, the state information acquiring section 50 acquires the states ~st of the humans HB. Specifically, the captured images are analyzed by using a known method, and the positions and the velocities of the humans HB existing at the periphery of the robot RB are computed.


Further, destination information is inputted to the state information acquiring section 50 by communication from an external device for example.


The state information acquiring section 50 outputs the acquired destination information, state st of the robot RB, and states ~st of the humans HB to the control section 52.


The control section 52 has the function of the robot control model learned by the robot control model learning device 10. Namely, the control section 52 has the functions of the state value computing section 30 after having learned the V network, and the behavior selecting section 32.


The control section 52 selects a behavior corresponding to the inputted state information, and, on the basis of the selected behavior, controls at least one of the notification section 46 and the autonomous traveling section 48.


The notification section 46 has the function of notifying the humans HB, who are at the periphery, of the existence of the robot RB by outputting a voice or outputting a warning sound.


The autonomous traveling section 48 includes, for example, tires and a motor that drives the tires, and has the function of causing the robot RB to travel autonomously.


In a case in which the selected behavior is a behavior of making the robot RB move in an indicated direction and at an indicated velocity, the control section 52 controls the autonomous traveling section 48 such that the robot RB moves in the indicated direction and at the indicated velocity.


Further, in a case in which the selected behavior is an intervention behavior, the control section 52 controls the notification section 46 to output a voice message such as “move out of the way” or the like, or to emit a warning sound.


Hardware structures of the robot control device 40 are described next.


As illustrated in FIG. 7, the robot control device 40 has a CPU (Central Processing Unit) 61, a ROM (Read Only Memory) 62, a RAM (Random Access Memory) 63, a storage 64 and a communication interface 65. The respective structures are connected so as to be able to communicate with one another via a bus 66.


In the present embodiment, the robot control program is stored in the storage 64. The CPU 61 is a central computing processing unit, and executes various programs and controls the respective structures. Namely, the CPU 61 reads-out a program from the storage 64, and executes the program by using the RAM 63 as a workspace. The CPU 61 carries out control of the above-described respective structures, and various computing processings, in accordance with the programs recorded in the storage 64.


The ROM 62 stores various programs and various data. The RAM 63 temporarily stores programs and data as a workspace. The storage 64 is structured by an HDD (Hard Disk Drive) or an SSD (Solid State Drive), and stores various programs, including the operating system, and various data.


The communication interface 65 is an interface for communicating with other equipment, and uses standards such as, for example, Ethernet®, FDDI, Wi-Fi®, or the like.


Operation of the robot control device 40 is described next.



FIG. 8 is a flowchart illustrating the flow of robot controlling processing by the robot control device 40. The robot controlling processing is carried out due to the CPU 61 reading-out the robot control program from the storage 64, and expanding and executing the program in the RAM 63.


In step S200, as the state information acquiring section 50, the CPU 61 acquires position information of the destination pg by wireless communication from, for example, an unillustrated external device.


In step S202, as the state information acquiring section 50, the CPU 61 acquires the state st of the robot RB from the robot information acquiring section 44.


In step S204, as the state information acquiring section 50, the CPU 61 acquires the states ~st of the humans HB at the periphery on the basis of the captured images captured by the camera 42.


In step S206, as the control section 52, the CPU 61 computes the reward rt from the reward function R on the basis of the state st^in of the robot RB and the humans HB at the periphery that was acquired from the state information acquiring section 50, and the behavior at.


In step S208, as the control section 52, the CPU 61 computes value yt of the state from above formula (1).


In step S210, as the control section 52, the CPU 61 selects behavior at from above formula (3), and controls at least one of the notification section 46 and the autonomous traveling section 48 on the basis of the selected behavior at. Due thereto, the robot RB executes the behavior at.


In step S212, as the control section 52, the CPU 61 judges whether or not the robot RB has arrived at the destination pg. Namely, the CPU 61 judges whether or not the position p of the robot RB coincides with the destination pg. Then, if it is judged that the robot RB has reached the destination pg, the present routine ends. On the other hand, if it is judged that the robot RB has not reached the destination pg, the routine moves on to step S202, and repeats the processings of steps S202˜S212 until it is judged that the robot RB has reached the destination pg. Note that the processings of steps S202˜S212 are examples of the controlling step.
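
For reference, the run-time flow of FIG. 8 (steps S200 to S212) might be sketched as follows; the callables standing in for the camera 42, the robot information acquiring section 44, the notification section 46 and the autonomous traveling section 48, as well as the "intervene" action label, are hypothetical placeholders and not part of the disclosure.

```python
def control_loop(get_destination, get_robot_state, get_human_states,
                 select_behavior, notification, drive, reached):
    """Skeleton of the robot controlling processing of FIG. 8."""
    p_g = get_destination()                        # S200: destination pg, e.g. via wireless communication
    while True:
        s_t = get_robot_state()                    # S202: from the robot information acquiring section 44
        s_tilde = get_human_states()               # S204: states ~st estimated from camera 42 images
        s_in = (s_t, s_tilde, p_g)                 # joint state passed to the learned robot control model
        a_t = select_behavior(s_in)                # S206-S210: behavior selected via formulas (1) and (3)
        if a_t == "intervene":
            notification.warn()                    # notification section 46: voice message or warning sound
        else:
            direction, speed = a_t
            drive.move(direction, speed)           # autonomous traveling section 48
        if reached(s_t, p_g):                      # S212: end when the robot RB arrives at pg
            break
```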


In this way, in the present embodiment, the robot RB is controlled on the basis of the robot control model that is learned by the robot control model learning device 10. Due thereto, the number of times of intervention, in which the robot RB performs an intervention behavior, and the number of times of collision avoidance can be reduced. Accordingly, the time until the robot RB reaches the destination pg can be shortened, while the stress that is applied to the peripheral environment is reduced.


Note that, although the present embodiment describes a case of learning the state value function V, the learning method is not limited to this. For example, instead of learning the state value function V, a behavior value function Q(s^in, a) that computes a behavior value of the robot RB may be learned.


Further, although the present embodiment describes a case in which the reward εc that relates to collision avoidance and the reward εb that relates to an intervention behavior are included as the rewards that the reward function R outputs, the function may be made such that the reward εc that relates to collision avoidance is not included therein.


Further, in the present embodiment, a structure in which the robot RB has the camera 42 is described, but the technique of the present disclosure is not limited to this. For example, the camera 42 may be omitted, and bird's-eye view images of looking down on the robot RB may be acquired from an external device, and the states ~st of the humans HB at the periphery of the robot RB may be acquired by analyzing the acquired bird's-eye view images.


Further, although the present embodiment describes a case in which the robot RB has the robot control device 40, the function of the robot control device 40 may be provided at an external server. In this case, the robot RB transmits the captured images captured at the camera 42 and the robot information acquired at the robot information acquiring section 44 to the external server, and the robot RB performs the behavior that is instructed by the external server.


Note that any of various types of processors other than a CPU may execute the robot controlling processing that is executed due to the CPU reading software (a program) in the above-described embodiments. Examples of processors in this case include PLDs (Programmable Logic Devices) whose circuit structure can be changed after production such as FPGAs (Field-Programmable Gate Arrays) and the like, and dedicated electrical circuits that are processors having circuit structures that are designed for the sole purpose of executing specific processings such as ASICs (Application Specific Integrated Circuits) and the like, and the like. Further, the robot control model learning processing and the robot controlling processing may be executed by one of these various types of processors, or may be executed by a combination of two or more of the same type or different types of processors (e.g., plural FPGAs, or a combination of a CPU and an FPGA, or the like). Further, the hardware structures of these various types of processors are, more specifically, electrical circuits that combine circuit elements such as semiconductor elements and the like.


Further, the above-described respective embodiments describe forms in which the robot control model learning program is stored in advance in the storage 14, and the robot control program is stored in advance in the storage 64, but the present disclosure is not limited to this. The programs may be provided in a form of being recorded on a recording medium such as a CD-ROM (Compact Disc Read Only Memory), a DVD-ROM (Digital Versatile Disc Read Only Memory), a USB (Universal Serial Bus) memory, or the like. Further, the programs may be provided in a form of being downloaded from an external device over a network.


All publications, patent applications, and technical standards mentioned in the present specification are incorporated by reference into the present specification to the same extent as if such individual publication, patent application, or technical standard was specifically and individually indicated to be incorporated by reference.


EXPLANATION OF REFERENCE NUMERALS




  • 10 robot control model learning device


  • 20 simulator


  • 30 state value computing section


  • 32 behavior selecting section


  • 40 robot control device


  • 42 camera


  • 44 robot information acquiring section


  • 46 notification section


  • 48 autonomous traveling section


  • 50 state information acquiring section


  • 52 control section

  • HB human

  • RB robot


Claims
  • 1. A robot control model learning method, comprising, by a computer: subjecting a robot control model to reinforcement learning, the robot control model being input with state information expressing a state of a robot that travels autonomously to a destination in a dynamic environment, and the robot control model selecting and outputting a behavior corresponding to the state of the robot from among a plurality of behaviors including an intervention behavior of intervening in the environment, with a number of times of intervention comprising execution of the intervention behavior being assigned a negative reward.
  • 2. The robot control model learning method of claim 1, wherein: the plurality of behaviors include at least one of a moving direction of the robot, a moving velocity of the robot, or the intervention behavior, and the reward is given such that at least one of an arrival time until the robot reaches the destination or the number of times of intervention decreases.
  • 3. The robot control model learning method of claim 1, wherein: the plurality of behaviors include an avoidance behavior in which the robot avoids a collision with another object, and the reward is given such that a number of times of avoidance in which the collision is avoided decreases.
  • 4. The robot control model learning method of claim 1, wherein the reinforcement learning comprises updating a state value function that expresses the state of the robot.
  • 5. A robot control model learning device, comprising: a learning section that subjects a robot control model to reinforcement learning, the robot control model being configured to receive state information expressing a state of a robot that travels autonomously to a destination in a dynamic environment, and to select and output a behavior corresponding to the state of the robot from among a plurality of behaviors including an intervention behavior of intervening in the environment, with a number of times of intervention comprising execution of the intervention behavior being assigned a negative reward.
  • 6. A non-transitory recording medium storing a robot control model learning program that is executable by a computer to perform processing, the processing comprising: subjecting a robot control model to reinforcement learning, the robot control model being input with state information expressing a state of a robot that travels autonomously to a destination in a dynamic environment, and the robot control model selecting and outputting a behavior corresponding to the state of the robot from among a plurality of behaviors including an intervention behavior of intervening in the environment, with a number of times of intervention comprising execution of the intervention behavior being assigned a negative reward.
  • 7. A robot control method, according to which a computer performs processing, the processing comprising: acquiring state information expressing a state of a robot that travels autonomously to a destination in a dynamic environment; and effecting control such that the robot moves to the destination on the basis of the state information and on the basis of the robot control model learned by the robot control model learning method of claim 1.
  • 8. A robot control device, comprising: an acquisition section that acquires state information expressing a state of a robot that travels autonomously to a destination in a dynamic environment; and a control section that effects control such that the robot moves to the destination on the basis of the state information and on the basis of the robot control model learned by the robot control model learning device of claim 5.
  • 9. A non-transitory recording medium storing a robot control program that is executable by a computer to perform processing, the processing comprising: acquiring state information expressing a state of a robot that travels autonomously to a destination in a dynamic environment; and effecting control such that the robot moves to the destination on the basis of the state information and on the basis of the robot control model learned by the robot control model learning method of claim 1.
  • 10. A robot, comprising: an acquisition section that acquires state information expressing a state of the robot, which travels autonomously to a destination in a dynamic environment; an autonomous traveling section that causes the robot to travel autonomously; and a robot control device that includes a control section configured to effect control such that the robot moves to the destination on the basis of the state information and on the basis of the robot control model learned by the robot control model learning device of claim 5.
  • 11. A robot control method, according to which a computer performs processing, the processing comprising: acquiring state information expressing a state of a robot that travels autonomously to a destination in a dynamic environment; and effecting control such that the robot moves to the destination on the basis of the state information and on the basis of the robot control model learned by the robot control model learning method of claim 2.
  • 12. A robot control method, according to which a computer performs processing, the processing comprising: acquiring state information expressing a state of a robot that travels autonomously to a destination in a dynamic environment; and effecting control such that the robot moves to the destination on the basis of the state information and on the basis of the robot control model learned by the robot control model learning method of claim 3.
  • 13. A robot control method, according to which a computer performs processing, the processing comprising: acquiring state information expressing a state of a robot that travels autonomously to a destination in a dynamic environment; and effecting control such that the robot moves to the destination on the basis of the state information and on the basis of the robot control model learned by the robot control model learning method of claim 4.
  • 14. A non-transitory recording medium storing a robot control program that is executable by a computer to perform processing, the processing comprising: acquiring state information expressing a state of a robot that travels autonomously to a destination in a dynamic environment; and effecting control such that the robot moves to the destination on the basis of the state information and on the basis of the robot control model learned by the robot control model learning method of claim 2.
  • 15. A non-transitory recording medium storing a robot control program that is executable by a computer to perform processing, the processing comprising: acquiring state information expressing a state of a robot that travels autonomously to a destination in a dynamic environment; and effecting control such that the robot moves to the destination on the basis of the state information and on the basis of the robot control model learned by the robot control model learning method of claim 3.
  • 16. A non-transitory recording medium storing a robot control program that is executable by a computer to perform processing, the processing comprising: acquiring state information expressing a state of a robot that travels autonomously to a destination in a dynamic environment; and effecting control such that the robot moves to the destination on the basis of the state information and on the basis of the robot control model learned by the robot control model learning method of claim 4.
Priority Claims (1)
Number: 2019-205690  Date: Nov 2019  Country: JP  Kind: national
PCT Information
Filing Document: PCT/JP2020/039554  Filing Date: 10/21/2020  Country: WO