The present invention relates to methods for improving navigation of mobile robotic devices. More particularly, the invention relates to reducing the rate of collisions with obstacles over time.
Robotic devices are used in a variety of applications for carrying out tasks autonomously. One problem robotic devices face is avoiding collisions while driving through an environment. Different types of sensors are commonly used to identify obstacles and avoid collisions. A robotic surface coverage device may drive through an area faster when fewer collisions are incurred, and thus its job may be completed more efficiently. A need exists for a method to reduce the rate of collisions in mobile robotic devices.
The present invention discloses a method for a robotic device to autonomously reduce its collision rate over time. A robotic device selects and carries out actions and is assigned positive or negative rewards based on the results of carrying out those actions. Actions that result in collisions incur negative rewards, and actions that do not result in collisions incur positive rewards. Over time, after servicing an area a plurality of times using varied actions, the best method for servicing the area can be identified by comparing the total rewards earned during each work session. The robotic device can then develop a policy that attempts to maximize rewards at all times, so that the robotic device chooses the actions with the least likelihood of incurring negative rewards (and thus the least likelihood of resulting in collisions).
The present invention relates to a method for improving decision-making of a mobile robotic device over time so that collisions with obstacles may be reduced by defining a policy based on the outcomes of prior actions. Briefly, a mobile robotic device may service a work area using a number of movements, herein referred to as controls. Execution of a control results either in the device colliding with an obstacle or in the device not colliding with an obstacle. Whenever a collision occurs, a negative reward is assigned to the system. Whenever a control is completed without a collision, a positive reward is assigned to the system. The system may be configured to attempt to maximize rewards by selecting controls with a lower rate of producing collisions. Over time, the system may develop a policy to minimize collisions based on the total rewards earned during each work session.
A mobile robotic device is provided with a variety of controls from which it may select to navigate through its environment. A control may comprise an action or a series of actions. Controls are selected, in part, based on input from sensors. For example, a robotic device that has received input from a leftmost obstacle sensor may be configured to only select from controls that do not begin with a leftward or forward movement. However, controls may also be selected, in part, at random. A robotic device having received no input from sensors may select from controls without regard to the directionality of initial movements. After a control is selected, it is executed. If, during execution of the control, the device collides with an obstacle as detected by one or more touch sensors, a negative reward is assigned to the system. If the control is interrupted before completion, for example by a moving obstacle, but no collision occurs, a smaller negative reward is assigned to the system. If the control is completed without any collisions, a positive reward is assigned to the system. The robotic device repeats this process to select controls to move through its environment.
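By way of non-limiting illustration, a minimal sketch of this control-selection and reward-assignment loop is shown below. The control set, sensor labels, and reward magnitudes (−1 for a collision, −0.5 for an interruption, +1 for completion) are hypothetical choices made for the example and are not values specified by the invention.

```python
import random

# Hypothetical reward magnitudes: the method only requires that collisions
# earn negative rewards, interruptions smaller negative rewards, and
# completed controls positive rewards.
REWARD_COLLISION = -1.0
REWARD_INTERRUPTED = -0.5
REWARD_COMPLETED = +1.0

# Hypothetical controls, each comprising a series of movements.
CONTROLS = {
    "forward": ["forward"],
    "left_arc": ["left", "forward"],
    "right_arc": ["right", "forward"],
    "reverse_turn": ["reverse", "right"],
}

def eligible_controls(sensor_input):
    """Filter controls by sensor input, e.g. exclude controls that begin
    with a leftward or forward movement when the leftmost obstacle sensor
    has been triggered."""
    if sensor_input == "left_obstacle":
        return {name: seq for name, seq in CONTROLS.items()
                if seq[0] not in ("left", "forward")}
    return dict(CONTROLS)  # no sensor input: any control may be selected

def select_control(sensor_input):
    """Select a control, in part at random, from the eligible set."""
    return random.choice(list(eligible_controls(sensor_input)))

def reward_for_outcome(collided, interrupted):
    """Assign the reward after a control has been executed."""
    if collided:
        return REWARD_COLLISION
    if interrupted:
        return REWARD_INTERRUPTED
    return REWARD_COMPLETED
```

In such a sketch, the device would repeatedly call select_control, execute the returned movements, and accumulate the rewards returned by reward_for_outcome over the work session.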
Execution of each control results in the transition from a first state to a next state. The reward (R) of each state (s) may be represented by:
R(s) = R(t_s)γ^t
Where t is discrete time and γ is a discount factor.
The reward after the transition from state (s) to (s′) may be represented by:
R(s′) = R(t_s)γ^t + R(t_{s+1})γ^{t+1}
The cumulative rewards over the course of a work session are combined to determine the payoff of the arrangement of controls. The total reward for work in a session can be represented by:
R(t_0)γ^{t_0} + R(t_1)γ^{t_1} + R(t_2)γ^{t_2} + R(t_3)γ^{t_3} + … + R(t_n)γ^{t_n} = Total reward
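As a purely illustrative example (the discount factor and reward values here are assumptions chosen for illustration, not values prescribed by the invention), consider a session of three controls with γ = 0.9 in which the first two controls complete without collision (reward +1 each) and the third ends in a collision (reward −1). Taking t_0 = 0, t_1 = 1, and t_2 = 2, the total reward would be:
(+1)γ^0 + (+1)γ^1 + (−1)γ^2 = 1 + 0.9 − 0.81 = 1.09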
The system may be configured to attempt to maximize this value at all times, which may be represented by the formula:
max E[Σ_{t=0}^{∞} γ^t R(t)]
Where E is the expectation that R (the reward) is maximized.
Therefore, the value of state (s) when policy (π) is executed equals the expected sum of all future discounted rewards, provided that the initial state (s_0) is (s) and policy (π) is executed, as represented by the formula:
V^π(s) = E[Σ_{t=0}^{∞} γ^t R(s_t) | s_0 = s, π]
From the above, a value iteration may be concluded:
V(s) ← R(s) + max_a γ Σ_{s′} P(s, a, s′) V(s′)
Where:
max_a = the maximizing action
V(s′) = the value of the successor state
R(s) = the reward or cost to get to state s
P = the state transition function
R = the reward function
The above formula is found, after convergence, to satisfy Bellman's equation, represented by the formula:
V(s) = R(s) + max_a γ Σ_{s′} P(s, a, s′) V(s′)
The value of a given state thus depends on the values of the states reachable from it, discounted by γ, together with the reward or cost (penalty incurred) to reach that state. The system can then compare the values of the controls used in each session and determine which set of controls has the highest value. As the system completes more sessions, more data is gathered and values are assigned to each state; that is, a value is assigned to each set of controls used. Once values have been assigned to sets of controls, the system can calculate a policy to maximize rewards. The system develops a policy that defines the best set of controls yet discovered. This is represented by the formula:
π*(s) = argmax_a Σ_{s′} P(s, a, s′) V(s′)
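By way of illustration only, the following is a minimal tabular sketch of the value iteration and greedy policy extraction described above. The state names, transition probabilities, and rewards are hypothetical placeholders, and the sketch illustrates the general technique rather than the claimed device implementation.

```python
# Tabular value iteration over a toy model of states reached by controls.
# P[s][a] maps each possible successor state s' to P(s, a, s');
# R[s] is the reward or cost assigned on reaching state s.
GAMMA = 0.9

P = {
    "clear":    {"forward": {"clear": 0.8, "collided": 0.2},
                 "arc":     {"clear": 0.9, "collided": 0.1}},
    "collided": {"forward": {"clear": 1.0},
                 "arc":     {"clear": 1.0}},
}
R = {"clear": 1.0, "collided": -1.0}

def value_iteration(P, R, gamma, iterations=100):
    """Repeatedly apply V(s) <- R(s) + max_a gamma * sum_s' P(s,a,s') V(s')."""
    V = {s: 0.0 for s in P}
    for _ in range(iterations):
        V = {s: R[s] + max(
                 gamma * sum(prob * V[s2] for s2, prob in P[s][a].items())
                 for a in P[s])
             for s in P}
    return V

def greedy_policy(P, V, gamma):
    """Select, in each state, the control with the highest expected
    discounted value of its successor states."""
    return {s: max(P[s], key=lambda a: gamma * sum(prob * V[s2]
                                                   for s2, prob in P[s][a].items()))
            for s in P}

V = value_iteration(P, R, GAMMA)
policy = greedy_policy(P, V, GAMMA)
print(V, policy)
```

In this toy model the control with the lower likelihood of leading to the "collided" state receives the higher expected successor value, so greedy_policy selects it, mirroring the policy formula above.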
From the value iteration, the system may find policy 1, which is a better policy than policy 0, and then find policy 2, which is better than policy 1, and so on. The above formula therefore finds the best eventual policy.
P_a(s, s′) = Pr(s_{t+1} = s′ | s_t = s, a_t = a) is the probability that action a in state s at time t will lead to state s′ at time t+1,
R_a(s, s′) is the immediate reward received after the transition to state s′ from state s, and
γ ∈ [0, 1] is the discount factor.
A desirable outcome is to choose a policy (π) that will maximize the expected discounted sum of the rewards collected at any given state (s). The system uses the policy (π) to move through the environment in the best known manner.
In this method, S (state) refers to the state of the device after each control. A finite number of controls are possible, and thus there are a finite number of resulting states. A is the action or control selected, which takes the device from state S to state S′.
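The invention does not prescribe how the state transition function is obtained; purely as an illustration, the transition probabilities P_a(s, s′) could be estimated from the data gathered over work sessions by counting observed transitions, as in the hypothetical sketch below.

```python
from collections import defaultdict

# Counts of observed transitions (s, a) -> s', accumulated over work sessions.
transition_counts = defaultdict(lambda: defaultdict(int))

def record_transition(state, action, next_state):
    """Record one observed transition after a control has been executed."""
    transition_counts[(state, action)][next_state] += 1

def estimated_transition_probability(state, action, next_state):
    """Estimate P_a(s, s') = Pr(s_{t+1} = s' | s_t = s, a_t = a) from counts."""
    outcomes = transition_counts[(state, action)]
    total = sum(outcomes.values())
    if total == 0:
        return 0.0  # no data yet for this state-action pair
    return outcomes[next_state] / total

# Example: two executions of the same control from the same state.
record_transition("clear", "forward", "clear")
record_transition("clear", "forward", "collided")
print(estimated_transition_probability("clear", "forward", "collided"))  # 0.5
```

Such estimates could then serve as the state transition function P used in the value iteration above.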