The present disclosure relates to the visual debugging of robotic tasks.
Robots are electro-mechanical devices that are able to manipulate objects using a series of links, actuators, and end effectors. The links of a robot are typically interconnected via joints, each of which may be independently or interdependently driven by one or more joint actuators. Each joint represents an independent control variable or degree of freedom. End effectors are the particular devices used to perform a commanded work task sequence, such as grasping a work tool or stacking one component onto another.
Any modifications to an object handled by a robot in the execution of a given work task sequence typically require expensive retraining of the robot. This may be true even if the surfaces of the grasped object do not change in a subsequent work task sequence. Similarly, changes to the positioning or orientation of an object in a robot's work environment, whether resulting from error or from relaxed operating conditions, may require expensive retraining of the robot, either via reprogramming or via manual retraining by back-driving of the joints and task demonstration. However, existing control software is not easily retooled to meet such changing flexibility requirements.
A robotic system is disclosed herein. The robotic system includes a robot and a controller having a graphical user interface (GUI). The controller is configured, i.e., sufficiently equipped in hardware and programmed in software, to allow an operator/user of the robotic system to visually debug present and future actions of the robot, specifically via the manipulation of a set of displayed visual markers. The present approach is intended to facilitate user interaction with planned actions of the robot in a simulated environment.
In modern manufacturing, there is an ongoing drive to achieve flexible assembly lines that are able to produce new or more varied products with a minimum amount of downtime. The robotic system of the present invention addresses this need in part via intuitive graphical action planning functionality. The visual markers used in the robotic system in all of its disclosed embodiments enable users to visually examine the accuracy of planned future tasks or actions of the robot, to avoid conflicting actions by observing planned end effector trajectories for those actions in advance of the actions, and to adjust the planned actions by changing the visual markers in real time via the GUI.
The controller described combines an action planning module with a simulation module to provide various competitive advantages. For instance, all possible action trajectories and future robot and object positions and orientations may be depicted via the visual markers in a simulated environment viewable via a display screen of the GUI. By integrating the action planning and simulation modules, the controller allows for visualization of all currently planned actions and also facilitates necessary control adjustments via the GUI. This in turn enables a user to change the robot's behavior in real time. That is, a user can quickly discern all possible future actions of the robot and quickly ascertain whether the action planning module has chosen a desirable solution. Such an approach may facilitate the execution of multi-step work task sequences such as object stacking, as well as more complex work tasks in a constantly changing work environment.
In an example embodiment, the robotic system may include a robot that is responsive to input commands, sensors which measure a set of status information, and a controller. The status information may include a position and orientation of the robot and an object located within a workspace of the robot. The controller includes a processor and memory on which is recorded instructions for visually debugging an operation of the robot. The controller includes a simulator module, an action planning module, a marker generator module, and a GUI.
The simulator module receives the status information from the sensors and outputs visual markers as a graphical depiction of the object and robot in the workspace. The action planning module selects future actions of the robot. The marker generator module outputs marker commands to the simulator module. The GUI, which includes a display screen, receives and displays the visual markers and the selected future action, and also receives the input commands. Via the GUI and the action planning module, the position and/or orientation of the visual markers can be modified in real time by a user to change the operation of the robot.
A method is also disclosed for visually debugging the robot. The method includes receiving the set of status information from the sensors via a simulator module, transmitting a plurality of marker commands to the simulator module via a marker generator module, and generating visual markers in response to the marker commands, via the simulator module, as graphical depictions of the object and the robot in the workspace. The method also includes selecting a future action of the robot via an action planning module, displaying the visual markers and the selected future action on a display screen of a GUI, and modifying, via the action planning module, at least one of the position and orientation of the visual markers in real time in response to input signals to thereby change the operation of the robot.
The above features and advantages and other features and advantages of the present invention are readily apparent from the following detailed description of the best modes for carrying out the invention when taken in connection with the accompanying drawings.
With reference to the drawings, wherein like reference numbers refer to the same or similar components throughout the several views, an example robotic system 10 is shown schematically.
As is known in the art, conventional end effectors are designed to operate in a highly structured work environment with a minimum amount of variability. End effectors are often constrained to move via rigidly defined trajectories; approach and departure trajectories, for instance, may be programmed for each new robotic task. Likewise, industrial robots are often programmed with a fixed set of desired movements, and thus future action planning is not used in such systems. In addition, conventional robots tend to rely on objects being placed in their work environment in a consistent and highly predictable manner. Such constraints render conventional robot control approaches relatively inflexible and difficult to modify in real time.
Even robotic systems that incorporate sensory feedback for autonomous trajectory planning require significant programmer interaction to properly identify the robotic task, adjust the required movement parameters, set the required manipulator grasp positions, and adjust task trajectories in critical locations. The present approach is intended to reduce error and development time for robotic systems requiring a complex action planning system, for instance by simplifying user interactions.
The robot 12 shown in the drawings may include a base 14 and a plurality of arm segments 18 that terminate in an end effector 20 suitable for grasping and manipulating an object 23.
Robotic joints 17 connect the various arm segments 18. Each robotic joint 17 may be driven by a joint actuator 19, such as a motor, so as to move the end effector 20 during execution of a commanded work task sequence (arrow 88). Raw sensor data (arrow 15) describing current robot performance values are relayed to the controller 50 and used by the controller 50 to actively monitor and visually debug current and future actions of the robot 12 in real time. The raw sensor data (arrow 15) may describe performance and state values of the robot 12. Example elements of the raw sensor data (arrow 15) may include measured or commanded torque of the joint actuators 19, a clamping force applied to the object 23 by the end effector 20, a speed and/or acceleration of the end effector 20 and/or any of its joint actuators 19, etc.
To collect such data, a sensor array 33 of one or more sensors may be connected to or positioned with respect to the robot 12, such as to the base 14. The sensor array 33 may include force, pressure, and/or temperature sensors, torque sensors, accelerometers, position sensors, and the like. The sensor array 33 may also include so-called “soft” sensors, e.g., software that indirectly calculates values from directly measured values, as is well understood in the art. Additionally, an environmental sensor 25 such as a vision system may be positioned with respect to the robot 12 and configured to film, video tape, image, and/or otherwise record anything in its field of view (arrow V), e.g., the behavior of the robot 12 in its operating environment or work space, as environmental information (arrow 26), which is transmitted to the controller 50.
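By way of a non-limiting illustration, the measured values described above might be collected into simple data structures before being passed to the controller 50; the field names below are hypothetical and are provided only as a sketch of the kind of status information involved.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class RawSensorData:
    """Hypothetical container for the raw sensor data (arrow 15)."""
    joint_torques_nm: List[float]        # measured or commanded torque of each joint actuator 19
    clamp_force_n: float                 # clamping force applied to the object 23 by the end effector 20
    end_effector_speed_mps: float        # speed of the end effector 20
    end_effector_accel_mps2: float       # acceleration of the end effector 20 or its joint actuators 19

@dataclass
class EnvironmentalInfo:
    """Hypothetical container for the environmental information (arrow 26)."""
    # object identifier -> ((x, y, z) position, (roll, pitch, yaw) orientation)
    object_poses: Dict[str, Tuple[Tuple[float, float, float],
                                  Tuple[float, float, float]]] = field(default_factory=dict)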
The functionality of the controller 50 in visually debugging present and future actions of the robot 12 is described in detail below.
The controller 50 includes a processor (P) and memory on which is recorded instructions for visually debugging an operation of the robot 12, including instructions embodying the present method 100.
That is, the robot 12 may be taught its required tasks at least in part via human demonstration rather than solely via conventional task-specific programming.
Therefore, as a prerequisite to executing the present method 100, the robot 12 may be taught all required grasp positions and approach/departure directions while learning how to grasp an object, for instance the object 23. This training information is attached to any markers assigned by the controller 50 at runtime to any perceptual features detected via the environmental sensor 25 or other sensors in the environment in which the robot 12 operates. Thus, the robot 12 may first learn, and the controller 50 may first record, any required markers via human demonstration, and thereafter the controller 50 can dynamically assign the learned markers to any detected perceptual features in the work environment of the robot 12. This functionality allows for rapid adaptation to a changing environment while still allowing the robot 12 to complete multi-step assembly processes.
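A minimal sketch of this runtime marker assignment is shown below, assuming a hypothetical match_score function that compares the training information attached to a learned marker against a detected perceptual feature; the names are illustrative only and do not correspond to any reference numerals used herein.

from typing import Callable, Dict, List, Optional

def assign_learned_markers(
    learned_markers: List[str],
    detected_features: List[dict],
    match_score: Callable[[str, dict], float],
    threshold: float = 0.5,
) -> Dict[str, Optional[dict]]:
    """Bind markers learned from human demonstration to perceptual features
    detected at runtime (illustrative sketch only)."""
    assignments: Dict[str, Optional[dict]] = {}
    for marker in learned_markers:
        # Choose the detected feature that best matches the training
        # information attached to this marker, if any candidate is good enough.
        best = max(detected_features, key=lambda f: match_score(marker, f), default=None)
        if best is not None and match_score(marker, best) >= threshold:
            assignments[marker] = best
        else:
            assignments[marker] = None  # marker left unassigned on this cycle
    return assignments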
An example of task learning, whether commanded via a "teach pendant" in a human-demonstrated task or otherwise, is a simple grasp-and-pick-up task in which the robot 12 grasps the object 23 via the end effector 20 and lifts the object 23 from a work surface.
With respect to robotic skills, behavioral imitation of a demonstrated work task is based on recognizing and repeating known robotic skills such as grasping the object 23, dropping the object 23, etc. Each skill in the repertoire of the robot 12 may be defined in part by a motor schema 28, a cost estimation function, and a set of recognized end-states, as described below.
Regarding cost estimation, an example cost estimation function taking the arguments (Ma, E, Wt) returns the cost of assigning a given marker Ma to an object, given the set of all recognized end-states E and the current state Wt of the robot's environment, where Wt is defined as:
Wt = {P(t), J(t), sensors(t)}
where P(t) is the set of all objects visually identified and localized in time step t, J(t) is the most recent joint angle configuration of the robot 12, and sensors(t) is the set of data returned by all other available sensors used in conjunction with the robot 12. The motor schema 28, in turn, may encode the commanded motion used by the robot 12 to execute a given skill.
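The world state Wt and the cost estimation interface described above might be sketched as follows; the class and function names are assumptions for illustration, and the example cost shown merely demonstrates the (Ma, E, Wt) signature rather than any particular costing policy.

from dataclasses import dataclass
from typing import Callable, Dict, List, Set

@dataclass
class WorldState:
    """Wt = {P(t), J(t), sensors(t)} for time step t."""
    P: Dict[str, tuple]        # objects visually identified and localized at time t
    J: List[float]             # most recent joint angle configuration of the robot 12
    sensors: Dict[str, float]  # data returned by all other available sensors

# A cost estimation function returns the cost of assigning a given marker Ma
# to an object, given the recognized end-states E and the current state Wt.
CostFn = Callable[[str, Set[str], WorldState], float]

def example_cost(Ma: str, E: Set[str], Wt: WorldState) -> float:
    # Purely illustrative policy: a nominal cost when the marker's target
    # object is currently visible, and a large penalty otherwise.
    return 1.0 if Ma in Wt.P else 10.0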
Referring now to the controller 50 in more detail, the controller 50 includes a set of logic elements 54, including logic modules 60, 70, and 80, which operate in conjunction with the GUI 52.
The logic module 60 is referred to hereinafter as the Marker Generator Module (MGEN) 60. Logic module 70 is a simulation module that is referred to hereinafter as the Simulator Module (SIM) 70. The logic module 80 is an action planning module that is referred to hereinafter as the Action Planner Module (AP) 80. The present control scheme may be implemented, for example, on the MICROSOFT Robotics Developer Studio (MRDS) or other suitable software. The logic elements 54 are described in turn below.
Visual perception in the controller 50 is provided via the environmental information (arrow 26) received from the environmental sensor 25, which allows objects such as the object 23 to be visually identified and localized within the workspace of the robot 12.
The Marker Generator Module 60 outputs marker commands to the Simulator Module 70, which in turn generates the visual markers (arrow 62) as graphical depictions of the object 23 and the robot 12 in the workspace.
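One possible, purely illustrative representation of such marker commands and of their handling by the Simulator Module 70 is sketched below; the fields and method names are assumptions and are not drawn from any particular simulation software.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MarkerCommand:
    """Hypothetical marker command sent to the Simulator Module 70."""
    marker_id: str
    kind: str                                # e.g., "trajectory", "objective", "object"
    position: Tuple[float, float, float]
    orientation: Tuple[float, float, float]  # roll, pitch, yaw

class SimulatorModuleSketch:
    """Minimal sketch: holds the visual markers for display via the GUI 52."""
    def __init__(self) -> None:
        self.markers: List[MarkerCommand] = []

    def apply(self, commands: List[MarkerCommand]) -> None:
        # Replace the currently displayed set of markers with the commanded set.
        self.markers = list(commands)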
The most fundamental use of the visual markers (arrow 62) is to allow a user to visually examine the accuracy of the planned actions of the robot 12, for instance by observing a planned end effector trajectory and the planned future positions of the object 23 in advance of execution of those actions.
The Simulator Module 70 receives the set of status information from the sensor array 33 and the environmental sensor 25 and maintains a simulated world model of the robot 12 and any objects in its workspace, such as the object 23, outputting the visual markers (arrow 62) for display via the GUI 52.
The GUI 52 is operable to depict a 3D simulated world of the robot 12 from any user selected viewpoint and also enables the user to adjust actions of the robot 12 by changing the visual markers through interaction with the GUI 52. Four possible kinds of usage are trajectory modification, target modification, future position modification, and marker debugging.
In trajectory modification, the GUI 52 allows the user to adjust a planned trajectory of the end effector 20 by changing the visual markers that depict that trajectory, before the corresponding action is executed by the robot 12.
For target modification, the GUI 52 allows the user to change the target of a planned action, for instance by reassigning a visual marker from one object to another object in the workspace.
Future position modification is also possible via the GUI 52 to enable the user to change the future position that an object would be in after manipulation by the robot 12. This can be accomplished by changing either the location or the orientation of one or more Objective Markers (O1, O2) displayed via the GUI 52.
For marker debugging, the GUI 52 allows the user to inspect the displayed visual markers (arrow 62) and to correct any marker that has been assigned in error, for instance a marker assigned to the wrong detected perceptual feature.
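As a rough, hypothetical illustration of the four usages above, edits made to a marker through the GUI 52 might be routed to the action planner according to the kind of marker that was changed; the planner methods named below are assumptions, not part of the present disclosure.

def handle_marker_edit(kind: str, marker_id: str, new_pose, planner) -> None:
    """Hypothetical GUI callback routing a user's marker edit to the planner.

    kind mirrors the four usages described above; planner is assumed to
    expose corresponding update methods (these names are illustrative).
    """
    if kind == "trajectory":
        planner.update_trajectory(marker_id, new_pose)   # reshape a planned end effector path
    elif kind == "target":
        planner.update_target(marker_id, new_pose)       # change which object is acted upon
    elif kind == "future_position":
        planner.update_goal_pose(marker_id, new_pose)    # change where the object should end up
    elif kind == "debug":
        planner.reassign_marker(marker_id, new_pose)     # correct a mis-assigned marker
    else:
        raise ValueError(f"unknown marker edit kind: {kind}")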
The Action Planner Module 80 selects the future actions of the robot 12 and may include an optional State Predictor 82 and an Action Selector 84, each of which is described in turn below.
Use of the State Predictor 82 is optional as noted above. Without the State Predictor 82, the Action Planner Module 80 can still function, e.g., by using a greedy algorithm that picks a next available action having the lowest present cost. An example cost function is described above. The Marker Generator Module 60 in this instance would still be able to generate all the markers (arrow 62) except for any Object Markers representing future object positions in a future state. However, if the State Predictor 82 is implemented, the Action Planner Module 80 will be able to choose the action that leads to the lowest cost after several steps of action.
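A greedy selection of the kind just described might be sketched as follows, assuming the present cost of each currently available action can be evaluated, e.g., via a cost estimation function of the kind noted above.

def greedy_next_action(available_actions, present_cost):
    """Pick the next available action having the lowest present cost.

    available_actions: iterable of candidate actions.
    present_cost: callable returning the cost of an action in the current state.
    """
    return min(available_actions, key=present_cost)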
The State Predictor 82 may generate all possible future world states to a certain depth via a state tree, as is known in the art. Example state-generating steps of the optional State Predictor 82 may proceed as follows. In a first step, the current state of the Simulator Module 70, from the information (arrow 78), is used as the root of the state tree. Then, all valid actions for each leaf of the state tree are found via the controller 50, which generates all new world states that are changed by each action of the robot 12 and adds them as children to the corresponding leaf. This is repeated until the state tree has reached a calibrated depth. Whenever a new world state is generated, all information in the simulated world is cloned and then changed as needed according to how the action is defined.
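The state-tree expansion described above might be sketched as follows; the StateNode class and the valid_actions and apply_action callables are assumptions used only to illustrate expansion to a calibrated depth.

from dataclasses import dataclass, field
from typing import List

@dataclass
class StateNode:
    state: object                  # a cloned simulated world state
    action: object = None          # the action that produced this state
    children: List["StateNode"] = field(default_factory=list)

def build_state_tree(root_state, valid_actions, apply_action, depth: int) -> StateNode:
    """Expand every valid action from each leaf until a calibrated depth is reached.

    valid_actions(state) returns the actions available in a state, and
    apply_action(state, action) returns a cloned copy of the state, changed
    according to how the action is defined.
    """
    root = StateNode(state=root_state)
    frontier = [root]
    for _ in range(depth):
        next_frontier = []
        for leaf in frontier:
            for action in valid_actions(leaf.state):
                child = StateNode(state=apply_action(leaf.state, action), action=action)
                leaf.children.append(child)
                next_frontier.append(child)
        frontier = next_frontier
    return root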
The Action Selector 84 of the Action Planner Module 80 finds the next action to execute, i.e., the action having the lowest cost, based on the current simulated world from the Simulator Module 70 and on the future world states (arrow 83) generated by the State Predictor 82, if used. When the State Predictor 82 is not used, the Action Planner Module 80 selects the next action (arrow 86) for the robot 12 that has the lowest transition cost plus action cost. The transition cost as used herein is the cost for the robot 12 to move from its current state into the state needed to begin the next action, while the action cost is the cost of executing that action.
The Action Selector 84 may begin a search on the state tree generated by the State Predictor 82. When a node in the state tree is reached, the execution cost of each child node is calculated. The execution cost is defined as the sum of a transition cost, an action cost, and the node cost of the child node, where the transition cost and action cost are the costs associated with the action that leads to that child node, and where the node cost of a given node is the minimum execution cost among all of that node's children. The controller 50 sets the node cost of the node being evaluated to this minimum execution cost, and the action that leads to the child having the minimum execution cost is recorded as that node's lowest-cost action. These steps are repeated recursively until completed for all of the nodes in the state tree, after which the lowest-cost action of the root node of the state tree is the action selected.
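The recursive cost calculation described above might be sketched as follows, reusing the StateNode shape from the previous sketch; the transition_cost and action_cost callables are assumed to return the costs associated with the action leading to a child node, and a leaf node is assumed to have a node cost of zero.

def node_cost(node, transition_cost, action_cost) -> float:
    """Minimum execution cost among a node's children (0.0 for a leaf node)."""
    if not node.children:
        return 0.0
    return min(
        transition_cost(child.action) + action_cost(child.action)
        + node_cost(child, transition_cost, action_cost)
        for child in node.children
    )

def select_lowest_cost_action(root, transition_cost, action_cost):
    """Return the root-level action whose child node has the minimum execution cost."""
    best_child = min(
        root.children,
        key=lambda c: transition_cost(c.action) + action_cost(c.action)
        + node_cost(c, transition_cost, action_cost),
    )
    return best_child.action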
After the lowest-cost action is selected, the Action Selector 84 of the Action Planner Module 80 sends an action command to the robot 12, e.g., as the work task sequence (arrow 88). The Action Selector 84 can also modify or adjust a selected action based on changes made to the marker model in the Simulator Module 70 through the GUI 52. Since objects in the environment can change at any time, the above steps may need to be repeated during execution so that a revised action can be selected corresponding to the new state.
Referring to the example method 100 noted above, the method 100 begins with receipt by the controller 50 of a request to perform a commanded work task, for instance a task involving the object 23.
At step 104, the controller 50 processes (PROC) the received request via the processor (P). The motor schema 28 described above may be referenced during this step in determining how the robot 12 is to carry out the requested task.
Step 106 entails generating the visual markers (arrow 62) via the Marker Generator Module 60 and the Simulator Module 70, with the resulting markers displayed on the display screen of the GUI 52.
At step 108, a user of the robotic system 10 reviews the displayed visual markers (arrow 62) and the selected future action via the GUI 52 to determine whether the planned action is acceptable.
Step 110 includes performing the commanded work task (PT) via transmission of the work task sequence (arrow 88) to the robot 12.
Step 112 entails entering the input commands (arrow 53) into the GUI 52 to request a corrective action (CA). Step 112 may entail changing any or all of the visual markers (arrow 62) to change a future action sequence. Modification of the markers (arrow 62) may trigger a revision of any programming code needed to effect such a changed result. That is, similar to training the robot 12 how to move through a particular sequence via back-driving or other manual training techniques, step 112 could include changing the markers to command a virtual back-driving of the robot 12. For instance, changing the Objective Markers (O1, O2) may command the robot 12 to place the object 23 in a different future position or orientation.
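Gathering steps 104 through 112 together, the overall flow of method 100 might resemble the following sketch; the looping behavior and the acceptability check at step 108 are assumptions layered onto the steps described above, and the controller and GUI methods are hypothetical.

def run_method_100(controller, gui):
    """Illustrative flow only; the loop and the acceptability check are assumptions."""
    request = controller.receive_request()            # receive a commanded work task
    while True:
        plan = controller.process(request)            # step 104: process the request
        markers = controller.generate_markers(plan)   # step 106: generate visual markers
        gui.display(markers, plan)                    # display markers and the selected action
        if gui.user_accepts(plan):                    # step 108: user visually debugs the plan
            controller.execute(plan)                  # step 110: transmit work task sequence
            break
        edits = gui.collect_marker_edits()            # step 112: request corrective action
        controller.apply_corrections(edits)           # revise the plan before re-display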
Use of the robotic system 10 described hereinabove is intended to reduce error and development time for robotic systems requiring complex action planning, while enabling an operator to visually debug present and future actions of the robot 12 in real time via the GUI 52.
While the best modes for carrying out the invention have been described in detail, those familiar with the art to which this invention relates will recognize various alternative designs and embodiments for practicing the invention within the scope of the appended claims.