The present application generally relates to modeling of an industrial scene by a robot, and more particularly, but not exclusively, to constructing a 3D model of an industrial scene with scans taken with a vision sensor associated with a robot.
As the field of robotics continues to advance, an increasing amount of focus is placed on the development of technologies that permit a robot to determine collision free pathways and the location of a work piece or other objects in real time. Randomly placed objects within a robot work area or industrial scene can cause interference with certain movements of the robot and prevent work tasks from being completed. Some existing systems have various shortcomings relative to certain applications. Accordingly, there remains a need for further contributions in this area of technology.
One embodiment of the present application is a unique system and method for generating a real time 3D model of a robot work area or industrial scene. Other embodiments include apparatuses, systems, devices, hardware, methods, and combinations for generating a collision free path for robot operation in an industrial scene. Further embodiments, forms, features, aspects, benefits, and advantages of the present application shall become apparent from the description and figures provided herewith.
For the purposes of promoting an understanding of the principles of the application, reference will now be made to the embodiments illustrated in the drawings and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the application is thereby intended. Any alterations and further modifications in the described embodiments, and any further applications of the principles of the application as described herein are contemplated as would normally occur to one skilled in the art to which the application relates.
As the field of robotics continues to advance, an increasing amount of focus is placed on the development of technologies that permit more tightly coupled human-robot interaction (HRI). Application of HRI technologies facilitates robot understanding of information about its surrounding environment and permits an operator to understand or receive feedback about the level of understanding that the robot has attained. Before interaction occurs between an operator and a robot, an initial understanding of the working environment or industrial scene can be obtained. As the robot's understanding of the scene increases, the level of human interaction can decrease (i.e., an operator does not have to program all information into the robot prior to operation, which minimizes setup time).
The robot system disclosed herein provides control methods to retrieve valid information from an industrial scene in a form that can be recognized by the robot and stored in its memory. The control methods enable the robot to obtain information about and an understanding of elements or objects in the scene while simultaneously optimizing robot motion to perform a task within the industrial scene. The resulting robot path is reactive to variation in the scene and helps the robot to understand the boundaries of its surroundings. The ability of the robot to autonomously retrieve information from the scene facilitates detection of objects, constructive robot motion generation, reduction of the time required for the overall discovery process, and minimization of human involvement in setup and robot programming. Industrial robots can use teach pendants and joysticks for “jogging a robot” and teaching robot points; however, this can be cumbersome, time consuming and dangerous if the operator is in close proximity to the robot while the robot moves through the industrial scene. 3D vision and implicit programming can be used to improve robot path programming by generating a 3D model of an unknown scene around a robot, but it still takes time to teach the robot to hold a 3D sensor and scan the industrial scene. It can also be difficult to manually generate a scanning path that collects sufficient data from the scene and the object without a robot collision and/or reachability problems.
The present disclosure includes methods and systems for automatically generating a complete 3D model of an industrial scene for a robotic application, which reduces engineering time and cost relative to manually programming the robot. The methods and systems automatically generate a scanning path to collect sufficient data about the scene and the object location without causing a collision or a reachability problem. The robot scanning system operably connects the robot path planning algorithm to the 3D object recognition algorithm so as to provide control input to the robot to perform a task on an object within the industrial scene.
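By way of non-limiting illustration only, the sketch below shows one way the scanning, 3D scene modeling, object recognition and collision free path planning components might be wired together so that each new scan updates the model that both the recognizer and the planner consume. All class, method and field names are hypothetical placeholders and are not taken from the disclosed system.

```python
# Minimal sketch (hypothetical interfaces, not the disclosed implementation) of a
# scanning system that ties a collision free path planner to a 3D object
# recognizer through a shared scene model.
from dataclasses import dataclass
from typing import List, Optional, Protocol, Tuple


@dataclass
class ScanFrame:
    """One depth/point-cloud frame tagged with the sensor pose and a timestamp."""
    points: List[Tuple[float, float, float]]  # (x, y, z) points in the robot base frame
    sensor_pose: Tuple[float, ...]            # pose of the 3D sensor when the frame was taken
    timestamp: float


class SceneModel(Protocol):
    def fuse(self, frame: ScanFrame) -> None: ...  # merge a new frame into the 3D model
    def coverage(self) -> float: ...               # fraction of the scene observed so far


class ObjectRecognizer(Protocol):
    def detect(self, scene: SceneModel) -> Optional[str]: ...  # label of a recognized object, if any


class PathPlanner(Protocol):
    def collision_free_path(self, scene: SceneModel, goal) -> List[Tuple[float, ...]]: ...


@dataclass
class ScanningSystem:
    """Each new scan updates the scene model; the updated model feeds both object
    recognition and collision free path planning for the next robot move."""
    scene: SceneModel
    recognizer: ObjectRecognizer
    planner: PathPlanner

    def step(self, frame: ScanFrame, goal):
        self.scene.fuse(frame)                      # update the 3D model of the scene
        label = self.recognizer.detect(self.scene)  # attempt to recognize the object
        path = self.planner.collision_free_path(self.scene, goal)
        return label, path
```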
Referring now to
The robot 12 may include a movable base 20 and a plurality of movable portions connected thereto. The movable portions may translate or rotate in any desired direction. By way of example and not limitation, movable portions illustrated by arrows 18, 26, 28, 30, 32 and 34 may be employed by the exemplary robot 12. A bin 40 for holding workpieces or other objects 42 to be retrieved and/or worked on by the robot 12 may constitute at least a part of the exemplary industrial scene. An end effector 24 such as a gripping or grasping mechanism or other working tool can be attached to the movable arm 16a and used to grasp an object 42 and/or perform other work tasks on the object 42 as desired. It should be understood that the term “bin” is exemplary in nature and as used herein means, without limitation, any container, carton, box, tray or other structure that can receive and/or hold workpieces, parts or other objects. Additional components 44 can be associated with the vision system 36. These components 44 can include lighting systems, reflector(s), refractor(s) and beam expander(s) or the like.
Referring now to
Teaching and training a robot to autonomously discover and understand an industrial scene and to perform robot work tasks such as extracting randomly arranged objects from a bin is a complex task. Given a 2D RGB (red, green, blue) sensor image, the robot 12 can be trained to identify the picking and dropping bins 60a, 64a and the obstacles 62a, 62b within the scene 50. In addition, a 2D image can be combined with a calibrated 3D camera sensor to generate accurate spatial locations of certain recognized objects. In some aspects, a full 3D point cloud based scene reconstruction can be time consuming; however, portions of the 3D point cloud data can be used to reconstruct a virtual 3D scene in a virtual environment that can be used to train the robot in real time. The method for determining such robot commands in an autonomous fashion is described in more detail below.
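As a non-limiting illustration of combining a 2D detection with calibrated 3D camera data, the sketch below back-projects the center of a 2D bounding box through an assumed pinhole camera model into the robot base frame. The intrinsics, bounding box format and base-to-camera transform are assumptions made for the example and are not specified by the disclosure.

```python
# Sketch (assumed pinhole-camera model, not taken from the disclosure) of combining
# a 2D detection with a calibrated depth image to estimate a 3D object location.
import numpy as np


def pixel_to_camera_point(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) at depth `depth_m` (metres) into camera coordinates."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])


def detection_to_robot_frame(bbox, depth_image, intrinsics, T_base_cam):
    """Estimate the 3D position (robot base frame) of a 2D detection.

    bbox        : (u_min, v_min, u_max, v_max) from the 2D recognizer
    depth_image : HxW array of depths in metres from the calibrated 3D camera
    intrinsics  : (fx, fy, cx, cy) of the depth camera
    T_base_cam  : 4x4 transform from camera frame to robot base frame
    """
    u = (bbox[0] + bbox[2]) // 2                 # centre pixel of the detection
    v = (bbox[1] + bbox[3]) // 2
    patch = depth_image[bbox[1]:bbox[3], bbox[0]:bbox[2]]
    depth = float(np.median(patch[patch > 0]))   # robust depth over the detection window
    p_cam = pixel_to_camera_point(u, v, depth, *intrinsics)
    p_hom = np.append(p_cam, 1.0)                # homogeneous coordinates
    return (T_base_cam @ p_hom)[:3]              # position in the robot base frame
```

A median over the detection window is used so that a few missing or noisy depth pixels do not skew the estimated location.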
The control method for defining an automated robot path starts from an initial viewpoint of the robot 12. The control method directs the robot 12 to pan the 3D sensor, which in one exemplary form can be mounted on the tip of the robot gripper 24. Next, an initial 3D model of the scene around the robot 12 is reconstructed from the 3D sensor data and robot 12 movement information (3D sensor pose, time-stamp, etc.). The initial 3D model of the scene is analyzed for parameters such as occupancy, missing information, occluded objects, unknown areas, etc. Scene reconstruction is continuously updated during robot 12 operation. Computer processing for the scene reconstruction can be performed independently of and in parallel with robot 12 work tasks such as picking or dropping. A new scan angle may be generated only if the system cannot recognize an object 42 based on the previously gathered information. New scans and robot 12 move path calculations may or may not be performed solely to complete the scene reconstruction process. The new scan angles are generated based on prior knowledge from the scene understanding. Based on the observed data, a posterior predictive distribution can be calculated, and a maximum likelihood estimate of the probability can be used to determine a new view angle that would yield the most useful information to further the understanding of the robot 12.
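One simple way to score candidate scan angles against an occupancy-style scene model is sketched below: reachable candidate views are ranked by the entropy of the cells they are expected to observe, standing in as a stand-in for the posterior-predictive computation described above. The grid, candidate set, visibility test and reachability check are placeholders supplied by the caller, not elements of the disclosed method.

```python
# Sketch of scoring candidate scan angles, assuming an occupancy-grid scene model
# in which each cell stores P(occupied); the candidate set, visibility test and
# reachability check are hypothetical placeholders, not the claimed method.
import math


def cell_entropy(p_occupied: float) -> float:
    """Shannon entropy of a single cell; unknown cells (p near 0.5) score highest."""
    if p_occupied <= 0.0 or p_occupied >= 1.0:
        return 0.0
    return -(p_occupied * math.log2(p_occupied)
             + (1 - p_occupied) * math.log2(1 - p_occupied))


def expected_information_gain(view, grid, visible_cells) -> float:
    """Sum the entropy of the cells a candidate view is predicted to observe."""
    return sum(cell_entropy(grid[c]) for c in visible_cells(view, grid))


def next_scan_angle(candidate_views, grid, visible_cells, reachable):
    """Pick the reachable candidate view expected to reveal the most unknown space."""
    feasible = [v for v in candidate_views if reachable(v)]
    if not feasible:
        return None
    return max(feasible, key=lambda v: expected_information_gain(v, grid, visible_cells))
```

A view that mostly re-observes well-understood cells scores near zero, while a view onto largely unknown space scores highest, so the robot is steered toward the regions of the scene it understands least.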
In one aspect, the present disclosure includes a method comprising: determining a predefined collision free robot path; moving a robot along the predefined robot path; scanning an industrial scene with a scanning sensor positioned on the robot while moving along the predefined robot path; storing scanned images of the industrial scene in a memory; constructing a 3D model of the industrial scene based on the images stored in the memory; planning a next collision free robot path based on the 3D model; moving the robot along the next collision free robot path; scanning the industrial scene with the scanning sensor positioned on the robot while moving along the next robot path; storing new scanned images in the memory; and reconstructing the 3D model of the industrial scene based on the new scanned images.
In refining aspects, the method further comprises repeating the planning, scanning, storing and reconstructing steps until a completed 3D industrial scene model is constructed; further comprising performing a work task with the robot after completing the 3D industrial scene model; wherein the scanning sensor is a 3D camera; further comprising moving the scanning sensor relative to the robot when determining the predefined collision free path; wherein the moving of the scanning sensor includes panning, tilting, rotating and translating movement relative to the robot; wherein the moving of the scanning sensor includes moving an arm of the robot while a base of the robot remains stationary; further comprising planning the collision free path with a controller having a collision free motion planning algorithm; and wherein the planning of the collision free path occurs in real time without offline computer analysis.
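The scan, store, reconstruct and re-plan cycle recited in the two preceding paragraphs might be organized as in the following sketch; the robot, sensor, scene and planner interfaces (move_to, capture, store, fuse, coverage, least_observed_region, collision_free_path) are hypothetical placeholders rather than the claimed implementation, and the coverage target is an assumed parameter.

```python
# Sketch (hypothetical interfaces) of the scan/plan/move/reconstruct cycle:
# scan while moving along a collision free path, fuse the stored images into the
# 3D model, then plan the next collision free path against the updated model and
# repeat until the 3D industrial scene model is complete.
def build_scene_model(robot, sensor, scene, planner,
                      predefined_path, coverage_target=0.95, max_passes=20):
    path = predefined_path                      # start from the predefined collision free path
    for _ in range(max_passes):
        for waypoint in path:
            robot.move_to(waypoint)             # move along the collision free path
            frame = sensor.capture()            # scan the industrial scene while moving
            scene.store(frame)                  # store the scanned image in memory
            scene.fuse(frame)                   # (re)construct the 3D model from stored images
        if scene.coverage() >= coverage_target:
            break                               # completed 3D industrial scene model
        # plan the next collision free path against the updated 3D model,
        # aimed at the least-observed region of the scene
        path = planner.collision_free_path(scene, goal=scene.least_observed_region())
    return scene
```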
Another aspect of the present disclosure includes a method comprising: determining a predefined collision free robot path; scanning an industrial scene along the predefined collision free robot path with a scanning sensor positioned on a robot; storing scanned images of the industrial scene in a memory; constructing a 3D model of the industrial scene based on the images stored in memory; detecting an object within the 3D model of the industrial scene; moving the robot along the collision free robot path to generate a next scanning viewpoint of the detected object; scanning the industrial scene to obtain new scanned images of the object; storing the new scanned images in the memory; and repeating the moving and scanning steps until the object is identified to a threshold certainty level.
In refining aspects of the present disclosure, the scanning includes capturing images with a 3D camera; further comprising performing a robot task on the object after the object has been identified to the threshold certainty level; wherein the robot task includes grasping the object; further comprising panning, tilting and rotating the scanning sensor to capture images from different vantage points to generate a new 3D model of the industrial scene; further comprising planning a next scanning path prior to generating the new 3D model of the industrial scene; wherein the planning includes analyzing the new 3D model with a controller having a collision free motion planning algorithm; and further comprising determining the next scanning path based on results from the collision free motion planning algorithm.
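A sketch of the view-gathering loop recited above is given below; the recognizer, planner and scene interfaces are hypothetical placeholders, and the certainty threshold and view limit are assumed parameters rather than values taken from the disclosure.

```python
# Sketch (hypothetical interfaces) of the view-gathering loop: keep moving to new
# collision free viewpoints of a detected object and rescanning until the object
# is identified to a threshold certainty level.
def identify_object(robot, sensor, scene, recognizer, planner,
                    detection, certainty_threshold=0.9, max_views=10):
    for _ in range(max_views):
        label, certainty = recognizer.match(scene, detection)
        if certainty >= certainty_threshold:
            return label                        # identified to the threshold certainty level
        # generate the next scanning viewpoint of the detected object and move there
        viewpoint = planner.next_viewpoint(scene, detection)
        for waypoint in planner.collision_free_path(scene, viewpoint):
            robot.move_to(waypoint)
        frame = sensor.capture()                # new scanned image of the object
        scene.store(frame)                      # store the new scanned image in memory
        scene.fuse(frame)                       # update the 3D model
    return None                                 # not identified within max_views
```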
Another aspect of the present disclosure includes a method comprising: determining a predefined collision free robot path; scanning an industrial scene proximate a robot with a scanning sensor; storing scanned images of the industrial scene in a memory; constructing a 3D model of the industrial scene based on the images stored in memory; detecting an object within the industrial scene; determining whether the object is recognized with sufficient precision; determining whether a robot task can be performed on the object; and performing one or more robot tasks on the object after the object is recognized with sufficient certainty.
In refining aspects of the present disclosure, the method further comprises generating a next scanning viewpoint and rescanning the industrial scene if the object is not recognized with sufficient certainty; further comprising generating a next scanning viewpoint and rescanning the industrial scene if the 3D model of the industrial scene is not sufficient for grasping analysis; further comprising generating a next scanning viewpoint and rescanning the industrial scene if the 3D model of the industrial scene is not complete; wherein the scanning of the industrial scene includes panning, tilting, rotating and translating the scanning sensor relative to the robot; further comprising planning the collision free path by a controller having a collision free motion planning algorithm; and wherein the planning occurs in real time without offline computer analysis.
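The decision flow recited above, in which a robot task is performed only once the object is recognized with sufficient certainty and the 3D model supports the grasping analysis, might look like the following sketch; the predicates, planner calls and thresholds are hypothetical placeholders rather than the claimed implementation.

```python
# Sketch (hypothetical interfaces) of the decision flow: perform the robot task
# only when the object is recognized with sufficient certainty and the 3D model
# is sufficient for grasping analysis; otherwise generate a new scanning
# viewpoint and rescan the industrial scene.
def process_object(robot, sensor, scene, recognizer, planner, grasp_analyzer,
                   detection, certainty_threshold=0.9, max_rescans=10):
    for _ in range(max_rescans):
        label, certainty = recognizer.match(scene, detection)
        model_ok = grasp_analyzer.model_is_sufficient(scene, detection)
        if certainty >= certainty_threshold and model_ok:
            grasp = grasp_analyzer.plan_grasp(scene, detection)
            robot.execute(grasp)                # perform the robot task on the object
            return label
        # not recognized with sufficient certainty, or model not sufficient:
        # move to a new collision free viewpoint and rescan
        viewpoint = planner.next_viewpoint(scene, detection)
        for waypoint in planner.collision_free_path(scene, viewpoint):
            robot.move_to(waypoint)
        scene.fuse(sensor.capture())            # update the 3D model with the new scan
    return None                                 # object could not be processed
```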
While the application has been illustrated and described in detail in the drawings and foregoing description, the same is to be considered as illustrative and not restrictive in character, it being understood that only the preferred embodiments have been shown and described and that all changes and modifications that come within the spirit of the application are desired to be protected. It should be understood that while the use of words such as preferable, preferably, preferred or more preferred utilized in the description above indicates that the feature so described may be more desirable, it nonetheless may not be necessary and embodiments lacking the same may be contemplated as within the scope of the application, the scope being defined by the claims that follow. In reading the claims, it is intended that when words such as “a,” “an,” “at least one,” or “at least one portion” are used there is no intention to limit the claim to only one item unless specifically stated to the contrary in the claim. When the language “at least a portion” and/or “a portion” is used the item can include a portion and/or the entire item unless specifically stated to the contrary.
Unless specified or limited otherwise, the terms “mounted,” “connected,” “supported,” and “coupled” and variations thereof are used broadly and encompass both direct and indirect mountings, connections, supports, and couplings. Further, “connected” and “coupled” are not restricted to physical or mechanical connections or couplings.