Depth sensing robotic hand-eye camera using structured light

Information

  • Patent Grant
  • Patent Number
    11,040,452
  • Date Filed
    Tuesday, May 29, 2018
  • Date Issued
    Tuesday, June 22, 2021
Abstract
The disclosed system includes a robot configured to perform a task on a workpiece. A camera having a field of view is operably connected to the robot. A light system is configured to project structured light onto a region of interest having a smaller area within the field of view. A control system, operably coupled to the robot and the camera, is configured to determine a depth of the workpiece relative to a position of the robot using the structured light projected onto the workpiece within the region of interest.
Description
TECHNICAL FIELD

The present application generally relates to a robotic hand-eye camera having a field of vision, a control system operable for determining a region of interest within the field of vision, and a light system for projecting structured light onto an object located within the region of interest.


BACKGROUND

Robots can be used with a camera system to determine a location of a work object relative to the robot. Typically, an entire field of view or “scene” is illuminated with one or more light sources to aid depth sensing of the camera. Some existing systems have various shortcomings relative to certain applications. Accordingly, there remains a need for further contributions in this area of technology.


SUMMARY

One embodiment of the present application is a unique system for sensing a location of an object in a robot work area or industrial scene. Other embodiments include apparatuses, systems, devices, hardware, methods, and combinations for sensing a location of an object relative to the robot using a camera system with structured light projected only on a portion of the field of view. Further embodiments, forms, features, aspects, benefits, and advantages of the present application shall become apparent from the description and figures provided herewith.





BRIEF DESCRIPTION OF THE FIGURES


FIG. 1 is a schematic illustration of a robot system according to one exemplary embodiment of the present disclosure;



FIG. 2 is a prior art schematic illustration of structured light being projected onto an entire work area or field of vision of a camera;



FIG. 3 is a schematic illustration of a region of interest located in a portion of the field of vision of the camera as determined by a control system;



FIG. 4 is a schematic illustration of structured light being projected onto the region of interest for facilitating robot interaction with an object in the region of interest; and



FIG. 5 is a flow chart illustrating a method of operation.





DETAILED DESCRIPTION OF THE ILLUSTRATIVE EMBODIMENTS

For the purposes of promoting an understanding of the principles of the application, reference will now be made to the embodiments illustrated in the drawings and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the application is thereby intended. Any alterations and further modifications in the described embodiments, and any further applications of the principles of the application as described herein are contemplated as would normally occur to one skilled in the art to which the application relates.


Structured light systems can be used to enable a computerized control system to measure the shape and position of three-dimensional (3D) objects. A structured light system includes a light source and a pattern generator. A camera can measure the appearance of, and variation in, the light patterns projected onto the objects. The observed phase of a periodic light pattern is related to the topography, or depth, of the object being illuminated. The variation in light patterns can include variation in the shapes, shades, intensities, colors, wavelengths and/or frequencies of the projected light.
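To ground the phase-to-depth relationship just described, the following minimal Python sketch illustrates standard N-step phase shifting; the flat-plane reference phase and the calibration constant are assumptions for illustration, not details taken from this disclosure:

```python
import numpy as np

def wrapped_phase(frames):
    """Standard N-step phase shifting: recover the wrapped phase of a
    sinusoidal fringe pattern from N captures, each shifted by 2*pi/N.
    frames: list of N grayscale images as 2D float arrays."""
    n = len(frames)
    shifts = [2 * np.pi * k / n for k in range(n)]
    num = sum(f * np.sin(s) for f, s in zip(frames, shifts))
    den = sum(f * np.cos(s) for f, s in zip(frames, shifts))
    return np.arctan2(num, den)  # per-pixel phase in (-pi, pi]

def height_from_phase(phase, reference_phase, k_calib):
    """Deviation from a flat-plane reference phase map is proportional,
    after calibration, to surface height. k_calib is an assumed
    phase-to-height calibration constant."""
    return k_calib * (phase - reference_phase)
```

In this scheme, a larger number of shifted patterns trades acquisition time for robustness to image noise, which is consistent with the time-multiplexed pattern sequences discussed later in this disclosure.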


As the field of robotics continues to advance, an increasing amount of focus is placed on the development of technologies that permit the robot to perform tasks more quickly and with lower computational requirements. Typically, structured light is projected onto or across the entire field of view of a vision system to aid a robot system in determining the position and depth of one or more objects within the field of view. Structured light interference can be problematic if the fields of view of multiple stereo cameras happen to overlap. Moreover, calculating depth based on image analysis of the entire field of view is computationally intensive. For these reasons, real-time 3D camera applications typically rely on fast, less accurate algorithms that require higher-power and more expensive computer systems. The present disclosure provides a method to reduce computation time, reduce the chance of light reflection interference within the vision system, and reduce the potential for eye injuries due to a wide-reaching array of projected light.


Referring now to FIG. 1, an illustrative robot system 10 is shown in an exemplary working environment or industrial scene. It should be understood that the robot system shown herein is exemplary in nature and that variations in the robot and/or the industrial scene are contemplated herein. The robot system 10 can include a robot 12 with a vision system 36 having one or more cameras 38. In one form, one or more of the cameras 38 can be mounted on one of the moveable arms 16a, 16b of the robot 12. In other forms, one or more cameras 38 may be positioned apart from the robot 12. A control system 14, including an electronic controller with a CPU, a memory, and input/output systems, is operably coupled to the robot 12 and to the vision system 36. The control system 14 is operable for receiving and analyzing images captured by the vision system 36 and other sensor data used for operation of the robot 12. In some forms, the control system 14 is defined within a portion of the robot 12.


The robot 12 may include a movable base 20 and a plurality of movable portions connected thereto. The movable portions may translate or rotate in any desired direction. By way of example and not limitation, movable portions illustrated by arrows 18, 26, 28, 30, 32 and 34 may be employed by the exemplary robot 12. A bin 40 for holding workpieces or other objects 42 to be retrieved and/or worked on by the robot 12 may constitute at least a part of the exemplary industrial scene. An end effector 24 such as a gripping or grasping mechanism can be attached to the moveable arm 16a and used to grasp an object 42 and/or perform other work tasks on the object 42 as desired. It should be understood that the term “bin” is exemplary in nature and as used herein means, without limitation, any container, carton, box, tray or other structure that can receive and/or hold workpieces, parts or other objects. Additional components 44 can be associated with the vision system 36. These components 44 can include lighting systems, reflector(s), refractor(s), diffractive element(s) and beam expander(s) or the like.


Referring now to FIG. 2, a robot scene 48 according to a prior art embodiment is illustrated, wherein the work bin 40 can be a portion of the industrial robot scene 48 or the entirety of the industrial robot scene 48. A light source 50, such as a laser or other known lighting source, can project structured light into the industrial robot scene 48 so that an entire or complete field of view 54 of a camera 38 is filled with structured light 52, illustrated in the exemplary embodiment as dashed parallel lines. The field of view 54 of the camera 38 can include a portion of the industrial robot scene 48 or the entire industrial robot scene 48; however, analyzing all of the objects 42 within the field of view 54 is computationally time consuming and can be challenging in a real-time robot work process.


Referring now to FIG. 3, a system for reducing the computational time required by the control system 14 to analyze the objects 42 within the field of view 54 is illustrated. The control system 14 is operable to determine a region of interest 56, illustrated by a box-shaped pattern covering only a portion of the objects 42 positioned within the entire field of view 54. Once the region of interest 56 is determined by the control system 14, structured light 58 can be projected from a light source 50 into the region of interest 56, as shown in FIG. 4. The control system 14 need only analyze one or more objects 42 in the region of interest 56 defined by a portion of the field of view 54. The region of interest 56, illuminated by structured light 58 illustrated by linear line segments, can be captured by a camera 38 of the vision system 36. The control system 14 then analyzes the captured image(s) and determines the location and depth of one or more objects 42 within the region of interest 56. In this manner, the computational analysis requirements for the control system 14 are reduced, and therefore the speed at which the robot system 10 can perform a work task on an object 42 within the region of interest 56 will increase.
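To make the computational saving concrete, here is a minimal sketch of restricting the depth computation to the region of interest; the (x, y, width, height) bounding-box format and the callback are assumptions for illustration:

```python
def depth_in_roi(image, roi, depth_fn):
    """Apply a per-pixel depth routine only to pixels inside the region
    of interest, ignoring the rest of the field of view.

    image:    2D numpy array (grayscale frame)
    roi:      (x, y, width, height) bounding box; format is an assumption
    depth_fn: the comparatively expensive depth-reconstruction step
    """
    x, y, w, h = roi
    return depth_fn(image[y:y + h, x:x + w])

# A 256x256 ROI inside a 2048x2048 frame hands the depth step only
# 1/64 (about 1.6%) of the pixels a whole-frame analysis would process.
```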


In one form, the structured light system can project a pattern of light onto the region of interest 56, and the control system 14 can compare certain features of the pattern in a captured image with the locations of those features in a reference image to determine disparities that can be used to compute depth at each location. The region of interest 56 can be illuminated by a time-multiplexed sequence of patterns. Typically, two or more patterns are used to reconstruct a 3D image with sufficiently high resolution. For example, in order to acquire 20 depth frames per second (fps), a light projector 50 must be able to project patterns at a sufficiently rapid rate, typically greater than sixty (60) fps. Various light projectors may be used, such as, for example, laser generators and computer-controlled projectors based on LCD (liquid crystal display), DMD (digital micromirror device) or LED (light emitting diode) technology and the like. In one form, the structured light 58 may be a light pattern of parallel bars having various codes, and the image may comprise a plurality of pixels that correspond to the plurality of parallel bars of light. Other forms are contemplated herein.
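As an illustration of the disparity-to-depth step described above, the following minimal Python sketch applies the standard triangulation relation; the parameter names and example numbers are assumptions for illustration, not values from this disclosure:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth from the shift (disparity) of a pattern feature
    between the captured image and the reference image: Z = f * b / d."""
    if disparity_px == 0:
        raise ValueError("zero disparity corresponds to a feature at infinity")
    return focal_px * baseline_m / disparity_px

# Example (hypothetical numbers): a 120-pixel feature shift with a focal
# length of 1400 px and an 8 cm projector-to-camera baseline gives
# depth_from_disparity(120, 1400, 0.08) ~= 0.93 m.
```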


Referring now to FIG. 5, a method 100 is disclosed for performing a work task on an object 42 using structured light 58 to aid in determining the position and depth of the object 42. At step 102, the structured light 58 is turned off. At step 104, the control system 14, including the vision system 36, identifies an object 42 and calculates a region of interest 56 within a field of view 54. The light source 50 then projects structured light 58 onto the region of interest 56 at step 106. At step 108, the control system 14 calculates a location and depth of the object 42 within the region of interest 56 using only the pixels circumscribed by the bounding box defining the region of interest 56. At step 110, the robot 12 performs a robot task on the object 42 within the region of interest 56. Robot tasks can include, but are not limited to, grasping, moving, assembling or performing other work operations on the object 42. It should be understood that when the term robot, robot task, robot system or the like is used herein, the system is not limited to a single robot, but on the contrary may include multiple robots operating in the industrial scene.
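The sequence of FIG. 5 maps naturally onto a simple control loop. The sketch below is illustrative only; the helper objects and method names (vision, light, controller, robot and their calls) are assumptions, not an API disclosed in this application:

```python
def run_cycle(vision, light, controller, robot):
    """One pass through method 100 of FIG. 5 (steps 102-110)."""
    light.off()                                   # step 102: structured light off
    frame = vision.capture()
    obj, roi = controller.find_object(frame)      # step 104: identify object, compute ROI
    light.project_structured(roi)                 # step 106: illuminate only the ROI
    coded = vision.capture()
    location, depth = controller.locate_in_roi(coded, roi)  # step 108: ROI pixels only
    robot.perform_task(obj, location, depth)      # step 110: grasp/move/assemble
```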


In one aspect, the present disclosure includes a system comprising a robot configured to perform a robot task; a vision system including a camera operably connected to the robot, the camera operable to capture an image within a field of view; a controller operable for analyzing the image and determining a region of interest within the field of view; a light system configured to project structured light onto the region of interest; and wherein the controller is configured to determine a depth of a workpiece within the region of interest.


In refining aspects, wherein the region of interest has a smaller area than the field of view of the camera, wherein the control system determines a depth of a plurality of workpieces within the region of interest, wherein the structured light is defined by at least one of a plurality of patterns, shapes, shades, intensities, colors, wavelengths and/or frequencies, wherein the vision system includes one or more 3D cameras, wherein the light system includes one or more laser beams projected onto the region of interest, further comprising a reflector positioned in a path of at least one of the laser beams, further comprising a refractor positioned in a path of at least one of the laser beams, further comprising a diffractive element positioned in a path of at least one of the laser beams, wherein the control system guides movement of the robot based on scanned images of workpieces within the region of interest and wherein at least a portion of the structured light projects from the robot.


Another aspect of the present disclosure includes a method comprising: scanning an industrial robot scene with at least one image sensor having a field of view; storing image data from the image sensor in a memory; analyzing the image data; determining a region of interest within the image data, wherein the region of interest has a smaller area than an area of the field of view; projecting structured light onto the region of interest; determining a depth of an object located within the region of interest based on analysis of the object illuminated by the structured light; transmitting the depth information to a controller operably coupled to a robot; and performing a task on the object with the robot.


In refining aspects, the method includes wherein the at least one image sensor is a camera, wherein the camera is a 3D video camera, wherein the projecting of structured light includes a laser beam projection, wherein the structured light is projected in different patterns, shapes, shades, intensities, colors, wavelengths and/or frequencies onto the region of interest and wherein the task includes gripping the object with a robot gripper.


Another aspect of the present disclosure includes a system comprising: an industrial scene that defines a work area for a robot; a vision system having a field of view in the industrial scene; a control system operably coupled to the robot, the control system configured to receive and analyze data transmitted from the vision system; means, with the control system, for determining a region of interest within a portion of the field of view; a light system configured to direct a structured light onto the region of interest; and means, with the control system, for determining a position and depth of an object within the region of interest relative to the robot.


In refining aspects, wherein the control system provides operational commands to the robot, wherein the light system includes a laser, wherein the structured light includes a variable output including one of a light pattern variation, light shape variation, light shade variation, light intensity variation, light color variation, light wavelength variation and/or light frequency variation, wherein the vision system includes a 3D camera, wherein portions of the vision system and the light system are mounted on the robot and wherein the robot performs a work task on the object based on controller analysis of the object having structured light projected thereon.


While the application has been illustrated and described in detail in the drawings and foregoing description, the same is to be considered as illustrative and not restrictive in character, it being understood that only the preferred embodiments have been shown and described and that all changes and modifications that come within the spirit of the application are desired to be protected. It should be understood that while the use of words such as preferable, preferably, preferred or more preferred utilized in the description above indicate that the feature so described may be more desirable, it nonetheless may not be necessary and embodiments lacking the same may be contemplated as within the scope of the application, the scope being defined by the claims that follow. In reading the claims, it is intended that when words such as “a,” “an,” “at least one,” or “at least one portion” are used there is no intention to limit the claim to only one item unless specifically stated to the contrary in the claim. When the language “at least a portion” and/or “a portion” is used the item can include a portion and/or the entire item unless specifically stated to the contrary.


Unless specified or limited otherwise, the terms “mounted,” “connected,” “supported,” and “coupled” and variations thereof are used broadly and encompass both direct and indirect mountings, connections, supports, and couplings. Further, “connected” and “coupled” are not restricted to physical or mechanical connections or couplings.

Claims
  • 1. A system comprising: a robot configured to perform a robot task; a vision system including a camera operably connected to the robot, the camera operable to capture an image within a field of view; a controller operable for analyzing the image and determining a region of interest within the field of view, the controller also operable for analyzing the image to identify an object within the region of interest; a light system configured to selectively project structured light onto the region of interest; and wherein the controller is configured to determine a depth of a workpiece within the region of interest; wherein the controller is structured to: configure the light system in an OFF condition such that structured light is not projected onto the region of interest; identify the object within the region of interest from an image captured while the light system is in the OFF condition; configure the light system in an ON condition such that structured light is projected onto the region of interest; calculate the depth of the workpiece within the region of interest from an image captured while the light system is in the ON condition.
  • 2. The system of claim 1, wherein the region of interest has a smaller area than the field of view of the camera.
  • 3. The system of claim 1, wherein the controller determines a depth of a plurality of workpieces within the region of interest.
  • 4. The system of claim 1, wherein the structured light is defined by at least one of a plurality of patterns, shapes, shades, intensities, colors, wavelengths and/or frequencies.
  • 5. The system of claim 1, wherein the vision system includes one or more 3D cameras.
  • 6. The system of claim 1, wherein the light system includes one or more laser beams or coded light projected onto the region of interest.
  • 7. The system of claim 6 further comprising a reflector positioned in a path of at least one of the laser beams.
  • 8. The system of claim 6 further comprising a refractor positioned in a path of at least one of the laser beams.
  • 9. The system of claim 6 further comprising a diffractive element positioned in a path of at least one of the laser beams.
  • 10. The system of claim 1, wherein the controller guides movement of the robot based on scanned images of workpieces within the region of interest.
  • 11. The system of claim 1, wherein at least a portion of the structured light projects from the robot.
  • 12. A method comprising: scanning an industrial robot scene with at least one image sensor having a field of view; storing image data from the image sensor in a memory; analyzing the image data; determining a region of interest within the image data, wherein the region of interest has a smaller area than an area of the field of view; identifying an object within the region of interest when structured light is not projected onto the region of interest; selectively projecting structured light onto the region of interest; determining a depth of the object located within the region of interest based on analysis of the object illuminated by the structured light; transmitting depth information to a controller operably coupled to a robot; and performing a task on the object with the robot.
  • 13. The method of claim 12, wherein the at least one image sensor is a camera.
  • 14. The method of claim 13, wherein the camera is a 3D video camera.
  • 15. The method of claim 12, wherein the projecting of structured light includes a laser beam projection.
  • 16. The method of claim 12, wherein the structured light is projected in different patterns, shapes, shades, intensities, colors, wavelengths and/or frequencies onto the region of interest.
  • 17. The method of claim 12, wherein the task includes gripping the object with a robot gripper, and wherein the identifying occurs before the projecting.
  • 18. A system comprising: an industrial scene that defines a work area for a robot; a vision system having a field of view in the industrial scene; a control system operably coupled to the robot, the control system configured to receive and analyze data transmitted from the vision system; means, with the control system, for determining a region of interest within a portion of the field of view; a light system configured to selectively direct a structured light onto the region of interest; means, with the control system, for identifying an object within the region of interest when structured light is not directed onto the region of interest; and means, with the control system, for determining a position and depth of the object within the region of interest relative to the robot when structured light is directed onto the region of interest.
  • 19. The robot of claim 18, wherein the control system provides operational commands to the robot.
  • 20. The robot of claim 18, wherein the light system includes a laser.
  • 21. The robot of claim 18, wherein the structured light includes a variable output including one of a light pattern variation, light shape variation, light shade variation, light intensity variation, light color variation, light wavelength variation and/or light frequency variation.
  • 22. The robot of claim 18, wherein the vision system includes a 3D camera.
  • 23. The robot of claim 18, wherein portions of the vision system and the light system are mounted on the robot.
  • 24. The robot of claim 18, wherein the robot performs a work task on the object based on controller analysis of the object having structured light projected thereon.
Related Publications (1)
Number Date Country
20190366552 A1 Dec 2019 US