The present teachings relate to a method for controlling a remote vehicle, and in particular, a remote vehicle control method in which a remote vehicle payload is automatically positioned to approach an object within a field of view of a sensor.
Remote vehicles are useful in a variety of military, law enforcement, and other applications to provide unmanned access to buildings, particularly in situations, such as earthquakes, hurricanes, and fires, in which human access is dangerous and therefore preferably avoided. Because buildings typically consist of a multitude of individual rooms that may be separated by closed doors, it is desirable for remote vehicles to have the ability to manipulate doors to enter rooms. It is also desirable to provide quicker and easier door opening, either through an enhanced device, enhanced software, or a combination of both, to simplify remote operation and allow the remote vehicle to inspect a structure more quickly.
Currently, some remote vehicles are operated by users at remote locations who utilize, for example, cameras or other image sensing devices to determine how to manipulate the remote vehicle and its end effectors to traverse spaces, perform tasks, access doors, approach doorknobs, open doors, grasp objects, etc. Manually controlled performance of some or all of these tasks can be time-consuming and can require precision by the operator to correctly approach, access, and manipulate doorknobs to open the doors on which the doorknobs are disposed. Minimizing the amount of manual control by the operator would significantly reduce the time and effort involved in opening a door.
The present teachings provide a method of controlling a remote vehicle having an end effector and a two-dimensional image sensing device. The method includes obtaining an image of an object with the image sensing device, determining a ray from a focal point/pixel of the image sensed by the image sensing device and passing through the object, positioning the end effector of the remote vehicle at a point along the determined ray, and moving the end effector along the determined ray to approach the object.
The present teachings further provide a method of controlling a remote vehicle having an end effector and a two-dimensional image sensing device. The method includes displaying an image of an object obtained from the image sensing device and an object alignment symbol, the object alignment symbol representing a focal point of the image sensing device, and indicating a starting point of a ray originating from a focal point fixed in space with respect to the image sensing device, wherein the ray extends indefinitely because the depth of the object is not determinable from the two-dimensional image sensing device.
The present teachings also contemplate receiving an input to align the object alignment symbol with the displayed object image, causing a ray from the focal point of the image sensing device through the object to be determined based on the received input, causing the end effector of the remote vehicle to align with the determined ray, and causing the end effector to move along the determined ray to approach the object.
The present teachings additionally provide a method of controlling a remote vehicle having an end effector and an image sensing device. The method includes moving the image sensing device to a targeting pose to view an object within a field of view of the image sensing device, displaying an image of the object obtained from the image sensing device, moving the image sensing device to an object alignment position at which a focal point of the image sensed by the image sensing device is aligned with the displayed object, determining a ray from the focal point and extending through the object based on the displayed image, controlling the end effector of the remote vehicle to be positioned to align with the determined ray, and controlling the end effector to move along the determined ray to approach the object.
Additional objects and advantages of the present teachings will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the present teachings. The objects and advantages of the present teachings will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the present teachings, as claimed.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an exemplary embodiment of the present teachings and together with the description, serve to explain the principles of the present teachings.
Reference will now be made in detail to the exemplary embodiments of the present teachings, examples of which are illustrated in the accompanying drawings.
The present teachings provide a method of controlling a remote vehicle having an end effector and a two-dimensional image sensing device. The method includes displaying an image of an object obtained from the image sensing device and an object alignment symbol. The object alignment symbol can represent a focal point of an image sensed by the image sensing device, and can indicate a starting point of a ray originating from a focal point fixed in space with respect to the image. The ray extends indefinitely because the depth of the object is not determinable from the two-dimensional image sensing device.
The object alignment symbol represents the origin of the predetermined ray that extends from the focal point of the image sensed by the image sensing device to the object. In use, the operator can move the image sensing device (e.g., a camera attached to a manipulator arm of a remote vehicle) until a predetermined pixel on the image plane is aligned with the object. The present teachings also contemplate the remote vehicle system utilizing object identification software to recognize the object (e.g., a door handle or knob) and align the predetermined pixel of the image plane with the object such that the ray is created. Using such an automated identification feature would aid the operator in completing the task of grasping the object. Once the object is identified via software, the ray can be created and the gripper can begin moving toward the object. Alternatively, operator confirmation can be required before creation of the ray and/or before the gripper begins to move toward the object.
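For illustration only, the following Python sketch shows one way the ray could be constructed once a pixel on the image plane has been aligned with the object, assuming a simple pinhole camera model with known intrinsics; the function name and the intrinsic values are assumptions and are not part of the present teachings.

```python
import numpy as np

def ray_from_pixel(u, v, fx, fy, cx, cy):
    """Unit direction, in the camera frame, of the ray that starts at the
    camera focal point and passes through pixel (u, v) on the image plane."""
    direction = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return direction / np.linalg.norm(direction)

# Example: the alignment pixel is the center of a 640x480 image, so the ray
# points straight along the camera's optical axis.
d = ray_from_pixel(320, 240, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(d)  # approximately [0. 0. 1.]
```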
In certain embodiments, the operator can adjust the robot and the image sensing device such that the gripper is positioned along the ray extending from the pixel through the object. Alternatively, the gripper can be autonomously moved to a location along the ray and thereafter move toward the object.
In various embodiments of the present teachings, the alignment symbol can be overlaid by the operator control unit (OCU) on top of the video image sent by the image sensing device. The alignment symbol can be located anywhere on the video image, although it is shown herein at the center of the video image. The illustrated cross hair represents a particular point on the image plane, and marks the origin of the ray along which the gripper can travel to come into contact with the object. The present teachings contemplate that the operator can select a location of the object alignment symbol on the image, and therefore the origin of the ray extending through the object. The location of the object alignment symbol can be selected, for example, simply by clicking at a desired location on the screen.
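As a hedged example of the OCU-side overlay described above, the following Python/OpenCV sketch draws a cross-hair alignment symbol on the incoming video frame and lets the operator re-position it with a mouse click; the window name, video source, and symbol size are placeholders, not the actual OCU implementation.

```python
import cv2

symbol = [320, 240]  # assumed default: center of a 640x480 frame

def on_click(event, x, y, flags, param):
    # Re-anchor the alignment symbol (and thus the ray's pixel) at the click.
    if event == cv2.EVENT_LBUTTONDOWN:
        symbol[0], symbol[1] = x, y

def draw_alignment_symbol(frame):
    # Draw a simple cross hair at the current symbol location.
    u, v = symbol
    cv2.line(frame, (u - 10, v), (u + 10, v), (0, 255, 0), 1)
    cv2.line(frame, (u, v - 10), (u, v + 10), (0, 255, 0), 1)
    return frame

cv2.namedWindow("ocu")
cv2.setMouseCallback("ocu", on_click)
cap = cv2.VideoCapture(0)  # stand-in for the remote vehicle's video feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("ocu", draw_alignment_symbol(frame))
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```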
The disclosure herein describes an exemplary embodiment of an application of the present teachings wherein an operator selects the object to grasp/grip. However, those skilled in the art will understand, as stated above, that selection of the object can be accomplished by an automated feature recognition function of the remote vehicle system. The system and method of the present teachings are described herein as being used for door opening, but can also be used for grasping objects, for example to move the object to another location, to slide a drawer open, or to adjust a position of an object, without the need for 3D image sensing to provide a 3D location of the object that includes depth.
The foregoing general description, the following detailed description, and the accompanying drawings, are exemplary and explanatory only and are not restrictive of the present teachings, as claimed. The following detailed description and accompanying drawings set forth the best mode of the present teachings. For the purpose of teaching inventive principles, some aspects of the best mode may be simplified or omitted where they would be known to those of ordinary skill in the art.
The appended claims specify the scope of the present teachings. Some aspects of the best mode may not fall within the scope of the present teachings as specified by the claims. Thus, those skilled in the art will appreciate variations from the best mode that fall within the scope of the present teachings. Those skilled in the art will also appreciate that the features described below can be combined in various ways to form multiple variations of the present teachings. As a result, the present teachings are not limited to the specific examples described below, but only by the claims and their equivalents.
The multi-unit manipulator arm 14 may include a first arm portion 16a that extends from the chassis 12.
The multi-portion manipulator arm 14 may include a third arm portion 16c connected with the second arm portion 16b via a joint that pivots with at least one degree of freedom. In some embodiments, the third arm portion 16c may extend from the second arm portion 16b at the front of the remote vehicle 10 toward the rear of the remote vehicle 10, as shown in the accompanying drawings.
In an exemplary targeting pose, the arm 14 of the remote vehicle 10, particularly the third arm portion 16c, can be moved to approximately the height of the object, such as doorknob 30, to move the field of view of the image sensing device 20 to, for example, an approximate expected height of an object based on a predetermined and pre-stored approximate expected height. In the alternative, the image sensing device 20 may be moved to a targeting pose in which the object appears within a field of view of the image sensing device 20 without using predetermined or pre-stored data, for example by having the operator move the manipulator arm 14 to the targeting pose using data supplied by the image sensing device 20. In the targeting pose, the third arm portion 16c may be tilted down in order to give the image sensing device 20, which is positioned at the back of the third arm portion 16c, a clear view of the object 30. The image sensing device 20 obtains an image of the object 30 at operation 102.
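A minimal sketch of how such a targeting pose could be commanded from a pre-stored expected object height is shown below, assuming a hypothetical arm interface; the stored height, the tilt angle, and the move_camera_to method are illustrative assumptions only.

```python
EXPECTED_DOORKNOB_HEIGHT_M = 0.95  # assumed pre-stored expected object height
TARGETING_TILT_RAD = -0.2          # assumed downward camera tilt

def move_to_targeting_pose(arm, expected_height_m=EXPECTED_DOORKNOB_HEIGHT_M):
    """Raise the image sensing device to roughly the expected object height
    and tilt it down so the object falls within its field of view."""
    arm.move_camera_to(height_m=expected_height_m, tilt_rad=TARGETING_TILT_RAD)
```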
In certain embodiments of the present teachings, the method can assume that the object is farther away than the starting gripper position. In accordance with the present teachings, once the ray extending through the object from the focal point of the image sensed by the image sensing device is determined, the image sensing device is no longer needed to complete the task (e.g., a door opening behavior). The end effector knows the coordinates of the ray and follows the ray until contacting the object. The coordinates of the ray can be determined because the system of the present teachings knows the position and orientation of the image sensing device with respect to the end effector and the remote vehicle. The position and orientation of the image sensing device and the end effector are determined by forward kinematics. One skilled in the art will appreciate, however, that the method of the present teachings could continue to use the image sensing device by displaying an image of the end effector environment as the end effector moves forward to contact the target.
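By way of example, the following Python sketch expresses the camera-frame ray in the vehicle's base frame using a pose obtained from forward kinematics; the 4x4 transform, frame names, and example numbers are assumptions for illustration.

```python
import numpy as np

def ray_in_base_frame(T_base_camera, direction_camera):
    """Map a ray (origin at the camera focal point, unit direction in the
    camera frame) into the vehicle base frame using the camera pose."""
    origin_base = T_base_camera[:3, 3]                     # focal point position
    direction_base = T_base_camera[:3, :3] @ direction_camera
    return origin_base, direction_base / np.linalg.norm(direction_base)

# Example pose: camera 0.2 m forward and 0.6 m above the base origin, with its
# optical (z) axis pointing along the base frame's forward (x) axis.
T = np.eye(4)
T[:3, :3] = np.array([[0.0, 0.0, 1.0],
                      [-1.0, 0.0, 0.0],
                      [0.0, -1.0, 0.0]])
T[:3, 3] = [0.2, 0.0, 0.6]
origin, direction = ray_in_base_frame(T, np.array([0.0, 0.0, 1.0]))
print(origin, direction)  # [0.2 0.  0.6] [1. 0. 0.]
```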
In certain embodiments of the present teachings, the object alignment symbol 42 represents a focal point of the image sensed by the image sensing device 20, and the starting point for a ray that will extend through the object and guide the end effector to the object.
After the end effector 18 is moved onto the ray 32, at operation 110 the end effector 18 can be moved along the ray 32 to approach and contact the object 30.
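A minimal auto-reach sketch under assumed interfaces is shown below: the end effector is advanced along the determined ray in small increments until a contact condition is detected or a travel limit is reached. The arm object, its step_along and contact_detected methods, and the step sizes are hypothetical.

```python
import numpy as np

def auto_reach(arm, origin, direction, step_m=0.01, max_travel_m=1.0):
    """Advance the end effector along the determined ray in small increments
    until contact is detected or the travel limit is reached."""
    direction = direction / np.linalg.norm(direction)
    traveled = 0.0
    while traveled < max_travel_m:
        if arm.contact_detected():       # e.g., force threshold or limit switch
            return True                  # object reached; hand off to grasping
        arm.step_along(origin, direction, step_m)
        traveled += step_m
    return False                         # no contact within limit; alert operator
```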
Additional sensors may be used to aid in the remote vehicle door opening or in detecting that the end effector 18 has reached the object 30, such as contact-type sensors and non-contact sensors. For example, the detection sensor can include a force sensor, an IR range sensor, a contact sensor, or a mechanical limit switch at the end effector 18. Contact-type sensors generally rely on physical interaction of some component of the sensor body directly with the target object, for example a mechanical limit switch. Non-contact sensors utilize less intrusive methods, which are typically based on radiated and received forms of energy, for example a capacitive proximity sensor. As an example, an infrared-based photo-interrupt or "break-beam" sensor and an infrared proximity sensor may be used. When used together, the "break-beam" sensor and the infrared proximity sensor can provide objective feedback to support the task of approaching and grasping an object. The infrared proximity sensor may be used to determine the relative distance between the sensor and the object, such as a doorknob. The "break-beam" sensor can be used to determine when an object moves between fingers of a gripper-type end effector 18 and is ready to be grasped. Other exemplary detection sensors can include an ultrasonic ranging sensor, which can provide simple data regarding distance to a point, and a laser range finder, which sends a single beam to measure distance to a specific point. Data regarding the distance to the object or the surface on which the object is located, and whether the object is ready to be grasped, can help increase the accuracy and reliability of the end effector approach and positioning and decrease the time needed for the approach and object grasping. A range-type detector can, for example, be mounted to the side of the image sensing device 20.
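To illustrate how the two exemplary sensors could be combined, the following sketch slows the approach as the infrared proximity range shrinks and declares the object ready to be grasped when the break-beam is interrupted at close range; the thresholds and function names are assumptions, not prescribed values.

```python
def approach_step(ir_range_m, slow_zone_m=0.10, fast_step_m=0.05, slow_step_m=0.01):
    """Pick a smaller step size once the IR proximity range enters the slow zone."""
    return slow_step_m if ir_range_m < slow_zone_m else fast_step_m

def ready_to_grasp(break_beam_interrupted, ir_range_m, grasp_range_m=0.03):
    """Object is between the gripper fingers (beam broken) and close enough."""
    return break_beam_interrupted and ir_range_m <= grasp_range_m
```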
Although not required for the method described herein, a LIDAR sensor can be utilized on a remote vehicle in accordance with the present teachings. LIDAR is a laser rangefinder that scans over a relatively large swath and can create a 3D map of the remote vehicle's surroundings, which may be used to more accurately and quickly open a door. An example of a door-opening algorithm using learning-based vision strategies and probabilistic reasoning is a general algorithm to recognize door handles in unstructured environments using vision, for example as discussed in Klingbeil et al., "Learning to Open New Doors," RSS Workshop on Robot Manipulation, 2008, the disclosure of which is incorporated herein by reference in its entirety.
In a drive mode, the operator may drive the remote vehicle to approach an object 30. In a "drive" pose, a multi-unit manipulator arm 14 in accordance with the present teachings may be, for example, in a position similar to a stowed position, but with the arm slightly elevated so that the image sensing device is raised a predetermined amount above the chassis to provide image sensing data while keeping a center of gravity of the remote vehicle low. The operator can also select an "inspect mode" or a "move the gripper mode" allowing the operator to manually move the manipulator arm to position and manipulate the gripper. Advanced options may also be available, and the operator can also "quit" the main menu.
In embodiments utilizing a targeting pose, the manipulator arm 14 can, for example, be raised to a typical doorknob height, the image sensing device 20 can be pointed forward to view the end effector and its environment, and an object alignment symbol representing a focal point of the image sensed by the image sensing device 20 can be displayed on the video image.
At operation 204, the operator can provide input via the GUI, for example through the main menu 60.
After the object has been located at a focal point of the image sensed by the image sensing device at operation 206, the ray extending from the focal point through the object can be determined.
After the auto-reach operation has terminated at operation 214, the end effector 18 or gripper may be adjusted when the operator selects an "adjust gripper position" 76 input selection in the exemplary GUI.
The present teachings contemplate that the manipulator arm 14, including the joints therein and its attachment to the remote vehicle chassis, includes enough degrees of freedom to allow the manipulator arm to follow the ray 32 and to manipulate the object 30.
In certain embodiments of the present teachings, the image sensing device and the end effector may not be fixed in the same plane; for example, they may not be connected via an arm such as the arm 16c described above.
While it has been shown and described that an operator identifies the object and the surface on which the object is located, the object and the surface on which the object is located may alternatively be identified by computer vision techniques using the image received from the image sensing device 20 and one of the processors 80 or 82.
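Purely as an illustration of such a computer vision technique (and not a prescribed part of the present teachings), the sketch below uses OpenCV's Hough circle transform to locate a round, doorknob-like feature in a frame and return the pixel through which the ray could then be constructed; all detector parameters are arbitrary assumptions.

```python
import cv2
import numpy as np

def find_doorknob_pixel(frame_bgr):
    """Return the (u, v) pixel of the most prominent circular feature, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                               param1=100, param2=40, minRadius=10, maxRadius=80)
    if circles is None:
        return None
    u, v, _radius = np.around(circles[0, 0]).astype(int)
    return u, v  # pixel through which the ray could then be constructed
```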
Some or all of the actions performed by the exemplary embodiments described herein can be performed under the control of a computer system executing computer-readable codes either in a computer-readable recording medium or in communication signals transmitted through a transmission medium. The computer-readable recording medium is any data storage device that can store data for a non-fleeting period of time such that the data can thereafter be read by a computer system. Examples of the computer-readable recording medium include, but are not limited to, read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices. The computer-readable recording medium can also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion. The communication signals transmitted through a transmission medium may include, for example, signals which modulate carrier waves transmitted through wired or wireless transmission paths.
The above description and associated figures explain the best mode of the present teachings. Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the teachings disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the present teachings being indicated by the following claims.
This application is a continuation of U.S. patent application Ser. No. 13/448,043 filed on Apr. 16, 2012, which claims priority to U.S. Provisional Patent Application No. 61/475,944, filed Apr. 15, 2011, titled Auto-Reach Method for a Remote Vehicle and to U.S. Provisional Patent Application No. 61/605,552, filed Mar. 1, 2012, titled Mobile Inspection Robot, the disclosures of which are incorporated herein by reference in their entireties.
This invention was made with government support under TACOM Contract No. W56HZV-09-C-0315 awarded by the U.S. Army Contracting Command, AMSCC-TAC-ASGD. The government has certain rights in the invention.
Number | Date | Country
---|---|---
20150273684 A1 | Oct 2015 | US

Number | Date | Country
---|---|---
61475944 | Apr 2011 | US
61605552 | Mar 2012 | US

 | Number | Date | Country
---|---|---|---
Parent | 13448043 | Apr 2012 | US
Child | 14682428 | | US