This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2022-122589, filed on Aug. 1, 2022; the entire contents of which are incorporated herein by reference.
Embodiments described herein relate generally to an object manipulation apparatus, a handling method, and a program product.
A robot system that automates object handling work, such as a picking automation system for handling baggage or the like stacked in a physical distribution warehouse, has been conventionally known.
Such a robot system automatically calculates a grasping position and posture of an object and a boxing position and posture at an input destination on the basis of sensor data, such as image information, and actually executes grasping or boxing with a robot having a manipulation planning mechanism.
In recent years, with the development of machine learning technology, techniques for realizing appropriate actuation of a robot by learning have been used.
An object manipulation apparatus according to an embodiment includes one or more hardware processors coupled to a memory and configured to function as a feature calculation unit, a region calculation unit, a grasp configuration (GC) region calculation unit, and a GC calculation unit. The feature calculation unit serves to calculate a feature map indicating a feature of an image on the basis of a captured image of grasping target objects. The region calculation unit serves to calculate, on the basis of the feature map, a first parameter on a circular anchor in the image that expresses a position and a posture of a handling tool. The GC region calculation unit serves to calculate an approximate region of the GC by converting the first parameter on the circular anchor into a GC region. The GC calculation unit serves to calculate the GC of the handling tool, which is expressed as a second parameter indicating a position and a posture of the handling tool on the image, on the basis of the GC approximate region.
Exemplary embodiments of an object manipulation apparatus, a handling method, and a program product will be explained below in detail with reference to the accompanying drawings.
First, an outline of a system for an object manipulation task, which includes an object manipulation apparatus (picking robot) that is an example of an object manipulation robot and a robot integrated management system, will be described.
The sensor support portion 4 supports sensors (the article container sensor 5, the grasped article measuring sensor 6, the cargo collection container sensor 7, and the temporary storage space sensor 8).
The article container sensor 5 measures an internal state of an article container 101. The article container sensor 5 is, for example, an image sensor installed above the article container drawing portion 9.
The grasped article measuring sensor 6 is installed in the vicinity of the article container sensor 5, and measures an object grasped by the manipulator 1.
The cargo collection container sensor 7 measures an internal state of a cargo collection container. The cargo collection container sensor 7 is, for example, an image sensor installed above the cargo collection container drawing portion 11.
The temporary storage space sensor 8 measures an article put on a temporary storage space 103.
The article container drawing portion 9 draws the article container 101 in which target articles to be handled are stored.
The article container weighing machine 10 measures a weight of the article container 101.
The cargo collection container drawing portion 11 draws a cargo collection container 102 that contains articles taken out by the manipulator 1.
The cargo collection container weighing machine 12 measures a weight of the cargo collection container 102.
The article container sensor 5, the grasped article measuring sensor 6, the cargo collection container sensor 7, and the temporary storage space sensor 8 may be any desired sensors. For example, sensors capable of acquiring image information, three-dimensional information, and the like, such as an RGB image camera, a range image camera, a laser range finder, and a Light Detection and Ranging or Laser Imaging Detection and Ranging (LiDAR) sensor, can be used.
Note that, although not illustrated in the schematic diagram of
The manipulator 1 includes an arm portion and a handling (picking) tool portion 14.
The arm portion is an articulated robot that is driven by a plurality of servo motors. The articulated robot, whose typical example is a vertical articulated robot of six axes (axes 13a to 13f) as illustrated in
The handling tool portion 14 includes a force sensor and a pinching mechanism. The handling tool portion 14 grasps a grasping target object.
A robot integrated management system 15 is a system that manages the system for object manipulation task 100. The handling tool portion 14 can be attached to and detached from the arm portion that grasps the grasping target object, by using a handling tool changer. The handling tool portion 14 can be replaced with an optional handling tool portion 14 in accordance with an instruction from the robot integrated management system 15.
The processing unit 31 performs noise removal processing on image sensor information captured by the camera, background removal processing on information other than an object (for example, the article container and the ground), image resizing for generating an image to be input to the planning unit 32, and normalization processing. For example, the processing unit 31 inputs an RGB-D image to the planning unit 32 as processed image sensor information.
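As one non-limiting illustration of this preprocessing, the following Python sketch (assuming OpenCV and NumPy) applies noise removal, depth-based background removal, resizing, and normalization to produce an RGB-D input; the function name, filter sizes, and threshold are illustrative assumptions, not the actual implementation of the processing unit 31.

```python
import cv2
import numpy as np

def preprocess_rgbd(color, depth, background_depth, target_size=(224, 224)):
    """Illustrative preprocessing of an RGB-D frame before planning.

    color            : HxWx3 uint8 RGB image
    depth            : HxW float32 depth image in meters
    background_depth : HxW float32 depth image of the empty container/ground
    """
    # Noise removal on both the color image and the depth image.
    color = cv2.medianBlur(color, 3)
    depth = cv2.medianBlur(depth, 3)

    # Background removal: mask out pixels whose depth is close to the
    # pre-recorded background (article container walls, ground).
    foreground = np.abs(background_depth - depth) > 0.005  # 5 mm margin (assumed)
    color = color * foreground[..., None].astype(np.uint8)
    depth = depth * foreground

    # Resize to the network input resolution.
    color = cv2.resize(color, target_size, interpolation=cv2.INTER_LINEAR)
    depth = cv2.resize(depth, target_size, interpolation=cv2.INTER_NEAREST)

    # Normalization: color to [0, 1], depth to roughly zero mean / unit variance.
    color = color.astype(np.float32) / 255.0
    d_valid = depth[depth > 0]
    if d_valid.size > 0:
        depth = (depth - d_valid.mean()) / (d_valid.std() + 1e-6)

    # Stack into a 4-channel RGB-D array to be passed to the planning unit.
    return np.dstack([color, depth[..., None]])
```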
The planning unit 32 calculates, by deep learning, a candidate group of grasp configurations (GC) of the handling tool portion 14 in an image coordinate system that have a high possibility of successfully grasping the target object. The planning unit 32 converts each candidate into a 6D grasping posture in the world coordinate system. The 6D grasping posture includes three-dimensional coordinates indicating a position and three-dimensional coordinates indicating an orientation. The planning unit 32 evaluates a score of grasping easiness for each 6D grasping posture candidate and then calculates a candidate group having higher scores of easiness or the optimal candidate. Moreover, the planning unit 32 generates a trajectory from an initial posture of the manipulator 1 to the grasp postures of the candidates in the candidate group with higher scores or to the grasp posture of the optimal candidate, and transmits the trajectory to the control unit 33.
The control unit 33 generates a time series of a position, a velocity, and an acceleration of each joint of the manipulator 1 on the basis of the trajectory received from the planning unit 32, and controls a behavior of causing the manipulator 1 to grasp the grasping target object. In addition, the control unit 33 makes the controller 3 repeatedly function until the grasping operation succeeds or an upper limit of the number of times of operation execution is reached.
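The following sketch is a hypothetical outer loop illustrating how the processing unit 31, the planning unit 32, and the control unit 33 could interact, including the repetition until the grasping operation succeeds or the upper limit of the number of attempts is reached; every object and method name here is a placeholder introduced for illustration only.

```python
MAX_ATTEMPTS = 5  # upper limit of the number of times of operation execution (assumed)

def pick_one_object(camera, processing_unit, planning_unit, control_unit):
    """Hypothetical pick cycle; all attribute names are placeholders."""
    for attempt in range(MAX_ATTEMPTS):
        rgbd = processing_unit.preprocess(camera.capture())

        # Planning unit: GC candidates -> 6D postures -> easiness scores -> trajectory.
        candidates = planning_unit.calculate_gc_candidates(rgbd)
        postures = [planning_unit.to_6d_posture(c) for c in candidates]
        ranked = sorted(postures, key=planning_unit.easiness_score, reverse=True)
        trajectory = planning_unit.generate_trajectory(ranked[0])

        # Control unit: time series of joint position/velocity/acceleration, then execution.
        joint_profile = control_unit.interpolate(trajectory)
        if control_unit.execute(joint_profile):
            return True  # grasping operation succeeded
    return False         # upper limit of attempts reached
```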
The GC candidate calculation unit 321 calculates a GC candidate group by deep learning.
Returning to
The GC and the 6D posture can be mutually converted by Equations (1) to (4) above. D_insertion is an insertion amount used when grasping is performed by the handling tool portion 14, and is determined by a fixed value or by the shape and size of the grasping target object 104.
Moreover, in addition to the posture, the opening width W of the handling tool portion 14 in the world coordinate system can be easily obtained by converting the end points of the line segment of the projection w of the opening width W on the image into world coordinates according to Equation (3) and calculating the distance between the end points.
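Because Equations (1) to (4) are not reproduced here, the following sketch assumes a standard pinhole camera model with known intrinsics K and a known camera-to-world transform; it illustrates how an image point with depth can be back-projected into world coordinates and how the opening width W can then be obtained from the two end points of the projection w.

```python
import numpy as np

def pixel_to_world(u, v, depth, K, T_world_cam):
    """Back-project an image point (u, v) with known depth into world coordinates.

    K           : 3x3 camera intrinsic matrix (assumed known)
    T_world_cam : 4x4 homogeneous camera-to-world transform (assumed known)
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    p_cam = np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth, 1.0])
    return (T_world_cam @ p_cam)[:3]

def opening_width_world(p1_px, p2_px, depth_img, K, T_world_cam):
    """Opening width W from the two end points of the projection w on the image."""
    p1 = pixel_to_world(*p1_px, depth_img[p1_px[1], p1_px[0]], K, T_world_cam)
    p2 = pixel_to_world(*p2_px, depth_img[p2_px[1], p2_px[0]], K, T_world_cam)
    return np.linalg.norm(p1 - p2)
```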
The evaluation unit 323 evaluates a score of easiness when grasping the grasping target object 104 in the posture of the handling tool portion 14. The score of grasping easiness is calculated by, for example, a heuristic evaluation formula in which the possibility of success, stability, and safety of grasping are considered in combination. In addition, the score of the grasping easiness is also obtained by directly using deep learning (see, for example, JP 7021160 B2). The evaluation unit 323 sorts the scores of easiness in descending order, and calculates a candidate group having higher scores or a candidate having the highest score.
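As a minimal sketch of such a heuristic evaluation formula, the weighted combination below assumes that the possibility of success, the stability, and the safety have each already been normalized to [0, 1]; the weights are illustrative and not those of the embodiment.

```python
def grasping_easiness(success_prob, stability, safety, weights=(0.5, 0.3, 0.2)):
    """Illustrative heuristic score combining success possibility, stability, and safety."""
    w_p, w_st, w_sa = weights
    return w_p * success_prob + w_st * stability + w_sa * safety

# Sort candidates in descending order of easiness and keep the best ones.
# ranked = sorted(candidates, key=lambda c: grasping_easiness(*c.factors), reverse=True)
```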
The generation unit 324 generates the trajectory from the initial posture of the manipulator 1 to the postures of the candidate group having higher scores or to the optimum posture of the handling tool portion 14 described above by using a route planning planner, for example, MoveIt (Online, Searched on Jun. 29, 2022, Internet "URL:https://moveit.ros.org/"), and transmits the trajectory and the score of easiness to the control unit 33.
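A minimal sketch of trajectory generation with MoveIt is shown below; it assumes the ROS 1 moveit_commander Python interface, and the node name and the planning group name "manipulator" are assumptions, not part of the embodiment.

```python
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("grasp_trajectory_generation", anonymous=True)  # node name is an assumption
group = moveit_commander.MoveGroupCommander("manipulator")      # group name is an assumption

def plan_to_grasp_posture(position, orientation):
    """Plan a trajectory from the current (initial) posture to one grasp posture.

    position    : (x, y, z) in the planning frame
    orientation : (qx, qy, qz, qw) quaternion
    """
    target = Pose()
    target.position.x, target.position.y, target.position.z = position
    (target.orientation.x, target.orientation.y,
     target.orientation.z, target.orientation.w) = orientation
    group.set_pose_target(target)
    return group.plan()  # planned trajectory; return format depends on the MoveIt version
```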
Upon receiving input of the RGB-D image from the processing unit 31, the feature calculation unit 3211 calculates a feature map of the image of the grasping target objects. Specifically, the feature calculation unit 3211 enhances the accuracy of feature learning by using a neural network that fuses not only the last feature but also intermediate features. Note that, in the technology according to the related art, the feature is calculated by directly fusing the feature maps of the last output layer (for example, the last feature maps of a plurality of pieces of sensor information) calculated by the neural network, so that the contribution of the intermediate features to the accuracy of the learning result has not been considered.
The feature calculation unit 3211 calculates the feature map by receiving input of a plurality of pieces of image sensor information, integrating a plurality of intermediate features extracted by a plurality of feature extractors from the pieces of image sensor information, and fusing features of the pieces of image sensor information including the intermediate features by convolution calculation. The pieces of image sensor information include, for example, a color image indicating a color of the image and a depth image indicating a distance from the camera to the object included in the image. The feature extractors are implemented by a neural network having an encoder-decoder model structure.
According to the embodiment, the intermediate features (X^RGB_{i,j} and X^D_{i,j}, where (i, j) ∈ {(0,0), (0,1), (0,2), (1,0), (1,1), (2,0), (2,1), (3,0), (4,0)}) obtained by the encoder are fused by the following Equation (5), and the feature map is calculated by the convolution calculation (Conv).
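A minimal PyTorch sketch of the idea behind Equation (5), fusing one pair of corresponding intermediate features by channel concatenation followed by convolution, is given below; the channel counts, kernel size, and use of batch normalization are assumptions and do not represent the exact network of the embodiment.

```python
import torch
import torch.nn as nn

class IntermediateFusion(nn.Module):
    """Fuse one pair of intermediate features X^RGB_{i,j} and X^D_{i,j}
    by channel concatenation followed by convolution (cf. Equation (5))."""

    def __init__(self, rgb_channels, depth_channels, out_channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(rgb_channels + depth_channels, out_channels,
                      kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x_rgb, x_depth):
        # Concatenate along the channel dimension, then fuse by convolution.
        return self.conv(torch.cat([x_rgb, x_depth], dim=1))

# Example: fuse a pair of 64-channel intermediate features at one scale.
fuse = IntermediateFusion(64, 64, 64)
fused_map = fuse(torch.randn(1, 64, 56, 56), torch.randn(1, 64, 56, 56))
```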
Returning to
Considering the above, in the GC candidate calculation unit 321 of the present embodiment, the position heatmap calculation unit 3212 calculates a position heatmap indicating, for each position, the possibility that the handling tool can successfully grasp the target object 104; based on this heatmap, box-shaped anchors need to be generated only in areas of the image with a high possibility of success instead of over the entire image. The region calculation unit 3213 according to the embodiment is implemented by a neural network that detects the circular anchor on the feature map on the basis of the position heatmap and calculates the first parameter on the circular anchor. The region calculation unit 3213 generates the circular anchor in a region having a higher score of the position heatmap (a region larger than a threshold), whereby the region where the circular anchor is generated is narrowed down, and the calculation amount can be reduced. In addition, as illustrated in
The position heatmap calculation unit 3212 calculates the position heatmap by a neural network (for example, a fully connected neural network (FCN) and a U-Net) using an image as an input. Ground truth of the position heatmap is obtained from x and y of the GC. For example, a value of each point in the position heatmap is generated by calculating Gaussian distances from the position of each point to x and y of GC in the image.
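A minimal sketch of generating such a ground-truth position heatmap is shown below; the Gaussian width sigma and the use of a per-pixel maximum over multiple annotated GCs are illustrative assumptions.

```python
import numpy as np

def position_heatmap_ground_truth(gc_centers, height, width, sigma=8.0):
    """Ground-truth position heatmap: each pixel takes the maximum Gaussian
    response over the (x, y) centers of the annotated GCs in the image."""
    ys, xs = np.mgrid[0:height, 0:width]
    heatmap = np.zeros((height, width), dtype=np.float32)
    for (x, y) in gc_centers:
        g = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
        heatmap = np.maximum(heatmap, g)
    return heatmap
```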
In the present embodiment, the anchor having a circular shape is used. In a case of the circular shape, unlike a case of a box shape, it is not necessary to consider the angle, and it is sufficient if only circles of a plurality of sizes are generated, so that the number of parameters can be reduced. As a result, learning efficiency can be enhanced.
On the other hand, in the present embodiment, unlike previous studies, the angle θ of the GC is not directly regressed, because the loss function can take inaccurate values due to the discontinuity at the angle boundary, which may make learning of the angle difficult. Considering this, the region calculation unit 3213 enhances learning performance by learning, instead of the angle θ, the center (Cx, Cy) and radius (R) of a circumscribed circle of the GC and the coordinates (dRx, dRy) of the midpoint of a short side of the GC (for example, the center of "h") with respect to the center of the circle.
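Because Equation (6) is not reproduced here, the following sketch derives the conversion between the GC parameters {x, y, w, h, θ} and the circle parameters (Cx, Cy, R, dRx, dRy) purely from the geometric description above (circumscribed circle plus the midpoint of the short side); it is an assumed formulation, not the embodiment's exact equations. The reverse conversion corresponds to the decoding performed when the second parameter is recovered from the circle parameters.

```python
import numpy as np

def gc_to_circle_params(x, y, w, h, theta):
    """Encode a GC {x, y, w, h, theta} into circle parameters (Cx, Cy, R, dRx, dRy).

    The midpoint of the short side (length h) lies at distance w/2 from the
    center along the grasp axis, so no explicit angle has to be regressed.
    """
    cx, cy = x, y
    r = 0.5 * np.hypot(w, h)  # radius of the circumscribed circle
    drx, dry = 0.5 * w * np.cos(theta), 0.5 * w * np.sin(theta)
    return cx, cy, r, drx, dry

def circle_params_to_gc(cx, cy, r, drx, dry):
    """Decode circle parameters back into the second parameter {x, y, w, h, theta}."""
    w = 2.0 * np.hypot(drx, dry)
    h = 2.0 * np.sqrt(max(r ** 2 - (0.5 * w) ** 2, 0.0))
    theta = np.arctan2(dry, drx)
    return cx, cy, w, h, theta
```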
Returning to
Next, the region calculation unit 3213 calculates, on the basis of the feature map calculated in step S1, the position and posture of the handling tool portion 14 capable of grasping the grasping target object 104, expressed by the first parameter on the circular anchor in the image (step S2). In the example in
Next, the GC region calculation unit 3214 calculates the approximate region of the GC by converting the first parameter on the circular anchor into a GC region.
Next, the GC calculation unit 3215 calculates the GC of the handling tool portion 14, which is expressed as the second parameter indicating a position and a posture of the handling tool on the image, on the basis of the GC approximate region. Specifically, the GC calculation unit 3215 calculates the second parameter ({x, y, w, h, θ} in Equation (6) above) from the parameter (dRx and dRy in the example of
As described above, with the object manipulation apparatus (the manipulator 1, the housing 2, and the controller 3) according to the embodiment, it is possible to more effectively utilize the intermediate features (for example, see
In the technology according to the related art, it is necessary to learn the rotation angle of the box, which is an expression of the posture of the handling tool on the image. In order to learn the rotation angle, it is necessary to generate a large number of rotated candidate boxes or to classify the rotation angle (convert the angle into a high-dimensional one-hot vector), and thus the calculation amount is enormous. In addition, learning the rotation angle has been difficult because two rotation angles (for example, the expression of a box having a rotation angle of 0 degrees on the image is the same as that of a box having a rotation angle of 180 degrees) correspond to the same rotated box due to the symmetry of the box.
Finally, an example of a hardware configuration of the controller 3 according to the embodiment will be described.
Note that the display device 304, the input device 305, and the communication device 306 do not have to be included. For example, in a case where the controller 3 is connected to another device, a display function, an input function, and a communication function of the other device may be used.
The control device 301 executes a computer program read from the auxiliary storage device 303 to the main storage device 302. The control device 301 is, for example, one or more processors such as a central processing unit (CPU). The main storage device 302 is a memory such as a read only memory (ROM) and a random access memory (RAM). The auxiliary storage device 303 is a memory card, a hard disk drive (HDD), or the like.
The display device 304 displays information. The display device 304 is, for example, a liquid crystal display. The input device 305 receives input of the information. The input device 305 is, for example, a hardware key or the like. Note that the display device 304 and the input device 305 may be a liquid crystal touch panel or the like having both of a display function and an input function. The communication device 306 communicates with another device.
The computer program executed by the controller 3 is a file having an installable or executable format. The computer program is stored, as a computer program product, in a non-transitory computer-readable recording medium such as a compact disc read only memory (CD-ROM), a memory card, a compact disc recordable (CD-R), and a digital versatile disc (DVD) and is provided.
The computer program executed by the controller 3 may be configured to be stored on a computer connected to a network such as the Internet and be provided by being downloaded via the network. Alternatively, the computer program executed by the controller 3 may be configured to be provided via a network such as the Internet without being downloaded.
In addition, the computer program executed by the controller 3 may be configured to be provided in a state of being incorporated in advance in a ROM or the like.
The computer program executed by the controller 3 has a module configuration including a function that can be implemented by the computer program among functions of the controller 3.
Functions implemented by the computer program are loaded into the main storage device 302 by reading and executing the computer program from a storage medium such as the auxiliary storage device 303 by the control device 301. In other words, the functions implemented by the computer program are generated on the main storage device 302.
Note that some of the functions of the controller 3 may be implemented by hardware such as an integrated circuit (IC). The IC is, for example, a processor executing dedicated processing.
Moreover, in a case of implementing the respective functions using a plurality of processors, each processor may implement one of the functions, or may implement two or more of the functions.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; moreover, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.