The present invention relates to an information processor and an information processing method.
For inspection and picking in factory automation (FA), a known method for recognizing (detecting) an object in an image uses a model for the object based on three-dimensional (3D) computer-aided design (CAD) data to recognize the pose of the object and then estimates a gripping position for a robot hand. For a rigid object with a definite shape such as a bolt or a nut, a model for the object based on 3D CAD data is used to recognize the pose of each of the objects randomly placed in a container and estimate a gripping position for a robot hand.
Another technique called model-less gripping position recognition uses, instead of a model for an object, 3D measurement data for an object and the shape of a robot hand to recognize a position on the object for gripping. The technique is usable for rigid objects such as bolts or nuts, as well as for any objects for which a model based on 3D CAD data cannot be used. Such objects include non-rigid objects such as flexible cables and objects with an indefinite shape such as packaging for liquid detergents.
For example, the technique described in Patent Literature 1 uses a range image and a two-dimensional (2D) hand model to calculate a gripping pose that allows a segment extracted from the range image to be within the opening width of the hand and allows no collision with any surrounding segments.
When gripping poses with multiple opening widths are calculated using a 2D hand model for a multi-finger hand, a 2D hand model is created for each opening width, and the gripping pose is recognized separately with each model. The calculation time thus increases in proportion to the number of opening widths. When a gripping pose for gripping target objects with different orientations and shapes is searched for with one opening width, the largest possible opening width is to be set for the hand to grip all the target objects. The largest possible opening width may work for a target object surrounded by no obstacles. However, in picking randomly placed objects, for example, the hand may collide with another object near the gripping target, thus limiting the number of obtainable candidate gripping poses.
Patent Literature 1: Japanese Patent No. 5558585
In response to the above issue, one or more aspects of the present invention are directed to a technique for model-less calculation of a gripping pose at high speed.
An information processor according to an aspect of the present invention is an information processor for calculating, for a robot hand including a plurality of fingers, a gripping pose at which the robot hand grips a target object. The information processor includes a candidate single-finger placement position detector that detects, based on three-dimensional measurement data obtained through three-dimensional measurement of the target object and hand shape data about a shape of the robot hand, candidate placement positions for each of the plurality of fingers of the robot hand, a multi-finger combination searcher that searches for, among the candidate placement positions for each of the plurality of fingers, a combination of candidate placement positions to allow gripping of the target object, and a gripping pose calculator that calculates, based on the combination of candidate placement positions for each of the plurality of fingers, a gripping pose at which the robot hand grips the target object.
The information processor according to the above aspect of the present invention causes the candidate single-finger placement position detector to detect, based on 3D measurement data obtained through 3D measurement of a target object and hand shape data about the shape of a robot hand including a plurality of fingers, a candidate placement position for each of the fingers of the robot hand for calculating a gripping pose at which the robot hand grips the target object. Among the candidate placement positions detected in this manner for the fingers, the multi-finger combination searcher then searches for combinations of placement positions that satisfy criteria including a criterion about the positional relationship between the fingers of the robot hand and allow gripping of the target object. Based on the combinations searched for the fingers in this manner, the gripping pose calculator calculates a gripping pose based on the order of priority and other information. This eliminates repeated calculation of gripping poses for each of multiple hand shape models with different opening widths. For any number or any arrangement of fingers of a multi-finger hand, this structure allows model-less calculation of the gripping pose at high speed by detecting candidate placement positions for each finger and searching for combinations of such placement positions.
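For illustration, the detect-search-calculate pipeline can be organized as in the following structural sketch (Python; every name, type, and field here is an illustrative assumption rather than part of the disclosure):

```python
# Structural sketch of the detect -> search -> calculate pipeline.
# All names and fields are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class HandShapeData:
    finger_width: float       # finger footprint size in image units
    max_opening_width: float  # largest distance allowed between fingertips

@dataclass
class GrippingPose:
    x: float                  # gripping position in the image plane
    y: float
    angle_deg: float          # hand rotation relative to the image
    opening_width: float      # distance between the fingers at this pose
    score: float              # integrated evaluation value for prioritization

def detect_candidate_positions(range_image, hand: HandShapeData) -> List[Tuple[int, int]]:
    """Candidate single-finger placement position detector (stub)."""
    ...

def search_finger_combinations(candidates, hand: HandShapeData) -> list:
    """Multi-finger combination searcher (stub)."""
    ...

def calculate_gripping_poses(combinations) -> List[GrippingPose]:
    """Gripping pose calculator (stub): prioritizes combinations and
    converts them into gripping poses."""
    ...
```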
In the above aspect of the present invention, the candidate single-finger placement position detector may further detect, based on three-dimensional measurement data about the target object obtained at an angle changed relative to the plurality of fingers for which candidate placement positions are to be detected, candidate placement positions for each of the plurality of fingers of the robot hand.
This eliminates the need to change the process of detecting a candidate placement position for each finger in accordance with the angle of the fingers relative to the target object. Instead, a candidate placement position for each finger is detected in 3D measurement data obtained at an angle changed relative to the fingers. This allows the detection process to be performed in a uniform manner, thus allowing high-speed calculation of gripping poses at different angles relative to the target object.
In the above aspect of the present invention, the candidate single-finger placement position detector may detect an edge in a depth direction of a range image represented by the three-dimensional measurement data and detect, based on the detected edge, candidate placement positions for each of the plurality of fingers.
In this manner, a placement position for each finger of the robot hand is detected based on an edge in the depth direction of the range image. A position that the robot hand can easily grip is thus detected as a candidate placement position. Of the edges in the depth direction with respect to the image plane of the range image, any edge with at least a certain intensity may be detected as a candidate placement position for a finger.
In the above aspect of the present invention, the candidate single-finger placement position detector may detect, based on the hand shape data, candidate placement positions for each of the plurality of fingers to avoid collision at a position of the edge.
The robot hand at the calculated gripping pose can thus avoid collision in a reliable manner.
In the above aspect of the present invention, the multi-finger combination searcher may calculate, for the combination of candidate placement positions for each of the plurality of fingers, a holdable height indicating an overlap, in the depth direction, between edges corresponding to the candidate placement positions for each of the plurality of fingers, and search for, based on the holdable height, a combination of candidate placement positions for each of the plurality of fingers.
A larger holdable height allows the robot hand to grip the target object accurately. Thus, a combination of candidate placement positions for the fingers of the robot hand is searched for based on the holdable height to allow calculation of a gripping pose that achieves a high success rate for gripping. A threshold may be set for the holdable height, and any combination with a calculated holdable height that exceeds the threshold may be determined as a combination of candidate placement positions for the fingers.
In the above aspect of the present invention, the multi-finger combination searcher may calculate, for the combination of candidate placement positions for each of the plurality of fingers, an inner recess height indicating a recess between edges corresponding to the candidate placement positions for each of the plurality of fingers, and search for, based on the inner recess height, a combination of candidate placement positions for each of the plurality of fingers.
When the inner recess height is large, two or more target objects are highly likely to be between the candidate placement positions for the fingers. With a combination of such candidate placement positions, the robot hand is more likely to grip two or more target objects. Thus, a combination of candidate placement positions for the fingers of the robot hand is searched for based on the inner recess height to allow calculation of a gripping pose that achieves a high success rate for gripping. A threshold may be set for the inner recess height, and any combination with a calculated inner recess height less than the threshold may be determined as a combination of candidate placement positions for the fingers.
An information processing method according to another aspect of the present invention is a method for calculating, for a robot hand including a plurality of fingers, a gripping pose at which the robot hand grips a target object. The information processing method includes detecting, based on three-dimensional measurement data obtained through three-dimensional measurement of the target object and hand shape data about a shape of the robot hand, candidate placement positions for each of the plurality of fingers of the robot hand, searching for, among the candidate placement positions for each of the plurality of fingers, a combination of candidate placement positions to allow gripping of the target object, and calculating, based on the combination of candidate placement positions for each of the plurality of fingers, a gripping pose at which the robot hand grips the target object.
With the information processing method according to the above aspect of the present invention, a candidate placement position for each of the fingers of the robot hand is detected, based on 3D measurement data obtained through 3D measurement of a target object and hand shape data about the shape of a robot hand including a plurality of fingers, for calculating a gripping pose of the robot hand to grip the target object. Among the candidate placement positions detected in this manner for the fingers, combinations of placement positions that satisfy criteria including a criterion about the positional relationship between the fingers of the robot hand and allow gripping of the target object are searched for. Based on the combinations found in this manner for the fingers, a gripping pose is calculated based on the order of priority and other information. This eliminates repeated calculation of gripping poses for each of multiple hand shape models with different opening widths. For any number or any arrangement of fingers of a multi-finger hand, this method allows model-less calculation of the gripping pose at high speed by detecting candidate placement positions for each finger and searching for combinations of such placement positions.
The information processing method according to the above aspect of the present invention may also be provided as a program for causing a computer to implement the method or a non-transitory storage medium storing the program.
The technique according to the above aspects of the present invention allows model-less calculation of a gripping pose at high speed.
Example uses of the present invention will now be described with reference to the drawings.
The present invention is applicable to, for example, an information processor 21 included in a gripping position recognition apparatus 2 shown in the drawings.
With a known gripping pose calculation method for a multi-finger hand, a gripping pose calculation process may be performed multiple times for two-dimensional (2D) hand models each corresponding to an opening width of each finger. The process is thus time-consuming. When the gripping pose is searched for with one opening width, the largest possible width is set.
The technique according to the embodiments of the present invention detects candidate finger placement positions for each of the multiple fingers and searches for, among the candidate finger placement positions for the fingers, one or more combinations that allow gripping of the target. This allows finger placement positions to be detected and combinations to be searched for across multiple finger opening widths at once.
Examples of the multi-finger hand include a two-finger hand and a three-finger hand. A multi-finger hand may have a different number of fingers. A hand is also referred to as a gripper or an end effector.
The gripping position recognition apparatus 2 including the information processor 21 according to a first embodiment of the present invention will now be described with reference to the drawings.
The gripping position recognition apparatus 2 is installed on a production line for, for example, product assembly or processing. The gripping position recognition apparatus 2 recognizes, based on data received from the sensor unit 20 and data about the shape of the multi-finger hand 26, the gripping pose of the robot 27 with respect to a target object 29 placed in the tray 28. Recognition target objects (hereafter also referred to as target objects) 29 are randomly placed in the tray 28.
The gripping position recognition apparatus 2 mainly includes the sensor unit 20 and the information processor 21. The sensor unit 20 and the information processor 21 are connected to each other with wires or wirelessly. The information processor 21 receives an output from the sensor unit 20. The information processor 21 performs various processes using data received from the sensor unit 20. Examples of the processes performed by the information processor 21 include distance measurement (ranging), 3D shape recognition, object recognition, and scene recognition. The recognition result from the gripping position recognition apparatus 2 is output to, for example, the PLC 25 or a display 22, and is used for controlling the robot 27, for example.
The sensor unit 20 includes at least a camera for capturing optical images of target objects 29. The sensor unit 20 may further include any component (e.g., a sensor, an illuminator, and a projector) to be used for 3D measurement of the target objects 29. For measuring the depth using stereo matching (also referred to as stereo vision or a stereo camera system), for example, the sensor unit 20 includes multiple cameras. For active stereo that projects a random dot pattern onto the target object 29, the sensor unit 20 further includes a projector for projecting structured light onto the target objects 29. For 3D measurement using pattern projection with space encoding, the sensor unit 20 includes a projector for projecting patterned light and cameras. Another method may be used to generate 3D information about the target objects 29, such as photometric stereo, a time-of-flight (TOF) method, or phase shifting.
The information processor 21 is, for example, a computer including a central processing unit (CPU), a random-access memory (RAM), a nonvolatile storage (e.g., a hard disk drive or a solid-state drive), an input device, and an output device. In this case, the CPU loads the program stored in the nonvolatile storage into the RAM and executes the program to implement the various components described later. The information processor 21 may have another configuration. The components may be entirely or partly implemented by a dedicated circuit such as a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC), or by cloud computing or distributed computing.
An example gripping position recognition process performed with the information processing method by the information processor 21 will now be described with reference to the flowchart.
In step S101, the candidate single-finger placement position detector 211 obtains, as 3D measurement data about the target objects, a range image with depth values (depth information) associated with respective points (pixels) in a 2D image, and hand shape data. In the present embodiment described below, a two-finger hand 261 including two fingers 2611 and 2612 is used.
In step S102, a coefficient k that defines the rotation angle of the range image (described later) is set to 0, where k is an integer greater than or equal to 0 and less than or equal to N.
In step S103, the candidate single-finger placement position detector 211 rotates the range image obtained in step S101 by the angle kΔθ. When the unit angle Δθ by which the range image is rotated is set to a smaller value, the processing is performed more times for the rotated images, thus increasing the processing load. When the unit angle Δθ is set to a larger value, fewer candidates are obtained, possibly disabling detection of an optimum gripping pose. Based on such conditions, the unit angle Δθ is preset or set by the user through an input operation. In the present embodiment, the unit angle Δθ is set to 15 degrees, and the rotation is counterclockwise. Rotating the range image in this manner changes the angle of the multi-finger hand 26 relative to the target object, allowing calculation of gripping poses at different angles relative to the target object. Moreover, rotating the range image in this manner eliminates the need to switch the direction for detecting a candidate single-finger placement position between x-direction and y-direction.
When the unit angle Δθ is set to 15 degrees, N is set to 12, or 180 degrees divided by 15 degrees (described later). The value of k is increased from 0 to 11 in increments of 1 to detect single-finger placement positions in range images with rotation angles increased from 0 to 165 degrees in increments of 15 degrees. The two-finger hand 261 in the present embodiment has the two fingers 2611 and 2612 facing each other at an angle of 180 degrees. While facing each other at an angle of 180 degrees, the two fingers 2611 and 2612 move to have the distance between them increased or decreased to grip a target object. Thus, any range image rotated by 180 degrees or more is equivalent to its corresponding range image in which the two fingers 2611 and 2612 are swapped with each other. The processing may be eliminated for such a range image.
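A minimal sketch of this step, assuming the range image is a NumPy float array (invalid depths as NaN) and using scipy's image rotation; the function names and the sign convention of the rotation are assumptions:

```python
# Sketch: generate the N rotated views of the range image so that
# single-finger detection can always scan along x-direction.
import numpy as np
from scipy.ndimage import rotate

def rotated_views(range_image: np.ndarray, delta_theta: float = 15.0):
    n = int(180 / delta_theta)        # N = 12 for the 15-degree example
    for k in range(n):                # k = 0 .. N-1 (0 to 165 degrees)
        # Nearest-neighbor interpolation (order=0) avoids blending depth
        # values across object edges; revealed corners get NaN (no depth).
        yield k * delta_theta, rotate(range_image, angle=k * delta_theta,
                                      reshape=True, order=0,
                                      mode='constant', cval=np.nan)
```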
In step S104, the candidate single-finger placement position detector 211 detects left edges in the range image of the target objects 29 rotated by the angle kΔθ in step S103. The process for the range image IM1 with a rotation angle of 0 degrees (k=0) will be described as an example.
Edge detection will now be described with reference to the range image IM1. An edge herein refers to adjoining pixels having a large difference in range between them. Edge detection may be performed along x-axis in the range image. Left edges are detected along portions of the range image corresponding to left edges of the deep-fried chicken pieces 291, and right edges ER1 are detected along portions of the range image corresponding to right edges of the deep-fried chicken pieces 291.
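A minimal sketch of such edge detection along x-axis, assuming a float range image in which larger values mean longer ranges; the threshold parameter is an assumption:

```python
# Sketch: detect depth edges along x-axis of a range image.
# A left edge is a background -> object transition (range drops sharply
# from left to right); a right edge is the mirror case.
import numpy as np

def detect_edges(range_image: np.ndarray, depth_step: float = 10.0):
    d = np.diff(range_image, axis=1)            # range change to the right neighbor
    left_edges = np.argwhere(d <= -depth_step)  # (y, x) just left of an object
    right_edges = np.argwhere(d >= depth_step)  # (y, x) on the object's right rim
    return left_edges, right_edges
```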
In step S105, the candidate single-finger placement position detector 211 performs a collision avoidance process for the left edges in the range image. In this process, the detector determines, based on the hand shape data, whether the left finger 2611 placed at each detected left edge collides with a surrounding object. For example, a left edge EL12 at which the left finger 2611 would collide with a surrounding object is excluded from the candidate placement positions.
The collision avoidance process for the right finger 2612 is performed in the same manner as the collision avoidance process for the left finger 2611 described above.
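A minimal sketch of the collision check for one finger, assuming the hand shape data reduces to a rectangular finger footprint in the image plane and an insertion depth for the fingertip; all parameters are assumptions:

```python
# Sketch: a candidate position is kept only if every surface point under
# the finger footprint lies at or beyond the fingertip insertion depth,
# i.e. nothing protrudes into the volume swept by the descending finger.
import numpy as np

def finger_fits(range_image, y, x, finger_w, finger_h, insertion_depth):
    footprint = range_image[y:y + finger_h, x:x + finger_w]
    if footprint.size < finger_w * finger_h:  # footprint leaves the image
        return False
    # NaN (unmeasured) pixels fail the comparison and are treated as unsafe.
    return bool(np.all(footprint >= insertion_depth))
```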
In step S106, the candidate single-finger placement position detector 211 detects right edges in the range image of the target objects 29 rotated by the angle kΔθ in step S103. The details of the detection method for right edges are the same as those for left edges and will not be described.
In step S107, the candidate single-finger placement position detector 211 performs a collision avoidance process for the right edges in the range image. The details of the collision avoidance process for right edges are the same as those for left edges and will not be described.
In step S108, the multi-finger combination searcher 212 performs a multi-finger combination search process. An example multi-finger combination search process will be described with reference to the flowchart.
In steps S801 and S811, the multi-finger combination searcher 212 repeats the processing in steps S802 to S809 for all y-coordinates within a target area.
In steps S802 and S810, the multi-finger combination searcher 212 repeats the processing in steps S803 to S809 for all left edges at the current y-coordinate.
In steps S803 and S809, the multi-finger combination searcher 212 repeats the processing in steps S804 to S807 for all right edges within the opening width defined with the current left edge.
In step S804, the multi-finger combination searcher 212 determines, for a left edge at the current y-coordinate, whether each of the right edges within the opening width satisfies a criterion for a holdable height. The opening width herein is the distance between the left finger 2611 and the right finger 2612.
The holdable height criterion is used to determine whether a holdable height calculated from the range image exceeds a predetermined threshold. For each of a left edge and a right edge detected in the range image by the candidate single-finger placement position detector 211, an upper end and a lower end are determined. The distance, or height, between whichever of the two upper ends has the longer range and whichever of the two lower ends has the shorter range can be recognized as the height over which the target object is holdable by the left finger and the right finger used in combination. This height is thus referred to as a holdable height. A larger holdable height allows the two-finger hand 261 to grip the target object 29 more easily. The upper end and the lower end of each edge can be determined by calculating a point at which the difference in the range is less than or equal to a predetermined value.
More specifically, in step S804, when the holdable height is less than a predetermined threshold, the multi-finger combination searcher 212 determines that the holdable height criterion is not satisfied and advances the processing to step S805 to reject the combination of the left edge and the right edge.
In step S804, when the holdable height is greater than or equal to the predetermined threshold, the multi-finger combination searcher 212 determines that the holdable height criterion is satisfied and advances the processing to step S806.
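A minimal sketch of this criterion, assuming each edge is summarized by the range values of its upper and lower ends (field names are assumptions):

```python
# Sketch: holdable height = overlap of the two graspable flanks.
# Range values grow away from the camera, so the overlap runs from the
# deeper of the two upper ends to the nearer of the two lower ends.
def holdable_height(left_edge: dict, right_edge: dict) -> float:
    top = max(left_edge['upper'], right_edge['upper'])
    bottom = min(left_edge['lower'], right_edge['lower'])
    return bottom - top  # <= 0 means the flanks do not overlap at all

def satisfies_holdable_height(left_edge, right_edge, threshold) -> bool:
    return holdable_height(left_edge, right_edge) >= threshold  # step S804
```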
In step S806, the multi-finger combination searcher 212 determines, for a left edge at the current y-coordinate, whether each of the right edges within the opening width satisfies a criterion for an inner recess height.
The inner recess height criterion is used to determine whether the height of a recess between the holding portions, calculated from the range image, is less than or equal to a predetermined threshold. For a left edge and a right edge detected in a range image by the candidate single-finger placement position detector 211, the inner recess height is the height between whichever of the two upper ends (of the left edge and the right edge) has the shorter range and the point with the longest range located between the two upper ends.
When the inner recess height is large, the left finger and the right finger may grip two or more target objects. However, a single target object is to be gripped in a reliable manner. Thus, in step S806, when the inner recess height is greater than a predetermined threshold, the multi-finger combination searcher 212 determines that the inner recess height criterion is not satisfied and advances the processing to step S805 to reject the combination of the left edge and the right edge.
When the inner recess height is less than or equal to the predetermined threshold in step S806, the multi-finger combination searcher 212 advances the processing to step S807 to register the combination of the left edge and the right edge as a current candidate combination.
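A minimal sketch of this criterion, assuming direct access to the row of the range image between the two candidate finger positions (parameter names are assumptions):

```python
# Sketch: inner recess height = range of the deepest point between the
# fingers, measured from the nearer (shorter-range) of the two edge tops.
import numpy as np

def inner_recess_height(range_image, y, x_left, x_right,
                        left_upper, right_upper) -> float:
    between = range_image[y, x_left + 1:x_right]  # profile between the fingers
    if between.size == 0:
        return 0.0
    top = min(left_upper, right_upper)            # nearer of the two upper ends
    return float(np.nanmax(between) - top)        # large -> likely two objects

def satisfies_inner_recess(recess_height, threshold) -> bool:
    return recess_height <= threshold             # step S806
```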
As described above, the processing in steps S804 to S807 is performed for all the right edges within the opening width defined with the current left edge (step S809). The processing in steps S803 to S809 is then performed for all the left edges at the current y-coordinate (step S810). The processing in steps S802 to S810 is then performed for all y-coordinates within the target area (step S811). The multi-finger combination search process ends.
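Putting the loop structure together, a self-contained skeleton of steps S801 to S811 could look as follows (the two criteria are passed in as callables; the data layout is an assumption):

```python
# Sketch: nested combination search over rows, left edges, and right edges.
# left_edges / right_edges map a y-coordinate to a list of edge records,
# each with at least an 'x' field.
def search_combinations(left_edges, right_edges, opening_width,
                        holdable_ok, recess_ok):
    candidates = []
    for y in sorted(left_edges):                  # S801/S811: every row
        for le in left_edges[y]:                  # S802/S810: every left edge
            for re in right_edges.get(y, []):     # S803/S809: right edges
                if not (0 < re['x'] - le['x'] <= opening_width):
                    continue                      # outside the opening width
                if not holdable_ok(le, re):       # S804 fails
                    continue                      # S805: reject
                if not recess_ok(le, re):         # S806 fails
                    continue                      # S805: reject
                candidates.append((y, le, re))    # S807: register candidate
    return candidates
```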
Referring back to the flowchart, in step S109, whether k<N−1 holds is determined.
When k<N−1 is determined in step S109, k+1 is substituted for k (step S110), and the processing in step S103 and subsequent steps is repeated.
When k<N−1 is determined not to hold in step S109, the processing advances to step S111.
In step S111, multi-finger search results for the range images with rotation angles from 0 to (N−1)Δθ in increments of Δθ are integrated, and candidate multi-finger combinations are prioritized.
Multiple evaluation indices may be used for prioritizing integrated candidate multi-finger combinations.
As described above, the prioritization is performed using a combination of the three evaluation indices, or specifically the range to the target object 29, the straightness of portions of the target object 29 in contact with the inner side surfaces 2611a and 2612a of the fingers 2611 and 2612, and the holdable height. The multiple evaluation indices may be combined by calculating the weighted sum of the evaluation indices or totaling discrete evaluation results of the evaluation indices. A target object with a higher integrated evaluation index value combining the evaluation indices has a higher priority.
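A minimal sketch of such prioritization as a weighted sum; the weights, normalization, and field names are assumptions (the text requires only that the three indices be combined):

```python
# Sketch: integrated evaluation value combining the three indices.
# Nearer objects, straighter contact portions, and taller holdable
# heights all raise the score; a higher score means a higher priority.
def score(c, w_range=1.0, w_straight=1.0, w_hold=1.0) -> float:
    return (w_range * (1.0 - c['normalized_range'])
            + w_straight * c['straightness']
            + w_hold * c['normalized_holdable_height'])

def prioritize(candidates):
    return sorted(candidates, key=score, reverse=True)  # step S111
```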
In step S112, gripping poses prioritized in step S111 are output to the PLC 25. The PLC 25 controls the robot 27 and the multi-finger hand 26 in accordance with the prioritized gripping poses to grip a target object 29.
In the present embodiment, a three-finger hand 262 including three fingers 2621, 2622, and 2623 is used.
A gripping position recognition apparatus in a second embodiment has the same configuration as the gripping position recognition apparatus in the first embodiment except the structure of the multi-finger hand. The same components as those in the first embodiment are given the same reference numerals and will not be described in detail.
An example gripping position recognition process performed with the information processing method by the information processor 21 will now be described with reference to the flowchart.
The processing in steps S101 to S103 is the same as in the first embodiment and will not be described.
In step S104, left edges in the range image rotated by the angle kΔθ are detected. In step S105, a collision avoidance process is performed. The processing in the present embodiment is the same as the processing in the first embodiment. However, for the three-finger hand 262, left edge detection in the range image is performed differently and will now be described.
For the two-finger hand 261, a portion of the target object 29 to be gripped by the finger 2611 is detected as a left edge in the range image, and a portion of the target object 29 to be gripped by the finger 2612 is detected as a right edge in the same range image. The combinations of a left edge and a right edge are used for searching for combinations of candidate placement positions for the two fingers 2611 and 2612. For the three-finger hand 262, when a portion of the target object 29 to be gripped by the finger 2621 is detected as a left edge in the range image, neither of the other fingers is movable in a direction parallel to x-axis with respect to the y-coordinate of the detected left edge in the same range image.
In subsequent step S201, right edges (first right edges) are detected in the range image obtained in step S103 and further rotated clockwise by 60 degrees. The right edges are detected with the same method as in step S106 in the first embodiment except that the range image rotated clockwise by 60 degrees is used. The method will not be described in detail.
In step S202, the collision avoidance process is performed for the right edges detected in step S201. The collision avoidance process for the right edges is the same as in step S107 in the first embodiment and will not be described in detail.
In step S203, right edges (second right edges) are detected in the range image obtained in step S103 and further rotated counterclockwise by 60 degrees. The right edges are detected with the same method as in step S106 in the first embodiment except that the range image rotated counterclockwise by 60 degrees is used. The method will not be described in detail.
In step S204, the collision avoidance process is performed for the right edges detected in step S203. The collision avoidance process for the right edges is the same as in step S107 in the first embodiment and will not be described in detail.
In step S205, the multi-finger combination searcher 212 performs a multi-finger combination search process. An example multi-finger combination search process will be described with reference to the flowchart.
The processing in steps S801 and S802 is the same as in the first embodiment and will not be described.
In steps S901 and S905, the processing in steps S902 and S804 to S807 is repeated for all first right edges and second right edges within the opening width defined with the current left edge. The opening width is the same as in the first embodiment except that the three-finger hand 262 has two opening widths: one between the finger 2621 and the finger 2622, and the other between the finger 2621 and the finger 2623.
In step S902, it is determined whether, for a first right edge and a second right edge within the opening width defined with the left edge, the horizontal lines extending from a point (edge point) included in each edge intersect with one another with a predetermined offset or less between them.
When the determination result is affirmative (Yes) in step S902, the processing advances to step S804.
When the determination result is negative (No) in step S902, the processing advances to step S805 to reject the combination of the left edge, the first right edge, and the second right edge.
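A minimal sketch of the step S902 check: each edge point's scan line (horizontal in the rotated view where it was detected) is expressed in the base frame, and the pairwise intersections of the three lines must lie within the offset of one another. The coordinate and sign conventions here are assumptions:

```python
# Sketch: do the three finger scan lines converge within max_offset?
# Edge points are assumed to be already mapped back to base-frame (x, y).
import itertools
import numpy as np

def scan_line(point, angle_deg):
    theta = np.radians(angle_deg)
    return (np.asarray(point, float),
            np.array([np.cos(theta), np.sin(theta)]))  # point, direction

def intersect(l1, l2):
    (p1, d1), (p2, d2) = l1, l2
    t = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
    return p1 + t[0] * d1            # solves p1 + t0*d1 == p2 + t1*d2

def fingers_converge(left_pt, right1_pt, right2_pt, max_offset):
    lines = [scan_line(left_pt, 0.0),      # left edge view (0 degrees)
             scan_line(right1_pt, 60.0),   # view rotated clockwise by 60
             scan_line(right2_pt, -60.0)]  # view rotated counterclockwise by 60
    pts = [intersect(a, b) for a, b in itertools.combinations(lines, 2)]
    return max(np.linalg.norm(p - q)
               for p, q in itertools.combinations(pts, 2)) <= max_offset
```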
The processing in steps S804 and S806 is the same as in the first embodiment and will not be described in detail. For the three-finger hand 262, the holdable height criterion and the inner recess height criterion can be determined for each of the three fingers with respect to each of the other two fingers. The criteria may be satisfied for all the three combinations of two fingers, or for at least one or two combinations of the fingers.
When the inner recess height criterion is determined to be satisfied in step S806, the processing advances to step S807 to register the left edge, the first right edge, and the second right edge as a candidate combination.
As described above, the processing in steps S902 and S804 to S807 is repeated for all first right edges and second right edges within the opening width defined with the current left edge.
Subsequently, the processing in steps S901 to S905 is repeated for all the left edges at the current y-coordinate.
After the processing in steps S802 to S811 is repeated for all y-coordinates, the multi-finger combination search process ends, and the processing advances to step S109.
The processing in steps S109 to S112 is the same as in the first embodiment and will not be described in detail. In the present embodiment, in step S104, left edges are detected in a range image rotated by the angle kΔθ in step S103. In step S201, first right edges are detected in the range image rotated clockwise by 60 degrees. In step S203, second right edges are detected in the range image rotated counterclockwise by 60 degrees.
In the present embodiment, similarly to the first embodiment, when the unit angle Δθ is set to 15 degrees, N is set to 8, or 120 degrees divided by 15 degrees. The value of k is increased from 0 to 7 in increments of 1 to detect single-finger placement positions in range images with rotation angles increased from 0 to 105 degrees in increments of 15 degrees. The three-finger hand 262 has the three fingers 2621, 2622, and 2623 arranged at intervals of 120 degrees. While maintaining this arrangement, the three fingers 2621, 2622, and 2623 move to have the distance between them increased or decreased to grip a target object. Thus, a range image rotated by 120 degrees or more is equivalent to its corresponding range image with each of the three fingers 2621, 2622, and 2623 replaced by its adjacent finger. The processing may be eliminated for such a range image.
The present embodiment describes the three-finger hand 262. Prioritized gripping poses can also be calculated in the same manner for a multi-finger hand with four or more fingers. For the three-finger hand 262, left edges, first right edges, and second right edges are detected to search for multi-finger combinations. Combinations of left and right edges other than those described above may also be detected. A target range image is rotated, in accordance with the arrangement of the fingers included in the multi-finger hand, so that each finger moves in x-direction to grip a left edge or a right edge of a target object. A left edge or a right edge is then detected in the rotated range image. Among the horizontal lines passing through points included in the left edges or right edges detected in this manner (lines in x-direction in the rotated range images), lines intersecting one another with a predetermined offset or less between them in the target range image are detected. Combinations of edges that satisfy criteria including the holdable height criterion are then registered as candidate multi-finger combinations and prioritized based on a predetermined evaluation index. In this manner, prioritized gripping poses can also be calculated for a multi-finger hand with four or more fingers.
The elements in the aspects of the present invention below are identified with reference numerals used in the drawings to show the correspondence between these elements and the components in the embodiments.
An information processor (21) for calculating, for a robot hand (261) including a plurality of fingers (2611, 2612), a gripping pose at which the robot hand (261) grips a target object (29), the information processor (21) comprising:
a candidate single-finger placement position detector (211) configured to detect, based on three-dimensional measurement data obtained through three-dimensional measurement of the target object (29) and hand shape data about a shape of the robot hand (261), candidate placement positions for each of the plurality of fingers (2611, 2612) of the robot hand (261);
a multi-finger combination searcher (212) configured to search for, among the candidate placement positions for each of the plurality of fingers (2611, 2612), a combination of candidate placement positions to allow gripping of the target object (29); and
a gripping pose calculator (213) configured to calculate, based on the combination of candidate placement positions for each of the plurality of fingers (2611, 2612), a gripping pose at which the robot hand (261) grips the target object (29).
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2019/032049 | 8/15/2019 | WO |