This is a U.S. national stage application of PCT Application No. PCT/CN2019/128936 under 35 U.S.C. 371, filed Dec. 27, 2019 in Chinese, claiming priority to Chinese Patent Application No. 201910064184.0, filed Jan. 23, 2019, all of which are hereby incorporated by reference.
The present invention relates to the technical field of computer vision and industrial automation, in particular to a method for grasping texture-less metal parts based on contrast-invariant bunch of lines descriptor image matching.
Grasping of texture-less metal parts has always been an important research interest in the field of computer vision and industrial automation, and is required in many application scenarios such as part recognition.
The most common method for grasping textured objects is to extract and then match feature points (such as SIFT (scale-invariant feature transform) or SURF (speeded-up robust features)) between templates and real images; this method is efficient and accurate. However, it is not suitable for texture-less metal parts, from which valid feature points cannot be extracted.
Most existing matching-based methods for grasping texture-less parts are typically implemented as follows: part contours are extracted from real part images and compared with template contours, the most similar template is taken as the correctly matched template, and grasping is carried out according to the pose corresponding to that template. Common approaches either directly compare corresponding pixels of the two contour images (the template image and the real image), or extract features (such as moment features) from the two contour images and calculate the similarity of those features. However, all these methods treat the contour as a whole and are therefore susceptible to external factors; their computational complexity is high and their matching accuracy is low, which ultimately lowers the success rate of grasping.
In recent years, some scholars have put forward a method for matching and grasping texture-less objects by means of descriptors built from adjacent line segments (bunch of lines descriptor, BOLD). This descriptor-based matching method can accurately match line segments between images, is robust to rotation, translation and scale variations, and can produce results precise enough for grasping. However, because BOLD orients line segments by the image gradient direction, the descriptors can no longer be matched accurately when the image contrast changes, resulting in grasping failures.
To overcome the defects of the aforesaid matching methods, the present invention provides a method for grasping texture-less metal parts based on BOLD image matching. The present invention puts forward a novel definition of line segment direction and improves the distance function used for matching, so that the method is suitable for more general conditions and satisfies practical application requirements.
As shown in the accompanying flow diagram, the method of the present invention comprises the following steps:
Step 1: photographing a real texture-less metal part placed in a real environment by a real physical camera to obtain a real image; photographing a texture-less metal part CAD model imported in a computer virtual scene by a virtual camera to obtain CAD template images; extracting a foreground part of the input real image and the input CAD template images, calculating a covariance matrix of the foreground part, and establishing the direction of a temporary coordinate system;
Wherein, the CAD model is a mesh model, such as a triangular mesh.
Step 2: processing the real image and all the CAD template images by means of a line segment detector (LSD), extracting edges in the real image and all the CAD template images and using the edges as line segments, traversing all the line segments in each image, and setting directions of the line segments in the temporary coordinate system;
Step 3: for each image, traversing all the line segments, and constructing a descriptor of each line segment according to an angle relation between the line segment and k nearest line segments;
Step 4: matching the descriptors of the line segments in the real image and the CAD template images, even when the two sets of descriptors are generated with different k values, to obtain line segment pairs; and
Step 5: solving for the pose by means of a perspective-n-lines (PnL) method according to the matched line segment pairs to obtain the pose of the real texture-less metal part, and then inputting the pose of the real texture-less metal part into a mechanical arm to grasp the part.
The texture-less metal part is a polyhedral metal part with a flat and smooth surface and free of pits, protrusions and textures.
Specifically, in Step 1, the foreground part of each image is extracted and used as a foreground image, the covariance matrix of the foreground image is calculated to obtain its two eigenvalues and the corresponding eigenvectors, the eigenvector corresponding to the larger eigenvalue is taken as the positive x-axis direction of the temporary coordinate system, and the other eigenvector is taken as the positive y-axis direction of the temporary coordinate system.
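As an illustration of this step, the following Python sketch (the function name and interface are hypothetical; the foreground is assumed to be already available as a binary mask) derives the temporary coordinate axes from the covariance matrix of the foreground pixel coordinates:

```python
import numpy as np

def temporary_axes(foreground_mask):
    """Derive the temporary coordinate axes from a binary foreground mask (Step 1).

    The eigenvector of the pixel-coordinate covariance matrix belonging to the
    larger eigenvalue becomes the positive x-axis, the other the positive
    y-axis.  Eigenvector signs are arbitrary; a real system would fix a sign
    convention, which the patent does not specify.
    """
    ys, xs = np.nonzero(foreground_mask)       # foreground pixel coordinates
    pts = np.stack([xs, ys]).astype(float)     # shape (2, N)
    cov = np.cov(pts)                          # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    x_axis = eigvecs[:, 1]                     # larger eigenvalue -> x-axis
    y_axis = eigvecs[:, 0]                     # smaller eigenvalue -> y-axis
    return x_axis, y_axis
```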
Traversing all the line segments to set their directions in Step 2 is performed specifically as follows: a temporary coordinate system is established with any point on each line segment as its origin; then, if the line segment passes through the first quadrant, the line segment is set to point toward the first quadrant of the temporary coordinate system; if the line segment passes through the second quadrant, it is set to point toward the second quadrant; and if the line segment passes through neither the first nor the second quadrant, it is set to point along the boundary between the first and second quadrants, as sketched below.
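This rule amounts to flipping each segment's direction, if necessary, so that it has a non-negative y-component in the temporary frame. A minimal Python sketch follows (hypothetical helper; the tie-break for segments parallel to the x-axis, which the patent leaves ambiguous, is an assumption of this sketch):

```python
import numpy as np

def orient_segment(p1, p2, x_axis, y_axis):
    """Orient a line segment in the temporary coordinate system (Step 2).

    The direction is flipped so that it points toward the first or second
    quadrant, i.e. has a positive y-component in the temporary frame.
    Segments parallel to the x-axis are pointed in the positive x direction
    (an assumption of this sketch, not stated in the patent).
    """
    d = np.asarray(p2, float) - np.asarray(p1, float)
    # Express the direction in the temporary coordinate system.
    dx, dy = np.dot(d, x_axis), np.dot(d, y_axis)
    if dy < 0 or (dy == 0 and dx < 0):
        d = -d                                 # flip toward quadrants I/II
    return d / np.linalg.norm(d)               # unit direction vector
```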
In Step 3, the k nearest line segments of each line segment are selected in order according to the distances between midpoints of the line segments. That is, for each line segment, the distances between the midpoint of this line segment and the midpoints of all the other line segments are calculated, and k line segments with shortest distances are selected as the k nearest line segments.
The k value is identical for all line segments within one image; k values for different images may be identical or different.
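The neighbor selection described above is a plain k-nearest-neighbor query on segment midpoints; a minimal sketch (hypothetical helper name):

```python
import numpy as np

def k_nearest_segments(midpoints, k):
    """For each segment, indices of the k segments with nearest midpoints (Step 3)."""
    m = np.asarray(midpoints, float)           # (N, 2) array of segment midpoints
    # Pairwise Euclidean distances between midpoints.
    dist = np.linalg.norm(m[:, None, :] - m[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)             # exclude the segment itself
    return np.argsort(dist, axis=1)[:, :k]     # (N, k) neighbor indices
```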
Specifically, in Step 3:
3.1: taking two line segments si and sj as a line segment and one of its nearest line segments, a first angle α and a second angle β are calculated, as illustrated in the accompanying drawings, according to the formulas below:
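A signed-angle formulation consistent with the definitions in the next paragraph and with the original BOLD construction of Tombari et al. is given here as a hedged reconstruction; the invention's exact formula may differ:

$$\alpha = \begin{cases} \arccos\dfrac{\mathbf{s}_i \cdot \mathbf{t}_{ij}}{\|\mathbf{s}_i\|\,\|\mathbf{t}_{ij}\|}, & (\mathbf{s}_i \times \mathbf{t}_{ij}) \cdot \mathbf{n} \ge 0 \\[2ex] 2\pi - \arccos\dfrac{\mathbf{s}_i \cdot \mathbf{t}_{ij}}{\|\mathbf{s}_i\|\,\|\mathbf{t}_{ij}\|}, & (\mathbf{s}_i \times \mathbf{t}_{ij}) \cdot \mathbf{n} < 0 \end{cases}$$

$$\beta = \begin{cases} \arccos\dfrac{\mathbf{s}_j \cdot \mathbf{t}_{ji}}{\|\mathbf{s}_j\|\,\|\mathbf{t}_{ji}\|}, & (\mathbf{s}_j \times \mathbf{t}_{ji}) \cdot \mathbf{n} \ge 0 \\[2ex] 2\pi - \arccos\dfrac{\mathbf{s}_j \cdot \mathbf{t}_{ji}}{\|\mathbf{s}_j\|\,\|\mathbf{t}_{ji}\|}, & (\mathbf{s}_j \times \mathbf{t}_{ji}) \cdot \mathbf{n} < 0 \end{cases}$$

with $\mathbf{t}_{ji} = -\mathbf{t}_{ij}$.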
wherein, si and sj are the vector representations of the two line segments in the same image, with their vector directions determined by the directions of the line segments in the temporary coordinate system obtained in Step 2; n is a unit vector perpendicular to the image plane; ‖a‖ denotes the modulus (length) of a vector a; mi and mj are the midpoints of the line segments si and sj, respectively; and tij is the vector pointing from mi to mj.
3.2: for each line segment of the images, the first angles α and the second angles β between the line segment and its k nearest line segments are obtained according to Step 3.1; that is, the contrast-invariant BOLD of each line segment is constructed from the k pairs of first angles α and second angles β, which form a matrix representing the descriptor.
In actual implementation, each pair of first angle α and second angle β can be discretely accumulated into a 2D joint histogram; in this specification the discrete step length (bin width) is set to π/12, and the 2D joint histogram is the descriptor of the line segment, as sketched below.
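A minimal sketch of this accumulation, assuming the angles lie in [0, 2π) (so the histogram is 24 × 24 at a bin width of π/12; the helper name is hypothetical):

```python
import numpy as np

def bold_descriptor(alphas, betas, bin_width=np.pi / 12):
    """Accumulate the k (alpha, beta) pairs into a 2D joint histogram (Step 3.2).

    The histogram is flattened so that descriptors can later be compared
    element-wise by Euclidean distance.
    """
    n_bins = int(round(2 * np.pi / bin_width))     # 24 bins per axis
    ai = np.minimum((np.asarray(alphas) / bin_width).astype(int), n_bins - 1)
    bi = np.minimum((np.asarray(betas) / bin_width).astype(int), n_bins - 1)
    hist = np.zeros((n_bins, n_bins))
    np.add.at(hist, (ai, bi), 1.0)                 # discrete accumulation
    return hist.ravel()
```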
Specifically, in Step 4:
4.1: let the k values used to generate the descriptors of the line segments in the real image and the CAD template images be k1 and k2, respectively;
If k1 = k2, the Euclidean distance between the descriptor of each line segment in the real image and the descriptor of each line segment in the CAD template images is calculated according to the following formula, and the two line segments corresponding to the nearest descriptors are regarded as matched and constitute a line segment pair:
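With the symbols defined in the next paragraph, the Euclidean distance takes its standard form:

$$d = \sqrt{\sum_{i=1}^{n}\left(d_i^{1} - d_i^{2}\right)^{2}}$$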
wherein, d is the Euclidean distance between the two descriptors, d_i^1 is the i-th element of the descriptor of the line segment in the first image, d_i^2 is the i-th element of the descriptor of the line segment in the second image, i is the index of the elements in the descriptor, and n is the total number of elements in the descriptor;
If k1<k2, processing is performed as follows:
First, the descriptor of each line segment in each image is corrected according to the following formula:
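A plausible reconstruction of the correction, assumed here rather than taken from the source, divides each descriptor element by the neighbor count so that histograms accumulated from different numbers of angle pairs become comparable; k_m denotes the k value used for the m-th image:

$$\tilde d_i^{\,m} = \frac{d_i^{\,m}}{k_m}$$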
wherein, d_i^m is the element at the i-th position of the descriptor of a line segment in the m-th image.
Then, the descriptors of all the images are normalized:
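Unit-length (L2) normalization, the standard choice when descriptors are compared by Euclidean distance, is assumed here:

$$\hat d_i^{\,m} = \frac{\tilde d_i^{\,m}}{\sqrt{\sum_{j=1}^{n}\left(\tilde d_j^{\,m}\right)^{2}}}$$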
wherein, m is the index of the image and is 1 or 2; descriptors generated with k1 correspond to m = 1, and descriptors generated with k2 correspond to m = 2.
Finally, the Euclidean distance between the corrected descriptor of each line segment in the real image and the corrected descriptor of each line segment in the CAD template images is calculated according to the following formula, and the two line segments corresponding to the nearest descriptors are regarded as matched:
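With the corrected and normalized descriptor elements substituted for d_i^1 and d_i^2, the distance takes the same standard form as before:

$$d = \sqrt{\sum_{i=1}^{n}\left(d_i^{1} - d_i^{2}\right)^{2}}$$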
wherein, d is the Euclidean distance between the two descriptors, d_i^1 is the i-th element of the corrected descriptor of the line segment in the first image, d_i^2 is the i-th element of the corrected descriptor of the line segment in the second image, i is the index of the elements in the descriptor, and n is the total number of elements in the descriptor;
In Step 4, after line segment pairs have been found in the CAD template images for all the line segments in the real image, mismatches are removed by means of RANSAC (random sample consensus), and the finally obtained line segment pairs are used as the line segment matching result, as sketched below.
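The patent does not specify the geometric model used by RANSAC; the following Python sketch assumes a homography fitted to the midpoints of matched segments (at least four pairs are required), which is one common choice, and keeps the inlier pairs:

```python
import cv2
import numpy as np

def filter_matches_ransac(mid_real, mid_tmpl, thresh=3.0):
    """Discard mismatched segment pairs with RANSAC (Step 4).

    mid_real / mid_tmpl: midpoints of matched segments in the real image
    and the template image, in corresponding order.  The homography model
    is an assumption of this sketch, not stated by the patent.
    """
    src = np.asarray(mid_real, np.float32).reshape(-1, 1, 2)
    dst = np.asarray(mid_tmpl, np.float32).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, thresh)
    if inlier_mask is None:                    # RANSAC failed to find a model
        return []
    keep = inlier_mask.ravel().astype(bool)
    return [i for i, ok in enumerate(keep) if ok]
```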
The present invention has the following beneficial effects:
1) The present invention solves the problem of mismatches of texture-less metal parts caused by the variation of the background contrast.
2) The present invention improves the calculation method of distance functions for matching, so that the calculation method can adapt to different k values and can satisfy actual application requirements.
3) The present invention solves the problem that parts cannot be matched accurately when the illumination and the part pose change randomly, and it calculates the poses of parts in an industrial environment more robustly and accurately, thus greatly improving the success rate of part grasping.
The present invention will be further explained below in conjunction with the accompanying drawings and embodiments. The flow diagram of the present invention is illustrated in the accompanying drawings.
A specific embodiment of the present invention and its implementation process are as follows:
This embodiment is implemented with a U-shaped bolt as a texture-less metal part.
Step 1: a real texture-less metal part placed in a real environment is photographed by a real physical camera to obtain a real image; a texture-less metal part CAD model imported into a computer virtual scene is photographed by a virtual camera to obtain CAD template images; the foreground part of the input real image and the input CAD template images is extracted with the GrabCut algorithm, the covariance matrix of the foreground part is calculated, and the direction of a temporary coordinate system is established.
The real image and the CAD template images are specifically processed as follows: the covariance matrix of the foreground image of each input image is calculated, together with its eigenvalues and the corresponding eigenvectors; the eigenvector corresponding to the larger eigenvalue is taken as the positive x-axis direction of the temporary coordinate system, and the other eigenvector is taken as the positive y-axis direction, as shown in the accompanying drawings.
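A minimal sketch of the GrabCut foreground extraction used in this embodiment (the bounding box `rect` around the part is assumed to be given; how it is obtained is not specified by the patent):

```python
import cv2
import numpy as np

def extract_foreground(image, rect):
    """Foreground extraction with the GrabCut algorithm (Step 1).

    `rect` is an (x, y, w, h) bounding box around the part.  Returns a
    binary mask of certain and probable foreground pixels.
    """
    mask = np.zeros(image.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)  # internal GrabCut state
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image, mask, rect, bgd_model, fgd_model,
                5, cv2.GC_INIT_WITH_RECT)      # 5 iterations, rect init
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return fg.astype(np.uint8)
```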
Step 2: the real image and all the CAD template images are processed by means of a line segment detector (LSD), edges in the real image and all the CAD template images are extracted and used as line segments, all the line segments in each image are traversed, and directions of the line segments in the temporary coordinate system are set.
The detected line segments and their assigned directions are shown in the accompanying drawings.
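A minimal sketch of the LSD detection step (note: the LSD implementation is absent from some OpenCV builds for licensing reasons and is available again in OpenCV 4.5.1 and later):

```python
import cv2

def detect_segments(image_bgr):
    """Detect edges as line segments with the LSD detector (Step 2)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    lsd = cv2.createLineSegmentDetector()
    lines, width, prec, nfa = lsd.detect(gray)  # each row: (x1, y1, x2, y2)
    return lines.reshape(-1, 4) if lines is not None else []
```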
Step 3: for each image, all the line segments are traversed, and a descriptor of each line segment is constructed according to an angle relation between the line segment and k nearest line segments;
The construction of the descriptor is illustrated in the accompanying drawings:
3.1: taking two line segments si and sj as a line segment and one of its nearest line segments, a first angle α and a second angle β are calculated according to the formulas given in Step 3.1 above, as illustrated in the accompanying drawings;
3.2: for each line segment in the images, the first angles α and the second angles β between the line segment and its k nearest line segments are obtained according to Step 3.1; that is, the contrast-invariant BOLD of each line segment is constructed from the k pairs of first angles α and second angles β, which form a matrix representing the descriptor.
In actual implementation, each pair of first angle α and second angle β is discretely accumulated into a 2D joint histogram; in this embodiment the discrete step length (bin width) is set to π/12, and the 2D joint histogram is the descriptor of the line segment.
Step 4: the descriptors of the line segments in the real image and the CAD template images are matched as described above, even when the two sets of descriptors are generated with different k values, to obtain line segment pairs;
Finally, mismatches are removed through the RANSAC algorithm, and the output matching result is shown in the accompanying drawings.
Step 5: the pose is solved by means of a perspective-n-lines (PnL) method according to the matched line segment pairs to obtain the pose of the real texture-less metal part, and the pose of the real texture-less metal part is then input to a mechanical arm to grasp the part, as sketched below.
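A full perspective-n-lines solver is beyond the scope of a short sketch. Purely as an illustration, the following Python sketch substitutes OpenCV's PnP solver applied to endpoint correspondences of the matched segments; this is a common practical stand-in, not the PnL procedure of the invention, and the helper name, the 3D endpoints taken from the CAD model, and the calibrated `camera_matrix` are all assumptions:

```python
import cv2
import numpy as np

def estimate_pose(pts3d_cad, pts2d_real, camera_matrix, dist_coeffs=None):
    """Pose from matched segment endpoints (Step 5, illustrative substitute).

    pts3d_cad: 3D endpoints of matched segments on the CAD model.
    pts2d_real: corresponding 2D endpoints detected in the real image.
    Runs PnP instead of the PnL method named by the patent.
    """
    pts3d = np.asarray(pts3d_cad, np.float32).reshape(-1, 3)
    pts2d = np.asarray(pts2d_real, np.float32).reshape(-1, 2)
    ok, rvec, tvec = cv2.solvePnP(pts3d, pts2d, camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)                 # rotation matrix of the part
    return R, tvec                             # pose fed to the mechanical arm
```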
The preferred embodiments mentioned above are used to disclose the present invention, and are not intended to limit the present invention. Those ordinarily skilled in the art can make different modifications and embellishments without departing from the spirit and scope of the present invention. Therefore, the protection scope of the present invention is defined by the claims.
Number | Date | Country | Kind
---|---|---|---
201910064184.0 | Jan 2019 | CN | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/CN2019/128936 | 12/27/2019 | WO | 00

Publishing Document | Publishing Date | Country | Kind
---|---|---|---
WO2020/151454 | 7/30/2020 | WO | A

Number | Name | Date | Kind
---|---|---|---
20150363663 | Tombari | Dec 2015 | A1
20180314909 | Brendel | Nov 2018 | A1

Entry
---
Tombari, Federico, Alessandro Franchi, and Luigi Di Stefano. "BOLD features to detect texture-less objects." Proceedings of the IEEE International Conference on Computer Vision, 2013.

Number | Date | Country
---|---|---
20210271920 A1 | Sep 2021 | US