The present invention relates to a contour shape recognition method, and belongs to the field of shape recognition technologies.
Contour shape recognition is an important research direction in the field of machine vision. One of the main research topics in machine vision is target recognition based on the shape features of an object; the main line of work is to extract target shape features more fully by improving shape matching algorithms or designing effective shape descriptors, so as to perform better similarity measurement. Such techniques have been widely applied in engineering, e.g., radar, infrared imaging detection, image and video matching and retrieval, robot self-navigation, scene semantic segmentation, texture recognition, and data mining.
Generally, for the representation and retrieval of contour shapes, target contour features are extracted with manually designed shape descriptors, such as Shape Contexts, Shape Vocabulary, and Bag of Contour Fragments. However, the shape information extracted by such manual descriptors is usually incomplete and cannot be guaranteed to be insensitive to local changes, occlusion, overall deformation, and other variations of a target shape. Furthermore, adding too many descriptors leads to redundant feature extraction and high computational complexity. As a result, both recognition accuracy and efficiency are low. In recent years, as convolutional neural networks have achieved good results in image recognition tasks, they have begun to be applied to shape recognition. However, since a contour shape lacks the surface texture, color, and other information of a natural image, directly applying a convolutional neural network yields a poor recognition effect.
In view of the above problems of existing shape recognition algorithms, how to provide a target recognition method that comprehensively represents the shape of the target contour while performing accurate classification is an urgent problem to be solved by those skilled in the art.
The present invention is provided to solve the problems in the prior art, and the technical solution is as follows.
A contour shape recognition method includes the following steps: step 1, extracting salient feature points of the contour of a shape sample; step 2, calculating a shape feature function of the shape sample in a semi-global scale; step 3, calculating shape feature functions of the shape sample in a full-scale space; step 4, storing the shape feature functions at the various scales into matrices to acquire three types of shape feature grayscale map representations; step 5, synthesizing the three grayscale map representations into a color feature representation image as a tensor representation of the shape sample; step 6, constructing a two-stream convolutional neural network; and step 7, training the network and classifying a test sample.
Preferably, in step 1, the salient feature points of the contour of the shape sample are extracted as follows:
the contour of each shape sample is composed of a series of sampling points, and for any shape sample S,
S={px(i),py(i)|i∈[1,n]},
wherein px(i),py(i) indicate the coordinates of a contour sampling point p(i) in a two-dimensional plane, and n indicates the length of the contour (the number of sampling points);
the salient feature points are extracted by evolving a contour curve of the shape sample, and during each evolution process, a point that contributes the least to target recognition is deleted, wherein the contribution of each point p(i) is defined as:
wherein b(i,i−1) indicates the length of the curve between points p(i) and p(i−1), b(i,i+1) indicates the length of the curve between points p(i) and p(i+1), B(i) indicates the angle between the line segment p(i)p(i−1) and the line segment p(i)p(i+1), the length b is normalized according to the perimeter of the contour, and the larger the value of K(i), the greater the contribution of the point p(i) to the shape feature;
in order to avoid extracting too many or too few salient feature points of the contour, a region-based adaptive end function F(t) is introduced:
wherein S0 is the area of the original shape, St is the area after t evolutions, and n0 indicates the total number of points on the contour of the original shape; after the value of the end function F(t) exceeds a set threshold, the extraction of the salient feature points of the contour ends.
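As an illustrative aid only, the sketch below evolves a closed contour by repeatedly deleting the least significant point. Because the exact formula for K(i) and the end-function threshold are not reproduced above, it assumes the classic discrete-curve-evolution relevance measure built from the same quantities the text names (the turning angle at p(i) and the two normalized neighbouring curve lengths), and it stops after keeping a fixed fraction of the points instead of evaluating F(t).

```python
import numpy as np

def extract_salient_points(contour, keep_ratio=0.5):
    """Contour evolution sketch: at each step the point with the smallest
    contribution K(i) is deleted.  `contour` is an (n, 2) array of sampled
    points of a closed curve; `keep_ratio` is a stand-in for the area-based
    end function F(t), which is not fully specified in the text."""
    pts = [tuple(p) for p in contour]
    target = max(3, int(len(pts) * keep_ratio))
    while len(pts) > target:
        perimeter = _perimeter(pts)
        k = [_contribution(pts, i, perimeter) for i in range(len(pts))]
        pts.pop(int(np.argmin(k)))        # remove the least significant point
    return np.array(pts)

def _perimeter(pts):
    return sum(np.hypot(pts[i][0] - pts[i - 1][0], pts[i][1] - pts[i - 1][1])
               for i in range(len(pts)))

def _contribution(pts, i, perimeter):
    """Assumed relevance: K(i) = B(i) * b1 * b2 / (b1 + b2), with b1, b2 the
    curve lengths to the neighbouring points normalized by the perimeter and
    B(i) the turning angle at p(i) (zero for collinear points)."""
    p_prev, p, p_next = pts[i - 1], pts[i], pts[(i + 1) % len(pts)]
    b1 = np.hypot(p[0] - p_prev[0], p[1] - p_prev[1]) / perimeter
    b2 = np.hypot(p[0] - p_next[0], p[1] - p_next[1]) / perimeter
    v1, v2 = np.subtract(p_prev, p), np.subtract(p_next, p)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)
    turn = np.pi - np.arccos(np.clip(cos_a, -1.0, 1.0))   # B(i)
    return turn * b1 * b2 / (b1 + b2 + 1e-12)
```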
Further, in step 2, a method for calculating the shape feature function of the shape sample in the semi-global scale specifically includes:
using three types of shape descriptors M:
M={sk(i),lk(i),ck(i)|k∈[1,m],i∈[1,n]},
wherein sk, lk, ck are three invariants, namely, a normalized area s, a normalized arc length l, and a normalized barycentric distance c, at a scale k, k is a scale label, and m is the total number of scales; defining descriptors of the three shape invariants respectively:
making a preset circle C1(i) with an initial radius
by taking the contour sampling point p(i) as a circle center, i.e., a target contour point, wherein the preset circle is an initial semi-global scale of the target contour point; after the preset circle C1(i) is acquired, calculating the three types of shape descriptors as follows:
in the case of calculating the s1(i) descriptor, the area of the region Z1(i) in the preset circle C1(i) that has a direct connection relationship with the target contour point p(i) is denoted as s1*(i) (see the code sketch following this step), then:
s1*(i)=∫C1(i)B(Z1(i),x)dx,
wherein B(Z1(i),x) is an indicator function that equals 1 when a point x belongs to the region Z1(i) and equals 0 otherwise;
a ratio of the area of Z1(i) to the area of the preset circle C1(i) is used as an area parameter s1(i) for a multiscale invariant descriptor of the target contour point p(i):
wherein a value range of s1(i) should be between 0 and 1;
when calculating the c1(i) descriptor, the barycenter of the region having a direct connection relationship with the target contour point p(i) is first calculated; specifically, the coordinate values of all pixel points in the region are averaged, and the result is the coordinates of the barycenter of the region, which can be expressed as:
wherein w1(i) indicates the barycenter of the region,
then, calculating a distance c1*(i) between the target contour point p(i) and the barycenter w1 (i), which can be expressed as,
c1*(i)=∥p(i)−w1(i)∥,
finally, using a ratio of c1*(i) to the radius of the preset circle C1(i) of the target contour point p(i) as a barycenter parameter c1(i) of the multiscale invariant descriptor of the target contour point p(i):
wherein the value range of c1(i) should be between 0 and 1;
wherein the value range of l1(i) should be between 0 and 1;
calculating to acquire the shape feature function of the shape sample S at the semi-global scale having a scale label k=1 and an initial radius
M1={s1(i),l1(i),c1(i)|i∈[1,n]}.
Further, in step 3, a method for calculating the shape feature function of the shape sample in the full-scale space specifically includes:
selecting a single pixel as a continuous scale change spacing in the full-scale space since a digital image takes one pixel as the smallest unit, that is, for a kth scale label, setting a radius rk of a circle Ck(i):
that is, in the case of an initial scale k=1,
and thereafter, reducing the radius rk m−1 times in equal steps of one pixel, until reaching the smallest scale k=m; and calculating the shape feature functions of the shape sample S at all scales:
M={sk(i),lk(i),ck(i)|k∈[1,m],i∈[1,n]}.
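As an illustrative continuation of the previous sketch (same assumptions), the feature functions at all m scales can be gathered by shrinking the radius one pixel per scale; the initial radius r1 is left as a parameter and the arc-length descriptor as a placeholder, since neither is specified above.

```python
import numpy as np

def full_scale_features(shape_mask, contour, r1, m):
    """Evaluate the descriptors of every contour point at m scales,
    r_k = r1 - (k - 1), i.e. one pixel per scale step.  Returns three
    m x n matrices (s, l, c); l is left as a zero placeholder because the
    arc-length formula is not reproduced in the text above."""
    n = len(contour)
    s = np.zeros((m, n))
    l = np.zeros((m, n))
    c = np.zeros((m, n))
    for k in range(m):
        r = max(r1 - k, 1)                    # radius at scale label k + 1
        for i, p in enumerate(contour):
            s[k, i], c[k, i] = semi_global_descriptors(shape_mask, p, r)
            # l[k, i] = arc-length descriptor l_k(i) would be filled in here
    return s, l, c
```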
Further, in step 4, the shape feature functions at the various scales are respectively stored into matrices and combined in the order of continuous scale change to acquire the three types of shape feature grayscale map representations of the shape sample in the full-scale space:
G={s,l,c}
wherein s, l, c each indicate a grayscale matrix with a size of m×n.
Further, in step 5, the three types of shape feature grayscale map representations of the shape sample are synthesized, as the three channels of RGB, into a color feature representation image, which acts as the tensor representation Tm×n×3 of the shape sample S, wherein the three grayscale matrices s, l, and c form the three RGB channels of T.
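A minimal sketch of steps 4 and 5, under the assumption that the channel order is R = s, G = l, B = c and that descriptor values in [0, 1] are mapped linearly to 8-bit grayscale:

```python
import numpy as np

def to_feature_image(s, l, c):
    """Stack the three m x n feature matrices into the tensor T (m x n x 3)
    and scale it to an 8-bit RGB feature representation image."""
    T = np.stack([s, l, c], axis=-1)             # tensor representation T
    return (np.clip(T, 0.0, 1.0) * 255.0).astype(np.uint8)
```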
Further, in step 6, the two-stream convolutional neural network is constructed with a structure including a two-stream input layer, a pre-training layer, fully connected layers, and an output layer, wherein the pre-training layer is composed of the first four modules of a VGG16 network model, parameters acquired by training the four modules on the ImageNet dataset are used as initialization parameters, and three fully connected layers are connected after the pre-training layer;
in the pre-training layer, a first module specifically includes two convolution layers and one maximum pooling layer, wherein each of the convolution layers has 64 convolution kernels, with a size of 3×3, and the pooling layer has a size of 2×2; a second module specifically includes two convolution layers and one maximum pooling layer, wherein each of the convolution layers has 128 convolution kernels, with a size of 3×3, and the pooling layer has a size of 2×2; a third module specifically includes three convolution layers and one maximum pooling layer, wherein each of the convolution layers has 256 convolution kernels, with a size of 3×3, and the pooling layer has a size of 2×2; a fourth module specifically includes three convolution layers and one maximum pooling layer, wherein each of the convolution layers has 512 convolution kernels, with a size of 3×3, and the pooling layer has a size of 2×2; a calculation formula for each convolution layer is:
CO=ϕrelu(WC·CI+θC),
wherein ϕrelu is a relu activation function, θC is a bias vector of the convolutional layer, WC is a weight of the convolutional layer, CI is an input of the convolutional layer, and CO is an output of the convolutional layer;
a module of the fully connected layers specifically includes three fully connected layers, wherein a first fully connected layer contains 4096 nodes, a second fully connected layer contains 1024 nodes, a third fully connected layer contains N nodes, with N representing the number of types contained in a sample data set, and a calculation formula for the first two fully connected layers is:
FO=ϕtanh(WF·FI+θF),
wherein ϕtanh is a tanh activation function, θF is a bias vector of the fully connected layers, WF is a weight of the fully connected layers, FI is an input of the fully connected layers, and FO is an output of the fully connected layers;
the last fully connected layer is an output layer, whose output is calculated with the following formula:
YO=ϕsoftmax(WY·YI+θY),
wherein ϕsoftmax is a softmax activation function, θY is a bias vector of the output layer, WY is a weight of the output layer, YI is an input of the output layer, and YO is an output of the output layer; and each neuron of the output layer represents a corresponding shape category.
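For illustration, the following tf.keras sketch builds a network matching the description above: each stream passes through the first four VGG16 blocks (ImageNet-initialized, truncated at block4_pool), and the two streams are then fused before the fully connected layers. Fusion by concatenation, the flattening step, and the 3-channel input for the original-shape stream are assumptions; the text states only that the two streams are weighted 1:1.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

def build_two_stream_net(num_classes, input_shape=(100, 100, 3)):
    """Two-stream CNN sketch: pre-training layer = first four VGG16 modules
    (ImageNet weights), then FC 4096 -> 1024 -> num_classes (softmax)."""
    def vgg_backbone(name):
        vgg = VGG16(weights="imagenet", include_top=False,
                    input_shape=input_shape)
        # keep only modules 1-4 (up to block4_pool) as the pre-training layer
        return Model(vgg.input, vgg.get_layer("block4_pool").output, name=name)

    in_shape = layers.Input(input_shape, name="original_shape")   # stream 1
    in_feat = layers.Input(input_shape, name="feature_image")     # stream 2
    f1 = layers.Flatten()(vgg_backbone("stream_shape")(in_shape))
    f2 = layers.Flatten()(vgg_backbone("stream_feature")(in_feat))
    x = layers.Concatenate()([f1, f2])        # 1:1 fusion of the two streams
    x = layers.Dense(4096, activation="tanh")(x)
    x = layers.Dense(1024, activation="tanh")(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    return Model([in_shape, in_feat], out, name="two_stream_cnn")
```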
Further, in step 7, a method for achieving classified recognition of the contour shape specifically includes: inputting all training samples into the two-stream convolutional neural network to train the two-stream convolutional neural network model; inputting the test sample into the trained two-stream convolutional neural network model; and determining a shape category, corresponding to a maximum value among output vectors, as a shape type of the test sample, thereby achieving the classified recognition of the contour shape.
The present invention provides a novel method for contour shape representation and designs a novel shape classification method using a two-stream convolutional neural network. The provided contour shape representation is based on capturing target features in a full-scale space; this feature vector representation is simple and well suited to target classification with a convolutional neural network. Through the successive convolution computations of the network, the features of each sampling point in the full-scale space are extracted, and the feature relationship between adjacent sampling points is captured at the same time. Compared with the methods in the background art, in which only salient point features are calculated and compared for matching, the present invention provides a more comprehensive comparison of all the information represented by the original shape. The provided two-stream convolutional neural network model makes full use of the feature information represented by the descriptors in the full-scale space and also uses the original information of the original target for assistance, which effectively increases the discriminability of the target description of a shape.
The technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the accompanying drawings in the embodiments of the present invention. Obviously, the embodiments described are merely some instead of all of the embodiments of the present invention. Based on the embodiments of the present invention, any other embodiments acquired by a person of ordinary skill in the art without making creative efforts shall fall within the protection scope of the present invention.
As shown in
1. As shown in
S={px(i),py(i)|i∈[1,100]},
wherein px(i),py(i) indicate the coordinates of a contour sampling point p(i) in a two-dimensional plane.
2. As shown in
making a preset circle C1(i) with an initial radius
by taking a contour sampling point p(i) as a circle center, i.e., a target contour point, the preset circle being an initial semi-global scale of the target contour point. After the preset circle C1(i) is acquired according to the above steps, a part of the target shape would necessarily fall within the preset circle, as schematically shown in
s1*(i)=∫C1(i)B(Z1(i),x)dx,
wherein B(Z1(i),x) is an indicator function that equals 1 when a point x belongs to the region Z1(i) and equals 0 otherwise;
a ratio of the area of Z1(i) to the area of the preset circle C1(i) is used as an area parameter s1(i) for a multiscale invariant descriptor of the target contour point p(i):
and the value range of s1(i) should be between 0 and 1.
When calculating the barycenter of the region having a direct connection relationship with the target contour point p(i), the coordinate values of all pixel points in the region are averaged, and the result is the coordinates of the barycenter of the region. This process can be expressed as:
wherein w1(i) indicates the barycenter of the region.
Calculating a distance c1*(i) between the target contour point p(i) and the barycenter w1(i) can be expressed as:
c1*(i)=∥p(i)−w1(i)∥,
a ratio of c1*(i) to the radius of the preset circle C1(i) of the target contour point p(i) is used as a barycenter parameter c1(i) of the multiscale invariant descriptor of the target contour point p(i):
and the value range of c1(i) should be between 0 and 1.
After the preset circle is acquired according to the above steps, one or more arc segments would necessarily fall within the preset circle after the contour of the target shape is cut by the preset circle, as shown in
and the value range of l1(i) should be between 0 and 1.
Based on the above steps, the feature function of the shape sample S at the semi-global scale having a scale label k=1 and the initial radius
is calculated:
M1={s1(i),l1(i),c1(i)|i∈[1,100]}.
The feature functions calculated at this layer of scale are stored into a feature vector.
3. As shown in
That is, in the case of an initial scale k=1,
and thereafter, the radius rk is reduced 99 times in equal steps of one pixel, until reaching the smallest scale k=100. The feature functions of the shape sample S in the full-scale space are then calculated:
M={sk(i),lk(i),ck(i)|k∈[1,100],i∈[1,100]}.
4. As shown in
G={s,l,c},
wherein s, l, c each indicate a grayscale matrix with a size of m×n (here 100×100).
5. As shown in
wherein
6. A two-stream convolutional neural network is constructed, including a two-stream input layer, a pre-training layer, fully connected layers, and an output layer. The present invention normalizes the size of an original contour shape to 100×100. Then, both the original shape and its corresponding feature representation image are simultaneously input into the two-stream convolutional neural network structure model for training. In the present invention, an SGD optimizer is used; the learning rate is set to 0.001; the decay rate is set to 1e-6; cross entropy is selected as the loss function; the weights of the two stream features are set to 1:1; softmax is selected as the classifier; and the batch size is set to 128. As shown in
In the pre-training layer, a first module specifically comprises two convolution layers and one maximum pooling layer, wherein each of the convolution layers has 64 convolution kernels, with a size of 3×3, and the pooling layer has a size of 2×2; a second module specifically comprises two convolution layers and one maximum pooling layer, wherein each of the convolution layers has 128 convolution kernels, with a size of 3×3, and the pooling layer has a size of 2×2; a third module specifically comprises three convolution layers and one maximum pooling layer, wherein each of the convolution layers has 256 convolution kernels, with a size of 3×3, and the pooling layer has a size of 2×2; a fourth module specifically comprises three convolution layers and one maximum pooling layer, wherein each of the convolution layers has 512 convolution kernels, with a size of 3×3, and the pooling layer has a size of 2×2. The calculation formula for each layer of convolution is:
CO=ϕrelu(WC·CI+θC),
wherein ϕrelu is a relu activation function, θC is a bias vector of the convolutional layer, WC is a weight of the convolutional layer, CI is an input of the convolutional layer, and CO is an output of the convolutional layer.
A module of the fully connected layers specifically includes three fully connected layers, wherein a first fully connected layer contains 4096 nodes, a second fully connected layer contains 1024 nodes, a third fully connected layer contains 70 nodes. The calculation formula for the first two fully connected layers is:
FO=ϕtanh(WF·FI+θF),
wherein ϕtanh is a tanh activation function, θF is a bias vector of each of the fully connected layers, WF is a weight of each of the fully connected layers, FI is an input of each of the fully connected layers, and FO is an output of each of the fully connected layers;
the last fully connected layer is an output layer, which has an output calculated with a formula as follows:
YO=ϕsoftmax(WY·YI+θY),
wherein ϕsoftmax is a softmax activation function, θY is a bias vector of the output layer, WY is a weight of the output layer, YI is an input of the output layer, and YO is an output of the output layer; and each neuron of the output layer represents one corresponding shape category.
7. All training samples are input into the two-stream convolutional neural network to train the two-stream convolutional neural network model; the test sample is input into the trained two-stream convolutional neural network model; and a shape category corresponding to a maximum value among output vectors is determined as a shape type of the test sample, thereby achieving the classified recognition of the shape.
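To illustrate the training and classification flow with the hyperparameters listed in the embodiment (SGD, learning rate 0.001, cross-entropy loss, batch size 128, softmax output), here is a hedged sketch reusing build_two_stream_net from the earlier sketch. The arrays x_shape, x_feat, and y are random stand-ins for the real data, the epoch count is arbitrary, and the 1e-6 decay would be set through whatever decay or schedule argument your Keras version exposes.

```python
import numpy as np
import tensorflow as tf

# Random stand-ins for real data: normalized original shapes, their feature
# representation images, and one-hot labels for 70 shape categories.
x_shape = np.random.rand(8, 100, 100, 3).astype("float32")
x_feat = np.random.rand(8, 100, 100, 3).astype("float32")
y = tf.keras.utils.to_categorical(np.random.randint(0, 70, size=8), 70)

model = build_two_stream_net(num_classes=70)   # from the sketch above
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
              # the embodiment's 1e-6 decay would be added via the optimizer's
              # decay/schedule option in your Keras version
              loss="categorical_crossentropy", metrics=["accuracy"])
model.fit([x_shape, x_feat], y, batch_size=128, epochs=1)

# Step 7: the output neuron with the largest softmax value gives the class.
predicted = np.argmax(model.predict([x_shape, x_feat]), axis=1)
```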
Although the present invention is illustrated in detail with reference to the foregoing embodiments, those skilled in the art would also have been able to make modifications on the technical solutions recorded in the foregoing embodiments, or make equivalent replacement on some of the technical features therein. Any modifications, equivalent replacements, improvements, etc. made within the spirit and principle of the present invention shall be incorporated within the protection scope of the present invention.