The present invention relates to the technical field of image recognition, and in particular to a boundary detection device and a method thereof.
In computer-based image recognition, detecting the boundaries and contours of an image is a basic and important task. For example, clearly defining the boundaries and contours in an image allows a computer to determine the working range of a machine, such as where a robotic arm should pick up items at a fixed point, or the range within which a lawn mower should mow. The inventor therefore considers improving the quality of boundary detection and recognition of images to be very important, and has sought ways to improve it.
The problem addressed by the present invention is how to improve the boundary detection and recognition of images, along with other related problems.
According to a first embodiment, a boundary detection device is provided in the present invention. The boundary detection device includes a camera drone and an image processing unit. The camera drone is configured to shoot a region to obtain an aerial image data. The image processing unit is communicatively connected to the camera drone, wherein the image processing unit is configured to convert the aerial image data from an RGB color space to an XYZ color space according to a formula
and then to convert the aerial image data from the XYZ color space to a Lab color space according to a formula:
to obtain a Lab color image data, and then to compute a brightness feature data and a color feature data according to the Lab color image data. The image processing unit then picks first to eighth circular masks, each of the circular masks having a boundary line that divides the mask region into a left semicircle and a right semicircle with different colors, wherein the boundary lines of the first to eighth circular masks are sequentially rotated clockwise, in increments of 22.5°, from the boundary line of the first circular mask. The image processing unit employs the first to eighth circular masks to perform a light and shadow intensity operation on each image point in the Lab color image data to obtain a texture feature data. The image processing unit performs operations according to the brightness feature data, the color feature data, and the texture feature data to obtain a first image boundary contour data.
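The formulas referenced above are not reproduced in this text. For the reader's convenience, the conventional CIE conversions, assuming an sRGB input under the D65 reference white, take the following form; these standard formulas are offered only as an illustrative assumption, and their coefficients may differ from those of the formulas actually used by the invention:

    \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}
      = \begin{bmatrix}
          0.4124 & 0.3576 & 0.1805 \\
          0.2126 & 0.7152 & 0.0722 \\
          0.0193 & 0.1192 & 0.9505
        \end{bmatrix}
        \begin{bmatrix} R \\ G \\ B \end{bmatrix}

    L^{*} = 116\, f(Y/Y_n) - 16, \qquad
    a^{*} = 500\,\bigl[ f(X/X_n) - f(Y/Y_n) \bigr], \qquad
    b^{*} = 200\,\bigl[ f(Y/Y_n) - f(Z/Z_n) \bigr],

    f(t) = \begin{cases} t^{1/3}, & t > (6/29)^{3} \\ \tfrac{1}{3}(29/6)^{2}\, t + \tfrac{4}{29}, & \text{otherwise} \end{cases}

where X_n, Y_n and Z_n denote the reference white point.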
According to a second embodiment, a boundary detection method is provided in the present invention. The method includes steps of:
(1) shooting a region to obtain an aerial image data with a camera drone, and sending the aerial image data to an image processing unit;
(2) converting, with the image processing unit, the aerial image data from an RGB color space to an XYZ color space according to a formula
then converting the aerial image data from the XYZ color space to a Lab color space, so as to obtain a Lab color image data, according to a formula:
(3) computing, with the image processing unit, a brightness feature data and a color feature data according to the Lab color image data;
(4) picking, with the image processing unit, first to eighth circular masks, each of the circular masks having a boundary line that divides the mask region into a left semicircle and a right semicircle with different colors, wherein the boundary lines of the first to eighth circular masks are sequentially rotated clockwise, in increments of 22.5°, from the boundary line of the first circular mask; and employing, with the image processing unit, the first to eighth circular masks to perform a light and shadow intensity operation on each image point in the Lab color image data to obtain a texture feature data;
(5) performing, with the image processing unit, operations according to the brightness feature data, the color feature data, and the texture feature data to obtain a first image boundary contour data.
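The specification does not spell out how the brightness feature data, the color feature data, and the texture feature data are combined in the step (5). Purely as a minimal sketch, assuming a simple weighted combination of the three normalized feature responses followed by thresholding, one possible implementation could look as follows; the function name, weights and threshold are hypothetical and are not disclosed by the invention:

    import numpy as np

    def combine_features(brightness_feat, color_feat, texture_feat,
                         w_b=0.4, w_c=0.3, w_t=0.3, threshold=0.5):
        """Combine three per-pixel feature maps into a boundary contour map.

        Each input is a 2-D array normalized to [0, 1]; the weights and the
        threshold are illustrative assumptions, not the patented combination rule.
        """
        boundary_strength = w_b * brightness_feat + w_c * color_feat + w_t * texture_feat
        # Binarize the combined response to obtain the first image boundary contour data.
        return (boundary_strength >= threshold).astype(np.uint8)

For example, feeding in the per-channel gradient magnitudes of the L, a and b planes as the brightness and color features would yield a binary map whose non-zero pixels form the detected boundary contour.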
Compared with the prior art, the present invention has the following creative features:
Eight circular masks whose boundary lines are oriented at different angles are used to perform light and shadow intensity operations on each image point, so that when the brightness feature data, the color feature data, and the texture feature data are combined to obtain the first image boundary contour data, the resulting contour curve is better and hence closer to the boundary contour of the real scene. In particular, when the image is subjected to multilevel thresholding for contour analysis, the present invention can achieve a better contour analysis effect, thereby improving the quality of the overall boundary detection and contour recognition of the image.
In order to make the purpose and advantages of the invention clearer, the invention will be further described below in conjunction with the embodiments. It should be understood that the specific embodiments described here are only used to explain the invention, and are not used to limit the invention.
It should be understood that in the description of the invention, orientations or position relationships indicated by terms such as upper, lower, front, back, left, right, inside, outside and the like are based on the orientations or position relationships shown in the drawings; they are used only for ease of description, rather than indicating or implying that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore they cannot be understood as a limitation of the invention.
Further, it should also be noted that in the description of the invention, the terms "mounting", "connected" and "connection" should be understood broadly; for example, a connection may be a fixed connection, a detachable connection or an integral connection; it may be a mechanical connection or an electrical connection; and it may be a direct connection, an indirect connection through an intermediary, or internal communication between two components. Those skilled in the art may understand the specific meaning of these terms in the invention according to the specific circumstances.
The present invention is a boundary detection device and method; first, the boundary detection device is described, which includes:
a camera drone 1:
with reference to
an image processing unit 2:
with reference to
then convert the aerial image data from the XYZ color space to a Lab color space according to a formula:
to obtain a Lab color image data, and then compute a brightness feature data and a color feature data according to the Lab color image data. With reference to
Next, the image processing unit 2 employs the first to eighth circular masks 3A to 3H to perform a light and shadow intensity operation on each image point in the Lab color image data to obtain a texture feature data. Subsequently, the image processing unit 2 performs operations according to the brightness feature data, the color feature data, and the texture feature data to obtain a first image boundary contour data.
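As a minimal sketch of one plausible way to realize the first to eighth circular masks 3A to 3H and the light and shadow intensity operation (the exact operation is not reproduced in this text), a half-disc boundary line may be rotated clockwise in 22.5° steps, and at each image point the absolute difference between the mean intensities of the two semicircles may be taken as the response; the names and the difference measure below are assumptions, not the patented operation:

    import numpy as np

    def half_disc_masks(radius=5, n_orientations=8):
        """Build eight circular masks whose dividing line rotates by 22.5 degrees each."""
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        inside = xs ** 2 + ys ** 2 <= radius ** 2
        masks = []
        for k in range(n_orientations):
            theta = np.deg2rad(22.5 * k)  # boundary line rotated clockwise in 22.5-degree steps
            # The sign of the signed distance to the dividing line splits the disc into two halves.
            side = (np.cos(theta) * xs - np.sin(theta) * ys) >= 0
            masks.append((inside & side, inside & ~side))
        return masks

    def texture_feature(channel, radius=5):
        """Per-pixel texture response: maximum half-disc mean difference over 8 orientations."""
        h, w = channel.shape
        padded = np.pad(channel, radius, mode='edge')
        response = np.zeros((h, w))
        for left, right in half_disc_masks(radius):
            for y in range(h):
                for x in range(w):
                    window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
                    diff = abs(window[left].mean() - window[right].mean())
                    response[y, x] = max(response[y, x], diff)
        return response

Taking the maximum response over the eight orientations makes the texture feature sensitive to boundaries of any orientation at roughly 22.5° resolution.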
The present invention mainly utilizes the eight circular masks 3A to 3H to perform light and shadow intensity operations on each image point in the Lab color image data, so that the first image boundary contour data has a better boundary contour detection and recognition effect. Further, whether the present invention is used for image analysis in terms of multilevel thresholding or binarization, it may further improve the overall recognition and detection effect, so as to overcome the shortcomings described in the background art.
After the first image boundary contour data is established, in order to highlight an important contour in the image, the present invention may further apply a noise-setting method that treats the image outside the important contour as the background, so that the important contour is highlighted as the foreground; therefore, with reference to
and then performs a noise adjustment operation on the first image boundary contour data according to the noise parameter value and the noise standard deviation value, so as to finally obtain a second image boundary contour data.
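The formula for the noise standard deviation value is not reproduced in this text. Purely as a minimal sketch, one could assume that the noise standard deviation is estimated from the first image boundary contour data itself and that responses weaker than the noise parameter value times this standard deviation are set to zero as background; the function and parameter names below are hypothetical:

    import numpy as np

    def noise_adjustment(first_contour, noise_parameter=2.0):
        """Suppress weak responses so that only the important contour remains as foreground.

        The noise standard deviation is assumed, for illustration only, to be the standard
        deviation of the contour response map; responses below noise_parameter * sigma are
        treated as background noise. This is not the patented noise adjustment operation.
        """
        sigma = np.std(first_contour)  # assumed noise standard deviation value
        second_contour = np.where(first_contour >= noise_parameter * sigma,
                                  first_contour, 0)  # background set to zero
        return second_contour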
When the present invention is used for automatic grass maintenance and pruning, the part of the second image boundary contour data that belongs to the grass ground may be recognized, and its coordinate position may then be marked for subsequent automatic grass maintenance and pruning. To this end, the present invention may be further implemented as follows: the camera drone 1 is provided with a first positioning unit 11, and the first positioning unit 11 may be configured to measure the latitude and longitude coordinates of the camera drone 1, so that the aerial image data includes a latitude and longitude coordinate data; the second image boundary contour data comprises a grass ground contour block 8; a processing unit 4 finds out a comparison image data on a Google map 5 according to the latitude and longitude coordinate data, the comparison image data corresponding to the second image boundary contour data; and the processing unit 4 finds out the latitude and longitude of the grass ground contour block 8 according to the comparison image data and the second image boundary contour data, so as to obtain a grass ground contour latitude and longitude data.
Since the Google map 5 has the latitude and longitude information of each image location, the contour latitude and longitude of the grass ground contour block 8 in the second image boundary contour data may be found by the present invention in a very simple way, so that the lawn may be automatically maintained and pruned by automated robots.
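As a minimal sketch, assuming that the comparison image data retrieved from the map is north-up and linearly georeferenced by the latitude and longitude of its corners, each pixel of the grass ground contour block 8 could be mapped to a latitude and longitude by linear interpolation; the helper below and its parameters are hypothetical and do not reflect the actual map look-up:

    def pixel_to_latlon(row, col, image_height, image_width,
                        lat_top, lat_bottom, lon_left, lon_right):
        """Map a pixel of the comparison image to (latitude, longitude).

        Assumes a north-up, linearly georeferenced comparison image whose corner
        coordinates are known; the actual look-up used with the map data may differ.
        """
        lat = lat_top + (lat_bottom - lat_top) * (row / (image_height - 1))
        lon = lon_left + (lon_right - lon_left) * (col / (image_width - 1))
        return lat, lon

Applying such a mapping to every contour pixel of the grass ground contour block 8 would yield the grass ground contour latitude and longitude data.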
With reference to
After the above grass ground contour latitude and longitude data is obtained by the present invention, it may be used to make the lawn mower 6 automatically perform actions such as mowing within the grass range. During the action, a very accurate positioning effect may be obtained through the virtual base station real-time kinematic 7, so that the overall positioning error is at the centimeter level and the overall mowing effect is better.
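Purely as an illustrative sketch of how the dynamic latitude and longitude coordinate data could be checked against the grass ground contour latitude and longitude data before mowing, a standard ray-casting point-in-polygon test may be used; this helper is hypothetical and is not the control logic claimed by the invention:

    def inside_contour(lat, lon, contour):
        """Ray-casting point-in-polygon test.

        contour: ordered list of (lat, lon) vertices of the grass ground contour.
        Returns True if the mower position lies inside the contour.
        """
        inside = False
        n = len(contour)
        for i in range(n):
            lat1, lon1 = contour[i]
            lat2, lon2 = contour[(i + 1) % n]
            crosses = (lon1 > lon) != (lon2 > lon)
            if crosses and lat < (lat2 - lat1) * (lon - lon1) / (lon2 - lon1) + lat1:
                inside = not inside
        return inside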
With reference to
With reference to
According to Article 31 of the Patent Law, the specification also proposes a boundary detection method. Since the description of the advantages and characteristics of the boundary detection method is similar to that of the foregoing boundary detection device, the following description only introduces the boundary detection method itself, and the description of the related advantages and characteristics will not be repeated. The boundary detection method includes steps of:
(1) shooting a region to obtain an aerial image data with a camera drone 1, and sending the aerial image data to an image processing unit 2;
(2) converting, with the image processing unit 2, the aerial image data from an RGB color space to an XYZ color space according to a formula
then converting the aerial image data from the XYZ color space to a Lab color space, so as to obtain a Lab color image data, according to a formula:
(3) computing, with the image processing unit 2, a brightness feature data and a color feature data according to the Lab color image data;
(4) picking, with the image processing unit 2, first to eighth circular masks 3A to 3H, each of the circular masks 3A to 3H having a boundary line 31A to 31H that divides it into a left semicircle and a right semicircle with different colors, wherein the boundary lines 31A to 31H of the first to eighth circular masks 3A to 3H are sequentially rotated clockwise, in increments of 22.5°, from the boundary line 31A of the first circular mask 3A; and employing, with the image processing unit 2, the first to eighth circular masks 3A to 3H to perform a light and shadow intensity operation on each image point in the Lab color image data to obtain a texture feature data;
(5) performing, with the image processing unit 2, operations according to the brightness feature data, the color feature data, and the texture feature data to obtain a first image boundary contour data.
A step (6) is further added after the step (5): picking, with the image processing unit 2, a noise parameter value, and computing a noise standard deviation value according to
and then performing a noise adjustment operation on the first image boundary contour data according to the noise parameter value and the noise standard deviation value, so as to finally obtain a second image boundary contour data.
In the step (1), the camera drone 1 is provided with a first positioning unit 11, and the first positioning unit 11 measures the latitude and longitude coordinates of the camera drone 1 while the camera drone 1 is shooting, so that the aerial image data comprises a latitude and longitude coordinate data. In the step (5), the first image boundary contour data includes a grass ground contour block 8. A step (7) is further added after the step (6): finding out, with a processing unit 4, a comparison image data on a Google map 5 according to the latitude and longitude coordinate data, the comparison image data corresponding to the second image boundary contour data; and finding out, with the processing unit 4, the contour latitude and longitude of the grass ground contour block 8 according to the comparison image data and the second image boundary contour data, so as to obtain a grass ground contour latitude and longitude data.
A step (8) is further added after the step (7): communicatively connecting the lawn mower 6 to the processing unit 4, and providing the lawn mower 6 with a second positioning unit 61, wherein the second positioning unit 61 may be configured to be communicatively connected to a virtual base station real-time kinematic 7 (VBS-RTK) for acquiring a dynamic latitude and longitude coordinate data of the lawn mower 6; and the lawn mower 6 moves according to the dynamic latitude and longitude coordinate data and the grass ground contour latitude and longitude data.
Between the step (7) and the step (8), a step (9) is further added: setting, with the processing unit 4, a spiral motion path from the outside to the inside according to the grass ground contour block 8, and finding out, with the processing unit 4, a spiral motion path latitude and longitude data of the spiral motion path according to the comparison image data; in the step (8), the lawn mower 6 moves along the spiral motion path according to the dynamic latitude and longitude coordinate data and the spiral motion path latitude and longitude data.
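The construction of the spiral motion path is not detailed in this text. As a rough sketch, assuming the grass ground contour block 8 is available as a polygon of latitude and longitude vertices, an outside-to-inside spiral can be approximated by repeatedly shrinking the contour toward its centroid and chaining the shrunken rings; the shrink parameters below are hypothetical:

    def spiral_path(contour, shrink_step=0.1, min_scale=0.1):
        """Approximate an outside-to-inside spiral by chaining shrunken copies of the contour.

        contour: list of (lat, lon) vertices. Each ring is the contour scaled toward its
        centroid by an extra shrink_step; the rings are concatenated into one path. This is
        an illustrative approximation, not the path-planning method claimed by the invention.
        """
        n = len(contour)
        c_lat = sum(p[0] for p in contour) / n
        c_lon = sum(p[1] for p in contour) / n
        path = []
        scale = 1.0
        while scale >= min_scale:
            for lat, lon in contour:
                path.append((c_lat + (lat - c_lat) * scale,
                             c_lon + (lon - c_lon) * scale))
            scale -= shrink_step
        return path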
The above are only preferred embodiments of the invention and are not intended to limit the invention. Those skilled in the art may make various modifications and changes to the invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the invention shall be included in the protection scope of the invention.