The present invention relates to a tip attachment discrimination device that discriminates a type of a tip attachment of a work machine.
For example, Patent Literature 1 describes a technique in which a distance sensor measures a distance distribution including a tip attachment (an attachment in Patent Literature 1) and recognizes the tip attachment based on the distance distribution (see claim 1 of Patent Literature 1).
In the technique described in Patent Literature 1, the type of the tip attachment and the like are recognized based on the distance distribution, and a distance sensor is used to measure the distance distribution. However, a distance sensor may cost more than a monocular camera. Moreover, when discriminating the type of the tip attachment, it is important to secure the accuracy of the discrimination.
Therefore, an object of the present invention is to provide a tip attachment discrimination device that can accurately discriminate the type of the tip attachment without using the distance distribution.
A tip attachment discrimination device according to one aspect of the present disclosure is a tip attachment discrimination device of a work machine including: a lower travelling body; an upper slewing body provided above the lower travelling body; and a work device including a tip to which one of different types of tip attachments is attached in a replaceable manner, the work device being attached to the upper slewing body. The tip attachment discrimination device includes: a camera attached to the upper slewing body and configured to capture an image within a movable range of the tip attachment; a work device posture sensor configured to detect a posture of the work device; and a controller, in which the controller: sets a detection frame in an area including the tip attachment with respect to the image captured by the camera based on the posture of the work device detected by the work device posture sensor; and discriminates the type of the tip attachment based on the image of the tip attachment within the detection frame.
With the above-described configuration, it is possible to accurately discriminate the type of the tip attachment without using the distance distribution.
With reference to
The tip attachment discrimination device 1 is a device that automatically discriminates a type of a tip attachment 25, and is provided in a work machine 10. The work machine 10 includes a construction machine that performs work such as construction work. As the construction machine, for example, a hydraulic excavator, a hybrid shovel, a crane, or the like can be employed. The work machine 10 includes a lower travelling body 11, an upper slewing body 13, a work device 20, a work device posture sensor 30, and a camera 40. Furthermore, the work machine 10 includes a monitor 50 and a controller 60 shown in
As shown in
The work device 20 is a device that is attached to the upper slewing body 13 and performs work. The work device 20 includes a boom 21, an arm 23, and the tip attachment 25. The boom 21 is rotatably attached to the upper slewing body 13. The arm 23 is rotatably attached to the boom 21.
The tip attachment 25 is provided at a tip of the work device 20. The tip attachment 25 is replaceable with different types of tip attachments. The types of the tip attachments 25 include a bucket (example shown in
The work device posture sensor 30 is a sensor that detects a posture of the work device 20 shown in
Here, the boom angle sensor 31 may include, for example, a sensor that detects an expansion and contraction amount of the boom cylinder that drives the boom 21. In this case, the boom angle sensor 31 is required at least to convert the expansion and contraction amount of the boom cylinder into the boom 21 angle and output the boom 21 angle to the controller 60. Alternatively, the boom angle sensor 31 may output the detected expansion and contraction amount to the controller 60, and the controller 60 may convert the expansion and contraction amount into the boom 21 angle. The configuration of detecting the angle by detecting the expansion and contraction amount of the cylinder is also applicable to the arm angle sensor 33 and the tip attachment angle sensor 35. The arm angle sensor 33 detects the angle of the arm 23 with respect to the boom 21 (arm 23 angle). The tip attachment angle sensor 35 detects the angle of the tip attachment 25 with respect to the arm 23 (tip attachment 25 angle).
The camera 40 (image capturing device) is configured to capture an image within a movable range of the tip attachment 25. The camera 40 captures an image of the work device 20 and surroundings thereof. The camera 40 is preferably configured to capture the entire range assumed as the movable range of the tip attachment 25. The camera 40 may be attached to the upper slewing body 13, for example, may be attached to the cab 13c (for example, upper left front), and may be attached to, for example, a portion of the upper slewing body 13 other than the cab 13c. The camera 40 is fixed to the upper slewing body 13. The camera 40 may be configured to be movable (for example, pivotable) with respect to the upper slewing body 13. The camera 40 may include, for example, a monocular camera. In order to reduce the cost of the camera 40, the camera 40 is preferably a monocular camera. The camera 40 preferably has, for example, a zoom function such as an optical zoom function. Specifically, a zoom position (focal length) of the camera 40 is preferably continuously variable between a telephoto side and a wide-angle side. Note that
The monitor 50 displays various information items. The monitor 50 may display an image captured by the camera 40, for example, as shown in
As shown in
(Operation)
With reference to the flowchart shown in
In step S11, the camera 40 captures an image including the tip attachment 25. Here, the camera 40 is required at least to capture images including the tip attachment 25 successively over time. The controller 60 acquires the image captured by the camera 40 as shown in
In step S13, the work device posture sensor 30 detects the posture of the work device 20. In more detail, the boom angle sensor 31 detects the boom 21 angle, the arm angle sensor 33 detects the arm 23 angle, and the tip attachment angle sensor 35 detects the tip attachment 25 angle. Then, the first controller 61 of the controller 60 acquires posture information on the work device 20 detected by the work device posture sensor 30. The first controller 61 calculates a relative position of the reference position 25b with respect to the upper slewing body 13 based on the boom 21 angle and the arm 23 angle. The first controller 61 can calculate the rough position of the tip attachment 25 based on the position of the reference position 25b and the tip attachment 25 angle. Details of this calculation will be described later.
In step S20, the first controller 61 sets the detection frame F in the camera image Im as shown in
(Setting of Detection Frame F)
The position, size, shape, and the like of the detection frame F in the camera image Im are set as follows. The detection frame F is set such that the entire external shape of the tip attachment 25 is included inside the detection frame F.
A background portion outside the external shape of the tip attachment 25 in the camera image Im is unnecessary information, that is, noise when discriminating the tip attachment 25. Therefore, the detection frame F is preferably set so as to minimize the background portion within the detection frame F. That is, the detection frame F is preferably set at a size as small as possible and such that the entire external shape of the tip attachment 25 fits inside the detection frame F. For example, the tip attachment 25 preferably appears in the central portion within the detection frame F.
(Setting of Detection Frame F Based on Posture of Work Device 20)
The position and size of the tip attachment 25 appearing in the camera image Im change depending on the posture of the work device 20. For example, as shown in
Therefore, the detection frame F is set based on the posture of the work device 20. For example, the detection frame F is set based on the position of the reference position 25b in the camera image Im. For example, the position of the reference position 25b in the camera image Im is calculated based on the boom 21 angle and the arm 23 angle. For example, the position of the reference position 25b in the camera image Im is acquired based on the position of the reference position 25b with respect to the upper slewing body 13 or the camera 40 shown in
Specifically, the reference position 25b is calculated as follows, for example. The first controller 61 reads, from a memory (not shown), a reference position determination table in which correspondence between the boom 21 angle, the arm 23 angle, and the reference position 25b in the camera image Im is determined in advance. Then, the first controller 61 is required at least to acquire the reference position 25b by identifying the reference position 25b corresponding to the boom 21 angle detected by the boom angle sensor 31 and the arm 23 angle detected by the arm angle sensor 33 from the reference position determination table.
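Although not part of the embodiment itself, the table lookup described above may be pictured with the following sketch. The table contents, the angle discretization, and the nearest-neighbour lookup strategy are illustrative assumptions, not features of the actual device.

```python
import math

# Hypothetical reference position determination table: keys are
# (boom 21 angle, arm 23 angle) in degrees, values are (x, y) pixel
# coordinates of the reference position 25b in the camera image Im.
REFERENCE_POSITION_TABLE = {
    (30, 60): (410, 250),
    (30, 90): (430, 310),
    (45, 60): (380, 200),
    (45, 90): (400, 270),
}

def lookup_reference_position(boom_angle, arm_angle):
    """Return the table entry whose tabulated angles are closest to the
    detected boom/arm angles (nearest-neighbour lookup)."""
    key = min(
        REFERENCE_POSITION_TABLE,
        key=lambda k: math.hypot(k[0] - boom_angle, k[1] - arm_angle),
    )
    return REFERENCE_POSITION_TABLE[key]
```

For example, detected angles of 44 and 62 degrees fall nearest the tabulated posture (45, 60), so its stored reference position is returned. A real table would be far denser, or the lookup would interpolate between neighbouring entries.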
Here, the reference position determination table is created in advance, for example, by a simulation using the specified work machine 10. In this simulation, the camera 40 captures the work device 20 while changing each of the boom 21 angle and the arm 23 angle. Then, the position of the reference position 25b is identified in each of the obtained camera images Im, and a plurality of data sets in which the reference position 25b is associated with the boom 21 angle and the arm 23 angle is generated and stored in the reference position determination table. In this way, the reference position determination table is created. Note that this work may be performed by a person or by image processing.
Also, the detection frame F is set in the camera image Im as described below. The first controller 61 reads, from a memory (not shown), a detection frame determination table in which correspondence between the boom 21 angle, the arm 23 angle, the tip attachment 25 angle, and detection frame information indicating the size of the detection frame F is determined in advance. Here, the detection frame information includes, for example, the length of the vertical side and the length of the horizontal side of the detection frame F, positioning information indicating a position where the reference position 25b is to be positioned within the detection frame F, and other information. Then, the first controller 61 identifies, from the detection frame determination table, the detection frame information corresponding to the boom 21 angle detected by the boom angle sensor 31, the arm 23 angle detected by the arm angle sensor 33, and the tip attachment 25 angle detected by the tip attachment angle sensor 35. Then, the first controller 61 is required at least to set the detection frame F indicated by the identified detection frame information in the camera image Im. At this time, the first controller 61 is required at least to set the detection frame F such that the reference position 25b is positioned at a position within the detection frame F indicated by the positioning information included in the detection frame information.
Here, the detection frame determination table is created in advance, for example, by a simulation using the specified work machine 10 to which the specified tip attachment 25 such as a bucket is attached. By this simulation, the camera 40 captures the work device 20 while changing each of the boom 21 angle, the arm 23 angle, and the tip attachment 25 angle. Then, a certain area including the tip attachment 25 is extracted from each of the obtained camera images Im, and the extracted area is set as the detection frame F. Here, as the detection frame F, for example, a quadrilateral area circumscribing the tip attachment 25 in the camera image Im may be employed, or a quadrilateral area slightly larger in size than the circumscribing quadrilateral may be employed. This work may be performed by a person or by image processing.
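One way to picture the frame placement in step S20 is the following sketch: given a reference position in the image and the tabulated detection frame information, the frame is placed so that the reference position lands at the tabulated offset inside it. All numeric values and the dictionary layout of the detection frame information are illustrative assumptions.

```python
def set_detection_frame(ref_pos, frame_info):
    """Place a detection frame of the tabulated size so that the
    reference position 25b lands at the tabulated offset inside it.

    ref_pos:    (x, y) of reference position 25b in the camera image Im
    frame_info: dict with frame 'width', 'height', and the offset
                ('off_x', 'off_y') where 25b should sit in the frame
    Returns (left, top, right, bottom) in image pixels.
    """
    x, y = ref_pos
    left = x - frame_info["off_x"]
    top = y - frame_info["off_y"]
    return (left, top, left + frame_info["width"], top + frame_info["height"])

# Illustrative detection frame information for one posture.
frame_info = {"width": 160, "height": 120, "off_x": 20, "off_y": 10}
frame = set_detection_frame((400, 270), frame_info)
```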
In this way, the first controller 61 sets the detection frame F based on the posture of the work device 20. Therefore, the first controller 61 does not need to use an object detection algorithm, which is a process for detecting the tip attachment 25, in the entire area of the camera image Im. Therefore, a calculation load of the first controller 61 can be reduced accordingly. Moreover, since it is not necessary to use the object detection algorithm in the entire area of the camera image Im, the detection position of the tip attachment 25 that is subject to type discrimination is not erroneously recognized. For example, it is assumed that a tip attachment 25 different from the tip attachment 25 attached to the arm 23 is positioned within the angle of view of the camera 40 and appears in the camera image Im. In this case, the other tip attachment 25, which is not attached to the work machine 10, is not subject to type discrimination. Also, in this case, the other tip attachment 25, which is positioned away from the reference position 25b, appears outside the detection frame F in the camera image Im. Therefore, the present embodiment can prevent the other tip attachment 25 from becoming subject to type discrimination.
(Setting of Detection Frame F Based on Structure Information on Work Machine 10)
The position and size of the tip attachment 25 appearing in the camera image Im change depending on the structure of the work machine 10. For example, the position, size, and the like of the tip attachment 25 in the camera image Im change depending on the length of the boom 21 and the length of the arm 23. Moreover, for example, the type of the tip attachment 25 that is assumed to be provided in the work device 20 changes depending on the size of the work machine 10 (for example, “XX ton class”). Then, the position, size, and the like of the tip attachment 25 in the camera image Im change.
Therefore, the detection frame F is preferably set based not only on a detection value of the work device posture sensor 30 but also on structure information indicating the structure of the work machine 10. The structure information is included in, for example, main specifications of the work machine 10. The structure information may be, for example, set (stored) in advance by the first controller 61, or may be acquired by some kind of method. The structure information includes, for example, information on the upper slewing body 13, information on the boom 21, and information on the arm 23. The structure information includes, for example, the size (dimension) and relative position of each of the upper slewing body 13, the boom 21, and the arm 23. The structure information includes the position of the camera 40 with respect to the upper slewing body 13. The controller 60 can calculate the posture of the work device 20 more accurately by using not only the detection value of the work device posture sensor 30 but also the structure information on the work machine 10. For example, the controller 60 can calculate the reference position 25b more accurately. As a result, the background portion within the detection frame F can be reduced, and the accuracy of type discrimination of the tip attachment 25 can be improved.
When setting the detection frame F by using the structure information on the work machine 10, the first controller 61 can perform processing as in the following [Example A1] or [Example A2].
[Example A1] First, the rough detection frame F is set based on the posture of the work device 20 without using the structure information on the work machine 10. Thereafter, the detection frame F may be corrected based on the structure information on the work machine 10.
Specifically, the first controller 61 first determines the size of the detection frame F with reference to the detection frame determination table described above. Next, the first controller 61 is required at least to correct the size of the detection frame F by calculating a ratio of weight information included in the structure information on the work machine 10 to weight information on the specified work machine 10 used when creating the detection frame determination table, and multiplying the size of the detection frame F identified from the detection frame determination table by the ratio. Note that the weight information is information indicating the size of the work machine 10, such as “XX ton class” described above.
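The size correction of [Example A1] reduces to scaling the tabulated frame by the ratio of weight classes. A minimal sketch follows; the direct linear scaling by the weight ratio mirrors the multiplication described above, and the specific tonnage values are illustrative assumptions.

```python
def correct_frame_size(width, height, machine_weight_tons, table_weight_tons):
    """Scale the tabulated detection frame size by the ratio of this
    work machine's weight class to the weight class of the specified
    machine used when the detection frame determination table was
    created ([Example A1] correction)."""
    ratio = machine_weight_tons / table_weight_tons
    return (width * ratio, height * ratio)

# E.g. a 26 ton class machine against a table built on a 20 ton class
# machine enlarges the frame by a factor of 1.3.
corrected = correct_frame_size(160, 120, 26, 20)
```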
[Example A2] The detection frame F may be set from the beginning based on the structure information on the work machine 10 and the posture of the work device 20 without performing the correction as in [Example A1] described above. Note that the shape of the detection frame F is rectangular in the example shown in
Specifically, the first controller 61 calculates the reference position 25b in the three-dimensional coordinate system of the work machine 10 by using the length of the boom 21 and the length of the arm 23 included in the structure information, and the boom 21 angle detected by the boom angle sensor 31 and the arm 23 angle detected by the arm angle sensor 33. Then, the first controller 61 calculates the reference position 25b in the camera image Im by projecting the reference position 25b in the three-dimensional coordinate system onto a captured surface of the camera 40. Then, the first controller 61 is required at least to set the detection frame F in the camera image Im by using the detection frame determination table described above. At this time, the controller 60 may correct the size of the detection frame F as shown in Example A1.
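The geometry of [Example A2] can be sketched as planar forward kinematics followed by a projection onto the image. The pinhole projection model, the horizontal camera axis, and all link lengths, angle conventions, and camera parameters below are illustrative assumptions rather than the actual projection used by the device.

```python
import math

def reference_position_3d(boom_len, arm_len, boom_angle_deg, arm_angle_deg):
    """Forward kinematics in the boom's vertical plane: position of the
    reference position 25b (arm tip) relative to the boom foot pin.
    Both angles are assumed measured from the horizontal."""
    b = math.radians(boom_angle_deg)
    a = math.radians(arm_angle_deg)
    x = boom_len * math.cos(b) + arm_len * math.cos(a)  # forward reach
    z = boom_len * math.sin(b) + arm_len * math.sin(a)  # height
    return (x, z)

def project_to_image(x, z, cam_x, cam_z, focal_px, cx, cy):
    """Simplified pinhole projection onto the captured surface; the
    camera is assumed to look horizontally along the x axis, with the
    work device's plane centred horizontally in the image."""
    depth = x - cam_x
    u = cx
    v = cy - focal_px * (z - cam_z) / depth
    return (u, v)
```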
Note that even without the structure information on the work machine 10, the structure of the work machine 10 is roughly determined and is limited to a certain range. Therefore, even when the controller 60 does not acquire the structure information on the work machine 10, the controller 60 can set the detection frame F to include the tip attachment 25.
(Change in Detection Frame F)
The first controller 61 sequentially changes the setting of the detection frame F according to the change in the posture of the work device 20. Specifically, for example, the detection frame F is changed as follows. When the position of the reference position 25b in the camera image Im changes, the first controller 61 changes the position of the detection frame F according to the changed position of the reference position 25b. When the reference position 25b moves away from the camera 40 and the tip attachment 25 appearing in the camera image Im becomes smaller, the first controller 61 makes the detection frame F smaller. Similarly, when the reference position 25b comes closer to the camera 40 and the tip attachment 25 appearing in the camera image Im becomes larger, the first controller 61 makes the detection frame F larger. When the angle of the tip attachment 25 with respect to the arm 23 changes and it is assumed that the aspect ratio of the tip attachment 25 appearing in the camera image Im changes, the first controller 61 changes the aspect ratio of the detection frame F. Note that in the detection frame determination table described above, a quadrilateral area circumscribing the tip attachment 25 appearing in the camera image Im or a quadrilateral area slightly larger in size than the circumscribing quadrilateral is set as the detection frame F. Therefore, if the detection frame F is set using the detection frame determination table, the size of the detection frame F is set smaller as the reference position 25b moves away from the camera 40, and the size of the detection frame F is set larger as the reference position 25b comes closer to the camera 40.
In step S31, the first controller 61 determines whether the position of the tip attachment 25 is a position that can be in a blind spot for the camera 40 as shown in
At the time of step S31, the type of the tip attachment 25 is unknown, and the structure (dimension, shape, and the like) of the tip attachment 25 is unknown. Therefore, even if the posture of the work device 20 is known, it is unknown whether the tip attachment 25 is actually disposed on the Z2 side of the ground plane A. Therefore, for example, the predetermined posture condition may be the posture of the work device 20 in which the largest tip attachment 25 among the tip attachments 25 assumed to be provided in the work device 20 is disposed on the Z2 side of the ground plane A. For example, the predetermined posture condition may be set based on the distance from the ground plane A to the reference position 25b.
Specifically, on the assumption that the assumed largest tip attachment 25 has been attached, the first controller 61 determines the position of the tip of the tip attachment 25 from the boom 21 angle, the arm 23 angle, and the tip attachment 25 angle respectively detected by the boom angle sensor 31, the arm angle sensor 33, and the tip attachment angle sensor 35. Then, when the distance in the up-and-down direction between the position of the tip of the tip attachment 25 and the reference position 25b is longer than the distance in the up-and-down direction from the reference position 25b to the ground plane A, the first controller 61 may determine that the predetermined posture condition is satisfied.
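The determination in step S31 amounts to comparing two vertical distances. The following sketch captures that comparison; the specific distance values used are illustrative assumptions.

```python
def may_be_below_ground(ref_height_above_ground, largest_attachment_reach):
    """Step S31 sketch: on the assumption that the largest assumed tip
    attachment 25 is attached, its tip could lie below the ground
    plane A (on the Z2 side) when the attachment's vertical reach
    below the reference position 25b exceeds the height of 25b above
    the ground plane, i.e. the predetermined posture condition is
    satisfied."""
    return largest_attachment_reach > ref_height_above_ground
```

For example, with the reference position 1.2 m above the ground plane and a largest assumed attachment reach of 1.5 m, the condition is satisfied and type discrimination would be skipped for this frame.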
As shown in
Note that in the flowchart shown in
In step S33, the first controller 61 determines a corresponding distance L corresponding to the distance from the camera 40 to the tip attachment 25. When the corresponding distance L is too long, in the camera image Im shown in
At the time of step S33, the type of the tip attachment 25 is unknown, and the structure of the tip attachment 25 is unknown. Therefore, the actual distance from the camera 40 to the tip attachment 25 is unknown. Therefore, in the determination of step S33, the corresponding distance L corresponding to the actual distance from the camera 40 to the tip attachment 25 is used. For example, the corresponding distance L is a distance in the front-and-rear direction from the camera 40 to the reference position 25b. The same is true of step S35. Alternatively, the corresponding distance L may be, for example, a distance in the front-and-rear direction between the camera 40 and the largest tip attachment 25 among the tip attachments 25 assumed to be provided in the work device 20. The same is true of step S35.
When the corresponding distance L is equal to or shorter than a first predetermined distance determined in advance (YES in step S33), the process proceeds to step S35 in order to perform type discrimination of the tip attachment 25. A value of the first predetermined distance is set in the first controller 61 in advance. The first predetermined distance is set according to whether the accuracy of discriminating the tip attachment 25 can be secured. For example, the first predetermined distance is set according to the performance of the camera 40, discriminating capability of the second controller 62, and the like. The same is true of a second predetermined distance used in step S35. Note that, for example, when a zoom function of the camera 40 is used, it is only required that the accuracy of discrimination of the tip attachment 25 can be secured with the zoom position being on the most telephoto side. The first predetermined distance is 5 m in the example shown in
When the corresponding distance L is longer than the first predetermined distance (NO in step S33), the first controller 61 does not perform type discrimination of the tip attachment 25. In this case, the current flow is finished, and the process returns to, for example, “start.” In this way, when the corresponding distance L corresponding to the distance from the camera 40 to the tip attachment 25 is long and there is a possibility that the accuracy of type discrimination of the tip attachment 25 may not be secured, type discrimination of the tip attachment 25 is not performed. Therefore, erroneous discrimination can be eliminated, and unnecessary processing can be eliminated.
In step S35, the first controller 61 determines whether to set the zoom position of the camera 40 at a position on the telephoto side from the most wide-angle side based on the corresponding distance L. When the corresponding distance L is equal to or longer than the second predetermined distance (YES in step S35), the process proceeds to step S37. A value of the second predetermined distance is set by the controller 60 in advance. The second predetermined distance is shorter than the first predetermined distance. The second predetermined distance is 3 m in the example shown in
In step S37, the first controller 61 sets the zoom position of the camera 40 at a position on the telephoto side from the most wide-angle side. As the corresponding distance L increases, the zoom position of the camera 40 is set further on the telephoto side, and the image including the detection frame F is enlarged. This control is performed when the corresponding distance L is equal to or shorter than the first predetermined distance (YES in S33) (for example, 5 m or shorter) and equal to or longer than the second predetermined distance (YES in S35) (for example, 3 m or longer). By setting the zoom position of the camera 40 on the telephoto side, the image of the tip attachment 25 becomes clearer than when the captured image is simply enlarged digitally, and the accuracy of type discrimination of the tip attachment 25 can be improved.
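Steps S33 through S37 reduce to a pair of distance thresholds. The sketch below uses the 5 m and 3 m values from the example in the text; the linear mapping from the corresponding distance L to a zoom factor is an illustrative assumption (the actual zoom control is not specified).

```python
FIRST_PREDETERMINED_DISTANCE = 5.0   # beyond this, no discrimination (S33)
SECOND_PREDETERMINED_DISTANCE = 3.0  # at or beyond this, zoom in (S35)

def decide_zoom(corresponding_distance_m):
    """Return None when type discrimination is skipped (NO in S33),
    otherwise a zoom factor (1.0 = most wide-angle side). Within the
    zoom band the factor is assumed to grow linearly with L (S37)."""
    if corresponding_distance_m > FIRST_PREDETERMINED_DISTANCE:
        return None  # accuracy cannot be secured; skip discrimination
    if corresponding_distance_m < SECOND_PREDETERMINED_DISTANCE:
        return 1.0   # NO in S35: stay at the most wide-angle side
    span = FIRST_PREDETERMINED_DISTANCE - SECOND_PREDETERMINED_DISTANCE
    return 1.0 + (corresponding_distance_m - SECOND_PREDETERMINED_DISTANCE) / span
```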
Note that when the zoom position of the camera 40 is set on the telephoto side in step S37, the first controller 61 is required at least to change the size of the detection frame F according to a telephoto ratio. In this case, the first controller 61 is required at least to read, from a memory, a table in which correspondence between the telephoto ratio and an enlargement ratio of the detection frame F according to the telephoto ratio is defined in advance, refer to the table to identify the enlargement ratio of the detection frame F according to the telephoto ratio, and enlarge the detection frame F that is set in step S20 by the identified enlargement ratio. In this table, for example, the enlargement ratio of the detection frame F is stored in the camera image Im captured by telephotography such that the size of the detection frame F is enlarged to a size that includes the entire area of the image of the tip attachment.
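The enlargement of the detection frame F according to the telephoto ratio can likewise be pictured as a table lookup followed by scaling about the frame's centre. The table values below, and the choice to enlarge about the centre, are illustrative assumptions.

```python
# Hypothetical correspondence between the camera's telephoto ratio and
# the enlargement ratio to apply to the detection frame set in step S20.
ENLARGEMENT_TABLE = {1.0: 1.0, 1.5: 1.4, 2.0: 1.9}

def enlarge_frame(frame, telephoto_ratio):
    """Enlarge the detection frame about its centre by the enlargement
    ratio tabulated for the given telephoto ratio."""
    r = ENLARGEMENT_TABLE[telephoto_ratio]
    left, top, right, bottom = frame
    cx, cy = (left + right) / 2, (top + bottom) / 2
    half_w = (right - left) / 2 * r
    half_h = (bottom - top) / 2 * r
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)
```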
In step S40, the second controller 62 of the controller 60 discriminates the type of the tip attachment 25. This discrimination is performed based on the image of the tip attachment 25 within the detection frame F. The discrimination is performed by comparing a feature amount of the tip attachment 25 acquired from the image of the tip attachment 25 within the detection frame F with a feature amount that is set in advance by the second controller 62. The feature amount used for the discrimination is, for example, a contour shape (external shape) of the tip attachment 25.
In more detail, the first controller 61 shown in
The first controller 61 outputs the cut out images to the second controller 62. In a memory of the second controller 62, a feature amount of a reference image serving as a reference for type discrimination of the tip attachment 25 is stored in advance in association with a type name of the tip attachment 25. The reference image includes images of various postures of various types of tip attachments 25. The second controller 62 acquires the image within the detection frame F input from the first controller 61 as an input image, and calculates the feature amount from the input image. Here, as the feature amount, for example, a contour shape of the image within the detection frame F can be employed. The second controller 62 is required at least to extract the contour shape of the image within the detection frame F by applying, for example, a predetermined edge detection filter to the acquired input image, and calculate the contour shape as the feature amount.
Then, the second controller 62 discriminates the type of the tip attachment 25 by comparing the feature amount of the input image with the feature amount of the reference image. As tendencies of the feature amount of the input image and the feature amount of the reference image match more, the accuracy of type discrimination of the tip attachment 25 increases. Moreover, as the number of reference images increases and an amount of learning increases, the accuracy of type discrimination of the tip attachment 25 increases. Then, the second controller 62 outputs a discrimination result to the first controller 61.
Specifically, the second controller 62 is required at least to identify the feature amount of the reference image having the highest similarity to the feature amount of the input image among the feature amounts of the reference image stored in the memory, and output the type name of the tip attachment 25 associated with the identified feature amount of the reference image to the first controller 61 as the discrimination result.
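The comparison described above can be sketched as a nearest-neighbour search over stored feature amounts. Representing each feature amount as a plain numeric vector, using Euclidean distance as the (inverse) similarity, and the reference values themselves are all illustrative assumptions; the actual device may use contour shapes, HOG features, or a learned model as described below.

```python
import math

# Hypothetical reference feature amounts keyed by tip attachment type.
REFERENCE_FEATURES = {
    "bucket": [0.9, 0.1, 0.4],
    "breaker": [0.2, 0.8, 0.3],
    "grapple": [0.5, 0.5, 0.9],
}

def discriminate(input_feature):
    """Return the type name whose reference feature amount has the
    highest similarity (here, the smallest Euclidean distance) to the
    feature amount computed from the image within the detection frame F."""
    return min(
        REFERENCE_FEATURES,
        key=lambda name: math.dist(REFERENCE_FEATURES[name], input_feature),
    )
```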
The feature amount of the reference image is generated in advance by performing machine learning on a plurality of images of the tip attachment 25 having different postures for each of different types. As the machine learning, for example, a neural network, clustering, a Bayesian network, a support vector machine, and the like can be employed. As the feature amount, in addition to the contour shape, for example, the Haar-like feature amount, pixel difference feature amount, edge orientation histogram (EOH) feature amount, histogram of oriented gradients (HOG) feature amount, and the like can be employed.
Alternatively, the second controller 62 may store, in a memory, a neural network obtained by performing machine learning on a plurality of images of the tip attachment 25 using the type name of the tip attachment 25 as a teacher signal. Then, the second controller 62 may input the input image acquired from the first controller 61 into the neural network, and output the type name of the tip attachment 25 output from the neural network as the discrimination result to the first controller 61.
Note that when a mode is employed in which a plurality of images of the detection frame F is input from the first controller 61, the second controller 62 is required at least to compare each of the feature amounts of the plurality of images of the detection frame F with each of the feature amounts of the plurality of reference images stored in a memory to determine the type of the tip attachment 25 by majority decision. That is, the second controller 62 is required at least to determine the type of the tip attachment 25 most often discriminated in the discrimination result for each of the plurality of images of the detection frame F as the final type of the tip attachment 25.
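The majority decision over a plurality of detection frame images can be sketched as follows; the per-image result strings are illustrative assumptions.

```python
from collections import Counter

def majority_decision(per_image_results):
    """Determine the final type of the tip attachment 25 as the type
    most often discriminated across the discrimination results for the
    plurality of detection frame images."""
    return Counter(per_image_results).most_common(1)[0][0]

# E.g. three successive frames, one of which was misdiscriminated.
final_type = majority_decision(["bucket", "breaker", "bucket"])
```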
Here, a capturing angle of the camera 40 with respect to the tip attachment 25 is limited more when the camera 40 shown in
In step S50, the first controller 61 outputs the discrimination result input from the second controller 62 to the monitor 50. In this case, the first controller 61 may output the discrimination result to the monitor 50 by outputting a display command for displaying the discrimination result to the monitor 50. Here, the monitor 50 may display, for example, a character string indicating the type name of the tip attachment 25, an icon that graphically indicates the type of the tip attachment 25, or both the character string and the icon.
Note that the discrimination result may be used for interference prevention control of the work machine 10. Specifically, the first controller 61 determines the tip position of the tip attachment 25 by using the discrimination result of the tip attachment 25, the boom 21 angle, the arm 23 angle, and the tip attachment 25 angle. Then, when the first controller 61 determines that the tip position is positioned in an interference prevention area that is set around the work machine 10, the first controller 61 is required at least to execute interference prevention control such as reducing the operation speed of the work device 20 or stopping the operation of the work device 20.
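As a rough sketch of the interference prevention control, the computed tip position can be tested against a preset area and the work device's operation slowed or stopped. Modelling the interference prevention area as an axis-aligned box, the speed thresholds, and the halving response are all illustrative assumptions.

```python
def interference_response(tip_position, area_min, area_max, speed):
    """If the tip position of the tip attachment 25 falls inside the
    interference prevention area (modelled here as an axis-aligned
    box), reduce the work device 20's operation speed, and stop the
    operation when the speed is already low."""
    inside = all(lo <= v <= hi
                 for v, lo, hi in zip(tip_position, area_min, area_max))
    if not inside:
        return speed                  # outside the area: no restriction
    if speed <= 0.2:
        return 0.0                    # already slow: stop the operation
    return speed * 0.5                # inside the area: reduce the speed
```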
(Comparison with Technology Using Distance Sensor)
An examination is performed into a case where type discrimination of the tip attachment 25 shown in
Furthermore, a distance sensor such as a time-of-flight (TOF) sensor has a narrow angle of view, and thus has a more limited detection range than a monocular camera. Therefore, it is conceivable to measure the distance distribution around the tip attachment 25 by using the distance sensor with the work device 20 in a specified limited posture, for example, a posture in which the tip attachment 25 is in contact with the ground. In this case, however, the work device 20 must be set in the specified posture every time the type of the tip attachment 25 is discriminated, which takes much time. Meanwhile, in the present embodiment, the work device 20 can be set in almost any posture when the type of the tip attachment 25 is discriminated. Therefore, in the present embodiment, the degree of freedom of the posture of the work device 20 when discriminating the type of the tip attachment 25 is high. In more detail, in the present embodiment, except for a state where type discrimination of the tip attachment 25 is not performed as in a case of YES in S31 and NO in S33 of
(Advantageous Effects)
Advantageous effects of the tip attachment discrimination device 1 shown in
(First Advantageous Effect of the Invention)
The tip attachment discrimination device 1 includes the work device 20, the camera 40, the work device posture sensor 30, and the controller 60. The work device 20 is attached to the upper slewing body 13 of the work machine 10. The work device 20 includes a tip (the tip of the work device 20) to which one of a plurality of types of tip attachments 25 is attached in a replaceable manner. The camera 40 is attached to the upper slewing body 13 and can capture an image within a movable range of the tip attachment 25. The work device posture sensor 30 detects the posture of the work device 20.
[Configuration 1-1] The controller 60 sets the detection frame F (see
[Configuration 1-2] The controller 60 discriminates the type of the tip attachment 25 based on the image of the tip attachment 25 within the detection frame F.
In the above-described [Configuration 1-2], the controller 60 performs type discrimination of the tip attachment 25 based on the image. Therefore, the controller 60 can discriminate the type of the tip attachment 25 without using the distance distribution. As a result, the cost of the camera 40 can be reduced more than when the camera 40 needs to acquire the distance distribution.
Meanwhile, when type discrimination of the tip attachment 25 is performed based on the image, there is less information available for the discrimination than when the distance distribution is used, because no distance information is available. It is therefore important to secure the accuracy of type discrimination of the tip attachment 25 even with this smaller amount of information. Here, the appearance of the tip attachment 25 in the camera image Im (for example, its position, size, and shape) changes depending on the posture of the work device 20.
Hence, in the above-described [Configuration 1-1], the controller 60 sets the detection frame F including the tip attachment 25 based on the posture of the work device 20, and can thereby set a detection frame F suitable for type discrimination of the tip attachment 25. For example, the controller 60 can set the detection frame F such that the entire tip attachment 25 is included while the background portion around the tip attachment 25 is minimized. This makes the accuracy of type discrimination of the tip attachment 25 better than when the detection frame F is not set based on the posture of the work device 20. As a result, the tip attachment discrimination device 1 can accurately perform type discrimination of the tip attachment 25 even without using the distance distribution.
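One conceivable way to set the detection frame F from the posture, sketched under stated assumptions: project the tip position (computed from the detected posture) into the image with a pinhole camera model and size the frame from an assumed extent of the attachment. The intrinsics, the radius, and the function name are all illustrative, not taken from the disclosure.

```python
def detection_frame(tip_cam, radius, f=800.0, cx=640.0, cy=360.0):
    """tip_cam: (X, Y, Z) position of the tip attachment 25 in camera
    coordinates, derived from the detected posture of the work device 20
    and the structure information.  radius: assumed extent (m) of the
    attachment.  Returns the frame as (u0, v0, u1, v1) pixel corners."""
    X, Y, Z = tip_cam
    u = f * X / Z + cx        # pinhole projection of the tip position
    v = f * Y / Z + cy
    half = f * radius / Z     # image-plane half-size of the attachment
    return (u - half, v - half, u + half, v + half)
```

Note that a farther attachment naturally yields a smaller frame, and re-evaluating the function whenever the detected posture changes gives the sequential update of [Configuration 3].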
(Second Advantageous Effect of the Invention)
[Configuration 2] The camera 40 is fixed to the upper slewing body 13.
With the above-described [Configuration 2], the capturing angle of the camera 40 with respect to the tip attachment 25 is limited more than when the camera 40 is not fixed to the upper slewing body 13. Therefore, the amount of information required for type discrimination of the tip attachment 25 can be reduced.
(Third Advantageous Effect of the Invention)
[Configuration 3] The controller 60 sequentially changes the setting of the detection frame F according to a change in the posture of the work device 20 detected by the work device posture sensor 30.
With the above-described [Configuration 3], after the detection frame F is set, even if the posture of the work device 20 changes, the controller 60 can perform type discrimination of the tip attachment 25.
(Fourth Advantageous Effect of the Invention)
[Configuration 4] The controller 60 sets the detection frame F based on the structure information on the work machine 10.
With the above-described [Configuration 1-1] and [Configuration 4], the controller 60 sets the detection frame F based on the posture of the work device 20 detected by the work device posture sensor 30 and the structure information on the work machine 10. Therefore, the controller 60 can set the detection frame F more suitable for type discrimination of the tip attachment 25 than when the detection frame F is set based only on the posture of the work device 20.
(Fifth Advantageous Effect of the Invention)
[Configuration 5] The camera 40 has a zoom function. The controller 60 calculates the distance from the tip attachment 25 to the camera 40 based on the posture of the work device 20 detected by the work device posture sensor 30, and sets the zoom position of the camera 40 on the telephoto side as the distance increases.
With the above-described [Configuration 5], even if the distance from the tip attachment 25 to the camera 40 becomes longer, by setting the zoom position of the camera 40 on the telephoto side, the resolution of the image of the tip attachment 25 within the detection frame F can be increased. Therefore, the accuracy of type discrimination of the tip attachment 25 can be improved.
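A minimal sketch of the distance-dependent zoom setting of [Configuration 5], assuming a simple proportional mapping clamped to the camera's zoom range (all constants and names are illustrative):

```python
def zoom_position(distance, ref_distance=4.0, min_zoom=1.0, max_zoom=5.0):
    """Move the zoom position of the camera 40 toward the telephoto side
    as the distance from the tip attachment 25 to the camera 40 increases,
    clamped to the camera's available zoom range."""
    return max(min_zoom, min(max_zoom, distance / ref_distance))
```

The monotonic mapping keeps the attachment at a roughly constant pixel size within the detection frame F, which is what preserves the resolution available for discrimination.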
(Sixth Advantageous Effect of the Invention)
As shown in
[Configuration 6-1] When the posture of the work device 20 detected by the work device posture sensor 30 does not satisfy the predetermined posture condition (NO in step S31 of
[Configuration 6-2] When the posture of the work device 20 detected by the work device posture sensor 30 satisfies the predetermined posture condition (YES in step S31 of
When the tip attachment 25 can be disposed on the Z2 side with respect to the ground plane A, at least part of the tip attachment 25 may be in a blind spot of the camera 40. In that case, type discrimination of the tip attachment 25 cannot be performed, or the accuracy of the discrimination cannot be secured. The tip attachment discrimination device 1 therefore has the above-described [Configuration 6-2], which makes it possible to inhibit the controller 60 from erroneously discriminating the type of the tip attachment 25 and to eliminate unnecessary processing of the controller 60. The above-described [Configuration 6-2] also allows type discrimination of the tip attachment 25 to be performed in a state where its accuracy is easy to secure. As a result, the accuracy of type discrimination of the tip attachment 25 can be improved.
(Seventh Advantageous Effect of the Invention)
The controller 60 acquires the corresponding distance L corresponding to the distance from the tip attachment 25 to the camera 40 based on the posture of the work device 20 detected by the work device posture sensor 30.
[Configuration 7-1] When the corresponding distance L is equal to or shorter than the first predetermined distance determined in advance (predetermined distance) (when YES in step S33 of
[Configuration 7-2] When the corresponding distance L is longer than the first predetermined distance (when NO in step S33 of
There is a possibility that, as the corresponding distance L increases and the distance from the camera 40 to the tip attachment 25 increases, in the camera image Im (see
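The two-stage gating of steps S31 and S33 described in the sixth and seventh advantageous effects can be summarized in a small sketch; the threshold value and all names are illustrative assumptions.

```python
FIRST_PREDETERMINED_DISTANCE = 7.0  # illustrative threshold (m)

def should_discriminate(posture_condition_ok, corresponding_distance):
    """Discriminate the type of the tip attachment 25 only when the
    posture condition holds (YES in S31) and the corresponding distance L
    is within the first predetermined distance (YES in S33)."""
    if not posture_condition_ok:  # NO in S31: attachment may be hidden
        return False
    # NO in S33 when the distance exceeds the threshold
    return corresponding_distance <= FIRST_PREDETERMINED_DISTANCE
```

Gating before classification both avoids erroneous discrimination (posture condition) and skips frames where the attachment would appear too small for reliable discrimination (distance condition).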
(Modification)
The above-described embodiment may be modified in various manners. For example, connections between blocks in the block diagram shown in
Some components of the tip attachment discrimination device 1 may be provided outside the work machine 10. For example, the second controller 62 shown in
Number | Date | Country | Kind |
---|---|---|---|
2018-007324 | Jan 2018 | JP | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2018/044473 | 12/4/2018 | WO | 00 |