This application claims priority to Japanese Patent Application No. 2022-154676 filed on Sep. 28, 2022, the entire contents of which are incorporated by reference herein.
The present disclosure relates to an article detection device and an article detection method.
For example, a technique described in PCT International Publication No. 2020/189154 is known as an article detection device according to the related art. The article detection device described in PCT International Publication No. 2020/189154 includes an image acquiring unit configured to acquire a surroundings image by imaging the surroundings of the article detection device, an information image preparing unit configured to prepare an information image by converting information on a loading/unloading target portion of an article to an easily recognizable state, and a calculation unit configured to calculate a position and a posture of the loading/unloading target portion.
In the technique described in PCT International Publication No. 2020/189154, the article detection device can calculate a position and a posture of a loading/unloading target portion of an article such as a pallet. However, since a hole in a pallet into which a fork is inserted extends in a longitudinal direction, the fork may not be properly inserted into the hole when the pallet is tilted at a certain pitch angle. Accordingly, there is demand for easily acquiring a pitch angle of an article.
Therefore, an objective of the present disclosure is to provide an article detection device, a calibration method, and an article detection method that can easily acquire a pitch angle of an article which is a loading/unloading target.
According to an aspect of the present disclosure, there is provided an article detection device that detects an article to be loaded/unloaded, the article detection device including: an image acquiring unit configured to acquire a surroundings image by imaging surroundings of the article detection device; a first information image preparing unit configured to prepare a first information image by converting information on a loading/unloading target portion of the article to an easily recognizable state based on the surroundings image; a first calculation unit configured to calculate at least one of a position and a posture of the loading/unloading target portion based on the first information image; a second information image preparing unit configured to prepare a second information image by converting information on a pitch angle detection portion of the article to an easily recognizable state; and a second calculation unit configured to extract at least two edge candidates for the article extending in a specific direction and included in the second information image based on a calculation result from the first calculation unit and to calculate a three-dimensional direction vector indicating a pitch angle of the article from the edge candidates.
In this article detection device, the first calculation unit can calculate at least one of a position and a posture of a loading/unloading target portion based on the first information image. The article detection device includes the second information image preparing unit configured to prepare a second information image by converting information on the pitch angle detection portion of the article to an easily recognizable state. Here, when at least two edge candidates extending in a specific direction are extracted from the second information image, a three-dimensional direction vector indicating the specific direction can be calculated. A three-dimensional direction vector indicating the pitch angle of the article can be calculated based on the three-dimensional direction vector indicating the specific direction. Accordingly, the article detection device includes the second calculation unit configured to extract at least two edge candidates for the article extending in the specific direction from the second information image based on the calculation result from the first calculation unit and to calculate the three-dimensional direction vector indicating the pitch angle of the article based on the edge candidates. As a result, it is possible to easily calculate a pitch angle of an article from a surroundings image without using a dedicated sensor or the like. Accordingly, it is possible to easily acquire a pitch angle of an article which is a loading/unloading target.
In preparing the second information image, the second information image preparing unit may distinguish between a first case in which luggage is piled on the article and a second case in which luggage is not piled on the article, prepare the second information image with a zenith direction of the image acquiring unit as a central axis in the first case, and prepare the second information image with an optical axis direction of the image acquiring unit as a central axis in the second case. Accordingly, the second calculation unit can calculate a pitch angle of an article regardless of whether luggage is piled on the article.
The second calculation unit may extract the edge candidates from one side and the other side in a transverse direction of a loading/unloading target in the second information image. Accordingly, the second calculation unit can easily extract edge candidates from easily extractable positions.
The second information image preparing unit may prepare both the second information image with the zenith direction as a central axis and the second information image with the optical axis direction as a central axis, and the second calculation unit may employ a result of the second information image with the optical axis direction as a central axis when results of both the second information images are effective. When the zenith direction is set as the central axis, the background includes many edges and thus accuracy is poor. Accordingly, the second calculation unit can more accurately calculate a pitch angle by preferentially employing the result of the second information image with the optical axis direction as the central axis.
The second calculation unit may perform Hough transformation of edge points in a specific area for extracting the edge candidates in the second information image and determine that the edge candidates are present in the specific area when a length of an acquired straight line is equal to or greater than a predetermined length. Accordingly, the second calculation unit can easily determine whether an edge candidate is present.
According to another aspect of the present disclosure, there is provided a calibration method for an image acquiring unit used in an article detection device, the calibration method including: a step of disposing a calibration panel on a fork; a step of aligning a central portion of the panel with the origin of a fork coordinate system; a step of preparing a second information image by converting information on directions of longitudinal and transverse edges of the panel to an easily recognizable state; a step of extracting a longitudinal edge of the panel included in the second information image; and a step of extracting a transverse edge of the panel included in the second information image.
With this calibration method, a positional relationship between a fork coordinate system and a camera coordinate system can be ascertained from a positional relationship of the panel in the camera coordinate system by disposing the panel on the fork and aligning them. The step of extracting a longitudinal edge of the panel included in the second information image and the step of extracting a transverse edge of the panel included in the second information image are performed. Accordingly, it is possible to more accurately ascertain the positional relationship of the panel in the camera coordinate system. As a result, it is possible to accurately convert the camera coordinate system to the fork coordinate system.
According to another aspect of the present disclosure, there is provided an article detection method of detecting an article to be loaded/unloaded, the article detection method including: an image acquiring step of acquiring a surroundings image by imaging the surroundings of the article detection device; a first information image preparing step of preparing a first information image by converting information on a loading/unloading target portion of the article to an easily recognizable state based on the surroundings image; a first calculation step of calculating at least one of a position and a posture of the loading/unloading target portion based on the first information image; a second information image preparing step of preparing a second information image by converting information on a pitch angle detection portion of the article to an easily recognizable state; and a second calculation step of extracting at least two edge candidates for the article extending in a specific direction and included in the second information image based on a calculation result from the first calculation step and calculating a three-dimensional direction vector indicating a pitch angle of the article from the edge candidates.
With the article detection method, the same operations and advantages as in the article detection device can be achieved.
According to the present disclosure, it is possible to provide an article detection device, a calibration method, and an article detection method that can easily acquire a pitch angle of an article which is a loading/unloading target.
Hereinafter, an embodiment of the present disclosure will be described in detail with reference to the accompanying drawings.
The mobile member 2 includes a pair of right and left reach legs 4 extending forward. Right and left front wheels 5 are rotatably supported by the right and left reach legs 4. A rear wheel 6 is a rear single wheel and is a driving wheel serving as a turning wheel. A rear part of the mobile member 2 is constituted as a standing type cab 12. A loading/unloading lever 10 for a loading/unloading operation and an accelerator lever 11 for a forward/reverse moving operation are provided in an instrument panel 9 in the front of the cab 12. A steering wheel 13 is provided on the top surface of the instrument panel 9.
The loading/unloading device 3 is provided on the front side of the mobile member 2. When a reach lever of the loading/unloading lever 10 is operated, a reach cylinder (not illustrated) operates telescopically and thus the loading/unloading device 3 moves forward and rearward in a predetermined stroke range along the reach legs 4. The loading/unloading device 3 includes a two-stage mast 23, a lift cylinder 24, a tilt cylinder (not illustrated), and a fork 25. When a lift lever of the loading/unloading lever 10 is operated, the lift cylinder 24 operates telescopically and thus the mast 23 slides up and down, and the fork 25 moves up and down with the sliding of the mast 23.
The article detection device 100 of the forklift 50 according to this embodiment will be described below in more detail with reference to
The control unit 110 is connected to the imaging unit 32 and acquires an image captured by the imaging unit 32. The imaging unit 32 images the surroundings of the vehicle body 51 of the forklift 50. The imaging unit 32 is provided in a backrest of the fork 25 in the example illustrated in
The article detection device 100 is a device that detects an article to be loaded/unloaded. The control unit 110 of the article detection device 100 performs control for automatically driving the forklift 50. The control unit 110 detects an article at a stage before the forklift 50 approaches the article to be loaded/unloaded and ascertains a position and a posture of a loading/unloading target portion of the article. The control unit 110 also ascertains a pitch angle of the article. Accordingly, the control unit 110 approaches the article such that the forklift 50 can smoothly load/unload the article and performs control such that the fork 25 is inserted into the loading/unloading target portion.
In order to insert the fork 25 into a hole 61b of the pallet 61, a height position (a position in the vertical direction), a reach position (a position in the longitudinal direction), and a tilt angle (a slope with respect to a horizontal direction) of the fork 25 need to be controlled according to a position and a posture of the front surface 61a (the hole 61b) of the pallet 61 and a pitch angle θ1 of the pallet 61. In this specification, the position of the pallet 61 is a position in a three-dimensional space of the origin of the pallet coordinate system X3/Y3/Z3. The posture of the pallet 61 is a yaw posture around a vertical axis of the front surface 61a. The pitch angle θ1 of the pallet 61 is an angle of the X3 axis with respect to the horizontal direction. First, the article detection device 100 detects the pallet 61 using the imaging unit 32 and acquires the position, the posture, and the pitch angle θ1 of the pallet 61 in the camera coordinate system X1/Y1/Z1. Then, the article detection device 100 converts the position, the posture, and the pitch angle θ1 of the pallet 61 to the fork coordinate system X2/Y2/Z2. Accordingly, the forklift 50 is controlled such that the tip of the fork 25 is disposed at an entrance of the hole 61b and the tilt angle of the fork 25 is substantially equal to the pitch angle θ1 of the pallet 61 in a state immediately before the fork 25 is inserted into the hole 61b.
The control unit 110 includes an electronic control unit (ECU) comprehensively managing the device. The ECU is an electronic control unit including a central processing unit (CPU), a read only memory (ROM), a random access memory (RAM), and a controller area network (CAN) communication circuit. The ECU realizes various functions, for example, by loading a program stored in the ROM to the RAM and causing the CPU to execute the program loaded to the RAM. The ECU may include a plurality of electronic control units. As illustrated in
The image acquiring unit 101 acquires a surroundings image by imaging the surroundings of the vehicle body 51 of the forklift 50.
The surroundings image is an image acquired by a fish-eye camera. That is, the imaging unit 32 is constituted by a fish-eye camera. The fish-eye camera is a camera that includes a general fish-eye lens and can monocularly capture an image with a wide field of view of about 180°.
A lens of a camera constituting the imaging unit 32 is not limited to a fish-eye lens. The imaging unit 32 need only have such a viewing angle that an image of the pallet 61 can be acquired both at a position at which the forklift 50 is far from the rack 60 and at a position at which the forklift 50 is close to the rack 60. That is, the imaging unit 32 need only be a camera with a wide field of view that can simultaneously image a front view and a side view of the forklift 50. A wide-angle camera may be employed as the imaging unit 32 as long as it can capture an image with a wide field of view. The imaging unit 32 may capture an image with a wide field of view by combining a plurality of cameras facing a plurality of directions.
The feature plane setting unit 102 sets a feature plane SF (see
The information image preparing unit 103 prepares a first information image by converting information on the front surface 61a of the pallet 61 to an easily recognizable state based on the surroundings image. The information image preparing unit 103 prepares an information image using the feature plane SF.
The calculation unit 104 detects the pallet 61 to be loaded/unloaded based on the first information image. The calculation unit 104 calculates the position and the posture of the front surface 61a of the pallet 61 to be loaded/unloaded based on the information image.
The driving control unit 107 controls a position or a posture of the vehicle body 51 based on the information on the position and the posture of the front surface 61a of the pallet 61 calculated by the calculation unit 104. The driving control unit 107 may be configured as a control unit separate from the control unit 110 of the article detection device 100. In this case, the control unit 110 of the article detection device 100 outputs a calculation result to the control unit of the driving control unit 107, and the driving control unit 107 performs driving control based on the calculation result from the article detection device 100.
Here, the article detection device 100 performs preparation of the first information image and calculation of the position and the posture of the front surface 61a of the pallet 61 using a known method. The article detection device 100 uses a known method of generating an image obtained by projecting the surroundings image onto a projection plane (the feature plane SF) installed at an arbitrary position and posture in the camera coordinate system with a designated resolution (mm/pixel). The article detection device 100 uses a known method of detecting the pallet 61 from the first information image on the projection plane installed in the vicinity of the front surface of the pallet 61 and estimating the approximate position and posture thereof in the camera coordinate system. The article detection device 100 uses a known method of estimating a position of a pallet and a posture around a vertical axis with the approximate position and posture in the camera coordinate system as initial values. A method described in PCT International Publication No. 2020/189154 may be used as such a known method.
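The plane-projection step described above can be sketched as follows. This is an illustrative reconstruction, not the method of the cited publication: it assumes an equidistant fish-eye model (r = f·θ), and all parameter names (`f_px`, `cx`, `cy`, the plane axes) are hypothetical, since the disclosure does not give the actual lens model or calibration values.

```python
import math

def project_to_feature_plane(fisheye_img, plane_origin, plane_x, plane_y,
                             out_w, out_h, mm_per_px, f_px, cx, cy):
    """Sketch: sample a fish-eye image onto a virtual projection plane.

    plane_origin, plane_x, plane_y: origin and in-plane unit axes of the
    feature plane in camera coordinates (mm). The equidistant fish-eye
    model (r = f_px * theta) is an assumption for illustration.
    """
    out = [[0] * out_w for _ in range(out_h)]
    for v in range(out_h):
        for u in range(out_w):
            # 3D point on the plane for this output pixel (designated resolution)
            p = [plane_origin[i] + u * mm_per_px * plane_x[i]
                 + v * mm_per_px * plane_y[i] for i in range(3)]
            # Angle from the optical axis (camera assumed to look along +Z)
            rho = math.hypot(p[0], p[1])
            theta = math.atan2(rho, p[2])
            r = f_px * theta                     # equidistant projection
            if rho > 0:
                src_u = cx + r * p[0] / rho
                src_v = cy + r * p[1] / rho
            else:
                src_u, src_v = cx, cy
            iu, iv = int(round(src_u)), int(round(src_v))
            if 0 <= iv < len(fisheye_img) and 0 <= iu < len(fisheye_img[0]):
                out[v][u] = fisheye_img[iv][iu]
    return out
```

A plane set near the front surface 60a of the rack 60 would be passed in via `plane_origin`/`plane_x`/`plane_y`; the nested loop simply resamples the distorted source image at the designated mm/pixel resolution.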
The feature plane SF (projection plane) and the first information image will be described below in detail with reference to
The feature plane SF is a planar projection plane which is virtually set in a three-dimensional space to prepare an information image. The position or posture of the feature plane SF is information which is known in the setting step. The information image is an image obtained by converting information acquired at a position at which the surroundings image has been acquired to an easily recognizable state. The information acquired at a position at which the surroundings image has been acquired includes information such as positions and sizes of constituent parts of the rack 60 and the pallet 61 when seen from that position. The information image preparing unit 103 which will be described later prepares an information image by projecting the surroundings image to the feature plane SF.
The feature plane SF is a projection plane to which features of the front surface 61a of the pallet 61 are projected. Accordingly, the feature plane SF is set such that features of the front surface 61a of the pallet 61 appear in the information image projected to the feature plane SF. That is, the feature plane SF is a projection plane which is set to a position at which the features of the front surface 61a of the pallet 61 can appear accurately. In the information image of the front surface 61a of the pallet 61 projected to the feature plane SF set in this manner, information indicating features of the front surface 61a appears in a manner in which it can be easily recognized in an image recognizing process. Features of the front surface 61a mean outer features specific to the front surface 61a by which the front surface can be distinguished from other objects in the image. Information indicating features of the front surface 61a includes shape information or size information which can identify the front surface 61a.
For example, the front surface 61a of the pallet 61 has features such as a rectangular shape extending in the width direction and two holes 61b. Since the front surface 61a and the holes 61b of the pallet 61 are displayed to be distorted in the surroundings image (see
Here, the information image can most accurately represent the shape features and dimensional features of the front surface 61a when the feature plane SF is set to the front surface 61a of the pallet 61 to be loaded/unloaded. However, at a stage in which the pallet 61 to be loaded/unloaded has not been identified (when an article state is not known), the feature plane SF cannot be set to the front surface 61a of the pallet 61. Accordingly, the feature plane setting unit 102 sets the feature plane SF to portions of structures near the pallet 61. The feature plane setting unit 102 sets the feature plane SF to the front surface 60a of the rack 60 as illustrated in
As illustrated in
In
The calculation unit 104 (see
The adjustment unit 106 (see
Referring back to
The information image preparing unit 103 prepares a panoramic image illustrated in
Specifically, the calculation unit 104 detects edge points in the panoramic image illustrated in
The information image preparing unit 103 prepares the second information image based on the surroundings image of the pallet 61 detected using the first information image. In preparing the second information image, the information image preparing unit 103 prepares the second information image in a first case (see
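One way such a panoramic conversion can be organized is to map each viewing ray to azimuth/elevation coordinates about a selectable central axis, so the same routine serves both the zenith-axis and optical-axis cases. The equirectangular layout and the names below are illustrative assumptions, not the exact mapping used by the information image preparing unit 103:

```python
import math

def ray_to_panorama(ray, axis_frame, pan_w, pan_h):
    """Map a 3D viewing ray (camera coordinates) to pixel coordinates in a
    panoramic image whose central axis is the first vector of axis_frame.

    axis_frame = (axis, ref_x, ref_y): an orthonormal basis; axis would be
    the zenith direction CL1 or the optical axis direction CL2 depending on
    whether luggage is piled on the pallet. Layout is an assumption.
    """
    axis, ref_x, ref_y = axis_frame
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    # Decompose the ray into components along and around the central axis
    a = dot(ray, axis)
    x = dot(ray, ref_x)
    y = dot(ray, ref_y)
    azimuth = math.atan2(y, x)                   # angle around the axis
    elevation = math.atan2(a, math.hypot(x, y))  # angle toward the axis
    u = (azimuth + math.pi) / (2 * math.pi) * pan_w
    v = (math.pi / 2 - elevation) / math.pi * pan_h
    return u, v
```

Under this mapping, a straight 3D edge parallel to the central axis projects to a near-vertical curve in the panorama, which is what makes the subsequent edge extraction tractable.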
The calculation unit 104 extracts at least two edge candidates for the pallet 61 extending in a specific direction from the second information image and calculates a three-dimensional direction vector indicating the pitch angle of the pallet 61 from the edge candidates. The calculation unit 104 extracts the edge candidates from one side (left side) and the other side (right side) in the transverse direction of the loading/unloading target from the second information image. Here, the loading/unloading target includes both a case including only the pallet 61 in which luggage 62 is not piled and a case including a combination of the pallet 61 in which luggage 62 is piled and the luggage 62.
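The geometric core of this step can be sketched as follows: each straight edge observed by the camera spans, together with the optical center, a plane, and two edges parallel to the same specific direction yield that direction as the intersection line of their two planes, i.e. the cross product of the plane normals. The plane-normal inputs are assumed to have already been recovered from the detected projection lines; the function name is illustrative.

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def direction_from_edges(n_left, n_right):
    """Three-dimensional direction shared by two parallel edges.

    n_left, n_right: normals (camera coordinates) of the planes spanned by
    each edge and the optical center. The common direction lies in both
    planes, so it is perpendicular to both normals.
    """
    return normalize(cross(n_left, n_right))
```

For example, two edges parallel to the camera X axis, one passing left of and one right of the optical center, give normals whose cross product points along X (up to sign); the sign can be fixed from the known viewing geometry.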
Specifically, when luggage 62 having a rectangular parallelepiped shape such as a cardboard box is piled on the pallet 61 as illustrated in
First, the calculation unit 104 can acquire the position and the posture of the pallet 61 in the camera coordinate system using the first information image. Accordingly, the calculation unit 104 can predict appearance areas E1A and E2A of the edges 62a and 62b of the luggage 62 on the pallet 61 in the panoramic image (see
As illustrated in
As illustrated in
First, the calculation unit 104 can acquire the position and the posture of the pallet 61 in the camera coordinate system using the first information image. Accordingly, the calculation unit 104 can predict appearance areas E1B and E2B of the edges 61c and 61d of the top surface of the pallet 61 in the panoramic image (see
As illustrated in
Here, scores when Hough transformation is performed on the edge points in the appearance areas E1A, E2A, E1B, and E2B correspond to lengths of the detected two-dimensional projection lines. Since the longitudinal lengths of the appearance areas E1A, E2A, E1B, and E2B are ideal lengths, whether there are edges in the appearance areas E1A, E2A, E1B, and E2B can be determined based on a ratio of the scores to the longitudinal lengths. Accordingly, the calculation unit 104 performs Hough transformation of the edge points in a specific area for extracting edge candidates in the second information image and determines that there are edge candidates in the specific area when the length of the acquired straight line is equal to or greater than a predetermined length.
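A minimal sketch of this determination is given below, assuming pixel-unit edge points and treating the best Hough accumulator score as the detected line length. The threshold ratio and bin sizes are illustrative tuning values; the actual predetermined length is not specified in the disclosure.

```python
import math

def edge_candidate_present(edge_points, area_length_px,
                           ratio_threshold=0.5, n_theta=180, rho_step=1.0):
    """Hough-transform the edge points of an appearance area and decide
    whether an edge candidate is present.

    The best (rho, theta) accumulator score approximates the length of the
    longest collinear run of edge points; it is compared against the
    area's longitudinal (ideal) length via ratio_threshold (assumed value).
    """
    acc = {}
    for (x, y) in edge_points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (t, round(rho / rho_step))   # quantized (theta, rho) bin
            acc[key] = acc.get(key, 0) + 1
    best = max(acc.values(), default=0)
    return best >= ratio_threshold * area_length_px
```

Running this on the appearance areas E1A/E2A with no luggage present would yield low scores, which is exactly the luggage-absence determination described in the next paragraph.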
When luggage 62 is not actually piled on the pallet 61 but processing is performed based on the assumption that luggage 62 is piled thereon, the calculation unit 104 can determine that luggage 62 is not piled because sufficient edges are not present in the appearance areas E1A and E2A as illustrated in
As described above, the information image preparing unit 103 prepares both the second information image with the zenith direction CL1 as a central axis and the second information image with the optical axis direction CL2 as a central axis. The calculation unit 104 employs the result of the second information image with the optical axis direction CL2 as a central axis when effective results are obtained from both of the second information images.
The calculation unit 104 converts information of the position and the posture of the pallet 61 in the camera coordinate system acquired from the first information image and the direction of the holes 61b acquired from the second information image to the fork coordinate system based on a relationship acquired through calibration between the camera coordinate system and the fork coordinate system. Accordingly, the calculation unit 104 calculates the pitch angle θ1 of the pallet 61 from the direction of the holes 61b in the fork coordinate system.
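Put concretely, the final conversion can be sketched as below. It assumes the calibration yields a rotation matrix from camera to fork coordinates and that the Z2 axis of the fork coordinate system points upward; both the matrix layout and the axis convention are assumptions for illustration.

```python
import math

def pitch_angle_from_hole_direction(hole_dir_cam, R_fork_from_cam):
    """Rotate the hole direction vector from the camera coordinate system
    into the fork coordinate system using the calibration rotation matrix,
    then read the pitch angle θ1 as the elevation of that direction above
    the horizontal (X2-Y2) plane.
    """
    # Apply the 3x3 rotation (row-major layout assumed)
    d = [sum(R_fork_from_cam[i][j] * hole_dir_cam[j] for j in range(3))
         for i in range(3)]
    horizontal = math.hypot(d[0], d[1])
    return math.atan2(d[2], horizontal)   # radians; positive = tilted upward
```

The resulting angle is what the fork tilt would be matched against immediately before insertion into the hole 61b.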
The calibration between the camera coordinate system and the fork coordinate system will be described below. As illustrated in
In this embodiment, a rectangular panel 40 with a known size is placed on the fork 25 before work by the forklift 50 is started. At this time, the center of a transverse edge 40a on the front side matches the origin of the fork coordinate system, transverse edges 40a and 40b are parallel to the Y2 axis of the fork coordinate system, and longitudinal edges 40c and 40d are parallel to the X2 axis of the fork coordinate system. That is, a panel coordinate system with X4/Y4/Z4 axes indicating the position and the posture of the panel 40 (see
Then, as illustrated in
In the same way as calculating the three-dimensional direction vectors of the right and left edges of the top surface of the pallet 61, the information image preparing unit 103 virtually regards the optical axis direction of the imaging unit 32 as the vertical direction, converts the surroundings image to a panoramic image as illustrated in
In order to calculate direction vectors of the transverse edges 40a and 40b on the front and rear sides of the panel 40, the information image preparing unit 103 virtually regards the transverse direction of the imaging unit 32 as the vertical direction, converts the surroundings image to a panoramic image as illustrated in
The calculation unit 104 calculates a direction vector of the remaining one axis from the vertical direction vector and the transverse direction vector of the panel 40 and updates the posture of the panel in the camera coordinate system. In this way, since the posture is accurately measured, the difference between the projection image and the top view of the panel 40 can be resolved. In order to enhance position estimation accuracy, the posture of the panel in the camera coordinate system is fixed to the updated posture, and the information image preparing unit 103 and the calculation unit 104 calculate the position of the panel 40 in the camera coordinate system based on the first information image again. From the panel coordinate system indicating the calculated position and posture of the panel 40 in the camera coordinate system, the calculation unit 104 calculates the position and the posture of the fork coordinate system in the camera coordinate system in consideration of an offset from the center of the panel 40 to the origin of the fork coordinate system. Here, the panel coordinate system can be converted to the fork coordinate system by offsetting the panel coordinate system by half the length in the longitudinal direction of the panel 40.
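The last two operations can be sketched as follows. The sketch assumes the measured longitudinal (X4) and transverse (Y4) direction vectors are roughly orthogonal and that the fork-coordinate origin lies half the panel length from the panel center along X4, per the placement described above; the names and sign conventions are illustrative.

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def panel_pose_to_fork_origin(x_axis, y_axis, panel_center, panel_length):
    """Recover the remaining panel axis and the fork-coordinate origin.

    x_axis, y_axis: measured X4 (longitudinal) and Y4 (transverse)
    direction vectors of the panel in camera coordinates.
    The remaining axis is their cross product; the panel center is offset
    by half the panel length along X4 to reach the fork origin.
    """
    x = normalize(x_axis)
    z = normalize(cross(x, normalize(y_axis)))   # remaining axis
    y = cross(z, x)                              # re-orthogonalized Y4
    origin = [panel_center[i] - 0.5 * panel_length * x[i] for i in range(3)]
    return (x, y, z), origin
```

Re-orthogonalizing Y4 against the recovered axes keeps the resulting rotation matrix consistent even when the two measured direction vectors are slightly non-perpendicular.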
Details of an article detection method according to this embodiment will be described below with reference to
As illustrated in
Details of the second information image preparing step S4 and the second calculation step S5 will be described below with reference to
The calculation unit 104 determines whether a score after the Hough transformation has been performed is equal to or greater than a threshold value (Step S50). When it is determined in Step S50 that the score is less than the threshold value, the calculation unit 104 sets up a luggage-absence flag (Step S60) and then performs the routine illustrated in
As illustrated in
The calculation unit 104 determines whether a score after the Hough transformation has been performed is equal to or greater than a threshold value (Step S150). When it is determined in Step S150 that the score is less than the threshold value, the calculation unit 104 sets up the luggage-presence flag (Step S160) and then performs the routine illustrated in
As illustrated in
After Steps S220 and S240 have been performed, the calculation unit 104 converts the position and the posture of the pallet 61 and the hole direction H of the pallet in the camera coordinate system to the fork coordinate system (Step S260). Accordingly, the calculation unit 104 acquires the position, the posture, and the pitch angle of the pallet 61 in the fork coordinate system (Step S270). When the process of Step S270 ends, the routine illustrated in
A calibration method of the imaging unit 32 which is used in the article detection device 100 will be described below with reference to
Operations and advantages of the article detection device 100, the article detection method, and the calibration method according to this embodiment will be described below.
In the article detection device 100 according to this embodiment, the calculation unit 104 (a first calculation unit) can calculate a position and a posture of a front surface 61a of a pallet 61 which is a loading/unloading target portion based on a first information image. The article detection device 100 includes the information image preparing unit 103 (a second information image preparing unit) that prepares a second information image by converting information on a pitch angle detection portion of the pallet 61 to an easily recognizable state based on the calculation result from the calculation unit 104. Accordingly, the information image preparing unit 103 can prepare the second information image suitable for calculating the pitch angle of the pallet 61. Here, as illustrated in
In preparing the second information image, the information image preparing unit 103 may distinguish between the first case in which luggage is piled on the article and the second case in which luggage is not piled on the article, prepare the second information image with the zenith direction CL1 of the imaging unit 32 as a central axis in the first case, and prepare the second information image with the optical axis direction CL2 of the imaging unit 32 as a central axis in the second case (see
The calculation unit 104 may extract the edge candidates from one side and the other side in the transverse direction of the loading/unloading target in the second information image. Accordingly, the calculation unit 104 can easily extract edge candidates at easily extractable positions such as right and left edges of the luggage 62 or upper and lower edges of the top surface of the pallet 61.
The information image preparing unit 103 may prepare both the second information image with the zenith direction CL1 as a central axis and the second information image with the optical axis direction CL2 as a central axis, and the calculation unit 104 may employ a result of the second information image with the optical axis direction CL2 as a central axis when results of both the second information images are effective. When the zenith direction CL1 is set as the central axis, many edges are included in the background and thus accuracy is poor. Accordingly, the calculation unit 104 can more accurately calculate the pitch angle by preferentially employing the result of the second information image with the optical axis direction CL2 as the central axis.
The calculation unit 104 may perform Hough transformation of edge points in appearance areas E1A, E2A, E1B, and E2B (the specific areas) for extracting the edge candidates in the second information image and determine that the edge candidates are present in the appearance areas E1A, E2A, E1B, and E2B when a length of an acquired straight line is equal to or greater than a predetermined length. Accordingly, the calculation unit 104 can easily determine whether there is an edge candidate.
The calibration method according to this embodiment is a calibration method for the imaging unit 32 used in the article detection device 100. The calibration method includes a step of disposing a calibration panel 40 on a fork 25, a step of aligning a central portion of the panel 40 with the origin of the fork coordinate system, a step of preparing a second information image by converting information on directions of longitudinal and transverse edges of the panel 40 to an easily recognizable state, a step of extracting longitudinal edges 40c and 40d of the panel 40 included in the second information image, and a step of extracting transverse edges 40a and 40b of the panel 40 included in the second information image.
With this calibration method, by disposing the panel 40 on the fork 25 and aligning the central portion of the panel 40 with the origin of the fork coordinate system, the positional relationship between the fork coordinate system and the camera coordinate system can be ascertained from the positional relationship of the panel 40 in the camera coordinate system. Since both the step of extracting the longitudinal edges 40c and 40d of the panel 40 included in the second information image and the step of extracting the transverse edges 40a and 40b of the panel 40 from the second information image are performed, the positional relationship of the panel 40 in the camera coordinate system can be ascertained more accurately. As a result, the camera coordinate system can be accurately converted to the fork coordinate system.
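One way to realize such a conversion is to assemble a rigid transform from the panel's measured longitudinal direction, transverse direction, and center, all expressed in the camera coordinate system. The sketch below is hypothetical (axis assignments and names are assumptions); it orthonormalizes the measured directions to guard against small measurement noise:

```python
import numpy as np

def camera_to_fork_transform(long_dir: np.ndarray, trans_dir: np.ndarray,
                             panel_center_cam: np.ndarray) -> np.ndarray:
    """Build a 4x4 rigid transform mapping camera coordinates to a fork
    coordinate system whose origin coincides with the panel center and
    whose x/y axes follow the panel's longitudinal/transverse edges."""
    x = long_dir / np.linalg.norm(long_dir)
    # Remove any component of the transverse direction along x, then normalize.
    y = trans_dir - np.dot(trans_dir, x) * x
    y /= np.linalg.norm(y)
    z = np.cross(x, y)
    r_fork_from_cam = np.stack([x, y, z])       # rows: fork axes in camera frame
    t = -r_fork_from_cam @ panel_center_cam     # translation moving center to origin
    m = np.eye(4)
    m[:3, :3] = r_fork_from_cam
    m[:3, 3] = t
    return m
```

Applying the transform to the panel center (in homogeneous coordinates) yields the fork-frame origin, and the rotation block remains orthonormal by construction.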
The article detection method according to this embodiment is an article detection method of detecting an article to be loaded/unloaded. The article detection method includes an image acquiring step S1 of acquiring a surroundings image by imaging the surroundings of the article detection device, a first information image preparing step S2 of preparing a first information image by converting information on a loading/unloading target portion of the article to an easily recognizable state based on the surroundings image, a first calculation step S3 of calculating at least one of a position and a posture of the loading/unloading target portion based on the first information image, a second information image preparing step S4 of preparing a second information image by converting information on a pitch angle detection portion of the article to an easily recognizable state, and a second calculation step S5 of extracting at least two edge candidates for the article extending in a specific direction and included in the second information image based on a calculation result from the first calculation step S3 and calculating a three-dimensional direction vector indicating a pitch angle of the article from the edge candidates.
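Once the second calculation step S5 has produced a three-dimensional direction vector, the pitch angle itself follows from simple trigonometry. The sketch below assumes a coordinate system with z vertical and the x-y plane horizontal; these axis conventions are illustrative and not taken from the disclosure:

```python
import math

def pitch_from_direction(vx: float, vy: float, vz: float) -> float:
    """Return the pitch angle, in degrees, of a 3D direction vector as its
    elevation above the horizontal (x-y) plane. A vector lying in the
    horizontal plane yields 0 degrees; equal horizontal and vertical
    components yield 45 degrees."""
    return math.degrees(math.atan2(vz, math.hypot(vx, vy)))
```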
With this article detection method, the same operations and advantages as in the article detection device 100 can be achieved.
A test for ascertaining the advantages of this embodiment was carried out. In the test, surroundings images acquired by an omnidirectional camera (RICOH THETA V) fixed to an upper center of a backrest of a reach type forklift in a test space were processed by a notebook PC (ThinkPad X1 Extreme).
The routines illustrated in
The present disclosure is not limited to the aforementioned embodiment.
For example, the industrial vehicle is not limited to a forklift, and may be a manual forklift (a pallet jack) including a power source such as a battery. The article is not limited to a pallet.
When the size is known, the shape of the panel 40 may be a square. In this case, methods of calculating longitudinal edges and transverse edges are the same as those when the panel 40 has a rectangular shape.
When portions corresponding to the longitudinal edges 40c and 40d or the transverse edges 40a and 40b can be detected, the shape of the panel is not limited to a rectangle or a square. For example, a figure with a known size may be drawn on the top surface of a panel, and a longitudinal direction vector in the camera coordinate system of the right and left longitudinal edges of the figure and a transverse direction vector in the camera coordinate system of the upper and lower transverse edges of the figure may be calculated on the basis of the second information image. The drawn figure is preferably a rectangle or a square, but the shape is not particularly limited as long as it has portions corresponding to the longitudinal edges 40c and 40d or the transverse edges 40a and 40b.
In a more specific calibration method, a panel on which a figure is drawn is placed on a fork, and a reference position set in the figure is aligned with the origin of the fork coordinate system. Then, a second information image is prepared by converting information on a pitch angle detection portion of the figure to an easily recognizable state, and edges of the figure extending in a specific direction are extracted from the second information image. Finally, the specific direction in the second information image is converted to another direction, and edges of the figure are extracted.
[Article 1]
An article detection device that detects an article to be loaded/unloaded, the article detection device including: an image acquiring unit configured to acquire a surroundings image by imaging surroundings of the article detection device; a first information image preparing unit configured to prepare a first information image by converting information on a loading/unloading target portion of the article to an easily recognizable state based on the surroundings image; a first calculation unit configured to calculate at least one of a position and a posture of the loading/unloading target portion based on the first information image; a second information image preparing unit configured to prepare a second information image by converting information on a pitch angle detection portion of the article to an easily recognizable state; and a second calculation unit configured to extract at least two edge candidates for the article extending in a specific direction and included in the second information image based on a calculation result from the first calculation unit and to calculate a three-dimensional direction vector indicating a pitch angle of the article from the edge candidates.
[Article 2]
The article detection device according to Article 1, wherein the second information image preparing unit prepares the second information image in a first case in which luggage is piled on the article and a second case in which luggage is not piled on the article in preparing the second information image, prepares the second information image with a zenith direction of the image acquiring unit as a central axis in the first case, and prepares the second information image with an optical axis direction of the image acquiring unit as a central axis in the second case.
[Article 3]
The article detection device according to Article 1 or 2, wherein the second calculation unit extracts the edge candidates from one side and the other side in a transverse direction of a loading/unloading target in the second information image.
[Article 4]
The article detection device according to Article 2, wherein the second information image preparing unit prepares both the second information image with the zenith direction as a central axis and the second information image with the optical axis direction as a central axis, and the second calculation unit employs a result of the second information image with the optical axis direction as the central axis when results of both the second information images are effective.
[Article 5]
The article detection device according to any one of Articles 1 to 4, wherein the second calculation unit performs Hough transformation of edge points in a specific area for extracting the edge candidates in the second information image and determines that the edge candidates are present in the specific area when a length of an acquired straight line is equal to or greater than a predetermined length.
[Article 6]
A calibration method for an image acquiring unit used in an article detection device, the calibration method including: a step of disposing a calibration panel on a fork; a step of aligning a central portion of the panel with an origin of a fork coordinate system; a step of preparing a second information image by converting information on directions of longitudinal and transverse edges of the panel to an easily recognizable state; a step of extracting longitudinal edges of the panel included in the second information image; and a step of extracting transverse edges of the panel included in the second information image.
[Article 7]
An article detection method of detecting an article to be loaded/unloaded, the article detection method including: an image acquiring step of acquiring a surroundings image by imaging surroundings of an article detection device; a first information image preparing step of preparing a first information image by converting information on a loading/unloading target portion of the article to an easily recognizable state based on the surroundings image; a first calculation step of calculating at least one of a position and a posture of the loading/unloading target portion based on the first information image; a second information image preparing step of preparing a second information image by converting information on a pitch angle detection portion of the article to an easily recognizable state; and a second calculation step of extracting at least two edge candidates for the article extending in a specific direction and included in the second information image based on a calculation result from the first calculation step and calculating a three-dimensional direction vector indicating a pitch angle of the article from the edge candidates.
Number | Date | Country | Kind
---|---|---|---
2022-154676 | Sep 2022 | JP | national