This application is based upon and claims the benefit of priority of the prior Japanese Patent Application No. 2020-140801, filed on Aug. 24, 2020, the entire contents of which are incorporated herein by reference.
The embodiment discussed herein is related to a moving body speed derivation method and a moving body speed derivation program.
There is a technique of detecting an approaching vehicle in advance by performing image processing on images captured by a camera mounted on a host vehicle. In this technique, the rate of temporal change in the azimuth angle of a horizontal end portion of a target vehicle, as viewed from the host vehicle, is calculated from the captured time-series images, and whether there is a possibility that the target vehicle will come into contact with the host vehicle is determined based on a measured distance and the temporal change rate of the azimuth angle.
Examples of the related art include Japanese Laid-open Patent Publication No. 2012-113573.
According to an aspect of the embodiments, a moving body speed derivation method causes at least one computer to execute a process, the process including: deriving a reference point in a second moving body region that is an image of a second moving body, from one image capturing the second moving body among a plurality of time-series images captured by an imaging device installed in a first moving body; deriving a position of the second moving body based on a distance between the imaging device and a feature point in the second moving body corresponding to the reference point in the second moving body region; and deriving a speed of the second moving body based on a change amount of the position of the second moving body corresponding to the second moving body region included in the plurality of time-series images.
The object and advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention.
A change in a traveling angle of the target vehicle with respect to a traveling direction of the host vehicle may cause the end portion of the target vehicle in the horizontal direction to move out of sight of the camera and make the camera unable to capture the end portion.
According to one aspect of the present disclosure, an object is to enable continuous derivation of speed of a second moving body by using images captured from a first moving body.
The image input unit 22 receives moving image data, which is an example of time-series images captured by the imaging device 12, and converts the moving images to digital images if they are analog images, or to gray images if they are color images. The imaging device 12 is, for example, a monocular video camera included in a drive recorder.
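As a rough sketch of this conversion step, the following assumes the moving image data is read with OpenCV and that color frames are converted to gray images; the helper name is hypothetical and not part of the embodiment.

```python
import cv2

def load_frames_as_gray(video_path):
    """Hypothetical helper: read a video file and yield each frame as a gray image."""
    capture = cv2.VideoCapture(video_path)
    while True:
        ok, frame = capture.read()            # frame is a BGR color image, or invalid at end of stream
        if not ok:
            break
        if frame.ndim == 3:                   # color frame: convert to a gray image
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        yield frame
    capture.release()
```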
The second moving body recognition unit 24 recognizes the second moving body M2 included in the moving image data acquired from the image input unit 22 by using a model learned to recognize a moving body to be processed. For example, the second moving body recognition unit 24 stores, in a temporary storage device, coordinates of a circumscribed rectangle OR of the recognized second moving body M2 illustrated in an upper left portion of
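A minimal sketch of this recognition step is given below, assuming a hypothetical detect_moving_bodies wrapper around the learned model that returns circumscribed rectangles as (left, top, right, bottom) pixel coordinates; the storage dictionary is an illustrative stand-in for the temporary storage device.

```python
# temporary_storage maps a frame number to the circumscribed rectangle OR of the
# recognized second moving body M2 (illustrative stand-in for the temporary storage device).
temporary_storage = {}

def recognize_second_moving_body(frame_number, gray_frame, detect_moving_bodies):
    boxes = detect_moving_bodies(gray_frame)   # hypothetical learned-model call, e.g. [(l, t, r, b), ...]
    if boxes:
        # For simplicity, keep only the first recognized moving body.
        temporary_storage[frame_number] = {"circumscribed_rectangle": boxes[0]}
```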
The road region extraction unit 26 extracts a road region by applying semantic segmentation to the moving image data acquired from the image input unit 22 while using a model learned in advance to extract the road region. For example, the road region extraction unit 26 may assign “1” to a pixel included in a road region RA illustrated in an upper right portion of
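The road region RA can be held as a binary mask of the same size as the frame. The sketch below assumes a hypothetical segment_road call that returns per-pixel class labels, with ROAD_LABEL as the class index for road pixels.

```python
import numpy as np

ROAD_LABEL = 0  # hypothetical class index that the segmentation model assigns to road pixels

def extract_road_region(gray_frame, segment_road):
    """Return a mask with "1" for pixels in the road region RA and "0" for other pixels."""
    class_map = segment_road(gray_frame)          # hypothetical semantic segmentation call
    return (class_map == ROAD_LABEL).astype(np.uint8)
```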
The reference point derivation unit 28 derives a reference point based on information on the road region RA and the coordinates of the circumscribed rectangle OR stored in the temporary storage device by the second moving body recognition unit 24. The reference point is a point in the image of the second moving body M2 that is likely to continue to be included in the field of view of the imaging device.
The second moving body speed derivation unit 30 derives the position of the second moving body M2 in an actual space based on a distance between the imaging device 12 and a feature point that is a point in the second moving body M2 corresponding to the reference point. The position of the imaging device 12 is derived from the position of the first moving body M1 in which the imaging device 12 is installed. The position of the first moving body M1 in the actual space may be, for example, an absolute position acquired by using a global navigation satellite system (GNSS), and the second moving body speed derivation unit 30 acquires information on the absolute position via the communication device 14.
The second moving body speed derivation unit 30 derives the speed of the second moving body M2 based on a change amount of the position of the second moving body in the actual space corresponding to the image of the second moving body M2 included in each of the time-series frames of the moving image data. For example, the second moving body speed derivation unit 30 stores, in the external storage device 16, a list in which the derived speed of the second moving body M2 is recorded in association with a frame number or time.
As illustrated in
The CPU 51 is an example of a hardware processor. The CPU 51, the primary storage device 52, the secondary storage device 53, and the external interface 54 are coupled to one another via a bus 59. The CPU 51 may be a single processor or a plurality of processors. For example, a graphics processing unit (GPU) may be used instead of the CPU 51.
The primary storage device 52 is, for example, a volatile memory such as a random-access memory (RAM). The secondary storage device 53 is, for example, a non-volatile memory such as a hard disk drive (HDD) or a solid-state drive (SSD).
The secondary storage device 53 includes a program storage area 53A and a data storage area 53B. As an example, the program storage area 53A stores programs such as a moving body speed derivation program. The data storage area 53B may function as, for example, a primary storage device that stores intermediate data generated by execution of the moving body speed derivation program.
The CPU 51 reads the moving body speed derivation program from the program storage area 53A and loads the read program into the primary storage device 52. The CPU 51 executes the loaded moving body speed derivation program to operate as the image input unit 22, the second moving body recognition unit 24, the road region extraction unit 26, the reference point derivation unit 28, and the second moving body speed derivation unit 30 in
The programs such as the moving body speed derivation program may be stored in an external server and loaded into the primary storage device 52 via a network. The programs such as the moving body speed derivation program may be stored in a non-transitory recording medium such as a Digital Versatile Disc (DVD) and loaded into the primary storage device 52 via a recording medium reading device.
External devices are coupled to the external interface 54, which manages the exchange of various kinds of information between the external devices and the CPU 51.
The moving body speed derivation process is repeatedly performed on frames t−n, t−n+1, . . . , t−1, t, t+1, . . . , t+m−1, and t+m, where t, n, and m are arbitrary integers. The imaging time difference between adjacent frames may be, for example, 1/30 seconds. When an image at a first time point corresponds to the frame t, an image at a second time point corresponds to the frame t−1.
When the determination in step 204 is yes, in step 206, the CPU 51 scans the circumscribed rectangle rightward from the left end toward the horizontal center, from the lower end toward the upper end. In step 208, the CPU 51 determines whether a non-road region, for example, a second moving body region is detected.
When the determination in step 208 is yes, for example, when the non-road region is detected, in step 214, the CPU 51 sets a position of the detection as a reference point of the frame t and terminates the reference point derivation process. For example, the CPU 51 stores the frame number and the coordinates of the reference point in the temporary storage device in association with each other. When the CPU 51 detects no non-road region in step 208, the process returns to step 206 to continue the scanning. The set reference point C is illustrated in a lower portion of
When the determination in step 204 is no, for example, when the reference point derived in the frame t−1 is present on the right side of the horizontal center of the circumscribed rectangle, in step 210, the CPU 51 scans the circumscribed rectangle leftward from the right end toward the horizontal center, from the lower end toward the upper end. In step 212, the CPU 51 determines whether the non-road region is detected.
When the determination in step 212 is yes, for example, when the non-road region is detected, in step 214, the CPU 51 sets a position of the detection as the reference point of the frame t and terminates the reference point derivation process. For example, the CPU 51 stores the frame number and the coordinates of the reference point in the temporary storage device in association with each other. When the CPU 51 detects no non-road region in step 212, the process returns to step 210 to continue the scanning.
In an example illustrated in a lower left portion of
In the present embodiment, the scanning direction is adjusted based on the position of the reference point C derived in the frame t−1 in the circumscribed rectangle such that the reference point C derived in the frame t−1 and the reference point C to be derived in the frame t do not contradict each other. In step 208 or 212, the first-found non-road region is set as the reference point C. The reference point C is a point in the second moving body region which is present on a boundary between the second moving body region and the road region and at which the second moving body is in contact with a road surface of a road.
For example, the CPU 51 stores the frame number and the coordinates of the reference point in the temporary storage device in association with each other to use them in the reference point derivation process for the frame t+1. The CPU 51 may store, instead of the coordinates of the reference point, “L” in the temporary storage device if the reference point is present on the left side of the horizontal center of the circumscribed rectangle or “R” in the temporary storage device if the reference point is present on the right side of the horizontal center of the circumscribed rectangle.
When no reference point is derived in the frame t−1, the circumscribed rectangle may be scanned rightward from the left end or scanned leftward from the right end. When no non-road region is detected in steps 206 and 208, subsequent to step 208, the circumscribed rectangle may be scanned rightward from the center toward the right end, from the lower end toward the upper end. When no non-road region is detected in step 210, subsequent to step 210, the circumscribed rectangle may be scanned leftward from the center toward the left end, from the lower end to the upper end.
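Putting steps 202 to 214 together, the following is a minimal sketch of the reference point derivation, assuming the road mask and circumscribed rectangle from the preceding steps are available; pixels inside the rectangle that are not in the road mask are treated as the second moving body region, and the scan direction follows the side ("L" or "R") recorded for the frame t−1.

```python
def derive_reference_point(road_mask, rect, previous_side):
    """Return (x, y, side) of the reference point C in the frame t, or None if none is found.

    rect: (left, top, right, bottom) pixel coordinates of the circumscribed rectangle.
    previous_side: "L" or "R" recorded for the frame t-1, or None when no reference
                   point was derived in the frame t-1 (the left-end scan is used then).
    """
    left, top, right, bottom = rect
    center = (left + right) // 2

    if previous_side in (None, "L"):
        columns = range(left, center + 1)        # step 206: rightward from the left end toward the center
        side = "L"
    else:
        columns = range(right, center - 1, -1)   # step 210: leftward from the right end toward the center
        side = "R"

    for y in range(bottom, top - 1, -1):         # from the lower end toward the upper end
        for x in columns:
            if road_mask[y, x] == 0:             # steps 208/212: non-road pixel detected
                return x, y, side                # step 214: set as the reference point C
    return None
```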
In step 312, the CPU 51 determines whether or not there is the reference point derived in the second moving body speed derivation process for the frame t−1. When the determination in step 312 is yes, in step 314, the CPU 51 determines whether or not the reference point C in the frame t−1 and the reference point C in the frame t are located away from each other.
When the determination in step 314 is yes, in step 316, the CPU 51 changes the position of the reference point in the frame t−1. For example, when the reference point in the frame t−1 is located in the front wheel as illustrated in a lower left portion of
For example, when the non-road region is detected in steps 206 and 208 of
In step 206, instead of scanning the circumscribed rectangle from the left end toward the horizontal center, the circumscribed rectangle may be scanned from the left end toward a position away from the left end by ⅓ of the horizontal width of the circumscribed rectangle, or toward a position away from the left end by ⅔ of the horizontal width. In step 210, instead of scanning the circumscribed rectangle from the right end toward the horizontal center, the circumscribed rectangle may be scanned from the right end toward a position away from the right end by ⅓ of the horizontal width of the circumscribed rectangle, or toward a position away from the right end by ⅔ of the horizontal width.
For example, when the distance between the reference point in the frame t−1 and the reference point in the frame t is equal to or greater than a predetermined distance based on the horizontal length of the circumscribed rectangle, the CPU 51 may determine that the reference points are located away from each other. The predetermined distance may be, for example, 40% of the horizontal length of the circumscribed rectangle.
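A possible sketch of the step-314 check, assuming the reference points are stored as (x, y) pixel coordinates and using the illustrative 40% threshold:

```python
import math

def reference_points_far_apart(point_prev, point_curr, rect, ratio=0.4):
    """Return True when the reference points of the frames t-1 and t are located away from each other."""
    left, _, right, _ = rect
    threshold = ratio * (right - left)           # predetermined distance based on the horizontal length
    distance = math.hypot(point_curr[0] - point_prev[0], point_curr[1] - point_prev[1])
    return distance >= threshold
```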
In step 324, the CPU 51 acquires position information of the second moving body in the frame t−1. The position information of the second moving body in the frame t−1 may be stored, for example, in the temporary storage device. In step 326, the CPU 51 acquires position information of the first moving body in the frame t. The position information of the first moving body may be, for example, the absolute position acquired by using the GNSS or a relative position acquired by using vehicle information such as vehicle speed or a steering angle. The position information of the first moving body in the frame t is stored, for example, in the temporary storage device to be used in a process for the frame t+1 as the position information of the first moving body in the immediately preceding frame.
In step 328, the CPU 51 derives position information of the second moving body in the frame t. As illustrated in
D1 = H / tan θ1   (1)
D2 = D1 / cos θ2   (2)
In order to simplify the description, description is given assuming that the camera 56 is installed such that the line of sight of the camera 56 is parallel to the traveling direction of the first moving body M1. However, an orientation in which the camera 56 is installed in the first moving body M1 may be adjusted as appropriate.
D1 is a distance between the camera 56 and an intersection B between a line of sight VL of the camera 56 and a straight line FL that is perpendicular to the line of sight VL and that passes through a feature point F when viewed from above. As illustrated in
H is the height at which the camera 56 is installed, and the elevation angle θ1 is the angle between the line of sight VL of the camera 56 and the straight line extending from the camera 56 to the intersection B when viewed from the side. The elevation angle θ1 may be derived by an existing conversion method between a camera coordinate system and an actual space coordinate system, based on the coordinates of the position corresponding to the intersection B in the frame t, the orientation and characteristics of the camera 56, and the like.
As illustrated in
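Combining Equations (1) and (2), a minimal sketch of the distance and position derivation is shown below; θ2 is taken to be the horizontal angle between the line of sight VL and the direction of the feature point F, and the decomposition of the position into coordinates along and across VL is an assumption, since the corresponding figure is not reproduced here.

```python
import math

def feature_point_position(camera_height_h, theta1_rad, theta2_rad):
    """Derive the distance D2 and a position of the feature point F relative to the camera 56.

    camera_height_h: installation height H of the camera 56.
    theta1_rad: elevation angle θ1 toward the intersection B.
    theta2_rad: horizontal angle θ2 between the line of sight VL and the feature point F (assumed).
    """
    d1 = camera_height_h / math.tan(theta1_rad)   # Equation (1): distance to the intersection B
    d2 = d1 / math.cos(theta2_rad)                # Equation (2): distance to the feature point F
    longitudinal = d1                             # along the line of sight VL (assumed decomposition)
    lateral = d2 * math.sin(theta2_rad)           # perpendicular to VL, toward the feature point F
    return d2, (lateral, longitudinal)
```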
In step 330, as illustrated in Equation (3), the CPU 51 derives the speed of the second moving body based on position coordinates (x1, y1) of the second moving body M2 in the frame t−1, position coordinates (x2, y2) of the second moving body M2 in the frame t, and a time difference T between the frame t−1 and the frame t.
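Equation (3) itself is not reproduced above; assuming it is the Euclidean displacement divided by the time difference T, a sketch would be:

```python
import math

def derive_second_moving_body_speed(position_prev, position_curr, time_difference_t):
    """Assumed form of Equation (3): displacement between the frames t-1 and t divided by T."""
    x1, y1 = position_prev
    x2, y2 = position_curr
    return math.hypot(x2 - x1, y2 - y1) / time_difference_t
```

For frames captured 1/30 seconds apart, time_difference_t would be 1/30 seconds, and the result is a speed in the units of the position coordinates per second.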
In the related art, as illustrated in a left portion of
In contrast, in the present embodiment, even if the position of the reference point C is determined as illustrated in the left portion of
The moving body is not limited to a vehicle and may be, for example, a type of drone that travels on a road surface. The road may be, for example, a roadway, a sidewalk, or the like. The camera is not limited to being installed in a front portion of the first moving body, and may be installed in, for example, a side portion, a rear portion, or any combination thereof, in addition to or instead of the front portion.
The flows of the processes in the flowcharts of
In the present embodiment, the reference point in the second moving body region that is the image of the second moving body is derived from one image capturing the second moving body among the plurality of time-series images captured by the imaging device installed in the first moving body. The position of the second moving body is derived based on the distance between the imaging device and the feature point in the second moving body corresponding to the reference point in the second moving body region, and the speed of the second moving body is derived based on a change amount of the position of the second moving body corresponding to the second moving body region included in the plurality of time-series images.
According to the present embodiment, it is possible to continuously derive the speed of the second moving body by using the images captured from the first moving body. For example, the present embodiment can be used to analyze a traffic accident situation by estimating the speed of the other party's vehicle in an accident, which is difficult to analyze manually from a monocular image captured by a drive recorder or the like.
The following appendices are further disclosed in relation to each of the above embodiments.
All examples and conditional language provided herein are intended for the pedagogical purposes of aiding the reader in understanding the invention and the concepts contributed by the inventor to further the art, and are not to be construed as limitations to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although one or more embodiments of the present invention have been described in detail, it should be understood that the various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.
References Cited

U.S. Patent Documents (Number | Name | Date | Kind):
10794710 | Liu | Oct. 2020 | B1
20020196341 | Kamijo | Dec. 2002 | A1
20120050074 | Bechtel | Mar. 2012 | A1
20120213412 | Murashita | Aug. 2012 | A1
20140192145 | Anguelov | Jul. 2014 | A1
20170186169 | Viswanath | Jun. 2017 | A1
20180107883 | Viswanath | Apr. 2018 | A1

Foreign Patent Documents (Number | Date | Country):
3 422 293 | Jan. 2019 | EP
2007-278844 | Oct. 2007 | JP
2010-160802 | Jul. 2010 | JP
2012-113573 | Jun. 2012 | JP

Other Publications:
Extended European Search Report issued by the European Patent Office for European Patent Application No. 21178211.5, dated Nov. 29, 2021.
Arenado, M., et al., "Monovision-based vehicle detection, distance and relative speed measurement in urban traffic," IET Intelligent Transport Systems, vol. 8, no. 8, pp. 655-664, Dec. 1, 2014.
Di, Z., et al., "Forward Collision Warning System Based on Vehicle Detection and Tracking," 2016 International Conference on Optoelectronics and Image Processing, IEEE, pp. 10-14, Jun. 10, 2016.
Official Communication issued by the European Patent Office for European Patent Application No. 21178211.5, dated Oct. 19, 2023.

Publication Data (Number | Date | Country):
20220058814 A1 | Feb. 2022 | US