This application is based on and claims the benefit of priority from Japanese Patent Application No. 2016-116575, filed Jun. 10, 2016. The entire disclosure of the above application is incorporated herein by reference.
The present disclosure relates to an object detection apparatus and an object detection method for detecting an object that is present ahead of a vehicle in an advancing direction of the vehicle.
An object detection apparatus that captures an image of an area ahead of a vehicle in an advancing direction of the vehicle by an imaging apparatus, such as a camera, and detects an object that suddenly appears ahead of the vehicle in the vehicle advancing direction from a position that is in a blind spot is known. The blind spot is a position at which the object is not visible from the vehicle. Through detection of the object that suddenly appears from the blind spot, the object detection apparatus is able to actuate various types of control to prevent collision with the object, based on the detection results.
In addition, JP-A-2013-210988 discloses an object detection apparatus that calculates a movement speed and a movement direction in the periphery of a blind spot in an image captured by an imaging apparatus. The movement speed and the movement direction serve as movement speed information. The object detection apparatus then determines whether or not an object has suddenly appeared from the blind spot based on the calculated movement speed information.
In cases in which whether or not an object has suddenly appeared is determined based on the movement speed and the movement direction of the object within a captured image, even when the actual movement directions of the object differ, the movement directions of the object in the image may be recognized as being the same. Specifically, the actual movement directions of the object differ between a case in which the object moves towards an own vehicle in a lateral direction of the own vehicle and a case in which the object moves ahead of the own vehicle in the vehicle advancing direction. However, in a two-dimensional image, the object moves towards the own vehicle in a left-right direction of the own vehicle in both cases. In such instances, the accuracy of determination regarding whether or not an object has suddenly appeared ahead of an own vehicle in a vehicle advancing direction from a blind spot may decrease. In addition, the amount of time required to perform a determination of an object suddenly appearing ahead of an own vehicle from a blind spot may increase.
It is thus desired to provide an object detection apparatus and an object detection method that are capable of performing, at an earlier timing and with high accuracy, detection of an object approaching ahead of an own vehicle from a blind spot.
An exemplary embodiment of the present disclosure provides an object detection apparatus that includes: an image acquiring unit that acquires, as a first image, a captured image of an area ahead of a vehicle in a vehicle advancing direction from a first imaging unit provided in the vehicle and acquires, as a second image, a captured image of an area ahead of the vehicle in the vehicle advancing direction from a second imaging unit provided in the vehicle; a blind-spot determining unit that determines whether or not an object is present in a blind spot ahead of the vehicle in the vehicle advancing direction, based on the first image captured by the first imaging unit and the second image captured by the second imaging unit, the blind spot being an area that is visible through one of the first imaging unit and the second imaging unit and is not visible through the other of the first imaging unit and the second imaging unit; an image holding unit that holds the first image captured by the first imaging unit and the second image captured by the second imaging unit in time series, when the object is determined to be present in the blind spot; a difference acquiring unit that acquires, as a first image difference, a difference in a feature quantity between a previous image and a current image in the first image held in time series and acquires, as a second image difference, a difference in a feature quantity between a previous image and a current image in the second image held in time series; and an approach determining unit that determines whether or not the object is approaching the area ahead of the vehicle based on the first image difference and the second image difference acquired by the difference acquiring unit.
When a blind spot is present in an image captured by an imaging apparatus and an object is present in the blind spot, a situation occurs in that the object is visible in the first image captured by the first imaging unit and the object is not visible in the second image captured by the second imaging unit. In addition, the visibility of the object in the first image captured by the first imaging unit and the second image captured by the second imaging unit changes in time series, in accompaniment with the movement of the object. The manner in which the visibility changes is considered to change based on the movement direction.
In this regard, in the above-described configuration, the first image of a peripheral area including the blind spot captured by the first imaging unit is held in time series, the second image of a peripheral area including the blind spot captured by the second imaging unit is held in time series, the difference in feature quantity in the first image in the time series is calculated as the first image difference, and the difference in feature quantity in the second image in the time series is calculated as the second image difference. Then, whether or not the object is approaching the area ahead of the vehicle is determined based on the first image difference and the second image difference. In this case, the approach of the object can be determined taking into consideration that the manner of change in visibility of the object in the first image captured by the first imaging unit and the second image captured by the second imaging unit changes based on the movement direction of the object. Thus, detection of an object approaching an area ahead of an own vehicle from a blind spot can be performed at an earlier timing and with high accuracy.
In the accompanying drawings:
Embodiments of an object detection apparatus and an object detection method of the present disclosure will be described with reference to the drawings. Sections that are identical or equivalent to each other among the following embodiments are given the same reference numbers in the drawings. Descriptions of sections having the same reference numbers are applicable therebetween.
As shown in
The stereo camera 10 is set inside a vehicle cabin in a state in which an imaging axis faces ahead of the own vehicle CS, such that an area ahead of the own vehicle CS in the vehicle advancing direction can be imaged. In addition, the stereo camera 10 includes a right-side camera 11 and a left-side camera 12. The positions of the right-side camera 11 and the left-side camera 12 in the lateral direction differ. A right-side image captured by the right-side camera 11 and a left-side image captured by the left-side camera 12 are each outputted to the driving assistance ECU 30 at a predetermined cycle. For example, the right-side camera 11 and the left-side camera 12 are each configured by a charge-coupled device (CCD) image sensor or a complementary metal-oxide semiconductor (CMOS) image sensor. According to the first embodiment, the right-side camera 11 and the left-side camera 12 in the stereo camera 10 respectively function as a first imaging unit and a second imaging unit.
As shown in
The driving assistance ECU 30 performs PCS (collision avoidance control) to avoid collision with the object Ob by actuating the control target 40, based on a detection of the object Ob suddenly appearing ahead of the own vehicle CS, the detection being performed by the object detection ECU 20. The driving assistance ECU 30 is configured as a known microcomputer that includes a central processing unit (CPU), a read-only memory (ROM), and a random access memory (RAM).
In
In the above-described PCS, a quick and accurate detection of the object Ob that suddenly appears (approaches) ahead of the own vehicle CS in the vehicle advancing direction, from a blind spot, is desirable. Meanwhile, even when the actual movement directions of the object Ob differ, the movement directions of the object Ob within a captured image may be recognized as being the same.
Consequently, for the determination of the object Ob suddenly appearing ahead of the own vehicle CS to be appropriately performed through use of the movement speed and the movement direction of the object Ob within a captured image, the difference between the actual movement of the object Ob and the movement of the object Ob within the captured image is required to be taken into consideration. An increase in the amount of time required for the determination thus becomes a concern. Therefore, the object detection ECU 20 includes the configurations shown in
Returning to
The image acquiring unit 21 acquires the right-side image Ri and the left-side image Li respectively captured by the right-side camera 11 and the left-side camera 12. The image acquiring unit 21 receives a pair of captured images composed of the right-side image Ri and the left-side image Li at a predetermined cycle. The pair of captured images is captured at the same imaging timing and outputted from the stereo camera 10.
The object detecting unit 22 detects the object Ob based on the images acquired from the right-side camera 11 and the left-side camera 12. For example, the object detecting unit 22 performs known template matching on the right-side image Ri and the left-side image Li, and detects objects Ob in the right-side image Ri and the left-side image Li. For example, in a case in which the object detecting unit 22 detects a pedestrian, the object detecting unit 22 detects the object Ob from the right-side image Ri and the left-side image Li using a predetermined dictionary for pedestrians. When the object detecting unit 22 detects a pedestrian by performing the template matching, a predetermined dictionary for detecting characteristics of the upper body of pedestrians may be used.
In addition, the object detecting unit 22 calculates a three-dimensional position of the object Ob based on the parallax between the right-side image Ri and the left-side image Li. For example, the object detecting unit 22 calculates the parallax between the right-side image Ri and the left-side image Li for each predetermined pixel block, and generates distance information based on the parallax of each pixel block. X-axis, Y-axis, and Z-axis distances of the object Ob are set in the distance information. In the distance information, the Z-axis is a position at which an up-down direction in actual space is a vertical direction.
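The parallax-based distance calculation described above can be sketched as follows. This is an illustrative example only, not part of the original disclosure; it assumes the standard pinhole stereo relation Z = f·B/d, and the focal length, baseline, and disparity values used are hypothetical.

```python
# Illustrative sketch only (not part of the original disclosure): the
# standard pinhole stereo relation Z = f * B / d converts a pixel-block
# disparity into a forward (Z-axis) distance.  The focal length, baseline,
# and disparity values used here are hypothetical example numbers.

def disparity_to_distance(disparity_px, focal_length_px, baseline_m):
    """Forward (Z-axis) distance of one pixel block, in meters."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Example: a 1280 px focal length and a 0.35 m camera baseline.
print(disparity_to_distance(16.0, 1280.0, 0.35))  # 28.0
```

Applying this relation to each pixel block yields the distance information from which the X-axis, Y-axis, and Z-axis positions of the object Ob are set.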
The blind-spot area detecting unit 23 detects a blind spot ahead of the own vehicle CS in the vehicle advancing direction, based on the right-side image Ri and the left-side image Li. In the present embodiment, the blind spot is a position ahead of the own vehicle CS in the vehicle advancing direction at which an object Ob is not visible from a driver or the like of the own vehicle CS. Specifically, the blind spot is an area that is visible through one of the right-side camera 11 (corresponding to the first imaging unit) and the left-side camera 12 (corresponding to the second imaging unit) configuring the stereo camera 10 and is not visible through the other of the right-side camera 11 and the left-side camera 12.
According to the first embodiment, the blind spot includes a blind spot that is formed by a shielding object (predetermined object) SH such as buildings and signs alongside a travel road, or automobiles and the like. The blind-spot area detecting unit 23 detects, as a shielding object SH forming a blind-spot area DA1 that configures the blind spot, buildings and signs alongside a travel road, or automobiles and the like that have stopped alongside the travel road, in the right-side image Ri and the left-side image Li. For example, when a shielding object SH that is a candidate for causing the blind-spot area DA1 is detected from the right-side image Ri and the left-side image Li through use of the known template matching based on predetermined dictionaries for shielding objects forming the blind spot, the blind-spot area detecting unit 23 detects the blind-spot area DA1 based on the position of the shielding object SH. For example, when the shielding object SH that is a candidate for causing the blind-spot area DA1 is detected, the blind-spot area detecting unit 23 sets an area obtained by extending the position occupied by the shielding object SH within the image by a predetermined length in the lateral direction, as the blind-spot area DA1.
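The setting of the blind-spot area DA1 by laterally extending the position occupied by the shielding object SH can be sketched as follows. This is a hedged illustration, not the patented implementation; the bounding-box representation and the margin value are assumptions introduced for the example.

```python
# Hedged illustration (not the patented implementation): the blind-spot
# area DA1 is formed by extending the image region occupied by a detected
# shielding object SH by a predetermined margin in the lateral direction.
# The box representation and the margin value are assumptions.

def blind_spot_area(shield_box, lateral_margin_px=40):
    """shield_box: (x_left, y_top, x_right, y_bottom) in image pixels."""
    x_left, y_top, x_right, y_bottom = shield_box
    return (x_left - lateral_margin_px, y_top,
            x_right + lateral_margin_px, y_bottom)

print(blind_spot_area((200, 100, 320, 300)))  # (160, 100, 360, 300)
```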
The blind-spot determining unit 24 determines whether or not a blind spot is present, and whether or not an object is present in the blind spot. For example, as shown in
Hereafter, an image in which an object Ob is not detected in a blind spot is referred to as a non-visible image. An image in which an object Ob is detected in the periphery of a blind spot is referred to as a visible image. For example, in
The image holding unit 25 holds images of the periphery of a blind spot captured by the stereo camera 10 in time series, when the blind-spot determining unit 24 determines that an object Ob is present in the blind spot.
The difference acquiring unit 26 acquires a difference in a feature quantity in the time-series images held in the image holding unit 25 as an image difference including a first image difference and a second image difference. For example, the difference acquiring unit 26 acquires, as information related to the presence-absence of the object Ob in the right-side image Ri, the first image difference between a previous image and a current image in the right-side images Ri, and acquires, as information related to the presence-absence of the object Ob in the left-side image Li, the second image difference between a previous image and a current image in the left-side images Li.
The approach determining unit 27 determines whether or not the object Ob is approaching the area ahead of the own vehicle CS in the vehicle advancing direction, based on the first image difference and the second image difference acquired by the difference acquiring unit 26. As a result of the object Ob suddenly appearing from a blind spot in the lateral direction, the position of the object Ob changes from a position at which the presence-absence of the object Ob can be detected by only one of the right-side camera 11 and the left-side camera 12 to a position at which the presence-absence of the object Ob can be detected by both the right-side camera 11 and the left-side camera 12. Therefore, whether or not the object Ob is approaching the area ahead of the own vehicle CS in the vehicle advancing direction can be determined based on the detection results regarding the presence-absence of the object Ob in the periphery of the blind spot in the right-side image Ri and the left-side image Li.
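The determination described above can be reduced, as a minimal sketch, to presence-absence flags for the previous and current images of each camera. The function below is an assumption introduced for illustration, not the claimed implementation.

```python
# Minimal sketch (an assumption for illustration, not the claimed
# implementation): each image difference is reduced to whether the object
# Ob is detected in the periphery of the blind spot in the previous and
# current frames of the visible-image camera and the non-visible-image
# camera.

def object_approaching(visible_prev, visible_curr,
                       nonvisible_prev, nonvisible_curr):
    """True when Ob stays detected in the visible image and newly
    appears in the formerly non-visible image."""
    stays_visible = visible_prev and visible_curr
    newly_appears = (not nonvisible_prev) and nonvisible_curr
    return stays_visible and newly_appears

print(object_approaching(True, True, False, True))   # True: Ob emerging
print(object_approaching(True, True, False, False))  # False: still hidden
```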
Next, a determination of the object Ob suddenly appearing (approaching) ahead of the own vehicle CS will be described with reference to the flowchart in
At step S11, the object detection ECU 20 acquires a pair of right-side image Ri and left-side image Li from the stereo camera 10. The imaging timings of the right-side image Ri and the left-side image Li are the same. Therefore, step S11 functions as an image acquiring step.
At step S12, the object detection ECU 20 determines whether or not a state flag indicating that an object Ob is present in a blind-spot area DA1 is recorded. The object detection ECU 20 initially proceeds to step S13 under a presumption that the determination regarding whether or not an object Ob is present in a blind-spot area DA1 has not yet been performed.
At step S13, the object detection ECU 20 determines whether or not a blind spot formed by a shielding object SH is present in the right-side image Ri and the left-side image Li. For example, in
When determined that a blind-spot area DA1 formed by a shielding object SH cannot be detected from the right-side image Ri and the left-side image Li (NO at step S13), the object detection ECU 20 temporarily ends the series of processes in
At step S14, the object detection ECU 20 determines whether or not an object Ob is present in the blind-spot area DA1 formed by the shielding object SH. The object detection ECU 20 detects all objects Ob, including pedestrians, that are subject to the approach determination in the blind-spot area DA1. For example, in
When determined that an object Ob is not detected in the blind-spot area DA1 (NO at step S14), the object detection ECU 20 temporarily ends the series of processes in
At step S15, the object detection ECU 20 determines whether or not the object Ob detected at step S14 is a moving object. A reason for this is that, even when the object Ob is detected at step S14, should the object Ob be a stationary object that does not move, the likelihood of the object Ob suddenly appearing ahead of the own vehicle CS in the vehicle advancing direction is low. For example, stationary objects include utility poles and traffic cones. When determined that the detected object Ob is not a moving object (NO at step S15), the object detection ECU 20 temporarily ends the series of processes in
Meanwhile, when determined that the object Ob detected at step S14 is a moving object (YES at step S15), the object detection ECU 20 proceeds to step S16. For example, when the pedestrian is detected as a moving object, the object detection ECU 20 determines that the object Ob detected at step S14 is a moving object. Therefore, step S15 functions as a moving-object determining unit and a type determining unit.
At step S16, the object detection ECU 20 stores the state flag indicating that an object Ob is present in a blind spot area DA1 formed by a shielding object SH.
At step S17, the object detection ECU 20 holds the right-side image Ri and the left-side image Li respectively captured by the right-side camera 11 and the left-side camera 12, as images of a peripheral area including a blind spot area DA1 formed by a shielding object SH. Therefore, peripheral images including the blind-spot area DA1 formed by the shielding object SH are held in time series for the right-side images Ri and the left-side images Li. The holding of the images at step S17 is continued while the state flag is being recorded. Step S17 functions as an image holding step. The object detection ECU 20 then temporarily ends the series of processes shown in
Next, when determined that the state flag indicating that an object Ob is present in a blind-spot area DA1 formed by a shielding object SH is recorded at step S12 in the next series of processes (YES at step S12), the object detection ECU 20 proceeds to step S18.
At step S18, the object detection ECU 20 acquires a first image difference of the right-side images Ri of which holding has been started at step S17 and acquires a second image difference of the left-side images Li of which holding has been started at step S17. The first image difference is information indicating the difference between the previous image and the current image in the right-side images Ri. The second image difference is information indicating the difference between the previous image and the current image in the left-side images Li. Here, the first image difference and the second image difference indicate whether or not the object Ob is present in the periphery of the blind-spot area DA1. Step S18 functions as a difference acquiring step.
At step S19, the object detection ECU 20 determines whether or not the object Ob continues to be present in the image (visible image) in which the object Ob has been determined to be present at step S14 based on the acquisition result at step S18. When determined that the object Ob is not continuously present (NO at step S19), the object detection ECU 20 determines that the object Ob has moved to a position that cannot be imaged by the right-side camera 11 and the left-side camera 12. At step S22, the object detection ECU 20 deletes the state flag. The object detection ECU 20 then temporarily ends the process shown in
When determined that the object Ob is continuously present in the visible image (YES at step S19), at step S20, the object detection ECU 20 determines whether or not the object Ob is detected in the periphery of the blind-spot area DA1 formed by the shielding object SH in the image (non-visible image) in which the object Ob has not been detected in the blind spot at step S14.
After the object Ob is detected in the visible image (the left-side image Li in
Therefore, when determined that the object Ob is not detected in the periphery of the blind-spot area DA1 formed by the shielding object SH in the non-visible image in which the object Ob has not been detected at step S14 (NO at step S20), the object detection ECU 20 determines that the object Ob is not approaching the area ahead of the own vehicle CS in the vehicle advancing direction and temporarily ends the series of processes shown in
Meanwhile, when determined that the object Ob is detected in the periphery of the blind-spot area DA1 formed by the shielding object SH in the image that had been the non-visible image (YES at step S20), at step S21, the object detection ECU 20 determines that the object Ob is an object that is approaching the area ahead of the own vehicle CS in the vehicle advancing direction. Steps S19 to S21 function as an approach determining step. Upon completing the process at step S21, the object detection ECU 20 temporarily ends the series of processes shown in
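The control flow of steps S11 to S22 can be summarized in the following hedged sketch. The class and callback names are hypothetical; only the branching follows the flowchart described above, and the time-series image holding of step S17 is abstracted into the detector callbacks.

```python
# Hedged sketch of the control flow of steps S11 to S22.  The class and
# callback names are hypothetical; only the branching mirrors the
# flowchart described in the text.

class SuddenAppearanceDetector:
    def __init__(self, detect_in_blind_spot, detect_near_blind_spot):
        # Hypothetical callbacks: each returns True when an object Ob is
        # detected in, or in the periphery of, the blind-spot area DA1
        # of the given image.
        self.detect_in_blind_spot = detect_in_blind_spot
        self.detect_near_blind_spot = detect_near_blind_spot
        self.state_flag = False   # stored at S16, deleted at S22
        self.visible_side = None  # which camera currently sees Ob

    def step(self, right_image, left_image, blind_spot_found, is_moving):
        """One processing cycle; returns True when S21 is reached."""
        if not self.state_flag:
            if not blind_spot_found:                     # S13: NO
                return False
            in_right = self.detect_in_blind_spot(right_image)
            in_left = self.detect_in_blind_spot(left_image)
            if in_right == in_left:                      # S14: NO
                return False                             # none, or both, see Ob
            if not is_moving:                            # S15: NO
                return False
            self.state_flag = True                       # S16
            self.visible_side = 'right' if in_right else 'left'
            return False                                 # S17: start holding
        # State flag recorded (YES at S12): evaluate image differences.
        visible = right_image if self.visible_side == 'right' else left_image
        non_visible = left_image if self.visible_side == 'right' else right_image
        if not self.detect_in_blind_spot(visible):       # S19: NO
            self.state_flag = False                      # S22: delete flag
            return False
        return self.detect_near_blind_spot(non_visible)  # S20 -> S21
```

For example, an object first detected only in the left-side image Li sets the state flag; when the object then also appears in the right-side image Ri near the blind spot, `step` returns True, corresponding to the approach determination at step S21.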
Next, operation of the approach determination performed by the object detection ECU 20 will be described with reference to
As shown in
Subsequently, the object Ob (i.e., pedestrian) moves in the direction approaching the area ahead of the own vehicle CS in the vehicle advancing direction, in the lateral direction (X-axis direction). As a result, at time t12 shown in
Meanwhile, in
As described above, in the object detection ECU 20 according to the first embodiment, when an object Ob is determined to be present in a blind spot based on the difference in visibility of the object Ob between the right-side camera 11 and the left-side camera 12, the images of the peripheral area including the blind spot captured by the right-side camera 11 and the left-side camera 12 are held in time series. The difference in the feature quantities of the images in the time series is acquired as the image difference. Then, based on the image difference, whether or not the object Ob is approaching the area ahead of the own vehicle CS in the vehicle advancing direction is determined.
Therefore, the approach of the object Ob can be accurately determined, taking into consideration that the manner of change in visibility of the object Ob in the captured images of the right-side camera 11 and the left-side camera 12 changes based on the movement direction of the object Ob. In addition, as a result of determination of whether or not the object Ob is approaching the own vehicle CS based on the differences in feature quantities of the object Ob present in the periphery of the blind spot, the time required for the approach determination can be shortened. The determination timing can be made earlier.
The object detection ECU 20 determines that the object Ob is approaching the area ahead of the own vehicle CS in the vehicle advancing direction when, in the visible image, a state in which the object Ob is visible is recognized as being maintained based on the first image difference and the second image difference and, in the non-visible image, a state in which the object Ob is not visible is recognized as having changed to a state in which the object Ob is visible based on the first image difference and the second image difference. The visible image is an image (i.e., one of the first image and the second image) in which the object Ob is visible, among the captured images of the right-side camera 11 and the left-side camera 12. The non-visible image is an image (i.e., the other of the first image and the second image) in which the object Ob is not visible. As a result of the above-described configuration, the movement of the object Ob can be determined based on the differences in visibility of the object Ob in the right-side images Ri and the left-side images Li. Therefore, determination accuracy of the approach determination regarding the object Ob can be improved.
When determined that an image captured by one of the right-side camera 11 and the left-side camera 12 is a visible image and the other image is a non-visible image, the object detection ECU 20 determines whether or not the object Ob present in a blind spot in the visible image is a moving object that is moving. Then, when determined that the object Ob present in the blind spot in the visible image is a moving object, the object detection ECU 20 determines whether or not the object Ob is approaching the area ahead of the own vehicle CS in the vehicle advancing direction. Even when the object Ob is a stationary object that does not move, the position of the object Ob within the angle of view changes as a result of the own vehicle CS traveling, and the position of the object Ob in the lateral direction within the image changes. Therefore, whether or not the object Ob is approaching the own vehicle CS is determined under a condition that the object Ob is a moving object that is moving. As a result of the above-described configuration, a stationary object can be eliminated from the objects subject to the approach determination. Therefore, determination accuracy of the approach determination can be improved.
The object detection ECU 20 determines at least a pedestrian as the type of object Ob present in the blind spot. Under a condition that the object Ob is a pedestrian, the object detection ECU 20 determines whether or not the object Ob is present in the blind spot. The movement speed of a pedestrian is slower than that of an automobile or the like. Therefore, a sudden appearance of the pedestrian may not be appropriately determined based on the movement speed. Consequently, the object Ob is determined as a candidate object under a condition that the object Ob is a pedestrian. As a result of the above-described configuration, the approach determination can be appropriately performed even for a pedestrian having a slow movement speed.
According to a second embodiment, instead of detecting a blind-spot area DA1 formed by a shielding object SH within an image and determining whether or not an object Ob is present within the blind-spot area DA1 formed by the shielding object SH, the object detection ECU 20 determines whether or not an object Ob is present in a blind spot based on visibility of an object Ob in a predetermined area within the image. In the second embodiment, the blind spot is an area that is visible through one of the right-side camera 11 (first imaging unit) and the left-side camera 12 (second imaging unit) configuring the stereo camera 10 and is not visible through the other of the right-side camera 11 and the left-side camera 12.
When an object Ob is detected only in a predetermined section in either of the right-side image Ri and the left-side image Li as a result of the difference in imaging direction between the right-side camera 11 and the left-side camera 12, a determination can be made that an object Ob is present in a blind spot that is present on either of the right and left sides of the area ahead of the own vehicle CS in the vehicle advancing direction.
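The second-embodiment criterion, namely detection of the object Ob only in a predetermined section of one of the two images, can be sketched as follows. This is an assumption introduced for illustration; the edge-section width and the helper names are hypothetical.

```python
# Sketch of the second-embodiment criterion (hypothetical helper names
# and edge-section width): an object detected only in a predetermined
# edge section of one image, and not detected at all in the other image,
# is treated as being present in a blind spot.

def in_blind_spot(det_right_x, det_left_x, img_width, edge_frac=0.15):
    """det_*_x: x-centre of the detection in each image, or None if absent."""
    edge = img_width * edge_frac

    def at_edge(x):
        return x is not None and (x < edge or x > img_width - edge)

    only_right = at_edge(det_right_x) and det_left_x is None
    only_left = at_edge(det_left_x) and det_right_x is None
    return only_right or only_left

print(in_blind_spot(20, None, 640))   # True: only in right image, at edge
print(in_blind_spot(320, None, 640))  # False: detection not in edge section
```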
For example, in
In a similar manner, as shown in
Therefore, at step S13 in
For example, in
As described above, according to the second embodiment, whether or not an object Ob is present in a blind spot that is present ahead of the own vehicle CS can be detected, even when a shielding object SH configuring the blind spot is not present in the right-side image Ri and the left-side image Li.
The first imaging unit and the second imaging unit may be configured by camera apparatuses having differing angles of view. In
In the camera apparatuses of the configuration shown in
The blind-spot area detecting unit 23 may use parallax matching information for generating a parallax image based on the right-side image Ri and the left-side image Li. When the parallax image cannot be acquired, the blind-spot area detecting unit 23 determines that there is a difference between the right-side image Ri and the left-side image Li, and that a blind-spot area is present.
In
The area of the object Ob may be used as the difference between the previous image and the current image in the right-side images Ri and the difference between the previous image and the current image in the left-side images Li. As a result of the object Ob moving from the blind-spot area DA1 to a position that can be imaged by both the right-side camera 11 and the left-side camera 12, the area (number of pixels) detected as the object Ob increases in the periphery of the blind-spot area DA1. Therefore, at steps S19 and S20 in
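The use of the detected area (number of pixels) as the image difference can be sketched as follows; the binary-mask representation and the growth threshold are hypothetical assumptions.

```python
# Sketch of the area-based image difference (assumptions: binary masks
# as nested lists of 0/1, and a hypothetical growth threshold).  A growing
# detected area in the periphery of the blind-spot area DA1 suggests the
# object Ob is emerging toward a position imaged by both cameras.

def area_increasing(prev_mask, curr_mask, min_growth_px=20):
    prev_area = sum(row.count(1) for row in prev_mask)
    curr_area = sum(row.count(1) for row in curr_mask)
    return (curr_area - prev_area) >= min_growth_px

empty = [[0] * 10 for _ in range(3)]   # no pixels detected as Ob
full = [[1] * 10 for _ in range(3)]    # 30 pixels detected as Ob
print(area_increasing(empty, full))    # True
```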
At step S13 in
The object subject to the approach determination may be a bicycle instead of a pedestrian. In this case, at step S15 in
At step S15 in