The present invention relates to a three-dimensional object emergence detecting device for detecting the emergence of a three-dimensional object in the vicinity of a vehicle based on an image from an in-vehicle camera.
A driving support device, in which an in-vehicle camera is mounted facing rearward on a rear trunk part or the like of a vehicle and the image of the area behind the vehicle obtained from this in-vehicle camera is shown to the driver, is beginning to become popular. As such an in-vehicle camera, a wide-angle camera capable of imaging a wide range is normally used, and the device is configured to display the wide-range image on a small monitor screen.
However, a wide-angle camera has large lens distortion, so that straight lines are imaged as curved lines. Accordingly, the image displayed on the monitor screen is hard to view. Therefore, conventionally, as described in Patent Document 1, the lens distortion is removed from the image taken by the wide-angle camera, the image is converted into one in which straight lines appear as straight lines, and the converted image is displayed on the monitor screen.
It is a burden for a driver to constantly watch such a camera capturing the surroundings of the vehicle and to confirm safety visually. Thus, techniques have conventionally been disclosed for detecting, by means of image processing, a three-dimensional object such as a person in danger of colliding with the vehicle based on the images from the camera (for example, see Patent Document 1).
Additionally, techniques have conventionally been disclosed in which, while a vehicle travels at low speed, images photographed at two times are subjected to viewpoint conversion into bird's-eye views, and, based on the motion parallax at the time of this bird's-eye view conversion, the images are separated into a ground-surface area and a three-dimensional-object area, thereby detecting a three-dimensional object (for example, see Patent Document 2).
Further, techniques have been disclosed for detecting a three-dimensional object around a vehicle based on stereoscopic viewing with two cameras mounted side by side (for example, see Patent Document 3). Additionally, techniques have been disclosed in which an image taken when a vehicle is stopped and the ignition is turned off is compared with an image taken when the ignition is turned on in order to start the vehicle, thereby detecting changes around the vehicle during the time from when the vehicle is stopped to when the vehicle is started, and alarming the driver (for example, see Patent Document 4).
However, the technique of Patent Document 2 has a first problem that, because it relies on motion parallax, it cannot be used while the vehicle is stopped. Additionally, in the case where a three-dimensional object is present in the close vicinity of the vehicle, there is a possibility that an alarm would not be issued in time between when the vehicle starts to move and when the vehicle collides with the three-dimensional object. The technique of Patent Document 3 requires two cameras facing the same direction for stereoscopic viewing, resulting in high costs.
The technique of Patent Document 4 is applicable with a single camera per angle of view while the vehicle is stopped. However, because it compares the two images taken when the ignition is turned off and when the ignition is turned on based on intensity in local units such as pixels or edges, it cannot discriminate the case where a three-dimensional object has emerged around the vehicle from the case where a three-dimensional object has left the surroundings of the vehicle between the time the ignition is turned off and the time it is turned on. Additionally, in an outdoor environment, image fluctuations other than the emergence of a three-dimensional object, such as sway of sunshine or movement of a shadow, occur locally and frequently, and thus there is a possibility that many false alarms would be output.
The present invention has been made in view of the foregoing, and has an object to provide a three-dimensional object emergence detecting device capable of detecting the emergence of a three-dimensional object rapidly and correctly at low cost.
A three-dimensional object emergence detecting device of the present invention for solving the above-mentioned problems has features in that, in a three-dimensional object emergence detecting device for detecting the emergence of a three-dimensional object in the vicinity of a vehicle based on a bird's-eye view image taken by a camera mounted in the vehicle, orthogonal-direction characteristic components, each of which is on the bird's-eye view image and has a direction nearly orthogonal to a view direction of the camera, are extracted from the bird's-eye view image, and based on the extracted orthogonal-direction characteristic components, the emergence of the three-dimensional object is detected.
According to the present invention, orthogonal-direction characteristic components, each of which is on a bird's-eye view image and has a direction nearly orthogonal to a view direction of an in-vehicle camera, are extracted from the bird's-eye view image, and based on the extracted orthogonal-direction characteristic components, the emergence of a three-dimensional object is detected, thereby making it possible to prevent contingent changes in the image, such as sway of sunshine or movement of a shadow, from being erroneously detected as the emergence of the three-dimensional object.
The present description incorporates the contents described in the description and/or drawings of JP Patent Application No. 2008-312642 on which the priority of the present application is based.
1 . . . Bird's-eye view image obtaining means, 2 . . . Directional characteristic component extracting means, 3 . . . Vehicle signal obtaining means, 4 . . . Operation controlling means, 5 . . . Memory means, 6 . . . Three-dimensional object detecting means, 7 . . . Camera geometric record, 8 . . . Alarm means, 10 . . . Image detecting means, 12 . . . Sensor, 20 . . . Vehicle, 21 . . . Camera, 22 . . . Three-dimensional object, 30 . . . Bird's-eye view image, 31 . . . Viewpoint, 32 . . . Form, 33 . . . View direction, 40 . . . Coordinate grid, 46, 47 . . . Orthogonal-direction characteristic components, 50 . . . Interval, 51 . . . Start point, 52 . . . End point
Hereafter, specific embodiments according to the present invention will be described with reference to the drawings. It is to be noted that the present embodiments will be described citing an automobile as one example of a vehicle; however, the "vehicle" according to the invention is by no means limited to an automobile, and includes all types of movable bodies that travel on the ground surface.
As shown in
The bird's-eye view image obtaining means 1 obtains an image from a camera 21 attached to a vehicle 20 at a predetermined period. The bird's-eye view image obtaining means 1 corrects the lens distortion, and thereafter creates a bird's-eye view image 30 in which the image of the camera 21 is projected onto the ground surface by bird's-eye view conversion. It is to be noted that the data required by the bird's-eye view image obtaining means 1 for the correction of the lens distortion and for the bird's-eye view conversion are prepared in advance and held in the calculator.
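By way of illustration, a minimal sketch of this lens-distortion correction followed by bird's-eye view conversion is given below. It assumes the Python/OpenCV stack and placeholder calibration values (K, dist, H_ground); these names and values are assumptions standing in for the data actually held in the calculator, not values from the present description.

```python
import cv2
import numpy as np

# Placeholder calibration data (assumptions); in the device these correspond to the
# lens-distortion data and bird's-eye view conversion data held in the calculator.
K = np.array([[400.0, 0.0, 320.0],
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])                 # camera matrix
dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])   # lens distortion coefficients
H_ground = np.eye(3)                            # homography from image plane to ground plane

def make_birds_eye_view(frame, out_size=(480, 480)):
    """Correct the lens distortion of the camera image, then project it onto the
    ground surface to obtain the bird's-eye view image 30."""
    undistorted = cv2.undistort(frame, K, dist)
    return cv2.warpPerspective(undistorted, H_ground, out_size)
```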
a) is one example of a situation where the camera 21 attached to the rear of the vehicle 20 has captured, in the space, a three-dimensional object 22 within the angle of view 29 of the camera 21. The three-dimensional object 22 is an upstanding human. The camera 21 is attached at about the height of the human's waist. The angle of view 29 of the camera 21 has captured a leg 22a, a body 22b, and a lower part of an arm 22c of the three-dimensional object 22.
In
For example, in
It is to be noted that when a height of the camera 21 is higher than a position shown in
However, as in
Additionally, when the height of the camera 21 is lower than the position shown in
In the case where the three-dimensional object 22 is a human, the human is not necessarily upstanding, and an upstanding posture may be somewhat deformed due to bending of joints of the arms 22c and the leg 22a. However, in a range where a whole silhouette of the human is vertically long, as in
Even in the case where the human of the three-dimensional object 22 crouches down, the silhouette is vertically long as a whole, so that as in
In each of
The directional characteristic component extracting means 2 obtains the horizontal gradient strength H and the vertical gradient strength V that each pixel of the bird's-eye view image 30 has, and obtains a light-dark gradient directional angle θ defined by the horizontal gradient strength H and the vertical gradient strength V.
The horizontal gradient strength H is obtained by a convolution operation through use of the brightness of the pixels located in the neighborhood of a target pixel and the coefficients of a horizontal Sobel filter Fh shown in
The light-dark gradient directional angle θ defined by the horizontal gradient strength H and the vertical gradient strength V is obtained through use of the following formula.
[Formula 1]
θ = tan⁻¹(V/H)  (1)
In the above-described Formula (1), the light-dark gradient directional angle θ represents the direction in which the contrast of the brightness changes within a local range of three pixels by three pixels.
The directional characteristic component extracting means 2 calculates the light-dark gradient directional angle θ for all of the pixels on the bird's-eye view image 30 through use of the above-described Formula 1, and outputs the angle θ as the directional characteristic components of the bird's-eye view image 30.
b) is one example of the calculation of the light-dark gradient directional angle θ through use of the above-described Formula 1. Numerical symbol 90 denotes an image in which the brightness of a pixel area 90a on the upper side is 0, whereas the brightness of a pixel area 90b on the lower side is 255, and the boundary between the upper side and the lower side slants obliquely to the right. Numerical symbol 91 denotes an image that shows, in enlargement, an image block of three pixels by three pixels near the boundary between the upper side and the lower side of the image 90.
The brightness of the respective pixels at the upper-left 91a, upper 91b, upper-right 91c, and left 91d of the image block 91 is 0. The brightness of the respective pixels at the right 91f, center 91e, lower-left 91g, lower 91h, and lower-right 91i is 255. At this time, the gradient strength H, which is the value of the convolution operation at the central pixel 91e through use of the coefficients of the horizontal Sobel filter Fh shown in
The gradient strength V, which is the value of the convolution operation at the central pixel 91e through use of the coefficients of the vertical Sobel filter Fv, is 1020, which is calculated by the following formula: −1×0−2×0−1×0+0×0+0×255+0×255+1×255+2×255+1×255.
At this time, the light-dark gradient directional angle θ obtained through use of the above-mentioned Formula (1) is approximately 76 degrees, and indicates an approximately lower-right direction, in the same manner as the boundary between the upper and lower areas of the image 90. It is to be noted that the coefficients used by the directional characteristic component extracting means 2 for obtaining the gradient strengths H and V, and the size of the convolution, are by no means limited to the ones shown in
Additionally, the directional characteristic component extracting means 2 may use a method other than the one using the light-dark gradient directional angle θ defined by the horizontal gradient strength H and the vertical gradient strength V, as long as such a method is capable of extracting the direction of the contrast of the brightness (light-dark gradient direction) within the local range. For example, the high-level local auto-correlation of Non-Patent Document 1 or the edge orientation histograms of Non-Patent Document 2 can be used by the directional characteristic component extracting means 2 for the extraction of the light-dark gradient directional angle θ.
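As a concrete illustration of the basic Sobel-based computation described above, a minimal sketch follows; the 3×3 kernel coefficients, the use of arctan2 (which also handles H = 0), and the OpenCV filtering call are assumptions, not the exact implementation of the directional characteristic component extracting means 2.

```python
import cv2
import numpy as np

FH = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)   # horizontal Sobel filter Fh
FV = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=np.float64)   # vertical Sobel filter Fv

def gradient_direction(gray):
    """Per-pixel light-dark gradient directional angle theta of Formula (1), in degrees."""
    h = cv2.filter2D(gray.astype(np.float64), -1, FH)   # horizontal gradient strength H
    v = cv2.filter2D(gray.astype(np.float64), -1, FV)   # vertical gradient strength V
    return np.degrees(np.arctan2(v, h))                 # arctan2 avoids division by zero when H = 0
```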
The vehicle signal obtaining means 3 obtains, from a control device of the vehicle 20 and the calculator in the vehicle 20, vehicle signals such as the ON or OFF state of the ignition switch, the state of the engine key such as the accessory power source being ON, a signal of the gear state such as forward movement, backward movement, or parking, an operational signal of the car navigation system, and time information.
As illustrated in, for example,
One example of the interval 50 is a brief stop of the vehicle in order for the driver to load baggage into the vehicle 20 or to carry baggage out of the vehicle 20. In order to determine this brief stop of the vehicle, the signal when the ignition switch has been turned from ON to OFF is taken as the start point 51, and the signal when the ignition switch has been turned from OFF to ON is taken as the end point 52.
In addition, another example of the interval 50 is a situation where the driver operates a car navigation device while the vehicle is stopped in order to search for a destination, and starts the vehicle again after setting the route. In order to determine the stop and the start of the vehicle for such operation of the car navigation device, the signal of the vehicle speed or the brake and the signal of the start of the operation of the car navigation device are taken as the start point 51, and the signal of the termination of the operation of the car navigation device and the signal of the brake are taken as the end point 52.
Here, regarding the operation controlling means 4, in the case where the image quality of the camera 21 of the vehicle 20 is unstable immediately after the end point 52, such as a situation where the power supply from the vehicle 20 to the camera 21 is cut off at the timing of the start point 51 and resumed at the timing of the end point 52, the timing at which a predetermined delay time has elapsed from the timing when the end of the interval 50 shown in
When determining the timing of the start point 51, the operation controlling means 4 transmits, at that point, to the memory means 5 the directional characteristic components output from the directional characteristic component extracting means 2. Additionally, when determining the timing of the end point 52, the operation controlling means 4 outputs a signal of determination of detection to the three-dimensional object detecting means 6.
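A minimal sketch of this start-point/end-point determination is given below; the polling interface, the ignition-only trigger, and the stabilization delay value are assumptions, simplified from the variety of vehicle signals described above.

```python
import time

class OperationController:
    """Detects the start point 51 (ignition ON -> OFF) and the end point 52
    (ignition OFF -> ON), delaying the end point while the camera image stabilizes."""

    def __init__(self, end_point_delay_s=1.0):
        self.prev_ignition_on = None
        self.end_point_delay_s = end_point_delay_s
        self.pending_end_since = None

    def update(self, ignition_on, now=None):
        now = time.monotonic() if now is None else now
        event = None
        if self.prev_ignition_on is True and not ignition_on:
            event = "start_point"               # store directional characteristic components
        elif self.prev_ignition_on is False and ignition_on:
            self.pending_end_since = now        # wait for stable images after power-up
        if (self.pending_end_since is not None
                and now - self.pending_end_since >= self.end_point_delay_s):
            event = "end_point"                 # signal the three-dimensional object detecting means 6
            self.pending_end_since = None
        self.prev_ignition_on = ignition_on
        return event
```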
The memory means 5 holds the stored information so that such information is not erased during the interval 50 shown in
In
The detection areas of the bird's-eye view image 30 are provided by combining all of the intervals of the distance ρ of the coordinate grid 40 for each angle φ of the polar coordinates of the coordinate grid 40. For example, on
For the viewpoint 31 of the camera 21 on the bird's-eye view image 30 and a lattice of the polar coordinates of
a) and
c) shows a histogram 41a of the light-dark gradient directional angle θ obtained by the directional characteristic component extracting means 2 from the bird's-eye view image 30a.
[Formula 2]
θbin = Int(θ/θTICS)  (2)
In the above-mentioned Formula 2, θTICS represents the pitch of the discretization of the angle, and Int( ) represents a function that rounds down the numerals after the decimal point to make the remaining numerals an integer. θTICS may be determined in advance in accordance with the extent to which the contour of the three-dimensional object 22 deviates from the view direction 33, or in accordance with disarray of the image quality. For example, in the case where the three-dimensional object 22 targeted is a walking human, or in the case where the disarray of the image quality is large, θTICS may be made large so as to tolerate fluctuations in the contour of the three-dimensional object 22 due to the walking of the human, or variations, caused by the disarray of the image, in the light-dark gradient directional angle θ calculated for the respective pixels by the directional characteristic component extracting means 2. It is to be noted that in the case where the disarray of the image is small and the fluctuations in the contour of the three-dimensional object 22 are also small, θTICS may be made small.
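The discretization of Formula 2 and the per-area histogram 41 can be sketched as follows; the value of θTICS and the representation of a detection area as a boolean pixel mask are assumptions.

```python
import numpy as np

THETA_TICS = 10.0  # discretization pitch of the angle, in degrees (assumed value)

def direction_histogram(theta_deg, area_mask):
    """Histogram of discretized light-dark gradient directional angles (Formula 2)
    over the pixels belonging to one detection area of the coordinate grid 40."""
    theta = np.mod(theta_deg[area_mask], 360.0)       # map angles into [0, 360)
    theta_bin = (theta / THETA_TICS).astype(int)      # theta_bin = Int(theta / theta_TICS)
    n_bins = int(round(360.0 / THETA_TICS))
    return np.bincount(theta_bin, minlength=n_bins)
```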
In
The road surface 35 in the detection area 34 of the bird's-eye view image 30a is gravel, and the pattern of the gravel locally faces random directions. Accordingly, the light-dark gradient directional angle θ calculated by the directional characteristic component extracting means 2 is not biased. Additionally, the shadow 38a in the detection area 34 of the bird's-eye view image 30a has a light-dark contrast at the boundary part between the shadow 38a and the road surface 35. However, the segment length of the boundary part between the shadow 38a and the road surface 35 in the detection area [I] 34 is short compared with the case of a three-dimensional object 22 such as a human, and the influence of the aforementioned contrast is small. Thus, in the histogram 41a of the light-dark gradient directional angle θ obtained from the bird's-eye view image 30a, the directional characteristic components are not strongly biased as shown in
Meanwhile, in the bird's-eye view image 30b, the boundary part between the three-dimensional object 22 and the road surface 35 is included in the detection area [I] 34 along the distance ρ direction of the polar coordinates, and there is a strong contrast in a direction intersecting with the view direction 33. Thus, in the histogram 41b of the light-dark gradient directional angle θ obtained from the bird's-eye view image 30b, the orthogonal-direction characteristic component 46 or the orthogonal-direction characteristic component 47 has a large frequency (amount).
It is to be noted that in
In Step S2 of
In the processing of Step S2 and Step S3, among the directional characteristic components of the histograms illustrated in
For example, given that the angle of the view direction 33 is η and that an acceptable error from the view direction 33 of the contour of the form 32, allowing for the walking of the human or the disarray of the image, is ε, the orthogonal-direction characteristic component 46 can be calculated as the number of the pixels in the detection area [I] 34 having the angle θ in the range of (η−90±ε), whereas the orthogonal-direction characteristic component 47 can be calculated as the number of the pixels in the detection area [I] 34 having the angle θ in the range of (η+90±ε).
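A minimal sketch of extracting the orthogonal-direction characteristic components 46 and 47 for one detection area follows; the wrapped angular-distance test and the default tolerance ε are assumptions.

```python
import numpy as np

def orthogonal_components(theta_deg, area_mask, eta_deg, eps_deg=15.0):
    """Counts of pixels whose light-dark gradient direction is nearly orthogonal to the
    view direction eta: component 46 around (eta - 90), component 47 around (eta + 90)."""
    theta = theta_deg[area_mask]

    def count_near(target_deg):
        diff = np.abs((theta - target_deg + 180.0) % 360.0 - 180.0)   # wrapped angle difference
        return int(np.count_nonzero(diff <= eps_deg))

    s_plus = count_near(eta_deg - 90.0)    # orthogonal-direction characteristic component 46
    s_minus = count_near(eta_deg + 90.0)   # orthogonal-direction characteristic component 47
    return s_plus, s_minus
```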
In Step S4 of
[Formula 3]
ΔS+=Sb+−Sa+ (3)
[Formula 4]
ΔS−=Sb−−Sa− (4)
[Formula 5]
ΔS±=ΔS++ΔS− (5)
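A minimal sketch of Formulas (3) to (5) follows, using the per-area counts at the start point 51 (Sa±) and at the end point 52 (Sb±) from the sketch above; the way the three increments are combined against the thresholds, and the threshold values themselves, are assumptions, since only the general threshold comparison of Step S5 is described.

```python
def orthogonal_increments(sa_plus, sa_minus, sb_plus, sb_minus):
    """Increments of the orthogonal-direction characteristic components between the
    bird's-eye view images at the start point 51 and at the end point 52."""
    d_plus = sb_plus - sa_plus        # Formula (3)
    d_minus = sb_minus - sa_minus     # Formula (4)
    d_total = d_plus + d_minus        # Formula (5)
    return d_plus, d_minus, d_total

def emergence_in_area(d_plus, d_minus, d_total, thresholds=(30, 30, 50)):
    """Step S5: emergence of a three-dimensional object is determined when the
    increments reach predetermined threshold values (assumed combination and values)."""
    return d_plus >= thresholds[0] or d_minus >= thresholds[1] or d_total >= thresholds[2]
```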
In Step S5 of
Meanwhile, when the increments of the orthogonal-direction characteristic components 46 and 47 calculated in Step S4 are less than the predetermined threshold values, it is determined that during the interval 50 shown in
For example, in the case where the bird's-eye view image 30a shown in
In contrast, in the case where the bird's-eye view image 30b shown in
In the case where the three-dimensional object 22 has not emerged during the interval 50 shown in
Additionally, in the case where the three-dimensional object 22 has not emerged during the interval 50 shown in
Meanwhile, in the case where although there is the emergence of the three-dimensional object 22 during the interval 50 shown in
Step S9 of
In Step S9, first, the detection areas are integrated in the distance ρ direction of the identical direction φ on the polar coordinates. For example, as shown in
Next, in Step S9, among the detection areas integrated in the distance ρ direction on the polar coordinates, ones whose directions φ on the polar coordinates are close are integrated into one detection area. For example, as shown in
a) and
The angle Ω 90 is uniquely determined from the width W 92 at the foot and the distance R 91. Given that the widths W 92 are the same, when the three-dimensional object 22 is close to the viewpoint 31 of the camera 21 as shown in
The three-dimensional object emergence detecting device of the present invention targets, for the detection, the three-dimensional object 22 having the width and the height close to those of a human among the three-dimensional objects. Thus, it is possible to preliminarily estimate a range of the width at the foot in the space of the three-dimensional object 22. Therefore, it is possible to preliminarily estimate the range of the width W 92 at the foot of the three-dimensional object 22 on the bird's-eye view image 30 from the range of the width at the foot of the three-dimensional object 22 in the space and calibration data of the camera geometric record 7.
From this preliminarily estimated range of the width W 92 at the foot, it is possible to calculate the range of the apparent angle Ω 90 at the foot with respect to the distance R 91 to the foot. The range of the angle φ for integrating the detection areas in Step S9 is determined through use of the distance from the detection area on the bird's-eye view image 30 to the viewpoint 31 of the camera 21, and a relationship between the above-mentioned distance R 91 to the foot and the apparent angle Ω 90 at the foot.
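As one concrete reading of this relationship (the geometry Ω ≈ 2·arctan(W/2R), the assumed human foot-width bound, and the simple sweep grouping are all assumptions), the angular range used for integrating detection areas in Step S9 can be sketched as:

```python
import math

def apparent_foot_angle(width_w, distance_r):
    """Apparent angle (degrees) subtended at the viewpoint 31 by a foot of width W
    located at distance R on the bird's-eye view image."""
    return math.degrees(2.0 * math.atan2(width_w / 2.0, distance_r))

def integrate_directions(detected_phis_deg, distance_r, width_w_max=0.7):
    """Group polar-coordinate directions phi (degrees) of detection areas that lie
    within the apparent foot angle of one another (width_w_max in the same units as R)."""
    omega = apparent_foot_angle(width_w_max, distance_r)
    groups = []
    for phi in sorted(detected_phis_deg):
        if groups and phi - groups[-1][-1] <= omega:
            groups[-1].append(phi)
        else:
            groups.append([phi])
    return groups
```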
The method for integrating the detection areas of Step S9 mentioned above is merely one example. Any method, which integrates the detection areas in the range depending on the apparent size of the three-dimensional object 22 on the bird's-eye view image 30, is applicable to the method for integrating the detection areas of Step S9. For example, any method, which calculates the distance between the detection areas where it is determined that there is the emergence of the three-dimensional object 22 on the coordinate partitioning 40 and forms a group of the adjacent detection areas or that of the detection areas whose distances are close in the range of the apparent size of the three-dimensional object 22 on the bird's-eye view image 30, is applicable to the method for integrating the detection areas of Step S9.
It is to be noted that in the descriptions of Step S5, Step S6, and Step S7, the explanations have been made that even in the case where the three-dimensional object 22 has emerged during the interval 50 shown in
Additionally, regarding the coordinate partitioning 40 of the loop processing from Step S1 to Step S8, grid partitioning of the polar coordinates shown in
Moreover, the partitioning intervals of the distance ρ and the angle φ of the coordinate partitioning 40 are arbitrary. The smaller the partitioning intervals of the coordinate partitioning 40 become, the greater, in Step S4, the advantage that the emergence of a small three-dimensional object 22 can be detected based on the local increments of the orthogonal-direction characteristic components 46 and 47 on the bird's-eye view image 30. Meanwhile, there is the disadvantage that the number of the detection areas for which the integration is determined in Step S9 increases, and thus the calculation amount increases. It is to be noted that when the partitioning intervals of the coordinate partitioning 40 are made smallest, the initial detection area of the coordinate partitioning 40 becomes one pixel on the bird's-eye view image.
In Step S10 of
In
It is to be noted that the three-dimensional object detecting means 6 adopts a method for detecting the three-dimensional object 22 from the two bird's-eye view images 30 of the start point 51 and the end point 52 on the basis of the increments of the orthogonal-direction characteristic components 46 and 47. Accordingly, the three-dimensional object detecting means 6 can correctly extract the silhouette of the three-dimensional object 22 as long as a disturbance, such as the shadow of the three-dimensional object 22 or the shadow of the own vehicle 20, does not incidentally overlap with the view direction 33 of the camera. Therefore, the broken line 70 is drawn along the silhouette of the three-dimensional object 22 in most cases, and a driver can comprehend a shape of the three-dimensional object 22 from the broken line 70.
It is to be noted that the alarm means 8 may draw a graphic close to the silhouette of the three-dimensional object 22 on the bird's-eye view image 30 in place of the broken line 70 in the screen display 71. For example, the alarm means 8 may draw a parabolic line in place of the broken line 70.
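A minimal sketch of drawing such a broken line along the detected silhouette on the screen display follows; the dash length, color, and OpenCV drawing calls are assumptions.

```python
import cv2
import numpy as np

def draw_broken_line(display_img, silhouette_pts, dash_len=8, color=(0, 0, 255)):
    """Draws a broken (dashed) line 70 along the silhouette points on the screen display 71."""
    pts = np.asarray(silhouette_pts, dtype=float)
    for a, b in zip(pts[:-1], pts[1:]):
        seg_len = float(np.linalg.norm(b - a))
        n = max(int(seg_len // dash_len), 1)
        for k in range(0, n, 2):                      # draw every other dash segment
            p = a + (b - a) * (k / n)
            q = a + (b - a) * (min(k + 1, n) / n)
            cv2.line(display_img,
                     (int(p[0]), int(p[1])), (int(q[0]), int(q[1])), color, 2)
    return display_img
```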
It is to be noted that, in order to display the vicinity of the vehicle 20, a configuration is also conceivable in which the angle of view of the bird's-eye view image 30 is set to the neighborhood of the vehicle 20, and the whole of the bird's-eye view image 30 is used for the screen display 71. However, if the angle of view of the bird's-eye view image 30 is narrowed, the extension of the three-dimensional object 22 along the view direction 33 becomes small, and it is difficult for the three-dimensional object detecting means 6 to detect the three-dimensional object 22 with favorable precision. For example, in the case where the angle of view of the bird's-eye view image 30 is narrowed to the range of the screen display 71′, only the foot of the three-dimensional object 22 is included in the angle of view of the bird's-eye view image 30. Thus, in comparison with the case where the portions from the leg 22a to the body 22b of the three-dimensional object 22 are included in the angle of view of the bird's-eye view image 30 as in
The alarm means 8 may be fabricated so as to be rotated to change its direction, or fabricated so as to adjust the brightness for further improving visibility of the screen display 71 whose example has been shown in
In addition to an alarm sound such as a peeping sound, the audio output of the alarm means 8 may be an announcement for explaining a content of the alarm, such as “Some kind of three-dimensional object seems to have emerged around the vehicle” or “Some kind of three-dimensional object seems to have emerged around the vehicle. Please confirm the monitor screen,” or both of the alarm sound and the announcement.
In Embodiment 1 of the present invention, with the above-described functional configurations, the images before and after the driver's attention is temporarily diverted from the confirmation of the surroundings of the vehicle 20 are compared based on the increments of the orthogonal-direction characteristic components, i.e., the directional characteristic components on the bird's-eye view image 30 each having a direction orthogonal to the view direction from the viewpoint 31 of the camera 21. Thereby, when the three-dimensional object 22 has emerged while the confirmation of the surroundings was interrupted, an alarm is output, and the attention of a driver attempting to start the vehicle 20 again can be drawn to the surroundings.
Additionally, the changes in the images before and after the driver's attention is temporarily diverted from the confirmation of the surroundings of the vehicle 20 are narrowed down to the increments of the orthogonal-direction characteristic components each having a direction nearly orthogonal to the view direction from the viewpoint 31 of the camera 21 on the bird's-eye view image 30. Thereby, it is possible to suppress erroneous reports caused by erroneous detection of changes other than an emerged object, such as changes in the shadow of the own vehicle 20 or fluctuations in sunshine intensity, and to suppress unnecessary erroneous reports when the three-dimensional object 22 has left.
In
The image changes of the three-dimensional object 22 captured by the image detecting means 10 may be given prerequisites. For example, the image detecting means 10 may adopt a method for capturing the whole movement of the three-dimensional object 22 or the motions of a limb under the prerequisite that the three-dimensional object 22 is movable.
The image features of the three-dimensional object 22 captured by the image detecting means 10 may also be given prerequisites. The image detecting means 10 may adopt a method for detecting a skin color under the prerequisite that skin is exposed. Examples of the image detecting means 10 include a moving vector method, in which corresponding points between the images at two times are searched for and a moving object is detected based on the movement amount in order to capture the motions of the whole or a part of the three-dimensional object 22, and a skin color detection method, in which skin color components are extracted from the color space of a color image in order to extract a skin-colored part of the three-dimensional object 22. However, the image detecting means 10 is by no means limited to these examples. Taking the image at the present time or images in time series as input, when the detection requirements are satisfied in a local unit on the image, the image detecting means 10 outputs "detection ON," whereas when the detection requirements are not satisfied, the image detecting means 10 outputs "detection OFF."
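As one possible realization of the image detecting means 10 based on a moving vector method (using OpenCV's dense optical flow and an assumed motion threshold; the cited documents may use a different formulation):

```python
import cv2
import numpy as np

MOTION_THRESHOLD = 1.5  # mean flow magnitude, in pixels, required for "detection ON" (assumed)

def motion_detection_on(prev_gray, curr_gray, area_mask):
    """Returns True ("detection ON") when the mean optical-flow magnitude inside the
    local area exceeds the threshold, i.e. part of a moving object is observed there."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    return bool(magnitude[area_mask].mean() >= MOTION_THRESHOLD)
```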
In
In
It is determined whether or not the amount of the orthogonal-direction characteristic components obtained in Step S3, each having a direction nearly orthogonal to the view direction from the viewpoint 31 of the camera 21, namely, the sum of Sb+ appearing in the above-mentioned Formula 3 and Sb− appearing in the above-mentioned Formula 4, is equal to or more than a predetermined threshold value (Step S14). When the aforementioned sum is equal to or more than the threshold value, it is determined that there is a three-dimensional object in the detection area [I] (Step S16). When the sum is less than the threshold value, it is determined that there is no three-dimensional object in the detection area [I] (Step S17).
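Combining the above, a minimal sketch of the per-area decision in Embodiment 2 follows; the threshold value and the function interface are assumptions, with the step numbers from the text noted in the comments.

```python
def embodiment2_area_decision(detection_on, sb_plus, sb_minus, threshold=50):
    """Step S11: the image detecting means 10 must report "detection ON" for the area.
    Step S14: the sum of the orthogonal-direction characteristic components Sb+ and Sb-
    obtained in Step S3 must reach the threshold; otherwise no object (Step S17)."""
    if not detection_on:
        return False                          # no moving target in this detection area
    return (sb_plus + sb_minus) >= threshold  # Step S16 when True, Step S17 when False
```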
In subsequent Step S9, similarly to Embodiment 1, the plural detection areas are integrated. In Step S10, the number of the three-dimensional objects 22 and area information are output. Note that in the determination of Step S14, it is possible to use any method, in which two directions orthogonal to the view direction from the viewpoint 31 of the camera 21 (e.g., the direction 36 and the direction 37 in
In
Meanwhile, in the determination of Step S16, the shadow 63 of the three-dimensional object 22 does not extend along the view direction from the viewpoint 31 of the camera 21, so that the determination is “no.” Thus, it is only the three-dimensional object 22 that is detected in Step S10 in a scene of
It is to be noted that, supposing a situation in which the strut 62 extending along the view direction from the viewpoint 31 of the camera 21, or the white line 64, is subjected to the determination in Step S15, the orthogonal-direction characteristic components, each having a direction nearly orthogonal to the view direction from the viewpoint 31 of the camera 21, are concentrated and increased in the strut 62 or the white line 64, so that the determination result in Step S15 is "yes." However, the strut 62 or the white line 64 has no movement amount, and the determination in Step S11, at a stage earlier than Step S15, is "no." Thus, it is determined that there is no three-dimensional object in the detection area [I] including the strut 62 or the white line 64 (S17).
Under situations other than that of
However, if the plants are not tall and do not extend along the view direction from the viewpoint 31 of the camera 21, the determination of Step S16 is “no,” and it is determined that there is no three-dimensional object (Step S17). In addition, even in the case of a target to which the image detecting means 10 incidentally outputs “detection ON,” as long as the target incidentally in the state of “detection ON” does not extend along the view direction from the viewpoint 31 of the camera 21, such target is not detected as the three-dimensional object 22.
It is to be noted that in terms of the properties of the processing of the image detecting means 10, in the case where the three-dimensional object 22 can be only partially detected in the bird's-eye view image 30, in the flow of
Moreover, as in a situation where the three-dimensional object 22 moves, and thereafter, stops on the bird's-eye view image 30, in the case where the image detecting means 10 once outputs “detection ON;” but thereafter, the image detecting means 10 outputs “detection OFF,” resulting in losing sight of the three-dimensional object 22, in the flow of
In the above-mentioned example, the image detecting means 10 adopted the moving vector method. However, in a similar manner, also in other image processing methods, when the image detecting means 10 outputs “detection ON,” as long as the target in the state of “detection ON” does not extend along the view direction from the viewpoint 31 of the camera 21, it is possible to suppress the erroneous detection of those other than the three-dimensional object 22. Additionally, also after the image detecting means 10 has lost sight of the detected target, during the predetermined timeout time, when the target in the state of “detection ON” extends along the view direction from the viewpoint 31 of the camera 21, such target remains detected as the three-dimensional object 22.
In Embodiment 2 of the present invention, through the above-described functional configurations, among the targets detected by the image detecting means 10 by means of image processing, those extending along the view direction from the viewpoint 31 of the camera 21 are selected, thereby making it possible to eliminate the unnecessary erroneous reports when the image detecting means 10 detects something other than the three-dimensional object 22, such as an incidental disturbance.
Additionally, in Embodiment 2 of the present invention, also in the case where the image detecting means 10 detects an unnecessary area around the three-dimensional object 22, such as the shadow 63 of the three-dimensional object 22, it is possible to delete the unnecessary part other than the three-dimensional object 22 in the screen of the alarm means 8 and perform the output. Moreover, in Embodiment 2, also after the image detecting means 10 has lost sight of the detected target, as long as the target in the state of "detection ON" extends along the view direction from the viewpoint 31 of the camera 21 during the timeout time, it is possible to continue the detection.
In
In
In
In
Step S3 and Step S15 when the determination of Step S12 is "yes" are identical to those of Embodiment 2. In Step S3, the orthogonal-direction characteristic components, each having a direction nearly orthogonal to the view direction from the viewpoint 31 of the camera 21, are calculated from the directional characteristics of the bird's-eye view image 30 at the present time. Thereafter, in Step S15, when the orthogonal-direction characteristic components obtained in Step S3, each having a direction nearly orthogonal to the view direction from the viewpoint 31 of the camera 21, have values equal to or more than the threshold values, it is determined that there is a three-dimensional object in the detection area [I] (Step S16). When such values are less than the threshold values, it is determined that there is no three-dimensional object in the detection area [I] (Step S17).
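A minimal sketch of this flow over the detection areas in Embodiment 3 follows; representing the detection range of the sensor 12 as a per-area boolean and the threshold value are assumptions.

```python
def embodiment3_detect_areas(area_ids, in_sensor_range, s_plus_by_area, s_minus_by_area,
                             threshold=50):
    """Step S12: the detection area [I] must overlap the detection range of the sensor 12.
    Step S3/S15: its orthogonal-direction characteristic components must reach the threshold."""
    detected = []
    for i in area_ids:
        if not in_sensor_range[i]:                                   # Step S12 "no"
            continue
        if s_plus_by_area[i] + s_minus_by_area[i] >= threshold:      # Step S15 "yes"
            detected.append(i)                                       # Step S16
    return detected                                                  # others: Step S17
```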
In terms of the properties of the sensor 12, in the case where the detection range 74 of the sensor 12 covers only the limited area on the bird's-eye view image 30, even if the three-dimensional object 22 is present on the bird's-eye view image 30, merely a part of the three-dimensional object 22, which extends along the view direction from the viewpoint 31 of the camera 21, can be detected.
For example, in the case of
For example, given that the detection area of (p1, p2, q2, q1) in the coordinate partitioning 40 of
In terms of the properties of the sensor 12, in the case where the sensor 12 can only intermittently detect the three-dimensional object 22 when viewed in time series, in the determination of Step S12 of
Moreover, in the case where the sensor 12 once outputs “detection ON;” but thereafter, outputs “detection OFF,” resulting in losing sight of the three-dimensional object 22, in the flow of
In the case where the sensor 12 outputs measurement information such as distance or orientation in addition to "detection ON" and "detection OFF," the detection range 75 can be taken as an effective area within the detection range 74, and the condition of Step S12, namely that the detection area [I] be within the detection range 74, may be tightened to the condition that the detection area [I] be within the detection range 75. In this way, when the detection area [I] is compared with the detection range 75 in Step S12, even if the strut 62 or the white line 64, as in
In Embodiment 3 of the present invention, through the above-described functional configurations, among the targets detected by the sensor 12, those extending along the view direction from the viewpoint 31 of the camera 21 are selected, thereby making it possible to suppress the detection of targets other than the three-dimensional object 22 and the detection of incidental disturbances, and thus to decrease the erroneous reports. Additionally, also after the sensor 12 has lost sight of the detected target, as long as the target in the state of "detection ON" extends along the view direction from the viewpoint 31 of the camera 21 during the timeout time, it is possible to continue the detection.
In the present Embodiment 3, through the above-described functional configurations, the portion of the detection range 74 or the detection range 75 of the sensor 12 that extends along the view direction from the viewpoint 31 of the camera 21 is selected, thereby making it possible to decrease the unnecessary erroneous reports when the sensor 12 detects something other than the three-dimensional object, such as an incidental disturbance. Additionally, in the present Embodiment 3, even in the case where the sensor 12 detects a limited unnecessary area around the three-dimensional object on the bird's-eye view image 30, it is possible to delete the unnecessary part other than the three-dimensional object 22 in the screen of
Moreover, in the present Embodiment 3, the determination conditions are loosened in such a manner that the detection area [I] need only overlap the detection range 74 somewhere along the polar coordinates of the coordinate grid 40, thereby making it possible to detect an overall image of the three-dimensional object 22 even in the case where the detection range 74 of the sensor 12 is narrow on the bird's-eye view image 30.
According to the present invention, the emergence of the three-dimensional object 22 is detected by comparing the amounts of the directional characteristic components of the images before and after the interval 50 when the driver's attention is deviated from the confirmation of the surroundings of the vehicle 20 (e.g., the bird's-eye view images 30a and 30b), so that it is possible to detect the three-dimensional object 22 around the vehicle 20 even in a situation where the vehicle 20 is stopped. Additionally, the emergence of the three-dimensional object 22 can be detected by the single camera 21. Moreover, it is possible to suppress the unnecessary alarm when the three-dimensional object 22 is left. Besides, through use of the orthogonal-direction characteristic components among the directional characteristic components, it is possible to suppress the erroneous reports due to the incidental changes in the image, such as the sway of the sunshine or the movement of the shadow.
It is to be noted that the present invention is by no means limited to the above-mentioned embodiments, and various modifications can be made within a range not departing from the spirit and scope of the present invention.