The present invention relates to a vehicle periphery monitoring apparatus (apparatus for monitoring surroundings of vehicle) for detecting a monitored target in the periphery of a vehicle and for displaying the detected target in a simple format.
There has heretofore been known a vehicle periphery monitoring apparatus for displaying an image of an area in front of a vehicle captured by an infrared camera. The image is displayed on a display in front of the driver's seat, and an area representing a pedestrian detected from the image is highlighted in the displayed image (see FIG. 5 of Japanese Laid-Open Patent Publication No. 2009-067214).
Another known apparatus displays an icon indicating the presence of a pedestrian on a head-up display (HUD) in addition to displaying a highlighted image area representing a pedestrian in an image displayed on a display. According to Japanese Laid-Open Patent Publication No. 2004-364112, if a pedestrian is determined to be present in an image captured by an infrared camera, an icon of the pedestrian is displayed on an HUD (see FIG. 6 and paragraphs [0036] through [0038]).
One technology for detecting pedestrians achieves both high processing speed and high judging accuracy by first simply selecting a pedestrian candidate from binarized information and then judging whether the candidate represents a pedestrian based on grayscale information (see Abstract of Japanese Laid-Open Patent Publication No. 2003-284057).
According to Japanese Laid-Open Patent Publication No. 2004-364112, as described above, an icon indicating the presence of a pedestrian is displayed on an HUD. However, there remains room for improvement in calling more appropriate attention from the user.
The present invention has been made in view of the above problem. It is an object of the present invention to provide a vehicle periphery monitoring apparatus, which is capable of calling appropriate attention from the user.
According to the present invention, there is provided a vehicle periphery monitoring apparatus for detecting a monitored target in the periphery of a vehicle based on a captured image signal generated by an image capturing device mounted on the vehicle, comprising a first display unit that displays a captured image represented by the captured image signal, a second display unit that visualizes information concerning whether or not the monitored target exists in a plurality of sub-regions, which make up the captured image displayed on the first display unit, based on whether marks associated respectively with the sub-regions are displayed, and an attention degree evaluator that evaluates a degree of attention of the monitored target for the vehicle, if at least one instance of the monitored target is detected in the captured image, wherein the second display unit displays the marks in different display modes depending on the degree of attention evaluated by the attention degree evaluator.
According to the present invention, if at least one instance of the monitored target is detected in the captured image, the second display unit displays the marks in different display modes depending on the degree of attention of the monitored target for the vehicle. Accordingly, it is possible to visually indicate to the user different degrees of attention of monitored targets, thereby calling appropriate attention from the user.
The degree of attention may represent a misidentifying possibility that the driver or occupant of the vehicle may possibly misidentify the position of the monitored target by visually recognizing the mark that is displayed. If it is judged that the misidentifying possibility is high, then the second display unit may simultaneously or alternately display the marks corresponding to one of the sub-regions in which at least a portion of the monitored target exists and an adjacent one of the sub-regions. The existence of the monitored target is thus displayed in a highlighted manner, making it possible to call appropriate attention from the user.
The attention degree evaluator may judge that the misidentifying possibility is high if the monitored target exists on one of boundary lines between the sub-regions.
The attention degree evaluator may judge that the misidentifying possibility is high before or after the monitored target moves across one of boundary lines between the sub-regions.
The attention degree evaluator may judge that the degree of attention is high if the monitored target is highly likely to collide with the vehicle.
The degree of attention may represent a possibility of collision of the monitored target with the vehicle. In this case, the attention degree evaluator may evaluate the possibility of collision of each monitored target if the monitored targets are detected respectively in at least two of the sub-regions, and the second display unit may display the marks in different display modes depending on the possibility of collision. Consequently, the difference between the degrees of attention of the monitored targets can be indicated to the user for assisting in driving the vehicle.
The second display unit may display one of the marks, which corresponds to at least one monitored target whose possibility of collision is evaluated as being high, so as to be more visually highlighted than another one of the marks corresponding to another one of the monitored targets. Thus, the existence of a monitored target whose attention level is relatively high from among a plurality of monitored targets can be conveniently indicated to the driver.
The attention degree evaluator may judge whether or not it is easy for the driver of the vehicle to locate the monitored target based on at least the captured image signal, and evaluate the possibility of collision depending on the result of the judgment. Accordingly, the accuracy in evaluating the degree of risk is increased by also taking into account an evaluation considered from the viewpoint of the driver.
The attention degree evaluator may judge whether or not the monitored target recognizes the existence of the vehicle based on at least the captured image signal, and evaluate the possibility of collision depending on the result of the judgment. Thus, the accuracy in evaluating the degree of risk can be increased by also taking into account an evaluation considered from the viewpoint of the monitored target.
The attention degree evaluator may predict a route to be followed by the vehicle, and evaluate the possibility of collision depending on the predicted route. Accordingly, the accuracy in evaluating the degree of risk can be increased by also taking into account the predicted route to be followed by the vehicle.
The attention degree evaluator may predict a direction of travel of the monitored target, and evaluate the possibility of collision depending on the predicted direction of travel. Accordingly, the accuracy in evaluating the degree of risk can be increased by also taking into account the direction of travel of the monitored target.
The sub-regions may comprise a central region corresponding to a central range that includes a direction of travel of the vehicle, a left region corresponding to a left range that is positioned to the left of the central range, and a right region corresponding to a right range that is positioned to the right of the central range, in an image range captured in front of the vehicle by the image capturing device. Therefore, apart from monitored targets that exist on left and right sides of the direction of travel of the vehicle, it is possible to call attention from the driver concerning a monitored target that exists in the direction of travel of the vehicle and is likely to collide with the vehicle.
The vehicle periphery monitoring apparatus may further comprise a region selector that selects one of the sub-regions to which a target image area, detected from the captured image as an image area of the monitored target, belongs. The region selector may select the central region if the target image area is positioned on a boundary line between the central region and the left region, or is positioned on a boundary line between the central region and the right region.
Generally, when a monitored target exists in the central range including the direction of travel of the vehicle, the driver pays more attention to the monitored target than if the monitored target were to exist in the left or right range. According to the present invention, if a target image area extends over a range between the central region and the left or right region, i.e., lies on one of the boundary lines of the central region, the monitored target is detected as belonging to the central range, rather than to the left range or the right range. Therefore, the driver is made to pay as much attention to the detected monitored target as would be directed to a monitored target included fully within the central range.
The vehicle periphery monitoring apparatus may further comprise a direction-of-turn detecting sensor that detects a direction of turn of the vehicle, and a boundary line setter that displaces the boundary line between the central region and the right region toward the right region if the direction-of-turn detecting sensor detects a left turn of the vehicle, and displaces the boundary line between the central region and the left region toward the left region if the direction-of-turn detecting sensor detects a right turn of the vehicle.
A horizontal distance between a monitored target that exists near an edge of a road and the vehicle tends to be greater when the vehicle is traveling on a curved road than when the vehicle is traveling on a straight road. Thus, while the vehicle is making a left turn, the boundary line setter displaces the boundary line between the central region and the right region toward the right region. Similarly, while the vehicle is making a right turn, the boundary line setter displaces the boundary line between the central region and the left region toward the left region. In this manner, an image area of the monitored target existing near the edge of the road is displayed on the second display unit as belonging to the central region, thereby drawing attention from the driver to the monitored target.
The vehicle periphery monitoring apparatus may further comprise a vehicle speed sensor that detects a vehicle speed of the vehicle, and a boundary line setter that displaces the boundary line between the central region and the right region toward the right region and displaces the boundary line between the central region and the left region toward the left region when the vehicle speed detected by the vehicle speed sensor is high, as compared with when the vehicle speed is low.
At times that the vehicle is traveling at high speed, the time required for the vehicle to approach a monitored target in front of the vehicle is shorter than if the vehicle were traveling at low speed, so that the driver needs to pay more attention to the monitored target. Accordingly, while the vehicle is traveling at high speed, the boundary line setter displaces the boundary line between the central region and the left region toward the left region, and displaces the boundary line between the central region and the right region toward the right region. Consequently, a target image area, which would be displayed as belonging to the left region or the right region on the second display unit while the vehicle is traveling at low speed, is displayed as belonging to the central region while the vehicle is traveling at high speed. Consequently, while the vehicle is traveling at high speed, early attention can be called from the driver concerning the monitored target that is approaching the vehicle.
Vehicle periphery monitoring apparatus according to preferred embodiments of the present invention will be described below with reference to the accompanying drawings. More specifically, a vehicle periphery monitoring apparatus according to a first embodiment of the present invention will be described below.
As shown in
The infrared cameras 16L, 16R are image capturing devices, which function as image capturing means for capturing images of the periphery of the vehicle 12. According to the present embodiment, the two infrared cameras 16L, 16R are combined to make up a stereo camera. The infrared cameras 16L, 16R both have a characteristic such that the higher the temperature of a subject, the higher the level (brightness) of their output signals.
As shown in
The vehicle speed sensor 18 detects a vehicle speed V [km/h] of the vehicle 12, and supplies an output signal representing the detected vehicle speed V to the ECU 22. The yaw rate sensor 20 detects a yaw rate Yr [°/sec] of the vehicle 12, and supplies an output signal representing the detected yaw rate Yr to the ECU 22.
The ECU 22 serves as a controller for controlling the vehicle periphery monitoring apparatus 10. As shown in
Signals from the infrared cameras 16L, 16R, the vehicle speed sensor 18, and the yaw rate sensor 20 are supplied through the input/output unit 30 to the ECU 22. Output signals from the ECU 22 are supplied through the input/output unit 30 to the speaker 24, the general-purpose monitor 26, and the MID 28. The input/output unit 30 has an A/D converter circuit, not shown, which converts analog signals supplied thereto into digital signals.
The processor 32 performs processing operations on the signals from the infrared cameras 16L, 16R, the vehicle speed sensor 18, and the yaw rate sensor 20. Based on the results of such processing operations, the processor 32 generates signals to be supplied to the speaker 24, the general-purpose monitor 26, and the MID 28.
As shown in
The binarizing function 40 generates a binarized image (not shown) by binarizing a grayscale image 72 captured by the infrared camera 16L.
The MID controlling function 48 controls the MID 28 in order to display a mark, e.g., an icon (hereinafter referred to as a “biological target icon”) representing a biological target such as a human being or an animal on the MID 28. As shown in
The memory 34 includes a RAM (Random Access Memory) for storing temporary data, etc., used for various processing operations, and a ROM (Read Only Memory) for storing programs to be executed, tables, maps, etc.
The speaker 24 produces a warning sound or the like based on a command from the ECU 22. Although not shown in
The general-purpose monitor 26 comprises a liquid crystal panel, an organic EL (ElectroLuminescence) panel, or an inorganic EL panel for displaying color or monochromatic images. As shown in
The general-purpose monitor 26 can add a highlighting feature, which is generated by the general-purpose monitor controlling function 46, to the grayscale image 72. More specifically, as shown in
The general-purpose monitor 26 may display a grayscale image 72 captured by the right infrared camera 16R, rather than the grayscale image 72 captured by the left infrared camera 16L. The general-purpose monitor 26 may also display any of various other images, including navigation images such as road maps, service information, etc., or moving image content, etc., simultaneously in addition to, or selectively instead of the grayscale image 72 from the infrared camera 16L or the infrared camera 16R. The general-purpose monitor 26 may select any of such images in response to pressing of a certain pushbutton switch, or according to a preset selecting condition, for example.
The MID 28 is a simple display device (icon display device) for visualizing and displaying ancillary information at the time that the vehicle 12 is driven. The MID 28 comprises a display module, which is simpler in structure and less costly than the general-purpose monitor 26, particularly the display panel thereof. For example, a display panel, which is lower in resolution than the general-purpose monitor 26, e.g., a display panel that operates in a non-interlace mode, may be used as the MID 28.
As shown in
For example, as shown in
As shown in
Incidentally, instead of the human icon 86, icons representing other biological targets (e.g., an animal icon representing an animal) may be displayed.
Alternatively, instead of the icons referred to above, i.e., the road icon 84, the human icon 86, and the animal icon, the MID 28 may selectively display information concerning the mileage of the vehicle 12, the present time, and the instrument panel 62. The MID 28 may select any of such items of information (images) in response to pressing of a certain pushbutton switch, or according to a preset selecting condition, for example.
As shown in
According to the present embodiment, one of the first sub-regions 90 to which the biological area 74 belongs (hereinafter referred to as a “first biological area existing sub-region”) in the grayscale image 72 is determined, and a biological icon, such as a human icon 86, an animal icon, or the like, is displayed on the MID 28 in one of the second sub-regions 100 of the icon image 82, which corresponds to the first biological area existing sub-region (hereinafter referred to as a “second biological area existing sub-region”). Stated otherwise, depending on whether or not marks (a human icon 86 or the like) associated respectively with the first sub-regions 90 of the grayscale image 72 are displayed on the MID 28, information is visualized concerning whether or not monitored targets (biological targets) exist in the first sub-regions 90 of the grayscale image 72. In certain cases, as described in detail later, the number of biological icons to be displayed increases from 1 to 2.
In step S4, the ECU 22 extracts a biological area 74 from the acquired binarized image and the grayscale image 72. Since a biological target is higher in temperature than the surrounding area, the area corresponding to the biological target, i.e., the biological area 74, appears high in brightness in the binarized image and in the grayscale image 72. Consequently, it is possible to extract a biological area 74 by searching the binarized image and the grayscale image 72 for an area of pixels having a brightness level that is greater than a predetermined threshold value.
Both the binarized image and the grayscale image 72 are used in order to identify the presence of a biological target easily from the binarized image, and then acquire detailed information concerning the biological target from the grayscale image 72. Such a processing sequence is disclosed in Japanese Laid-Open Patent Publication No. 2003-284057, for example. Alternatively, a biological area 74 may be extracted from either one of the binarized image and the grayscale image 72.
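By way of illustration only, the following Python sketch shows one way such an extraction step could be organized: a grayscale frame is binarized with a brightness threshold, and the bounding boxes of connected high-brightness pixel groups are collected as biological-area candidates. The function name, the threshold, and the minimum-size value are illustrative assumptions, not values taken from the publication.

```python
import numpy as np
from collections import deque

def extract_candidate_areas(gray, threshold=200, min_pixels=20):
    """Binarize a grayscale frame and return bounding boxes
    (x_min, y_min, x_max, y_max) of connected high-brightness
    regions, i.e. biological-area candidates."""
    binary = gray >= threshold                    # binarized image
    visited = np.zeros(binary.shape, dtype=bool)
    boxes = []
    h, w = binary.shape
    for y in range(h):
        for x in range(w):
            if not binary[y, x] or visited[y, x]:
                continue
            # Breadth-first search over one 4-connected component.
            queue = deque([(y, x)])
            visited[y, x] = True
            ys, xs = [], []
            while queue:
                cy, cx = queue.popleft()
                ys.append(cy)
                xs.append(cx)
                for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                    if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not visited[ny, nx]:
                        visited[ny, nx] = True
                        queue.append((ny, nx))
            if len(ys) >= min_pixels:             # ignore tiny hot spots
                boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes

# Example: a synthetic 240 x 320 frame containing one warm blob.
frame = np.zeros((240, 320), dtype=np.uint8)
frame[100:140, 150:170] = 230
print(extract_candidate_areas(frame))             # [(150, 100, 169, 139)]
```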
In step S5, the ECU 22 displays the grayscale image 72 with the highlighted biological area 74 on the general-purpose monitor 26. As described above, the ECU 22 highlights the biological area 74 with at least one of a color applied to the biological area 74 and the highlighting frame 76 added around the biological area 74.
In step S6, the ECU 22 establishes first boundary lines 92, which divide the grayscale image 72 into the first sub-regions 90.
In step S7, the ECU 22 determines one of the first sub-regions 90 to which the biological area 74 belongs (first biological area existing sub-region). If the biological area 74 exists on one of the first boundary lines 92, then the ECU 22 may determine two first sub-regions 90 on both sides of the first boundary line 92 as first sub-regions 90 to which the biological area 74 belongs.
As shown in
As shown in
In step S8, the ECU 22 determines one of the second sub-regions 100 (a second biological area existing sub-region), which corresponds to the first biological area existing sub-region.
In step S9, the ECU 22 displays a biological icon (a human icon 86 or the like) representing the biological target in the second biological area existing sub-region, which was determined in step S8. If there is a high likelihood of collision between the vehicle 12 and the biological target, then the speaker 24 may generate a warning sound.
In the above embodiment, it is determined whether or not the biological area 74 exists on one of the first boundary lines 92 based on the position of the biological area 74 at the present time (in a present processing cycle). However, the judgment process is not limited to this technique. It is possible to regard a first sub-region 90 to which the person 110 (biological area 74) is highly likely to move as a first biological area existing sub-region, based on a motion vector of the person 110 (biological area 74) or the position thereof in the grayscale image 72, thereby calling more appropriate attention from the driver.
Upon use of the motion vector of the biological area 74, if the biological area 74 exists in the central first sub-region 90C and the motion vector is directed to the left, then the central first sub-region 90C and the left first sub-region 90L are selected. On the other hand, if the biological area 74 exists in the central first sub-region 90C and the motion vector is directed to the right, then the central first sub-region 90C and the right first sub-region 90R are selected.
Upon use of the position of the biological area 74 on the grayscale image 72, if the biological area 74 exists in a left side of the central first sub-region 90C, then the central first sub-region 90C and the left first sub-region 90L are selected. On the other hand, if the biological area 74 exists in a right side of the central first sub-region 90C, then the central first sub-region 90C and the right first sub-region 90R are selected.
The motion vector and the position on the grayscale image 72 may also be used to correct the position of the biological area 74, which is used in the process involving the first boundary lines 92, and the process carried out when there is a high possibility of collision between the vehicle 12 and the biological target. For example, if the motion vector of the biological area 74 is directed to the left, then using coordinates that are shifted to the left from the position (present position) where the biological area 74 exists, it may be judged whether or not the biological area 74 exists on one of the first boundary lines 92.
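A minimal sketch of the sub-region selection described above is given below, under the assumption that a detected area is represented by its horizontal extent and an optional motion vector; the function name and the exact rule for boundary overlap are illustrative, not the publication's implementation.

```python
def select_subregions(area_left, area_right, boundary_left, boundary_right,
                      motion_dx=0.0):
    """Return the set of first sub-regions ('L', 'C', 'R') in which a
    biological area should be indicated.  area_left/area_right give the
    horizontal extent of the detected area, boundary_left/boundary_right
    are the first boundary lines, and motion_dx is the horizontal
    component of the motion vector (negative = moving left)."""
    regions = set()
    # Region membership of the area itself (an area straddling a boundary
    # line belongs to both neighbouring regions).
    if area_left < boundary_left:
        regions.add('L')
    if area_right > boundary_right:
        regions.add('R')
    if area_right >= boundary_left and area_left <= boundary_right:
        regions.add('C')
    # If the area sits only in the central region and is moving toward a
    # neighbouring region, also select that region.
    if regions == {'C'}:
        if motion_dx < 0:
            regions.add('L')
        elif motion_dx > 0:
            regions.add('R')
    return regions

# Area straddling the left boundary line -> 'L' and 'C' marks (order may vary).
print(select_subregions(90, 130, 106, 213))
# Central area moving to the right -> 'C' and 'R' marks.
print(select_subregions(150, 170, 106, 213, motion_dx=+3))
```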
As described above, the degree of attention evaluating function 44 evaluates the degree of attention of a biological target for the vehicle 12 if at least one biological target (a monitored target such as a person 110 or an animal 160) is detected in the first sub-regions 90 of the grayscale image 72, and the MID 28 displays a mark in a different display mode depending on the degree of attention evaluated by the degree of attention evaluating function 44. Accordingly, it is possible to visually indicate to the user different degrees of attention of monitored targets, thereby calling appropriate attention from the user.
According to the first embodiment, the degree of attention represents a misidentifying possibility that the driver or occupant of the vehicle 12 may possibly misidentify the position of the monitored target such as the person 110 or the like by visually recognizing the displayed mark. If it is judged that the misidentifying possibility is high, then a plurality of biological icons (human icons 86) corresponding to one of the first sub-regions 90 where at least a portion of the monitored target exists and an adjacent one of the first sub-regions 90 are displayed simultaneously on the MID 28.
The misidentifying possibility is judged as being high if (1) the biological area 74 exists on one of the first boundary lines 92, (2) the biological area 74 lies across one of the first boundary lines 92, or (3) the person 110 is highly likely to collide with the vehicle 12 (e.g., the person 110 is close to the vehicle 12 or the vehicle speed V is high).
A modification of the operation sequence of the vehicle periphery monitoring apparatus 10 will be described below. Such a modification differs from the first embodiment in relation to the behavior of the boundary line setting function 50 (step S6).
In step S6 of
The captured image 134 is segmented into a central region 154C corresponding to a central range including the direction of travel of the vehicle 12, a left region 154L corresponding to a left range to the left of the central range, and a right region 154R corresponding to a right range to the right of the central range. The ranges are included in a captured image range in front of the vehicle 12. The captured image 134 is segmented into the central region 154C, the left region 154L, and the right region 154R by a left boundary line 151L, which is disposed to the left of the central region 154C, and a right boundary line 151R, which is disposed to the right of the central region 154C. Segmentation of the captured image 134 shown in
The left boundary line 151L and the right boundary line 151R are used in a process of generating an icon image 144. The left boundary line 151L and the right boundary line 151R are not displayed in the actual captured image 134 on the general-purpose monitor 26. Data for generating the captured image 134 represent brightness data of the pixels that make up the captured image 134. Whether a group of pixels making up a monitored target image belongs to the left region 154L, the central region 154C, or the right region 154R is judged based on which of these regions each pixel of the group belongs to.
A relationship between the left boundary line 151L and the right boundary line 151R, and a lateral field of view of the infrared camera 16R, will be described below.
A central viewing angle β is defined within the lateral viewing angle α and left and right ends thereof are demarcated by an inner left demarcating line 166L and an inner right demarcating line 166R, respectively. The inner left demarcating line 166L and the inner right demarcating line 166R specify the left boundary line 151L and the right boundary line 151R, respectively, in the captured image 134. In other words, an image area representing an object within the central viewing angle β is displayed in the central region 154C of the captured image 134.
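Assuming a simple pinhole-camera model in which the optical axis maps to the image center, the inner demarcating lines of the central viewing angle β can be converted into the pixel columns of the left boundary line 151L and the right boundary line 151R roughly as in the following sketch; the focal length and the value of β used here are assumptions for illustration.

```python
import math

def boundary_columns(image_width, focal_px, beta_deg):
    """Convert the central viewing angle beta (demarcated by the inner left
    and right demarcating lines) into the pixel columns of the left and
    right boundary lines, assuming a pinhole camera whose optical axis
    maps to the image centre."""
    half_beta = math.radians(beta_deg) / 2.0
    cx = image_width / 2.0
    offset = focal_px * math.tan(half_beta)   # horizontal offset of each demarcating line
    return cx - offset, cx + offset           # left / right boundary columns

# Example: 320-px-wide image, 270-px focal length, 40-degree central angle.
left_col, right_col = boundary_columns(320, 270.0, 40.0)
print(round(left_col), round(right_col))      # roughly 62 and 258
```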
In the first icon image 144 from the left, the animal icon 157 is displayed in a left-hand position to the left of the left line of the road icon 145, which is displayed obliquely upward to the right. In the second icon image 144 from the left, the animal icon 157 is displayed in a central position on an upward extension of the central line of the road icon 145. In the third icon image 144 from the left, the animal icon 157 is displayed in a right-hand position to the right of the right line of the road icon 145, which is displayed obliquely upward to the left.
The three icon images 144 shown in the lower portion of
If there are a plurality of targets existing in front of the vehicle 12, then a plurality of animal icons 157 corresponding to the targets are displayed in the icon images 144 at positions of the targets, which are spaced along the transverse direction of the vehicle 12. If plural biological image areas belong to one region, then the MID 28 may display only one biological icon at a position corresponding to that region.
As shown in the upper portion of
As shown in
As the biological target becomes positioned farther from the vehicle 12, a corresponding biological icon tends to be displayed at a central position on the MID 28. Since a farther-distanced biological target, which exists within an attention calling distance, becomes more uncertain in position when the vehicle 12 actually moves closer toward the biological target, the MID 28 displays a corresponding biological icon at a central position, thereby calling attention from the driver.
A biological target, which is positioned farther from the vehicle 12, results in a corresponding biological image area having smaller dimensions in the captured image 134. If the size of the biological image area in the captured image 134 is smaller than a predetermined threshold value, then the biological image area is not extracted as a biological target. When a biological target is positioned farther from the vehicle 12 beyond a predetermined distance, even if the biological target exists within the central viewing angle β, the MID 28 does not display a corresponding biological icon in the icon image 144. The distance from the vehicle 12 up to a biological target is calculated based on a parallax effect developed by the infrared cameras 16R, 16L with respect to the biological target.
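The distance calculation from parallax corresponds to the standard stereo relation Z = f·B/d. The following sketch applies that relation with assumed camera parameters (focal length in pixels and camera baseline); the actual parameters of the infrared cameras 16L, 16R are not given here.

```python
def distance_from_parallax(focal_px, baseline_m, disparity_px):
    """Distance to a target from the parallax (disparity) between the two
    infrared cameras: Z = f * B / d, where f is the focal length in pixels,
    B is the camera baseline in metres and d is the disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example with assumed parameters: f = 500 px, baseline = 0.35 m, disparity = 7 px.
print(distance_from_parallax(500.0, 0.35, 7.0))   # -> 25.0 metres
```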
When the driver observes the MID 28 and sees an animal icon 157 displayed in the icon image 144 on the MID 28, the driver knows that an animal 160 exists in front of the vehicle 12. With respect to the position of the animal 160 along the transverse direction of the vehicle 12, from the lateral position of the animal icon 157 with respect to the road icon 145 on the MID 28, the driver can judge whether the animal 160 exists in a central, left, or right area in front of the vehicle 12, without the need to move the driver's eyes from the MID 28 to the general-purpose monitor 26.
A biological target is more likely to collide with the vehicle 12 when the biological target resides within a central range including the direction of travel of the vehicle 12 than when the biological target is in a left range or a right range on one side of the central range. Therefore, when a biological icon is in a central position in the icon image 144, i.e., the second icon image 144 from the left in the lower portion of
Segmentation according to the first embodiment represents a segmentation of the captured image 134 into three equal regions based on initial settings. However, various improvements may be made to the way in which segmentation is performed.
In step S11, the ECU 22 segments a captured image 134, which is to be displayed on the general-purpose monitor 26, based on initial settings. More specifically, the ECU 22 divides the lateral viewing angle α into three angle segments, i.e., a left angle segment, a central angle segment, and a right angle segment, and with a left boundary line 151L and a right boundary line 151R, segments the captured image 134 into three laterally equal regions, i.e., a left region 154L, a central region 154C, and a right region 154R.
In step S12, the ECU 22 checks if the vehicle 12 is making a right turn. If the vehicle 12 is making a right turn, then control proceeds to step S13. If the vehicle 12 is not making a right turn, then step S13 is skipped and control proceeds to step S14. Based on an output signal from the yaw rate sensor 20, the ECU 22 can determine whether the vehicle 12 is traveling straight forward, is making a right turn, or is making a left turn.
In step S13, the ECU 22 shifts the left boundary line 151L a predetermined distance to the left in the captured image 134. The reasons why the left boundary line 151L or the right boundary line 151R is shifted depending on the direction in which the vehicle 12 is turned will be described below.
In
In
The horizontal distance between the vehicle 12 and a target existing near an edge of the road 131 tends to become greater while the vehicle 12 is traveling on a curved road than while the vehicle 12 is traveling on a straight road. While the vehicle 12 is making a right turn, as shown in
While the vehicle 12 is making a right turn, the ECU 22 carries out the process of step S13, so as to shift the inner left demarcating line 166L to the left outwardly along the direction of the turn by an angle q, as shown in
In step S14 of
The process of shifting the left boundary line 151L to the left while the vehicle 12 makes a right turn has been described above. While the vehicle 12 makes a left turn, the right boundary line 151R is shifted to the right in the same manner.
In step S21, based on the initial settings, the ECU 22 segments a captured image 134 to be displayed on the general-purpose monitor 26. The process of step S21 is the same as in the first improvement (see step S11).
In step S22, the ECU 22 checks if the vehicle speed V is equal to or greater than a threshold value. If the vehicle speed V is equal to or greater than the threshold value, then control proceeds to step S23. If the vehicle speed V is not equal to or greater than the threshold value, then the vehicle-speed-dependent segmentation process is brought to an end. In step S23, the ECU 22 shifts the left boundary line 151L and the right boundary line 151R laterally outward.
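The two improvements (the turn-dependent shift of steps S12 and S13, and the speed-dependent shift of steps S22 and S23) could be combined roughly as in the following sketch; the thresholds, shift amounts, and yaw-rate sign convention are illustrative assumptions rather than values from the publication.

```python
def adjust_boundaries(image_width, yaw_rate, vehicle_speed,
                      turn_shift=20, speed_shift=15,
                      yaw_threshold=3.0, speed_threshold=60.0):
    """Return (left_boundary, right_boundary) pixel columns after the
    turn-dependent and speed-dependent segmentation adjustments.
    yaw_rate > yaw_threshold is treated here as a right turn and
    yaw_rate < -yaw_threshold as a left turn; all thresholds and shift
    amounts are illustrative values."""
    # Initial setting: three laterally equal regions.
    left = image_width / 3.0
    right = 2.0 * image_width / 3.0
    # Turn-dependent shift (steps S12-S13 and their left-turn counterpart).
    if yaw_rate > yaw_threshold:        # right turn -> widen the centre to the left
        left -= turn_shift
    elif yaw_rate < -yaw_threshold:     # left turn -> widen the centre to the right
        right += turn_shift
    # Speed-dependent shift (steps S22-S23): widen the centre on both sides.
    if vehicle_speed >= speed_threshold:
        left -= speed_shift
        right += speed_shift
    return left, right

print(adjust_boundaries(320, yaw_rate=0.0, vehicle_speed=40.0))   # roughly (106.7, 213.3)
print(adjust_boundaries(320, yaw_rate=5.0, vehicle_speed=80.0))   # roughly (71.7, 228.3)
```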
Specific modes of displaying icon images 144 will be described below.
In
When the vehicle speed V is low, the human image area 133 belongs to the left region 154L, and the corresponding human icon is displayed at a left-hand position in the icon image 144.
When the vehicle speed V is high, the left boundary line 151L and the right boundary line 151R are shifted laterally outward, so that the human image area 133 belongs to the central region 154C, and the corresponding human icon is displayed at a central position in the icon image 144.
While the vehicle 12 is traveling at high speed, the time required for the vehicle 12 to approach a target in front of the vehicle 12 is shorter than while the vehicle 12 is traveling at low speed, and thus, the driver needs to pay more attention to the target. While the vehicle 12 is traveling at high speed, in step S23, the left boundary line 151L between the central region 154C and the left region 154L, and the right boundary line 151R between the central region 154C and the right region 154R are displaced toward the left region 154L and the right region 154R, respectively. Consequently, a biological image area such as the human image area 133, which normally is displayed as belonging to the left region 154L or the right region 154R on the MID 28 while the vehicle 12 is traveling at low speed, is displayed as belonging to the central region 154C on the MID 28 while the vehicle 12 is traveling at high speed. Therefore, while the vehicle 12 is traveling at high speed, the distance from the vehicle 12 to the biological target at the time that attention starts to be called from the driver is increased, so as to prevent a delay in calling the attention of the driver.
In the vehicle-speed-dependent segmentation process shown in
The segmentation process improved in the foregoing manner allows attention to be called from the driver in a manner appropriate to how the vehicle 12 is presently being driven. The above modifications may be applied to the first embodiment or to a second embodiment, which will be described in detail below.
A vehicle periphery monitoring apparatus 210 according to a second embodiment of the present invention will be described below.
The image processing unit 214, which controls the vehicle periphery monitoring apparatus 210, includes an A/D conversion circuit, not shown, for converting supplied analog signals into digital signals, a CPU (Central Processing Unit) 214c for performing various processing operations, a memory 214m for storing various data used in an image processing routine, and an output circuit, not shown, for supplying drive signals to the speaker 24 as well as display signals to the general-purpose monitor 26 and the MID 28.
The CPU 214c functions as a target detector 240, a position calculator 242, which includes a first position calculator 244, a second position calculator 246, and an actual position calculator 248, an attention degree evaluator 250, which includes a sole evaluator 252 and a comparative evaluator 254, and a display mark determiner 256.
The general-purpose monitor 26 shown in
The MID 28 shown in
The vehicle periphery monitoring apparatus 210 is incorporated in a vehicle 12, in the same manner as the vehicle periphery monitoring apparatus 10 according to the first embodiment.
Operations of the vehicle periphery monitoring apparatus 210 will be described below with reference to a flowchart.
In step S31, the image processing unit 214 acquires captured image signals at the present time from the infrared cameras 16L, 16R, which capture images of the periphery of the traveling vehicle 12. If the infrared cameras 16L, 16R capture images at intervals of about 33 ms, for example, then each of the infrared cameras 16L, 16R continuously or intermittently produces a captured image signal of about 30 frames per second.
In step S32, the image processing unit 214 supplies the captured image signal from one of the infrared cameras 16L, 16R, e.g., the infrared camera 16L, to the general-purpose monitor 26. The general-purpose monitor 26 displays a first image 270 represented by the captured image signal.
The first image 270 in
In step S33, the target detector 240 detects a monitored target from an image region represented by the captured image signal. Examples of the monitored target include various animals (specifically, mammals such as deer, horses, sheep, dogs, and cats, as well as birds) and artificial structures (specifically, power poles, guardrails, walls, etc.). The target detector 240 may make use of any appropriate one of various known detecting algorithms, which is suitable for the type of target that is monitored.
In step S34, the first position calculator 244 calculates the position or an existing range (hereinafter referred to as a “first position”) of each of the monitored targets in the first display region 260. If the general-purpose monitor 26 possesses high display resolution, then the first position calculator 244 is capable of identifying the positions of the monitored targets with high accuracy.
In step S35, the second position calculator 246 calculates the position of each of the monitored targets (hereinafter referred to as a “second position”) in the second display region 264. The associated relationship between the first position in the first display region 260 and the second position in the second display region 264 will be described below.
As shown in
Division of the first display region 260 is not limited to this example.
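A minimal sketch of one plausible first-to-second position mapping is given below, assuming that the left region 274, the central region 276, and the right region 278 correspond one-to-one to the left position 284, the central position 286, and the right position 288 of the marks; the boundary values are illustrative. A target positioned exactly on a boundary is treated here as belonging to the central region, analogously to the rule of the first embodiment.

```python
def mark_position(target_x, left_boundary, right_boundary):
    """Map the horizontal position of a monitored target in the first
    display region to one of the three mark positions of the second
    display region.  A target exactly on a boundary is treated here as
    belonging to the central region."""
    if target_x < left_boundary:
        return 'left'      # mark displayed at the left position 284
    if target_x > right_boundary:
        return 'right'     # mark displayed at the right position 288
    return 'central'       # mark displayed at the central position 286

# Example with an assumed 480-px-wide first display region split at 160 and 320.
print(mark_position(100, 160, 320))   # 'left'
print(mark_position(160, 160, 320))   # 'central' (on the boundary)
print(mark_position(400, 160, 320))   # 'right'
```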
In step S36, the target detector 240 judges whether or not a plurality of monitored targets are detected from the result of step S33. If no monitored target is detected, or if only one monitored target is detected, then in step S38, the image processing unit 214 supplies the MID 28 with a display signal representing the second image 266.
Prior to displaying the second image 289, the display mark determiner 256 determines the form of the mark to be displayed on the MID 28.
As a result, as shown in
When marks are displayed respectively in positions (the left position 284, the central position 286, and the right position 288) that match the layout of the three sub-regions (the left region 274, the central region 276, and the right region 278), the driver can intuitively recognize whether or not a monitored target exists, as well as the position of a monitored target, if any.
In step S39, the image processing unit 214 determines whether or not there is a possibility of collision of the vehicle 12 with a monitored target. If the image processing unit 214 judges that there is no possibility of collision of the vehicle 12 with a monitored target, then control returns to step S31, and steps S31 through S38 are repeated.
If the image processing unit 214 judges that there is a possibility of collision of the vehicle 12 with a monitored target, then the vehicle periphery monitoring apparatus 210 produces a warning sound via the speaker 24, for example, thereby giving the driver information concerning the possibility of a collision in step S40. Accordingly, the driver is prompted to control the vehicle 12 to avoid the collision.
If two or more monitored targets are detected in step S36, then control proceeds to step S37.
According to the present embodiment, the attention degree evaluator 250 evaluates the possibility of collision of a monitored target with the vehicle 12 (hereinafter referred to as a “degree of risk”). A process of evaluating a degree of risk, which is performed in step S37, will be described in detail below.
In step S51, the attention degree evaluator 250 designates a monitored target which is yet to be evaluated. The sole evaluator 252 evaluates a degree of risk at the present time with respect to the designated monitored target in view of various states, i.e., by implementing steps S52 through S57, to be described below.
In step S52, the sole evaluator 252 evaluates a degree of risk of collision of a monitored target with the vehicle 12 from the positional relationship between the monitored target and the vehicle 12. Prior to evaluating the degree of risk, the actual position calculator 248 calculates the actual position of the monitored target, e.g., a human body corresponding to the human area H1, and the actual distance between the monitored target and the vehicle 12, from a pair of captured image signals from the infrared cameras 16R, 16L, according to a known process such as triangulation. If the distance between the monitored target and the vehicle 12 is small, then the sole evaluator 252 judges that there is a high possibility of collision of the vehicle 12 with the monitored target. On the other hand, if the distance between the monitored target and the vehicle 12 is large, then the sole evaluator 252 judges that there is a low possibility of collision of the vehicle 12 with the monitored target.
In step S53, the sole evaluator 252 evaluates a degree of risk of collision of the vehicle 12 with a monitored target from the direction of movement of the monitored target.
The sole evaluator 252 evaluates a degree of risk of collision of the vehicle 12 with the monitored targets depending on the motion vectors, or more specifically, depending on directions of the motion vectors. For example, since the motion vector MV1 of the human area H1 lies substantially parallel to the horizontal direction of the first image 272, the sole evaluator 252 presumes that the human area H1 represents a pedestrian walking across the road, and judges that the vehicle 12 has a high degree of risk of colliding with the monitored target. On the other hand, since the motion vector MV2 of the human area H2 is inclined a certain angle or greater with respect to the horizontal direction of the first image 272, the sole evaluator 252 presumes that the human area H2 does not represent a pedestrian walking across the road, and judges that the vehicle 12 has a low degree of risk of colliding with the monitored target.
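A hedged sketch of this motion-vector criterion is shown below: a vector lying nearly parallel to the horizontal direction of the image is treated as a crossing pedestrian (high risk), and a strongly inclined vector as low risk. The angle threshold is an assumed value, not one given in the publication.

```python
import math

def crossing_risk(motion_dx, motion_dy, max_inclination_deg=30.0):
    """Judge the risk contribution of a target's motion vector: a vector
    lying nearly parallel to the horizontal axis of the image suggests a
    pedestrian crossing the road (high risk); a strongly inclined vector
    suggests otherwise (low risk).  The 30-degree limit is illustrative."""
    if motion_dx == 0 and motion_dy == 0:
        return 'low'                       # stationary target
    inclination = abs(math.degrees(math.atan2(motion_dy, motion_dx)))
    # Fold the angle so that 0 degrees means purely horizontal motion either way.
    inclination = min(inclination, 180.0 - inclination)
    return 'high' if inclination <= max_inclination_deg else 'low'

print(crossing_risk(8, 1))    # nearly horizontal -> 'high'
print(crossing_risk(1, 6))    # steeply inclined  -> 'low'
```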
In step S54, the sole evaluator 252 evaluates a degree of risk of collision of the vehicle 12 with a monitored target depending on a predicted route followed by the vehicle 12.
As shown in
In
In
In step S55, the sole evaluator 252 evaluates a degree of risk of collision of the vehicle 12 with a monitored target from the ease with which the driver is able to locate the monitored target. More specifically, a state in which the driver finds it difficult to locate the monitored target is presupposed, and the sole evaluator 252 evaluates the presupposed state as having a high degree of risk, regardless of whether or not the driver has actually located the monitored target. For example, the presupposed state may be a detected area that is small in size, has a small movement distance (motion vector), or has a shape that differs from a normal shape. Specific examples of detected areas having a normal shape include a walking pedestrian, a running pedestrian, a standing pedestrian, etc., whereas specific examples of detected areas having an abnormal shape include a squatting pedestrian, a pedestrian who is lying down, etc.
The driver also finds it difficult to locate a monitored target if a difference in color between the color of the monitored target and the background color is small, for example, when a pedestrian is wearing neutral and dark clothes at night. Such a monitored target can be distinguished based on a difference between the brightness of the monitored target and the background brightness, in a grayscale image acquired by the infrared cameras 16L, 16R. The monitored target can also be distinguished based on a difference between the color of the monitored target and the background color in a color space such as CIERGB, CIELAB, or the like, in a color image acquired by a color camera.
In step S56, the sole evaluator 252 evaluates a degree of risk of collision of the vehicle 12 with a monitored target from the ability of the monitored target to recognize the existence of the vehicle 12. More specifically, a state in which the monitored target is incapable of recognizing the existence of the vehicle 12 is presupposed, and the sole evaluator 252 evaluates the presupposed state as having a high degree of risk, regardless of whether or not the monitored target actually has recognized the existence of the vehicle 12. For example, the sole evaluator 252 can judge whether or not the vehicle 12 lies within the field of vision of the monitored target, by detecting the attitude of the monitored target (e.g., a facial direction if the monitored target is a human). The sole evaluator 252 may evaluate a face that is directed away from the vehicle 12, a face that is directed sideways, and a face that is directed toward the vehicle 12 as possessing progressively lower degrees of risk.
The direction of the face can be detected with high accuracy based on the brightness of a binary image of the head, i.e., the ratio of on-pixels of the binary image. If a human turns his or her face toward the vehicle 12, the area of bare skin (on-pixels) of the head becomes greater, whereas if a human turns his or her back toward the vehicle 12, the area of hair (off-pixels) of the head becomes greater. The sole evaluator 252 may also presume intermediate states (facing sideways or facing obliquely), other than the states of facing toward the vehicle 12 and facing away from the vehicle 12.
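One way to express this on-pixel-ratio criterion is sketched below; the two ratio thresholds separating the 'toward', 'sideways', and 'away' states are assumptions for illustration.

```python
import numpy as np

def face_direction(head_binary, toward_ratio=0.6, away_ratio=0.3):
    """Classify a binarized head area by the fraction of on-pixels: a face
    turned toward the vehicle shows much bare skin (many on-pixels), and a
    face turned away shows mostly hair (few on-pixels).  The two ratio
    thresholds are illustrative values."""
    on_ratio = float(np.count_nonzero(head_binary)) / head_binary.size
    if on_ratio >= toward_ratio:
        return 'toward'      # lowest presumed degree of risk
    if on_ratio <= away_ratio:
        return 'away'        # highest presumed degree of risk
    return 'sideways'        # intermediate state

head = np.zeros((16, 16), dtype=np.uint8)
head[4:14, 3:13] = 1                      # 100 of 256 pixels are on -> ratio ~0.39
print(face_direction(head))               # 'sideways'
```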
The sole evaluator 252 may further presume the situation judging ability and/or behavior predictability of a monitored target, and reflect the presumed situation judging ability and/or behavior predictability in evaluating a degree of risk. For example, based on the shape or behavior of the detected area of the monitored target, the sole evaluator 252 may judge whether a monitored target, which is judged as being a human, is an elderly person or a child, and evaluate the judged and monitored target as having a high degree of risk.
In step S57, the sole evaluator 252 makes a comprehensive evaluation of a degree of risk of the monitored target that was designated in step S51. The degree of risk may be represented in any data format, such as a numerical value or a level. The levels of importance (weighting) of evaluation values, which are calculated in steps S52 through S56, may be changed as desired. For example, a degree of risk basically is evaluated based on the positional relationship between a monitored target and the vehicle 12 (see step S52). Further, if plural monitored targets having a high degree of risk are present, then other evaluation values (see steps S53 through S56) may also be taken into account for carrying out evaluation thereof.
In step S53, the accuracy in evaluating the degree of risk is increased by also taking into account a predicted motion of a monitored target. In step S54, the accuracy in evaluating the degree of risk is increased by also taking into account a predicted route followed by the vehicle 12. In step S55, the accuracy in evaluating the degree of risk is increased by also taking into account an evaluation from the viewpoint of the driver. In step S56, the accuracy in evaluating the degree of risk is increased by also taking into account an evaluation from the viewpoint of the monitored target. The first image 272 or other items of input information, e.g., the vehicle speed V, the brake pedal depression depth Br, the yaw rate Yr, and distance information which is acquired from a GPS (Global Positioning System) or a distance measuring means, may be used in evaluating the degrees of risk in steps S53 through S56.
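The comprehensive evaluation of step S57 could be expressed, for example, as a weighted combination of the partial scores of steps S52 through S56, as in the following sketch; the weights, the 0-to-1 score scale, and the dictionary keys are illustrative assumptions rather than values from the publication.

```python
def overall_risk(scores, weights=None):
    """Combine the partial risk scores of steps S52-S56 (each assumed to be
    normalised to 0..1) into one comprehensive degree of risk.  The
    positional-relationship score receives the largest default weight,
    reflecting that the evaluation is basically distance-based; all
    weights are illustrative."""
    default_weights = {
        'position':  0.4,   # step S52: positional relationship with the vehicle
        'movement':  0.2,   # step S53: direction of the motion vector
        'route':     0.2,   # step S54: predicted route of the vehicle
        'locate':    0.1,   # step S55: ease of locating by the driver
        'recognize': 0.1,   # step S56: target's awareness of the vehicle
    }
    weights = weights or default_weights
    total_weight = sum(weights[k] for k in scores)
    return sum(weights[k] * v for k, v in scores.items()) / total_weight

risk_h1 = overall_risk({'position': 0.8, 'movement': 0.9, 'route': 0.7,
                        'locate': 0.5, 'recognize': 0.6})
risk_h2 = overall_risk({'position': 0.4, 'movement': 0.2, 'route': 0.3,
                        'locate': 0.5, 'recognize': 0.4})
print(risk_h1 > risk_h2)    # human area H1 is selected as having the higher risk
```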
In step S58, the attention degree evaluator 250 judges whether or not all of the processes of evaluating each monitored target have been completed. If the attention degree evaluator 250 judges that all of the evaluating processes have not been completed, then control returns to step S51, and steps S51 through S57 are repeated until all of the evaluating processes have been completed. If the attention degree evaluator 250 judges that all of the evaluating processes have been completed, then control proceeds to step S59.
In step S59, the comparative evaluator 254 selects at least one monitored target as having a high degree of risk from among a plurality of monitored targets. The comparative evaluator 254 may select only one monitored target, or two or more monitored targets, as having a high degree of risk. It is assumed that the human area H1 is selected from two monitored targets (human areas H1, H2).
In this manner, step S37 comes to an end. In step S38, the MID 28 displays a second image 291 at the present time in the second display region 264.
Prior to displaying the second image 291, the display mark determiner 256 determines the form of the mark to be displayed on the MID 28. In the first image 272 shown in
As a result, as shown in
The human icons 292, 294 may be displayed in different display modes in other ways, for example, by displaying the human icon 292 less noticeably than usual rather than displaying the human icon 294 more noticeably than usual, or by combining both. The different display modes for displaying the human icons 292, 294 may include, besides different colors, different shapes (e.g., sizes) or different visual effects (e.g., blinking or fluctuating), insofar as such modes of display can impart relative visibility differences to a plurality of marks.
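For example, a display style could be assigned per mark roughly as in the following sketch, in which the mark corresponding to the highest evaluated degree of risk is drawn in a highlight color and made to blink while the other marks are drawn normally; the colors and the blink attribute are illustrative assumptions.

```python
def mark_styles(risks, highlight_color='red', normal_color='white'):
    """Given the evaluated degree of risk for each displayed mark, return a
    display style per mark: the highest-risk mark is drawn in a highlight
    colour and made to blink, while the other marks are drawn normally.
    The colours and the blink flag are illustrative display attributes."""
    if not risks:
        return {}
    top = max(risks, key=risks.get)
    return {name: {'color': highlight_color if name == top else normal_color,
                   'blink': name == top}
            for name in risks}

print(mark_styles({'icon_292': 0.35, 'icon_294': 0.75}))
# icon_294 (higher risk) is highlighted; icon_292 is displayed normally.
```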
According to the second embodiment, when monitored targets (human areas H1, H2) are detected from two or more sub-regions (e.g., the central region 276 and the right region 278), the MID 28 displays the human icons 292, 294 in different display modes depending on the degree of risk evaluated by the attention degree evaluator 250. Consequently, the difference between the degrees of attention of the monitored targets can be indicated to the driver for assisting in driving the vehicle 12. The degree of risk (degree of attention) represents a possibility that a monitored target can collide with the vehicle 12.
The present invention is not limited to the aforementioned first and second embodiments, but may employ various arrangements based on the details of the disclosure of the present invention. For example, the present invention may employ the following arrangements.
[1. Objects in which the Vehicle Periphery Monitoring Apparatus can be Incorporated]
In the above embodiments, the vehicle 12 is assumed to be a four-wheel vehicle.
In the above embodiments, the vehicle periphery monitoring apparatus 10, 210 is incorporated in the vehicle 12. However, the vehicle periphery monitoring apparatus 10, 210 may be incorporated in another mobile object, insofar as the device detects a monitored target in the periphery of the mobile object and indicates the detected monitored target to the user. The mobile object may be a ship or an aircraft, for example.
In the above embodiments, the two infrared cameras 16L, 16R are used as image capturing means for capturing images in the periphery of the vehicle 12. However, the image capturing means are not limited to infrared cameras 16L, 16R, insofar as the image capturing means are capable of capturing images in the periphery of the vehicle 12. For example, the image capturing means may be multiocular (stereo camera) or monocular (single camera). Instead of infrared cameras, the image capturing means may comprise cameras (color cameras), which use light having wavelengths primarily in the visible range, or may comprise both color and infrared cameras.
In the above embodiments, the general-purpose monitor 26 is used to display the grayscale image 72 from the infrared camera 16L. However, any type of display unit may be used, insofar as the display unit is capable of displaying images captured by image capturing means. In the above embodiments, the highlighting frame 76 is displayed within the grayscale image 72 that is displayed on the general-purpose monitor 26. However, the grayscale image 72 from the infrared camera 16L may be displayed in an unmodified form on the general-purpose monitor 26 without any highlighting features added thereto.
In the above embodiments, a relatively versatile display unit, which operates in a non-interlace mode, is used as the MID 28 for displaying biological icons (marks). However, a plurality of (e.g., three) indicators, which are arranged in an array for displaying only biological icons, may be used instead of the MID 28. Alternatively, a head-up display (HUD), such as that shown in FIG. 2 of Japanese Laid-Open Patent Publication No. 2004-364112, may be used in place of the MID 28.
In the above embodiments, the general-purpose monitor 26 and the MID 28 both are used. However, only the MID 28 may be used. If only the MID 28 is used, then the grayscale image 72 acquired by the infrared camera 16L is displayed on the MID 28.
Number | Date | Country | Kind
--- | --- | --- | ---
2011-206620 | Sep 2011 | JP | national
2011-224528 | Oct 2011 | JP | national
2011-238600 | Oct 2011 | JP | national

Filing Document | Filing Date | Country | Kind | 371c Date
--- | --- | --- | --- | ---
PCT/JP2012/074229 | 9/21/2012 | WO | 00 | 3/20/2014