1. Field of the Invention
The present invention relates to an in-vehicle image display device that displays images captured with in-vehicle cameras.
2. Background Art
Display systems that display an image of the peripheral area of a vehicle to assist drivers in checking that area have come into widespread use; in such systems, a wide-angle camera is mounted on the vehicle and an image captured with the camera is displayed on a monitor installed on the side of the driver's seat. Monitors installed on the side of the driver's seat are typically compact, with a screen size of about 7 to 10 inches, due to the restrictions of the interior space of the vehicle.
Such a display system has one or more cameras mounted on the vehicle. When two or more cameras are mounted, the monitor installed on the side of the driver's seat is configured to display one of the camera images selected with a manual switching operation by the driver, to display two or more camera images side by side, or to display a single image formed by combining the two or more camera images.
Reference 1 (JP Patent Publication (Kokai) No. 2007-288611 A) discloses a technique for processing, with a signal processing device, images captured with wide-angle cameras installed on the four sides (front, rear, left, and right) of a vehicle; displaying a top-view image formed through a viewpoint conversion process such that the resulting image appears to be viewed from a virtual viewpoint right above the vehicle; and displaying, alongside the top-view image, at least one of the images captured with the cameras on the front side, rear side, left side, and right side of the vehicle.
A top-view image is an image formed by combining the camera images of the front side, rear side, left side, and right side of the vehicle based on the ground surface around the vehicle as a reference. Using a top-view image is advantageous in that the driver can recognize at a glance the conditions of the entire peripheral area of the vehicle, while it is disadvantageous in that stereoscopic objects can be displayed in a distorted manner as a result of the viewpoint conversion and an image of the area above the ground surface cannot be displayed.
According to Reference 1, images captured with the cameras installed on the front side, rear side, left side, and right side of the vehicle are displayed alongside the top-view image to compensate for the aforementioned disadvantages of the top-view image. Specifically, Reference 1 discloses a technique of selecting a camera image to be displayed alongside the top-view image in accordance with the vehicle speed or the vehicle travelling direction. For example, when the vehicle drives in reverse to the right, the cameras on the right side and rear side of the vehicle are selected.
With regard to a display system that displays an image of the peripheral area of a vehicle using a top-view image, Reference 2 discloses a method using an obstacle sensor installed on the vehicle, which includes enlarging and displaying, upon detection of an obstacle around the vehicle with the obstacle sensor, the peripheral area of the detected obstacle around the vehicle on the top-view image.
According to the technique of Reference 1, the camera image to be displayed alongside the top-view image is switched based on the vehicle traveling direction as a reference. However, while driving a vehicle, the driver should carefully watch not only the vehicle travelling direction but also the entire peripheral area, including the front side, rear side, left side, and right side, of the vehicle to avoid a collision with a nearby structure or another vehicle. Further, the portion of the vehicle that the driver should watch carefully to avoid a collision with another vehicle can change from moment to moment in accordance with the vehicle driving state. Thus, with the technique disclosed in Reference 1, a shortage of images needed by the driver may occur (the first problem).
Further, with the technique of Reference 1, there is also a concern that the driver, when viewing the top-view image and the camera images of the front side, rear side, left side, and right side of the vehicle captured from different viewpoints, may not easily recognize the correspondence relationship between the top-view image and the camera images (the second problem). In particular, the second problem becomes more significant when the driver should carefully maneuver the vehicle or watch the positional relationship between the vehicle and a parking bay while parking.
Next, with the technique of Reference 2, if the top-view image is enlarged too closely around the obstacle, it is difficult for the driver to recognize the position of the obstacle in relation to the vehicle. Meanwhile, if the degree of enlargement of the top-view image is too small, the driver cannot visually check the obstacle unless he/she pays particular attention to it. The currently available monitors for display systems that display an image of the peripheral area of a vehicle are as small as 7 to 10 inches in size as described above, and such monitor sizes are unlikely to increase in the future due to the restrictions on the installation space. Thus, the technique of Reference 2 poses a third problem: the difficulty of allowing the driver to recognize the positional relationship between the vehicle and the obstacle while at the same time displaying a detailed image of the obstacle.
The present invention has been made in view of the aforementioned problems. It is an object of the present invention to provide an in-vehicle image display device that is capable of providing, from among images of the peripheral area of the vehicle that can change in accordance with the driving state, an image of a part needed by the driver at an appropriate timing so that the driver can recognize the positional relationship between the vehicle and the peripheral area of the vehicle.
A periphery image display device in accordance with the present invention that solves the aforementioned problems is an in-vehicle image display device that displays images captured with a plurality of in-vehicle cameras, which includes an image acquisition unit configured to acquire images captured with the in-vehicle cameras, a vehicle periphery image generation unit configured to generate an image of the peripheral area of the vehicle based on the images acquired with the image acquisition unit, a collision-warned part selection unit configured to select a collision-warned part of the vehicle that has a possibility of hitting a nearby object around the vehicle based on a driving state of the vehicle, an enlarged image generation unit configured to process at least one of the images acquired with the image acquisition unit to generate an enlarged image of a peripheral area of the collision-warned part of the vehicle selected by the collision-warned part selection unit, a composite display image generation unit configured to generate a composite display image composed of the enlarged image generated by the enlarged image generation unit and the image of the peripheral area of the vehicle generated by the vehicle periphery image generation unit, the composite display image being displayed in a form in which positions of the enlarged image and the image of the peripheral area of the vehicle are correlated with each other, and a display unit configured to display the composite display image generated by the composite display image generation unit.
According to the present invention, a collision-warned part of a vehicle that has a possibility of hitting a nearby object around the vehicle is selected based on the vehicle driving state, and an enlarged image of the peripheral area of the selected collision-warned part of the vehicle and the image of the peripheral area of the vehicle are displayed in a form in which the positions thereof are correlated with each other. Thus, it is possible to provide, from among images of the peripheral area of the vehicle that can change in accordance with the driving state, an image of a part needed by the driver at an appropriate timing so that the driver can recognize the positional relationship between the vehicle and the peripheral area of the vehicle.
Hereinafter, specific embodiments of the in-vehicle image display device in accordance with the present invention will be described with reference to the accompanying drawings. Although this embodiment will describe an automobile as an example of a vehicle, “vehicles” in accordance with the present invention are not limited to automobiles and can include a variety of kinds of moving objects that drive on the ground surface.
[Embodiment 1]
The in-vehicle image display device includes, as shown in
The functions of the vehicle signal acquisition unit 1, the driving state estimation unit 2, the collision-warned part selection unit 3, the image acquisition unit 4, the periphery image generation unit 5, the enlarged image generation unit 6, and the composite display image generation unit 7 are realized with one or both of the computers within the camera and the vehicle. The function of the display unit 8 is realized with at least one of a monitor screen, such as a car navigation screen, and a speaker within the vehicle.
The in-vehicle image display device performs a process of selecting as a collision-warned part a part of the vehicle that has a possibility of hitting a nearby object by acquiring a vehicle signal of the vehicle with the vehicle signal acquisition unit 1, estimating the vehicle driving state based on the time series of the vehicle signal with the driving state estimation unit 2, and referring to a reference table set in advance with the collision-warned part selection unit 3 based on the vehicle driving state.
Then, the in-vehicle image display device acquires images captured with the cameras installed on the vehicle with the image acquisition unit 4, generates an image of the peripheral area of the vehicle with the periphery image generation unit 5 based on the camera images, and generates an enlarged image of the peripheral area of the collision-warned part with the enlarged image generation unit 6.
Then, the in-vehicle image display device performs a process of generating a composite display image that is composed of both the vehicle periphery image and the enlarged image and represents the correspondence relationship between the periphery image and the enlarged image. That is, the in-vehicle image display device performs a process of generating a composite display image that can be displayed in a form in which the positions of the periphery image and the enlarged image are correlated with each other, and displaying such a composite display image with the display unit 8.
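The overall flow of units 1 through 8 described above can be sketched as follows. This is only an illustrative sketch: the class names, state labels, and table entries below are assumptions for explanation, not the actual in-vehicle implementation.

```python
# Illustrative sketch of the processing flow: acquire vehicle signals (unit 1),
# estimate a driving state from their time series (unit 2), and look up the
# collision-warned part in a preset reference table (unit 3). All names and
# threshold values here are assumptions.
from dataclasses import dataclass

@dataclass
class VehicleSignal:
    speed_kmh: float      # e.g. from a wheel-speed sensor
    gear: str             # e.g. from the gear control device: "D", "R", ...
    steering_deg: float   # e.g. from a steering angle sensor (+ = right)

def estimate_driving_state(history):
    """Driving state estimation unit 2: classify from the signal time series."""
    latest = history[-1]
    if latest.gear == "R":
        return "reverse"
    if latest.gear == "D" and latest.speed_kmh < 10.0:
        return "reduced_speed_forward"
    return "normal"

# Collision-warned part selection unit 3: reference table set in advance.
COLLISION_WARNED_TABLE = {
    "reverse": ["rear_37"],
    "reduced_speed_forward": ["front_35"],
    "normal": [],
}

def select_collision_warned_parts(state):
    return COLLISION_WARNED_TABLE.get(state, [])
```

The selected parts would then drive the enlarged image generation unit 6 and the composite display image generation unit 7 before the result reaches the display unit 8.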
Next, each of the aforementioned functions will be described in detail.
The vehicle signal acquisition unit 1 acquires vehicle signals such as the vehicle speed, gear position, and steering angle from vehicle sensors or a control device (e.g., acquires the vehicle speed from a vehicle speed sensor of a vehicle wheel or the like, acquires the gear position from a gear control device, and acquires the steering angle from a steering angle sensor) at predetermined time intervals.
The driving state estimation unit 2 constantly reads vehicle signals acquired with the vehicle signal acquisition unit 1 at predetermined time intervals, and estimates a vehicle driving state from the time series of the vehicle signals. For example, the driving state estimation unit 2 can estimate a driving state such as a driving state in perpendicular parking (
First, a method for estimating a driving state in perpendicular parking (left-in reverse parking) will be described with reference to
In
The driving state estimation unit 2 constantly keeps determining a state transition based on the time series of vehicle signals acquired with the vehicle signal acquisition unit 1, from the time the engine of the vehicle 41 is started till it is stopped. In the state transition diagram of
In the state transition diagram of
After the vehicle has entered the reverse state C2 in the state transition diagram of
The perpendicular entry state C3 corresponds to the scenes of
In the perpendicular entry state C3, it is presumed that the vehicle 41 is swinging to the left at a large steering angle and entering the parking bay 70 from the side of the rear part 37 of the vehicle 41. The scene of
The hit of the left rear part 34 of the vehicle 41 against the vehicle 61 can result in a collision accident, and the hit of the right front part 32 of the vehicle 41 against the gutter 60 can result in the tire coming off the wheel. Thus, the driver should carefully maneuver the vehicle. The scene of
In the perpendicular entry state C3 in the state transition diagram of
In the reverse state C4, it is presumed that the vehicle 41 is advancing deeper after the vehicle 41 is positioned substantially parallel with the parking bay 70 with the driver's turn of the steering wheel at a large steering angle. The reverse state C4 corresponds to the scenes of
In the reverse state C4 in the state transition diagram shown in
Described next with reference to
The driving state estimation unit 2 constantly keeps determining a state transition based on the time series of vehicle signals acquired with the vehicle signal acquisition unit 1, from the time the engine of the vehicle 41 is started till it is stopped. In the state transition diagram of
In the initial state C11 in the state transition diagram of
In the normal driving state C12 in the state transition diagram of
In the state transition diagram of
The reduced-speed driving state C13 corresponds to the scene of
In the reduced-speed driving state C13 in the state transition diagram of
The steering angle Sb is the threshold set in advance by estimating the steering angle of when the driver pulls the vehicle 41 alongside an oncoming vehicle to finely adjust the horizontal position of the vehicle 41 within the lane. The steering angle Sb is set smaller than the large steering angle of when, for example, the vehicle makes a left turn at an intersection.
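The role of thresholds like Sb (and the time duration Tb described next) can be sketched as a sustained-steering test. The numeric values and the sampling scheme below are illustrative assumptions; the text only constrains Sb to be smaller than an intersection-turn steering angle and Tb to be longer than an instantaneous lane-change steer.

```python
# Illustrative sketch of a sustained-steering test with thresholds like Sb
# and Tb. Values are assumptions: sb_deg below a typical intersection-turn
# angle, tb_sec above the duration of a brief lane-change steer.
def sustained_left_steer(samples_deg, period_sec, sb_deg=30.0, tb_sec=2.0):
    """samples_deg: leftward steering angles sampled at a fixed period."""
    run = 0.0
    for angle in samples_deg:
        run = run + period_sec if angle >= sb_deg else 0.0
        if run >= tb_sec:
            return True   # steering held past Sb for at least Tb
    return False
```

An interruption in the steering input resets the accumulated duration, so a momentary steer to change lanes does not trigger the transition.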
The time duration Tb is the threshold set in advance such that it is longer than the time required to steer the vehicle instantaneously to change lanes, for example. When the vehicle is in the passing state C14 in the state transition diagram of
In the passing state C14, it is presumed that the driver is carefully passing alongside the oncoming vehicle 45 while finely adjusting the steering with the reduced speed of the vehicle 41. The passing state C14 corresponds to the scenes of
In the scene of
In the passing (pulling to the left) state C14 in the state transition diagram of
In the reduced-speed driving state C13 in the state transition diagram of
Described next with reference to
The driving state estimation unit 2 constantly keeps determining a state transition based on the time series of vehicle signals acquired with the vehicle signal acquisition unit 1, from the time the engine of the vehicle 41 is started till it is stopped. In the state transition diagram of
In the state transition diagram of
After the vehicle has entered the reverse state C22 in the state transition diagram of
In the parallel parking (left side, the first half) state C23, it is presumed that the vehicle 41 is being parked into the parking bay 80 from the side of the rear part 37 with a swing of the vehicle 41 to the left. The scene of
In the parallel parking (left side, the first half) state C23 in the state transition diagram of
In the reverse state C24, it is presumed that the driver is determining the timing of when to turn the steering back to the right by switching the vehicle state from the parallel parking (left side, the first half) state C23. In the reverse state C24, the vehicle 41 moves substantially straight back. Thus, the rear part 37 of the vehicle 41 may hit a nearby object.
After the vehicle has entered the reverse state C24 in the state transition diagram of
While the vehicle 41 keeps driving backwards, the parallel parking (left side, the second half) state C25 is continued. The parallel parking (left side, the second half) state C25 corresponds to the scenes of
The scene of
In the parallel parking (left side, the second half) state C25 in the state transition diagram of
Described next with reference to
The driving state estimation unit 2 constantly keeps determining a state transition based on the time series of vehicle signals acquired with the vehicle signal acquisition unit 1, from the time the engine of the vehicle 41 is started till it is stopped. In the state transition diagram of
In the state transition diagram of
The vehicle speed Vd is the threshold set in advance. The vehicle speed Vd is set at a speed at which the vehicle 41 can be regarded as being forward parked while the driver carefully watches the surrounding. In the reduced-speed forward driving state C42, it is presumed that the driver is determining the timing of when to start steering while recognizing the positional relationship between the parking bay 90 and the vehicle 41. In the reduced-speed forward driving state C42, the vehicle 41 moves substantially straight ahead. Thus, the front part 35 of the vehicle 41 may hit a nearby object.
After the vehicle has entered the reduced-speed forward driving state C42 in the state transition diagram of
While the steering angle is continuously leftward and is greater than or equal to the steering angle Sd, the forward parking (left-in) state C43 is continued. In the sequence of the driving states in and before/after forward parking (left-in) shown in
In the forward parking (left-in) state C43, it is presumed that the vehicle 41 is being parked into the parking bay 90 from the side of the front part 35 with a swing of the vehicle 41 to the left. The scene of
The scene of
It should be noted that in forward parking, the rear part of the vehicle travels closer to the inner side of a swing than does the front part of the vehicle due to the inner wheel difference. Thus, even when the left front part 31 does not hit the vehicle 92 in
In typical automobiles, the swing arc 1014 of the left rear tire 1004 whose direction in relation to the vehicle 41 does not change by a vehicle maneuver has a smaller radius than the swing arc 1011 of the left front tire 1001 that is controlled to point to the left by a vehicle maneuver. That is the inner wheel difference. Due to the inner wheel difference, the rear part of the left side of the vehicle 41 is located closer to the center 1000 of a swing than is the front side. In
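The inner wheel difference described above can be quantified with a simple single-track (bicycle) model, in which the swing center lies on the extension of the rear axle. The wheelbase and steering angle below are illustrative assumptions, and a real vehicle's per-tire radii also depend on the track width.

```python
# Single-track-model sketch of the inner wheel difference. The swing center
# (cf. center 1000) lies on the extension of the rear axle, so the rear tire
# arc (cf. arc 1014) is tighter than the front tire arc (cf. arc 1011).
import math

def swing_radii(wheelbase_m, steer_rad):
    r_rear = wheelbase_m / math.tan(steer_rad)   # rear tire turning radius
    r_front = wheelbase_m / math.sin(steer_rad)  # front tire turning radius
    return r_front, r_rear

# Illustrative numbers: 2.7 m wheelbase, 30 degrees of left steer.
r_front, r_rear = swing_radii(2.7, math.radians(30.0))
inner_wheel_difference = r_front - r_rear   # about 0.72 m for these values
```

The difference grows with the steering angle, which is why the left part 38 needs attention when the vehicle swings left into a parking bay.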
In the forward parking (left-in) state C43 in the state transition diagram of
In the reduced-speed forward driving state C44, it is presumed that the vehicle 41 is advancing deeper into the parking bay 90 after the vehicle 41 is positioned substantially parallel with the parking bay 90 with the driver's turn of the steering wheel at a large steering angle. The reduced-speed forward driving state C44 corresponds to the scenes of
In the reduced-speed forward driving state C44 in the state transition diagram of
It should be noted that in the state transition process shown in each of the state transition diagrams of
The driving state estimation unit 2 repeats estimation of a state transition in at least one driving state sequence registered in advance such as a state transition in the sequence of the driving states in and before/after perpendicular parking (left-in) shown in
It should be noted that the sequence of the driving states registered in the driving state estimation unit 2 is not limited to perpendicular parking (left-in reverse parking) shown in
The collision-warned part selection unit 3, in accordance with the driving state estimated with the driving state estimation unit 2, refers to a reference table set in advance to determine a part of the vehicle 41 that has a possibility of hitting a nearby object, and outputs that part as the collision-warned part. The reference table is stored in storage means (not shown) such as ROM or RAM of the computer.
For example, when the driving state is the reverse state, the rear part 37 of the vehicle 41 may hit a nearby object as shown in the exemplary scenes of
When the driving state is the perpendicular parking (left-in reverse parking) state, the right front part 32 or the left rear part 34 may hit a nearby object as shown in the exemplary scene of
When the driving state is the state of driving past an oncoming vehicle (pulling to the left), the left front part 31 may hit a nearby object as shown in the exemplary scene of
When the driving state is the parallel parking (left side, the first half) state, the right front part 32 or the left rear part 34 may hit a nearby object or another vehicle as shown in the exemplary scene of
When the driving state is the parallel parking (left side, the second half) state, the left front part 31, the left rear part 34, or the rear part 37 may hit another vehicle or nearby object as shown in the exemplary scenes of
When the driving state is the reduced-speed forward driving state, the front part 35 may hit a nearby object as shown in the exemplary scenes of
When the driving state is the forward parking (left-in) state, the left front part 31, the right front part 32, or the left part 38 may hit another vehicle or a nearby object as shown in the exemplary scenes of
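The mapping just listed can be sketched as the reference table that the collision-warned part selection unit 3 consults. The part labels follow the reference numerals in the text; the dictionary layout and key strings are illustrative assumptions.

```python
# Sketch of the preset reference table (stored in ROM/RAM) mapping each
# estimated driving state to the collision-warned parts listed in the text.
# Key strings and the dict layout are assumptions for illustration.
REFERENCE_TABLE = {
    "reverse":                        ["rear_37"],
    "perpendicular_parking_left_in":  ["right_front_32", "left_rear_34"],
    "passing_pulling_left":           ["left_front_31"],
    "parallel_parking_left_1st_half": ["right_front_32", "left_rear_34"],
    "parallel_parking_left_2nd_half": ["left_front_31", "left_rear_34", "rear_37"],
    "reduced_speed_forward":          ["front_35"],
    "forward_parking_left_in":        ["left_front_31", "right_front_32", "left_38"],
}

def collision_warned_parts(driving_state):
    """Collision-warned part selection unit 3: table lookup by driving state."""
    return REFERENCE_TABLE.get(driving_state, [])
```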
The image acquisition unit 4 acquires images captured with the cameras installed on the vehicle 41.
Each of the cameras 51 to 54 is a wide-angle camera installed such that it can capture an image of the peripheral area of the front part 35, the right part 36, the rear part 37, or the left part 38 of the vehicle 41 within the angle of view. Further, one or both of the camera 51 on the front part and the camera 54 on the left part is/are installed such that the camera(s) can capture an image of the peripheral area of the left front part 31 of the vehicle 41 within the angle of view.
Similarly, one or both of the camera 51 on the front part and the camera 52 on the right part, one or both of the camera 52 on the right part and the camera 53 on the rear part, and one or both of the camera 53 on the rear part and the camera 54 on the left part are installed such that the cameras can capture images of the peripheral areas of the right front part 32, the right rear part 33, and the left rear part 34 of the vehicle 41, respectively.
The periphery image generation unit 5 generates an image of the peripheral area of the vehicle 41 (vehicle periphery image) in a predetermined time cycle from the images acquired with the image acquisition unit 4. The periphery image generation unit 5 processes the images captured with the cameras 51, 52, 53, and 54 on the front part, right part, rear part, and left part of the vehicle 41 to generate a top-view image through a viewpoint conversion process such that the resulting image appears to be viewed from a virtual viewpoint above the vehicle 41 with the ground around the vehicle 41 as a reference. For geometric data needed for the viewpoint conversion process, data calculated in advance is used.
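The viewpoint conversion step can be sketched as ground-plane resampling: for each output pixel of the top view, the corresponding ground point around the vehicle is mapped into the source camera image through a precomputed transform (standing in for the "geometric data calculated in advance"). The homography form, parameter names, and nearest-neighbor sampling below are illustrative assumptions; a real system also blends the four camera images into one top view.

```python
# Illustrative top-view (viewpoint conversion) sketch for one camera.
# H is an assumed precomputed homography taking ground coordinates (metres)
# to source-image pixels; nearest-neighbor sampling keeps the sketch short.
import numpy as np

def top_view(camera_img, H, out_shape, scale_px_per_m, origin_px):
    h_out, w_out = out_shape
    out = np.zeros((h_out, w_out), dtype=camera_img.dtype)
    for v in range(h_out):
        for u in range(w_out):
            # output pixel -> ground coordinates, vehicle origin at origin_px
            gx = (u - origin_px[0]) / scale_px_per_m
            gy = (v - origin_px[1]) / scale_px_per_m
            p = H @ np.array([gx, gy, 1.0])
            xi, yi = int(round(p[0] / p[2])), int(round(p[1] / p[2]))
            if 0 <= yi < camera_img.shape[0] and 0 <= xi < camera_img.shape[1]:
                out[v, u] = camera_img[yi, xi]
    return out
```

Because the mapping assumes every point lies on the ground plane, stereoscopic objects appear distorted, which is the disadvantage of top-view images noted in the background section.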
The enlarged image generation unit 6 processes the image acquired with the image acquisition unit 4 to generate an enlarged image of the peripheral area of a specific part of the vehicle 41 in accordance with the collision-warned part, which has a possibility of hitting a nearby object, of the vehicle 41 output from the collision-warned part selection unit 3.
When the collision-warned part selection unit 3 has output the left front part 31 as the collision-warned part, the enlarged image generation unit 6 generates, with reference to the correspondence table of
It should be noted that the correspondence table of
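A minimal sketch of such a correspondence table is shown below. The camera numbers follow the text (51 front, 52 right, 53 rear, 54 left), while the crop rectangles and the choice between the two candidate cameras for each corner part are illustrative assumptions.

```python
# Sketch of the correspondence table used by the enlarged image generation
# unit 6: collision-warned part -> (source camera, crop rectangle).
# Crop coordinates (x0, y0, x1, y1) are placeholder assumptions.
CORRESPONDENCE_TABLE = {
    "left_front_31":  (54, (0, 0, 320, 240)),    # could also use camera 51
    "right_front_32": (51, (320, 0, 640, 240)),  # could also use camera 52
    "right_rear_33":  (53, (320, 0, 640, 240)),  # could also use camera 52
    "left_rear_34":   (53, (0, 0, 320, 240)),    # could also use camera 54
    "front_35":       (51, (160, 0, 480, 240)),
    "right_36":       (52, (160, 0, 480, 240)),
    "rear_37":        (53, (160, 0, 480, 240)),
    "left_38":        (54, (160, 0, 480, 240)),
}

def enlarged_image(images_by_camera, part):
    """Crop the region around the collision-warned part; the crop is then
    scaled up for display (scaling omitted in this sketch)."""
    cam, (x0, y0, x1, y1) = CORRESPONDENCE_TABLE[part]
    img = images_by_camera[cam]  # image as a list of pixel rows
    return [row[x0:x1] for row in img[y0:y1]]
```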
The composite display image generation unit 7 generates a composite display image composed of an image of the peripheral area of the vehicle generated by the periphery image generation unit 5 and an enlarged image of the peripheral area of a specific part of the vehicle 41 generated by the enlarged image generation unit 6.
In
The composite display image 102 is displayed in a form in which the positions of the vehicle periphery image 100 and the enlarged images 232, 233, and 234 are correlated with each other. Specifically, the composite display image 102 is displayed such that the peripheral areas of the collision-warned parts of the vehicle 41 are displayed in detail, and the markers 332, 333, and 334 on the vehicle periphery image 100 are linked to the enlarged images 232, 233, and 234 shown on the display space 101 so that the driver can recognize at a glance the correspondence relationship between the enlarged images 232, 233, and 234 and the corresponding parts of the vehicle 41.
It should be noted that the composite display image generation unit 7 can be configured to, instead of depicting the connecting lines 432, 433, and 434, correlate the enlarged images 232, 233, and 234 and the corresponding parts of the vehicle 41 by unifying the design (e.g. the color or the kind of line) of the markers 332, 333, and 334 and the design (the color of the outer frame or the kind of line) of the corresponding enlarged images 232, 233, and 234.
The composite display image generation unit 7 can also be configured to perform a process such as rotation or inversion so that the top and bottom edges or the right and left edges of the enlarged image 234 or the like coincide with those of the vehicle periphery image 100. Furthermore, the composite display image generation unit 7 can also be configured to perform a process such as viewpoint conversion by which the viewpoint of the enlarged image 232 or the like is converted into a virtual viewpoint of the vehicle periphery image 100 or a virtual viewpoint that is close to the virtual viewpoint of the vehicle periphery image 100 so that the enlarged image 232 or the like looks the same way as the vehicle periphery image 100 does.
Reference numeral 254 denotes elliptical guide lines on the enlarged image 234 depicted in positions at about the same distance from the vehicle 41 as the substantially concentric guide lines 154. The number of the guide lines 154 on the vehicle periphery image 100 and the number of the guide lines 254 on the enlarged image 234 are the same, and the kind or color of the corresponding lines are also the same.
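The consistency between the guide lines 154 and 254 can be sketched by deriving both sets from one shared list of ground distances, so the two views always show the same number of lines in the same colors at the same distances from the vehicle. The specific distances, colors, and per-view scales below are illustrative assumptions.

```python
# Sketch: one shared list of ground distances drives the guide lines in both
# the vehicle periphery image (154) and the enlarged image (254). Distances,
# colors, and scale values are assumptions for illustration.
GUIDE_DISTANCES_M = [0.5, 1.0, 1.5]
GUIDE_COLORS = ["red", "yellow", "green"]

def guide_lines(scale_px_per_m):
    """Return (radius_px, color) pairs for one view at its own scale."""
    return [(d * scale_px_per_m, c)
            for d, c in zip(GUIDE_DISTANCES_M, GUIDE_COLORS)]

periphery_lines = guide_lines(20)  # assumed top-view scale: 20 px per metre
enlarged_lines = guide_lines(60)   # assumed enlarged-view scale: 60 px per metre
```

Because both sets come from the same distances and colors, a distance read off the enlarged image can be located immediately on the vehicle periphery image.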
On the composite display image 102 of
It should be noted that the number of the guide lines 154 on the vehicle periphery image 100 and the number of the guide lines 254 on the enlarged image 234 in
In
On the composite display image 102 shown in
Further, on the composite display image 102 shown in
The semi-transparent image 264 of the vehicle 41 shown in
Further, as which part of the vehicle periphery image 100 corresponds to the enlarged image 234 can be clearly seen from the semi-transparent portion 164 of the icon 141 and the semi-transparent image 264 of the vehicle 41 shown in
The guide lines 154 and the guide lines 254 shown in
The composite display image generation unit 7 reads images from the periphery image generation unit 5 and from the enlarged image generation unit 6 in a predetermined operation cycle to update the displayed image. Further, the composite display image generation unit 7 updates the layout of the composite display image each time the driving state estimated by the driving state estimation unit 2 changes and the collision-warned part selection unit 3 changes the collision-warned part that has a possibility of hitting a nearby object.
It should be noted that in the composite display image generation unit 7, the layouts of the vehicle periphery image 100 and the display space 101 are not limited to the examples shown in
The display space 101 can be configured to always display any image while it does not display the enlarged images 232, 233, 234, and the like. For example, the display space 101 can be configured to always display an image captured with the camera 53 on the rear part or a screen of a car navigation system and display an enlarged image only when the vehicle is in a given driving state.
The display unit 8 displays for the driver the composite display image 102 formed through a composite process by the composite display image generation unit 7. It should be noted that the display unit 8 may have two or more screens. For example, the vehicle periphery image 100 can be output to a car navigation screen and the enlarged image 232 or the like can be displayed on a liquid crystal display beside the vehicle speed meter.
In Embodiment 1, a collision-warned part of the vehicle 41 that has a possibility of hitting a nearby object is estimated in accordance with the driving state of the vehicle 41 with the aforementioned functional configuration. Then, an enlarged image of the peripheral area of the collision-warned part, a marker indicating the collision-warned part on the image of the peripheral area of the vehicle 41, a connecting line that indicates the correspondence relationship between the marker of the peripheral area of the vehicle 41 and the enlarged image, and the like are depicted.
Therefore, the composite display image 102 can be automatically displayed at an appropriate timing while the vehicle is driving such that the driver can recognize at a glance the detailed enlarged image 232 or the like of the peripheral area of the collision-warned part as well as the positional relationship between the enlarged image 232 or the like and the vehicle 41.
When the aforementioned in-vehicle image display device according to Embodiment 1 is compared with the technique of Reference 1 in which a displayed camera image is switched based on the vehicle traveling direction as a reference, it would be difficult, in the scene shown in
In contrast, with the in-vehicle image display device according to Embodiment 1, both the right rear part 33 and the left rear part 34 in the travelling direction of the vehicle 41 and the right front part 32 of the vehicle 41 on the opposite side are enlarged for display based on the estimation of the driving state in perpendicular parking (left-in reverse parking) with the driving state estimation unit 2 in the scene of
It should be noted that although
For example, when a single wide-angle camera that can capture an image of the peripheral areas of the right rear part 33, the rear part 37, and the left rear part 34 is installed on the rear part of the vehicle 41, and the collision-warned part selection unit 3 designates the right rear part 33, the rear part 37, or the left rear part 34 as the collision-warned parts in accordance with the driving state estimated with the driving state estimation unit 2, it is acceptable as long as the enlarged image of the right rear part 33, the rear part 37, or the left rear part 34 is combined with the vehicle periphery image 100 for display on the display space 101 as shown in
It should be noted that when the cameras installed on the vehicle 41 cannot capture images of the entire peripheral area of the vehicle 41 and a blind spot exists, and the collision-warned part selection unit 3 designates the portion in the blind spot as the collision-warned part, the enlarged image generation unit 6 and the composite display image generation unit 7 do not generate an enlarged image and a composite display image, respectively. Alternatively, when such a blind spot exists, the periphery image generation unit 5 blanks the portion of the blind spot on the vehicle periphery image 100.
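The blind-spot handling can be sketched as a simple guard before enlarged image generation. The set of blind-spot parts is an illustrative assumption standing in for whatever the camera coverage actually leaves uncovered.

```python
# Sketch of blind-spot handling: if the designated collision-warned part lies
# in a camera blind spot, skip enlarged-image generation (as units 6 and 7 do
# in the text). BLIND_SPOT_PARTS is an assumed coverage gap for illustration.
BLIND_SPOT_PARTS = {"right_rear_33"}

def maybe_generate_enlarged(part, generate):
    """generate: callable producing the enlarged image for a visible part."""
    if part in BLIND_SPOT_PARTS:
        return None   # no enlarged image (and no composite) for blind spots
    return generate(part)
```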
In the description of Embodiment 1, a top-view image formed from images captured with the cameras 51, 52, 53, and 54 on the front part, right part, rear part, and left part is used as the vehicle periphery image 100 generated by the periphery image generation unit 5. However, the periphery image generation unit 5 can be configured to generate as the vehicle periphery image 100 an image other than the top-view image as long as such an image allows the peripheral area of the vehicle 41 to be recognized at a glance when arranged on the periphery of the icon 141 of the vehicle 41.
In Embodiment 1, the driving state estimation unit 2 can be configured to have, using a menu screen G1 such as the one shown in
A user interface screen for adjusting the parameters of the driving state estimation unit 2 is not limited to the example of the menu screen G1. For example, as shown in
Alternatively, for adjustment of the parameters of the driving state estimation unit 2, all of the parameters G22 can be displayed collectively on a menu screen G21, like the user interface screen of
It should be noted that the characters used for the parameters G2, G12, and G22 are only exemplary and other characters can be used. In addition, for the slide bars G3, G13, and G23, other graphical user interfaces such as menu buttons can be used.
The driver can adjust the parameters G2, G12, or G22 of the driving state estimation unit 2 using the menu screen G1, G11, or G21 so that the enlarged image 234 or the like can be displayed at an appropriate timing in accordance with the driving style of each driver. The user interface screen G1 of
[Embodiment 2]
Next, Embodiment 2 will be described.
One or both of a nearby object recognition unit 9 and a nearby object sensor 10 is/are included in the functional configuration of Embodiment 2 and constitute(s) an object detection unit. The nearby object recognition unit 9 receives as an input a camera image acquired with the image acquisition unit 4, and recognizes the kind of an object, for which the possibility of a collision with the vehicle 41 cannot be ignored, out of objects that exist around the vehicle 41 through an image recognition process.
The nearby object recognition unit 9 recognizes stereoscopic objects existing around the vehicle 41 such as the vehicles 45, 61, 62, 81, 82, 91, and 92 around the vehicle 41 or the guard rail 63 as shown in
The nearby object recognition unit 9, upon detecting an object around the vehicle 41, identifies whether the detected object is located in any of the peripheral areas of the collision-warned parts of the vehicle 41, such as the left front part 31 and the rear part 37, from the positional relationship between the vehicle 41 and the camera installed on the vehicle 41, such as the camera 51 on the front part or the camera 52 on the right part. The nearby object recognition unit 9 outputs to a collision-warned part selection unit 3b information indicating that it has detected a nearby object as well as information about which part of the vehicle 41 the detected object is located close to.
The nearby object sensor 10 recognizes stereoscopic objects existing around the vehicle 41 such as the vehicles 45, 61, and 62 and the guard rail 63 (see
For a sensor used as the nearby object sensor 10, the positional relationship between the sensor and the vehicle 41, e.g., information about which part of the vehicle 41 the sensor is installed on and which part of the vehicle 41 is to be measured, is determined in advance. The nearby object sensor 10, upon detecting a nearby object around the vehicle 41, identifies the direction of the detected object in relation to the vehicle 41 (e.g., the left front part 31 or the rear part 37). Then, the nearby object sensor 10 outputs information indicating that it has detected a nearby object as well as information about which part of the vehicle 41 the detected object is located close to.
The collision-warned part selection unit 3b has, in addition to the aforementioned function of the collision-warned part selection unit 3 of Embodiment 1, a function of narrowing the collision-warned parts of the vehicle 41 set in accordance with the driving state down to a part of the vehicle 41 around which the object has been detected with the nearby object recognition unit 9 or the nearby object sensor 10. Specifically, the collision-warned part selection unit 3b performs a process of determining at least one of the collision-warned parts of the vehicle 41 set in accordance with the driving state to be a candidate part that could be the collision-warned part of the vehicle, and selecting as the collision-warned part a candidate part around which the object detected with the object detection unit such as the nearby object sensor 10 is located.
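The narrowing performed by the collision-warned part selection unit 3b can be sketched as a set intersection: the candidate parts implied by the estimated driving state are retained only where an object has actually been detected. The part names and the driving-state table below are illustrative assumptions, not part of the original specification.

```python
# Hypothetical sketch of the collision-warned part selection unit 3b.
# Candidate parts for each driving state are assumed values for this example
# (cf. perpendicular parking, left-in reverse, in Embodiment 1).
CANDIDATES_BY_STATE = {
    "perpendicular_parking_left_reverse": {"right_rear", "left_rear", "right_front"},
    "parallel_parking_left": {"left_rear", "rear", "right_front"},
}

def select_collision_warned_parts(driving_state, detected_parts):
    """Keep only the candidate parts around which an object was detected."""
    candidates = CANDIDATES_BY_STATE.get(driving_state, set())
    return candidates & set(detected_parts)

# An object is detected only near the right rear part: the other candidate
# parts are dropped, so no enlarged image is generated for them.
parts = select_collision_warned_parts(
    "perpendicular_parking_left_reverse", {"right_rear"})
```

When no object is detected at all, the intersection is empty, which corresponds to the unit outputting the absence of collision-warned parts.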
For example, in the scene of
Information on the collision-warned part of the vehicle 41 output from the collision-warned part selection unit 3b is reflected in the enlarged image generated by the enlarged image generation unit 6 and in the composite display image formed by combining the image of the peripheral part of the vehicle 41 and the enlarged image generated by the enlarged image generation unit 6 with the composite display image generation unit 7.
For example, when the vehicle is in the driving state of perpendicular parking (left-in reverse parking) and the nearby object recognition unit 9 has detected an object around the right rear part 33 of the vehicle, the composite display image of
It should be noted that when no object around the vehicle 41 has been detected with the nearby object recognition unit 9 or the nearby object sensor 10, the collision-warned part selection unit 3b outputs information indicating the absence of collision-warned parts of the vehicle 41, independently of the driving state.
When both the nearby object recognition unit 9 and the nearby object sensor 10 are included in the functional configuration of Embodiment 2, the collision-warned part selection unit 3b narrows the collision-warned parts down to a part of the vehicle around which an object has been detected with at least one of the nearby object recognition unit 9 and the nearby object sensor 10.
In the functional configuration of Embodiment 2, the collision-warned parts determined in accordance with the driving state are narrowed down to a part of the vehicle 41 around which an object is determined to exist, whereby parts that, in practice, have no possibility of hitting a nearby object (parts with little possibility of a collision) can be omitted from the enlarged images in the composite display image formed through a composite process by the composite display image generation unit 7.
Therefore, it is possible to reduce the burden of the driver watching the enlarged images of the parts of the vehicle that have no possibility of hitting a nearby object, thereby allowing the driver to attend to the visual check of the very part of the vehicle that has a possibility of hitting a nearby object.
It should be noted that the nearby object recognition unit 9 or the nearby object sensor 10, even when used alone, has a function of identifying a part of the vehicle 41 that has a possibility of hitting a nearby object. Thus, for comparison purposes, a functional configuration without the vehicle signal acquisition unit 1, the driving state estimation unit 2, and the collision-warned part selection unit 3b in Embodiment 2 is shown in the functional block of
In the comparative example shown in
The comparative example shown in
For example, if stereoscopic objects detected around the right part 36 and the left part 38 of the vehicle 41 are set as the specific parts of the vehicle 41, it follows that nearby vehicles would be detected each time the vehicle stops at a traffic light on a heavily congested road, and enlarged images of the right part 36 and the left part 38 of the vehicle 41 would be displayed frequently, which the driver may find cumbersome.
In the functional configuration of Embodiment 2, parts of the vehicle 41 that have a possibility of hitting a nearby object are narrowed down in accordance with the driving state. Thus, enlarged images are not displayed at an undesired timing, such as when stopping at a traffic light, unlike with the functional configuration of
[Embodiment 3]
In Embodiment 3, the vehicle signal estimation unit 11 estimates the vehicle speed, gear position, and steering angle through image recognition from a temporal change of the camera images acquired with the image acquisition unit 4, and outputs them as vehicle signals to the driving state estimation unit 2. The driving state estimation unit 2 of Embodiment 3 estimates the vehicle driving state with the same function as that described in Embodiment 1, based on the premise that the vehicle signal estimation unit 11 outputs vehicle signals approximately equivalent to those of the vehicle signal acquisition unit 1 of Embodiment 1.
First, in step S1, a top-view image is formed through a viewpoint conversion process using the images captured with at least one of the cameras 51, 52, 53, and 54 installed on the vehicle 41 such that the resulting image appears to be viewed from a virtual viewpoint above the vehicle 41 with the ground as a reference. The top-view image in step S1 is formed such that the upward direction of the vehicle periphery image 100 coincides with the forward direction of the vehicle 41 and the rightward direction of the vehicle periphery image 100 coincides with the rightward direction of the vehicle 41 as with the vehicle periphery image 100 shown in
Next, in step S2, an optical flow of images in the vertical direction and the horizontal direction is determined for each pixel on the top-view image, from a displacement of the top-view image formed in the current processing cycle from the top-view image formed in the previous processing cycle.
Next, in step S3, a forward displacement V of the vehicle 41 and a rightward displacement U of the vehicle 41 are determined from the central value of the distribution of the optical flow of each pixel determined in step S2. The forward displacement V of the vehicle 41 has a positive sign when the vehicle drives forward and a negative sign when the vehicle drives in reverse. Meanwhile, the rightward displacement U of the vehicle 41 has a positive sign when the vehicle moves to the right and a negative sign when the vehicle moves to the left.
In step S3, as the top-view image created in step S1 is based on the ground as a reference, the optical flow of step S2 can be regarded as the motion of the ground, except in the portions corresponding to stereoscopic objects.
In step S3, the central value of the distribution of the optical flow of each pixel determined in step S2 is determined based on the premise that the area of stereoscopic objects is narrower than the area of the ground surface in the top-view image of step S1, so that the motion of the ground relative to the vehicle 41 is determined. Specifically, the vehicle drives in the direction opposite to the motion of the ground; e.g., when the ground moves in the rightward direction, the vehicle 41 is determined to be moving in the leftward direction.
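Steps S2 and S3 can be sketched as follows: the per-pixel optical flow on the top-view image is reduced to a single ground-motion estimate by taking the central (median) value of the flow distribution, and the vehicle displacement is the opposite of that ground motion. The flow arrays and the coordinate convention (rightward/forward components in the orientation of the vehicle periphery image 100) are assumptions for this sketch.

```python
# Illustrative sketch of steps S2-S3: median of the optical-flow
# distribution gives the ground motion; the vehicle moves the opposite way.
import statistics

def estimate_displacement(flow_u, flow_v):
    """flow_u / flow_v: per-pixel rightward / forward flow of the ground
    in the top-view image. Returns the vehicle displacement (U, V)."""
    ground_u = statistics.median(flow_u)  # central value: rightward ground motion
    ground_v = statistics.median(flow_v)  # central value: forward ground motion
    # The ground appears to move opposite to the vehicle: ground moving
    # right means the vehicle 41 is moving left.
    return -ground_u, -ground_v
```

The median is what makes the premise above matter: it yields the ground motion only while ground pixels outnumber stereoscopic-object pixels in the top-view image.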
Then, in step S4, the absolute value |V| of the forward displacement V of the vehicle 41 is divided by the processing cycle to estimate the vehicle speed.
Next, in step S5, the gear position of the vehicle 41 is estimated. Herein, if the forward displacement V of the vehicle 41 is greater than 0, the gear is determined to be in the “drive” position (forward), and if the forward displacement V of the vehicle 41 is less than 0, the gear is determined to be in the “reverse” position (back). If the forward displacement V of the vehicle 41 is approximately 0, the gear position is determined to be indeterminate.
In step S6, an angle θ made by the forward displacement V of the vehicle 41 and the rightward displacement U of the vehicle 41 is determined from the following Formula (1) to determine the steering angle of the vehicle 41.
[Formula 1]
θ = tan⁻¹(U/|V|)   (1)
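Steps S4 through S6 can be sketched together as a small function, assuming U and V are the rightward and forward displacements per processing cycle of length dt, and that a small threshold eps stands in for “about 0” when the gear position is indeterminate:

```python
# Minimal sketch of steps S4-S6 under the assumptions stated above.
import math

def estimate_vehicle_signals(U, V, dt, eps=1e-3):
    speed = abs(V) / dt                 # step S4: |V| per processing cycle
    if V > eps:                         # step S5: estimate the gear position
        gear = "drive"
    elif V < -eps:
        gear = "reverse"
    else:
        gear = "unknown"                # displacement about 0: not fixed
    # step S6: steering angle from Formula (1), theta = arctan(U / |V|)
    theta = math.atan(U / abs(V)) if abs(V) > eps else 0.0
    return speed, gear, theta
```

Dividing by |V| rather than V in Formula (1) keeps the sign of the steering angle tied to the rightward displacement U regardless of whether the vehicle is driving forward or in reverse.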
It should be noted that the image recognition process with the vehicle signal estimation unit 11, and the method for estimating vehicle signals such as the vehicle speed, gear position, and steering angle shown in
According to Embodiment 3, even when vehicle signals cannot be acquired from a vehicle sensor, a steering angle sensor, a gear control device, or the like, the functional configurations of Embodiment 1 and Embodiment 2 can be realized by estimating vehicle signals through an image recognition process using as an input an image acquired with the image acquisition unit 4. Therefore, the in-vehicle image display device can be easily installed on the vehicle and can also be easily mounted later.
Number | Date | Country | Kind |
---|---|---|---|
2009-175745 | Jul 2009 | JP | national |
Number | Name | Date | Kind |
---|---|---|---|
6734896 | Nobori et al. | May 2004 | B2 |
7139412 | Kato et al. | Nov 2006 | B2 |
7161616 | Okamoto et al. | Jan 2007 | B1 |
7495550 | Huang et al. | Feb 2009 | B2 |
7760113 | Uhler | Jul 2010 | B2 |
8082101 | Stein et al. | Dec 2011 | B2 |
8170787 | Coats et al. | May 2012 | B2 |
20030080877 | Takagi et al. | May 2003 | A1 |
Number | Date | Country |
---|---|---|
1 065 642 | Jan 2001 | EP |
2 181 892 | May 2010 | EP |
2000-168475 | Jun 2000 | JP |
3300334 | Apr 2002 | JP |
2005-123968 | May 2005 | JP |
2007-201748 | Aug 2007 | JP |
2007-288611 | Nov 2007 | JP |
2008-17311 | Jan 2008 | JP |
2008-306402 | Dec 2008 | JP |
WO 2009141846 | Nov 2009 | WO |
Entry |
---|
European Search Report dated Oct. 14, 2010 (Three (3) pages). |
Japanese Office Action dated Dec. 6, 2011 (five (5) pages). |
Akira Iguchi et al., “Depth Estimation with Stereo Camera Using Monocular Motion Parallax”, FIT (Forum of Information Science Technology), 2002, pp. 87-88. |
Number | Date | Country | |
---|---|---|---|
20110025848 A1 | Feb 2011 | US |