The present invention relates to driving support technologies.
Japanese Unexamined Patent Application Publication No. 11-259798 discloses a vehicle display device that facilitates traveling past an obstacle or the like.
The vehicle display device disclosed in Japanese Unexamined Patent Application Publication No. 11-259798 calculates a predicted travel track of the present vehicle, measures the lateral distance between an obstacle or the like and the center of the present vehicle, and calculates and displays a lateral margin distance from the measured lateral distance and the width of the present vehicle. Furthermore, the vehicle display device calculates an arrival time or a distance to the obstacle or the like, and when the arrival time or the distance is shorter than a predetermined value and the lateral margin distance is shorter than a predetermined value, it uses a voice or a warning sound to encourage the user to pay attention.
In this vehicle display device, since the lateral margin distance is displayed only as a numerical value, the driver cannot intuitively grasp the positional relationship between the present vehicle and the obstacle or the like in the lateral direction (vehicle width direction). Hence, it is hard to say that the vehicle display device disclosed in Japanese Unexamined Patent Application Publication No. 11-259798 sufficiently facilitates traveling past an obstacle or the like.
An object of the present invention is to provide a driving support technology with which it is possible to make a driver intuitively grasp the positional relationship between the present vehicle and an obstacle candidate object in a vehicle width direction.
According to one aspect of the present invention, a driving support device includes: an estimation portion which estimates an anticipated course of the present vehicle; and an image generation portion which generates an image around the present vehicle including a graphic, where the graphic at least indicates, in a vehicle width direction of the anticipated course, part of an edge of a region occupied by an obstacle candidate object that is present in a direction in which the present vehicle travels.
According to another aspect of the present invention, a driving support method includes: an estimation step of estimating an anticipated course of the present vehicle; and an image generation step of generating an image around the present vehicle including a graphic, where the graphic at least indicates, in a vehicle width direction of the anticipated course, part of an edge of a region occupied by an obstacle candidate object that is present in a direction in which the present vehicle travels.
Illustrative embodiments of the present invention will be described in detail below with reference to drawings.
<1-1. Configuration of Driving Support Device According to First Embodiment>
A front camera 11, a back camera 12, a left side camera 13, a right side camera 14, the driving support device 201, a display device 31 and a speaker 32 shown in
The front camera 11 is provided at the front end of the present vehicle V1. The optical axis 11a of the front camera 11 is along the forward/backward direction of the present vehicle V1 in plan view from above. The front camera 11 shoots in the forward direction of the present vehicle V1. The back camera 12 is provided at the back end of the present vehicle V1. The optical axis 12a of the back camera 12 is along the forward/backward direction of the present vehicle V1 in plan view from above. The back camera 12 shoots in the backward direction of the present vehicle V1. Although the front camera 11 and the back camera 12 are preferably attached at the center of the present vehicle V1 in the left/right direction, their positions may be slightly displaced to the left or right of that center.
The left side camera 13 is provided in the left-side door mirror M1 of the present vehicle V1. The optical axis 13a of the left side camera 13 is along the left/right direction of the present vehicle V1 in plan view from above. The left side camera 13 shoots in the leftward direction of the present vehicle V1. The right side camera 14 is provided in the right-side door mirror M2 of the present vehicle V1. The optical axis 14a of the right side camera 14 is along the left/right direction of the present vehicle V1 in plan view from above. The right side camera 14 shoots in the rightward direction of the present vehicle V1. When the present vehicle V1 is a so-called door mirrorless vehicle, the left side camera 13 is attached around the rotary shaft (hinge portion) of a left side door without intervention of the door mirror, and the right side camera 14 is attached around the rotary shaft (hinge portion) of a right side door without intervention of the door mirror.
The angle of view θ of each of the vehicle-mounted cameras in a horizontal direction is equal to or more than 180 degrees. Thus, it is possible to shoot all around the present vehicle V1 in the horizontal direction with the four vehicle-mounted cameras (the front camera 11, the back camera 12, the left side camera 13 and the right side camera 14). Although in the present embodiment the number of vehicle-mounted cameras is set to four, the number of vehicle-mounted cameras necessary for producing the bird's-eye-view image described later is not limited to four as long as a plurality of cameras are used. As an example, when the angle of view θ of each vehicle-mounted camera in the horizontal direction is relatively wide, a bird's-eye-view image may be generated from three shot images acquired from only three cameras. As another example, when the angle of view θ of each vehicle-mounted camera in the horizontal direction is relatively narrow, a bird's-eye-view image may be generated from five shot images acquired from five cameras.
With reference back to
The driving support device 201 processes the shot images output from the four vehicle-mounted cameras (the front camera 11, the back camera 12, the left side camera 13 and the right side camera 14), and outputs the processed images to the display device 31. The driving support device 201 performs control so as to output a sound from the speaker 32.
The display device 31 is provided in such a position that the driver of the present vehicle can visually recognize the display screen of the display device 31, and displays the images output from the driving support device 201. Examples of the display device 31 include a display installed in a center console, a meter display installed in a position opposite the driver seat and a head-up display which projects an image on a windshield.
The speaker 32 outputs the sound according to the control of the driving support device 201.
The driving support device 201 can be formed with hardware such as an ASIC (application specific integrated circuit) or an FPGA (field-programmable gate array), or with a combination of hardware and software. When the driving support device 201 is formed with software, a block diagram of a portion realized by the software is a functional block diagram of that portion. A function realized with software is described as a program, and the function may be realized by executing the program on a program execution device. As the program execution device, for example, a computer which includes a CPU (Central Processing Unit), a RAM (Random Access Memory) and a ROM (Read Only Memory) can be mentioned.
The driving support device 201 includes a shot image acquisition portion 21, an estimation portion 22, an image generation portion 23 and a sound control portion 24.
The shot image acquisition portion 21 acquires, from the four vehicle-mounted cameras (the front camera 11, the back camera 12, the left side camera 13 and the right side camera 14), analogue or digital shot images at a predetermined period (for example, a period of 1/30 seconds) continuously in time. When the acquired shot images are analogue, the shot image acquisition portion 21 converts them (A/D conversion) into digital shot images. The shot image acquisition portion 21 then outputs the acquired (and, where necessary, converted) shot images to the image generation portion 23.
The estimation portion 22 acquires the steering angle information, the vehicle speed information and the like of the present vehicle from the vehicle control ECU (Electronic Control Unit) and the like of the present vehicle, estimates an anticipated course of the present vehicle based on the acquired information and outputs the estimation result to the image generation portion 23.
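For illustration only, since the publication does not disclose the estimation formula, an anticipated course of this kind is commonly derived from the steering angle and vehicle speed with a kinematic bicycle model. In the following Python sketch, the wheelbase, horizon and step values are assumptions:

```python
import math

def estimate_course(steering_angle_rad, speed_mps, wheelbase_m=2.7,
                    horizon_s=3.0, step_s=0.1):
    """Predict an anticipated course as (x, y) points ahead of the vehicle.

    Kinematic bicycle model: the yaw rate is v * tan(steering) / wheelbase,
    so the vehicle traces a circular arc. x is forward, y is to the left.
    """
    points = []
    x = y = heading = 0.0
    for _ in range(round(horizon_s / step_s)):
        heading += speed_mps / wheelbase_m * math.tan(steering_angle_rad) * step_s
        x += speed_mps * math.cos(heading) * step_s
        y += speed_mps * math.sin(heading) * step_s
        points.append((x, y))
    return points

# Example: 10 m/s with a slight left steer; the course bends left.
course = estimate_course(steering_angle_rad=0.05, speed_mps=10.0)
```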
The image generation portion 23 includes a bird's-eye-view image generation portion 23a, an obstacle candidate object detection portion 23b and a graphic superimposition portion 23c.
The bird's-eye-view image generation portion 23a projects the shot images acquired by the shot image acquisition portion 21 on a virtual projection plane, and converts them into projection images. Specifically, the bird's-eye-view image generation portion 23a projects the shot image of the front camera 11 on the first region R1 of the virtual projection plane 100 in a virtual three-dimensional space shown in
The virtual projection plane 100 shown in
The bird's-eye-view image generation portion 23a generates, based on a plurality of projection images, a virtual viewpoint image seen from a virtual viewpoint. Specifically, the bird's-eye-view image generation portion 23a virtually adheres the first to fourth projection images to the first to fourth regions R1 to R4 in the virtual projection plane 100.
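As a rough sketch of this step (not the disclosed implementation), the following selects which camera's shot image covers a given ground point around the vehicle; only the front/R1 and back/R4 assignments are stated in the embodiment, so the left/right assignments and the bowl-like surface shape are assumptions:

```python
import numpy as np

def assign_region(x, y):
    """Choose the projection-plane region covering ground point (x, y).

    x is forward, y is to the left of the vehicle; the azimuth decides
    the region. R1 (front) and R4 (back) follow the embodiment; the
    mapping of R2/R3 to the side cameras is assumed here.
    """
    az = np.degrees(np.arctan2(y, x))  # 0 degrees = straight ahead
    if -45.0 <= az < 45.0:
        return "R1 (front camera 11)"
    if 45.0 <= az < 135.0:
        return "R2 (left side camera 13)"
    if -135.0 <= az < -45.0:
        return "R3 (right side camera 14)"
    return "R4 (back camera 12)"

def surface_height(x, y, flat_radius_m=5.0, curvature=0.15):
    """Assumed bowl-like projection surface: flat near the vehicle and
    rising quadratically farther away, so distant objects stay visible."""
    r = np.hypot(x, y)
    return 0.0 if r <= flat_radius_m else curvature * (r - flat_radius_m) ** 2
```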
The bird's-eye-view image generation portion 23a virtually configures a polygon model showing the three-dimensional shape of the present vehicle V1. The model of the present vehicle V1 is arranged, in the virtual three-dimensional space where the virtual projection plane 100 is set, at the position determined to be where the present vehicle V1 is present (the center portion of the virtual projection plane 100), such that the first region R1 is on the front side and the fourth region R4 is on the back side.
Furthermore, the bird's-eye-view image generation portion 23a sets the virtual viewpoint in the virtual three-dimensional space where the virtual projection plane 100 is set. The virtual viewpoint is specified by a viewpoint position and a view direction. As long as at least part of the virtual projection plane 100 enters the view, the viewpoint position and the view direction can be set arbitrarily. In the present embodiment, the viewpoint position is located backward and upward of the present vehicle, and the view direction points forward and downward of the present vehicle; the virtual viewpoint image generated by the bird's-eye-view image generation portion 23a is thus a bird's-eye-view image, and the driver can more accurately confirm a distant obstacle candidate object. Unlike the present embodiment, for example, the viewpoint position may be set to the position of the eyes of a standard driver, and the view direction may be set forward of the present vehicle.
The bird's-eye-view image generation portion 23a virtually cuts out, according to the set virtual viewpoint, the image of a region (region seen from the virtual viewpoint) necessary for the virtual projection plane 100. The bird's-eye-view image generation portion 23a also performs, according to the set virtual viewpoint, rendering on the polygon model so as to generate a rendering picture of the present vehicle V1. Then, the bird's-eye-view image generation portion 23a generates a bird's-eye-view image in which the rendering picture of the present vehicle V1 is superimposed on the image that is cut out.
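A minimal sketch of setting such a virtual viewpoint as a view transform; the cutting-out and the polygon rendering themselves are beyond the sketch, and the eye and target coordinates are merely illustrative:

```python
import numpy as np

def look_at(eye, target, up=(0.0, 0.0, 1.0)):
    """Build a 4x4 view matrix: camera at `eye`, looking toward `target`,
    in a z-up world frame (x forward, y left of the vehicle)."""
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    fwd = target - eye
    fwd /= np.linalg.norm(fwd)
    right = np.cross(fwd, up)
    right /= np.linalg.norm(right)
    down = np.cross(fwd, right)  # camera "down" completes the basis
    view = np.eye(4)
    view[:3, :3] = np.stack([right, down, fwd])  # world -> camera rotation
    view[:3, 3] = -view[:3, :3] @ eye
    return view

# Viewpoint backward and upward of the vehicle, view direction forward
# and downward, as in the embodiment (the numbers are assumptions).
V = look_at(eye=(-6.0, 0.0, 5.0), target=(4.0, 0.0, 0.0))
```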
The obstacle candidate object detection portion 23b detects, based on the shot image of the front camera 11, an obstacle candidate object which can be present in the forward direction of the present vehicle. In the detection of the obstacle candidate object, a known image recognition technology is used. For example, in the detection of an obstacle candidate object which is a moving object, a background differencing method can be used, and in the detection of an obstacle candidate object which is a stationary object, a motion stereo method can be used. Although in the present embodiment the image recognition technology is used so as to detect the obstacle candidate object, in addition to or instead of the image recognition technology, for example, information which is output from a radar device mounted in the present vehicle or information which can be obtained by communication with a cloud center, vehicle-to-vehicle communication, road-to-vehicle communication or the like may be used so as to detect the obstacle candidate object.
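A minimal frame-differencing sketch for the moving-object case; the actual recognition (and the motion stereo path for stationary objects) is more involved, and the thresholds here are assumptions:

```python
import numpy as np

def detect_moving_object(prev_gray, curr_gray, diff_thresh=25, min_pixels=50):
    """Background differencing between consecutive grayscale frames.

    Pixels whose brightness changed by more than diff_thresh are marked
    as moving; returns one bounding box (top, left, bottom, right) over
    all changed pixels, or None if too few pixels changed (noise).
    """
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    mask = diff > diff_thresh
    if mask.sum() < min_pixels:
        return None
    rows, cols = np.nonzero(mask)
    return rows.min(), cols.min(), rows.max(), cols.max()
```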
The graphic superimposition portion 23c calculates a vehicle width direction of the anticipated course estimated by the estimation portion 22, and generates a graphic indicating part of an edge of a region occupied by the obstacle candidate object in the calculated vehicle width direction. Specifically, the graphic superimposition portion 23c calculates the vehicle width direction of the anticipated course estimated by the estimation portion 22, and generates the graphic indicating an overlapping region of the region occupied by the present vehicle and a region occupied by the obstacle candidate object in the calculated vehicle width direction. The graphic superimposition portion 23c generates an output image obtained by superimposing the graphic described above on the bird's-eye-view image generated by the bird's-eye-view image generation portion 23a. The output image generated by the graphic superimposition portion 23c is output to the display device 31. The vehicle width direction of the anticipated course estimated by the estimation portion 22 is a direction which is substantially perpendicular to the anticipated course, and for example, when the anticipated course is a course in which the present vehicle travels linearly forward, the vehicle width direction coincides with a vehicle width direction in the current position of the present vehicle.
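The core of this step reduces to intersecting two intervals along the width direction of the anticipated course. A sketch, assuming lateral offsets are signed distances in meters from the course center line (positive to the left):

```python
def lateral_overlap(vehicle_width_m, obstacle_min_m, obstacle_max_m):
    """Overlap of the vehicle's lateral interval with the interval
    occupied by the obstacle candidate object, both measured along the
    vehicle width direction of the anticipated course.

    Returns the (near, far) edges of the overlapping region, or None
    when the vehicle would pass clear of the obstacle.
    """
    half = vehicle_width_m / 2.0
    lo = max(-half, obstacle_min_m)
    hi = min(half, obstacle_max_m)
    return (lo, hi) if lo < hi else None

# Vehicle 1.8 m wide; obstacle occupying 0.5 m to 1.5 m left of the
# course center: overlap is (0.5, 0.9), and the boundary between the
# overlapping and non-overlapping regions (the warning line) is at 0.5 m.
print(lateral_overlap(1.8, 0.5, 1.5))
```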
The sound control portion 24 makes the speaker 32 produce, for example, a caution sound which provides a notification that the obstacle candidate object has been detected and a warning sound which provides a notification that an overlapping region has been produced. The warning sound is preferably made more striking than the caution sound.
<1-2. Operation of Driving Support Device According to First Embodiment>
When the flow operation shown in
Then, the bird's-eye-view image generation portion 23a uses the shot images acquired by the shot image acquisition portion 21 so as to generate the bird's-eye-view image (step S20).
Then, the image generation portion 23 uses the image recognition technology so as to detect the position of a roadway, calculates a travelable region of the present vehicle based on the detected position of the roadway and superimposes a left side guide line indicating the left end of the travelable region and a right side guide line indicating the right end of the travelable region on the bird's-eye-view image (step S30).
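The roadway detection itself is not detailed in the publication. Assuming the ends of the travelable region have already been mapped to pixel columns of the bird's-eye-view image, and assuming a straight road so that the guide lines are vertical, the superimposition can be sketched as:

```python
import numpy as np

def draw_guide_lines(bev_img, left_x, right_x, color=(0, 255, 0), thickness=3):
    """Superimpose left and right guide lines on a bird's-eye-view image.

    bev_img is an HxWx3 uint8 array; left_x and right_x are the pixel
    columns of the left and right ends of the travelable region.
    """
    out = bev_img.copy()
    for x in (left_x, right_x):
        x0 = max(0, x - thickness // 2)
        x1 = min(out.shape[1], x + thickness // 2 + 1)
        out[:, x0:x1] = color  # paint a vertical band as the guide line
    return out
```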
Then, the image generation portion 23 determines whether or not the obstacle candidate object is detected by the obstacle candidate object detection portion 23b (step S40).
When the obstacle candidate object is not detected, the image generation portion 23 outputs, as the output image, to the display device 31, for example, a bird's-eye-view image obtained by superimposing a rendering picture VR1 of the present vehicle, the left side guide line G1 and the right side guide line G2 as shown in FIG. 5 (step S100), and the flow operation is completed. Although the form of the left side guide line G1 and the right side guide line G2 included in the output image shown in
On the other hand, when the obstacle candidate object is detected, the sound control portion 24 makes the speaker 32 produce the caution sound according to the result of the detection by the obstacle candidate object detection portion 23b (step S50).
In step S60 subsequent to step S50, the estimation portion 22 estimates the anticipated course of the present vehicle.
In step S70 subsequent to step S60, the graphic superimposition portion 23c calculates the vehicle width direction of the anticipated course estimated by the estimation portion 22 so as to determine whether or not an overlapping region of the region occupied by the present vehicle and the region occupied by the obstacle candidate object is present in the calculated vehicle width direction.
When the overlapping region is not present, the image generation portion 23 outputs, as the output image, to the display device 31, for example, a bird's-eye-view image obtained by superimposing the rendering picture VR1 of the present vehicle, the left side guide line G1 and the right side guide line G2 as shown in
When the overlapping region is present, the graphic superimposition portion 23c generates a warning line serving as a graphic which indicates a boundary between the overlapping region and the non-overlapping region, and superimposes the warning line, instead of the right side guide line, on the bird's-eye-view image (step S90). Furthermore, the image generation portion 23 uses a different color for the overlapping region in the rendering picture VR1 of the present vehicle than for the regions other than the overlapping region, and outputs, as the output image, to the display device 31, for example, a bird's-eye-view image obtained by superimposing the rendering picture VR1 of the present vehicle, the left side guide line G1 and the warning line A1 as shown in
By confirming the output image including the graphic indicating the overlapping region, the driver can intuitively grasp the positional relationship between the present vehicle and the obstacle candidate object in the vehicle width direction. This makes it easy to drive while avoiding future contact between the present vehicle and the obstacle candidate object.
The rendering picture VR1 of the present vehicle is made to differ in form between the overlapping region and the non-overlapping region, and thus the driver can grasp the width of the overlapping region. In this way, it is easier to drive while avoiding future contact between the present vehicle and the obstacle candidate object. Although in the present embodiment, different colors are individually used for the overlapping region and the non-overlapping region in the rendering picture VR1 of the present vehicle so as to make different forms, for example, the rendering picture VR1 may be made to significantly differ in brightness between the overlapping region and the non-overlapping region so as to make different forms.
Although the form of the warning line A1 included in the output image shown in
The calculation portion 23d calculates a distance between the present vehicle and the obstacle candidate object based on the shot image of the front camera 11. Although in the present embodiment the image recognition technology is used so as to calculate the distance between the present vehicle and the obstacle candidate object, in addition to or instead of the image recognition technology, for example, information which is output from a radar device mounted in the present vehicle or information which can be obtained by communication with a cloud center, vehicle-to-vehicle communication, road-to-vehicle communication or the like may be used so as to calculate the distance between the present vehicle and the obstacle candidate object.
When the distance between the present vehicle and the obstacle candidate object is more than a predetermined value, even if an overlapping region is present, the graphic superimposition portion 23c does not superimpose a graphic indicating a boundary between the overlapping region and the non-overlapping region on the bird's-eye-view image. In this way, it is possible to prevent the unnecessary appearance of a graphic at a stage where it is hardly necessary for the driver to intuitively grasp the positional relationship between the present vehicle and the obstacle candidate object in the vehicle width direction.
When the distance between the present vehicle and the obstacle candidate object is equal to or less than the predetermined value, if an overlapping region is present, the graphic superimposition portion 23c superimposes the graphic indicating the boundary between the overlapping region and the non-overlapping region on the bird's-eye-view image.
The change portion 25 changes the predetermined value described above according to the speed of the present vehicle. For example, the change portion 25 increases the predetermined value as the speed of the present vehicle increases. In this way, the shorter the anticipated time until the present vehicle and the obstacle candidate object are aligned in the vehicle width direction, the earlier the driver is made to intuitively grasp their positional relationship in the vehicle width direction. It is thus possible to start driving to avoid future contact between the present vehicle and the obstacle candidate object with appropriate timing.
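Such a speed-dependent predetermined value can be sketched as a piecewise-linear lookup. The table values below are hypothetical, since the publication's actual relationship is given in a figure not reproduced here:

```python
import numpy as np

# Hypothetical speed-to-threshold table: higher speed, larger threshold,
# so the warning graphic appears earlier in time.
SPEEDS_KMH = np.array([0.0, 20.0, 40.0, 60.0])
THRESHOLDS_M = np.array([5.0, 10.0, 18.0, 30.0])

def predetermined_value_m(speed_kmh):
    """Distance threshold below which the warning graphic is shown."""
    return float(np.interp(speed_kmh, SPEEDS_KMH, THRESHOLDS_M))

def should_show_graphic(distance_m, speed_kmh):
    return distance_m <= predetermined_value_m(speed_kmh)
```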
The change portion 25 previously stores, for example, a relationship between the speed of the present vehicle and the predetermined value shown in
Unlike the present embodiment, the change portion 25 may change the predetermined value described above according to a relative speed at which the present vehicle approaches the obstacle candidate object. For example, the change portion 25 may increase the predetermined value as the relative speed at which the present vehicle approaches the obstacle candidate object is increased. In this way, the accuracy of a correlation between the anticipated time necessary until the present vehicle and the obstacle candidate object are aligned in the vehicle width direction and the predetermined value is enhanced. The relative speed at which the present vehicle approaches the obstacle candidate object may be calculated by use of the image recognition technology based on the shot image of the front camera 11, and in addition to or instead of the image recognition technology, for example, information which is output from a radar device mounted in the present vehicle or information which can be obtained by communication with a cloud center, vehicle-to-vehicle communication, road-to-vehicle communication or the like may be used so as to calculate the relative speed.
For simplification, unlike the present embodiment, without provision of the change portion 25, the predetermined value described above may be set to a single fixed value.
In step S51, the calculation portion 23d calculates the distance between the present vehicle and the obstacle candidate object based on the shot image of the front camera 11, and the image generation portion 23 determines whether or not the distance between the present vehicle and the obstacle candidate object is equal to or less than the predetermined value. When the distance between the present vehicle and the obstacle candidate object is equal to or less than the predetermined value, the process is transferred to step S60 whereas when the distance between the present vehicle and the obstacle candidate object is not equal to or less than the predetermined value, the process is transferred to step S100.
The margin graphic superimposition portion 23e generates an image including a margin graphic when the overlapping region of the present vehicle and the obstacle candidate object is not present in the vehicle width direction of the anticipated course estimated by the estimation portion 22. The margin graphic shows a region which indicates how far the present vehicle is from the obstacle candidate object in the vehicle width direction of the anticipated course estimated by the estimation portion 22.
The form change portion 23f changes the form of the margin graphic according to the distance between the region occupied by the present vehicle and the region occupied by the obstacle candidate object in the vehicle width direction of the anticipated course estimated by the estimation portion 22. In this way, it is possible to make the driver grasp how high the probability is that an overlapping region of the present vehicle and the obstacle candidate object is produced in the future, and thus the driver can drive with a margin.
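One possible form mapping is sketched below; the embodiment varies the color and the number of lines, but the clearance bands are illustrative guesses:

```python
def margin_graphic_form(clearance_m):
    """Map the lateral clearance between the vehicle and the obstacle
    candidate object to a margin-graphic form (color, number of lines)."""
    if clearance_m < 0.3:
        return {"color": "red", "num_lines": 3}     # overlap likely soon
    if clearance_m < 0.8:
        return {"color": "yellow", "num_lines": 2}  # caution
    return {"color": "green", "num_lines": 1}       # comfortable margin
```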
In the flowchart shown in
In step S71, the margin graphic superimposition portion 23e generates a margin graphic in a form based on an instruction from the form change portion 23f, and superimposes the margin graphic, instead of the right side guide line, on the bird's-eye-view image. Furthermore, in step S100, the image generation portion 23 outputs, as the output image, to the display device 31, a bird's-eye-view image obtained by superimposing, for example, the rendering picture VR1 of the present vehicle, the left side guide line G1 and the margin graphic B1 as shown in
The output images shown in the respective drawings differ in the form of the margin graphic B1.
Although in the present embodiment, the form change portion 23f changes the form of the margin graphic B1 by the color and the number of lines, for example, the form change portion 23f may change the form of the margin graphic B1 by the thickness and the like of lines.
The future position estimation portion 23g estimates the future position of the obstacle candidate object based on the shot image of the front camera 11. In the estimation of the future position of the obstacle candidate object, for example, the background differencing method can be used. Although in the present embodiment, the image recognition technology is used so as to estimate the future position of the obstacle candidate object, in addition to or instead of the image recognition technology, for example, information which is output from a radar device mounted in the present vehicle or information which can be obtained by communication with a cloud center, vehicle-to-vehicle communication, road-to-vehicle communication or the like may be used so as to estimate the future position of the obstacle candidate object.
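As a stand-in for the image-based estimation, the future position can be sketched as a constant-velocity extrapolation from two consecutive detections; a real tracker would additionally filter measurement noise:

```python
def predict_future_position(prev_pos, curr_pos, frame_dt_s, lookahead_s):
    """Constant-velocity extrapolation of an obstacle candidate object's
    ground position (x, y) in meters from two consecutive detections."""
    vx = (curr_pos[0] - prev_pos[0]) / frame_dt_s
    vy = (curr_pos[1] - prev_pos[1]) / frame_dt_s
    return (curr_pos[0] + vx * lookahead_s,
            curr_pos[1] + vy * lookahead_s)

# An obstacle that moved 0.2 m between frames 1/30 s apart (6 m/s)
# is predicted 6 m farther along one second ahead.
print(predict_future_position((0.0, 2.0), (0.2, 2.0), 1 / 30, 1.0))
```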
Then, when a region corresponding to the current position of the obstacle candidate object is not included in the bird's-eye-view image, the image generation portion 23 provides a picture indicating the obstacle candidate object in a region of the bird's-eye-view image corresponding to the future position of the obstacle candidate object. In this way, even when the region corresponding to the current position of the obstacle candidate object is not present in the bird's-eye-view image including the graphic, it is possible to make the driver intuitively grasp the future positional relationship between the present vehicle and the obstacle candidate object. The present embodiment is particularly useful when the vertical width of the display screen of the display device 31 is relatively narrow.
In the flowchart shown in
In step S52, the image generation portion 23 determines whether or not the distance between the present vehicle and the obstacle candidate object is equal to or less than a second predetermined value, which is less than the first predetermined value. As with the first predetermined value, the second predetermined value may be varied according to the speed of the present vehicle or the relative speed at which the present vehicle approaches the obstacle candidate object, or may be a single fixed value; this choice is independent of whether the first predetermined value is variable or fixed.
When the distance between the present vehicle and the obstacle candidate object is equal to or less than the second predetermined value, the process is transferred to step S53 whereas when the distance between the present vehicle and the obstacle candidate object is not equal to or less than the second predetermined value, the process is transferred to step S60.
In step S53, the bird's-eye-view image generation portion 23a changes the viewpoint position of the virtual viewpoint to a position immediately above the present vehicle, and changes the view direction of the virtual viewpoint to a direction immediately below the present vehicle (substantially in the direction of gravitational force). When the region corresponding to the current position of the obstacle candidate object is not present in the bird's-eye-view image, the image generation portion 23 provides the picture indicating the obstacle candidate object in the region of the bird's-eye-view image corresponding to the future position of the obstacle candidate object (for example, a polygon picture P1 in
When step S90 is reached through step S53, in step S90, the image generation portion 23 also superimposes, on the bird's-eye-view image, a graphic W1 indicating the anticipated course of the present vehicle at the present time and a graphic W2 indicating a recommended course for avoiding future contact with the obstacle candidate object. Hence, when step S90 is reached through step S53, the output image is, for example, an image as shown in
In addition to the embodiments described above, various variations can be added to the various technical features disclosed in the present specification without departing from the spirit of the technical creation thereof. A plurality of embodiments and variations described in the present specification may be combined and practiced where possible.
For example, when the image generation portion 23 determines that it is impossible to avoid future contact with the obstacle candidate object by the steering of the present vehicle, a graphic indicating a recommended stop position may be superimposed on the bird's-eye-view image, as shown in
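The publication does not give the avoidability criterion or the stop-position rule; one plausible kinematic sketch compares the lateral shift achievable before reaching the obstacle with the shift required, and places the stop line using the braking distance v^2/(2a). The acceleration limits are assumptions:

```python
def can_avoid_by_steering(distance_m, speed_mps, required_shift_m,
                          max_lat_accel=2.0):
    """Rough feasibility check: with time-to-obstacle t = d / v, the
    achievable lateral shift is bounded by 0.5 * a_lat * t^2."""
    t = distance_m / max(speed_mps, 0.1)  # guard against zero speed
    return 0.5 * max_lat_accel * t * t >= required_shift_m

def recommended_stop_distance_m(speed_mps, decel=3.0, margin_m=1.0):
    """Stop line distance: braking distance v^2 / (2a) plus a margin."""
    return speed_mps ** 2 / (2.0 * decel) + margin_m
```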
For example, when the overlapping region is produced or when the graphic indicating the recommended stop position is superimposed on the bird's-eye-view image, the driving support device may transmit the situation thereof to the vehicle control ECU of the present vehicle such that the vehicle control ECU of the present vehicle performs automatic steering or automatic braking.
Although in the embodiments described above, the output image output by the driving support device is the bird's-eye-view image, the output image is not limited to the bird's-eye-view image; for example, a graphic or the like may be superimposed on the shot image of the front camera 11. In that case, a picture indicating the present vehicle (a picture indicating a front end portion of the present vehicle) may be included in the shot image of the front camera 11, even though its position is slightly displaced from the actual position.
Although in the embodiments described above, the rendering picture VR1 of the present vehicle is superimposed on the bird's-eye-view image, the rendering picture VR1 of the present vehicle does not need to be superimposed on the bird's-eye-view image.
Although in the embodiments described above, the shot image is used for the generation of the output image, CG (Computer Graphics) showing scenery around the present vehicle may be used without use of the shot image so as to generate the output image. When the CG showing the scenery around the present vehicle is used so as to generate the output image, the driving support device preferably acquires the CG showing the scenery around the present vehicle from, for example, a navigation device mounted in the present vehicle.
Since the positional relationship between the present vehicle and the obstacle candidate object in the vehicle width direction is what matters, the scenery around the present vehicle is not necessarily needed. Hence, neither the scenery around the present vehicle shot by the vehicle-mounted cameras nor the CG showing that scenery needs to be included in the output image.
Although in the embodiments described above, the direction in which the present vehicle travels is the forward direction, the present invention can also be applied to a case where the direction in which the present vehicle travels is the backward direction.
In the output image output by the driving support device, the vehicle speed information of the present vehicle, the range information of a shift lever in the present vehicle and the like may be included.
In the third embodiment described above, a configuration may be adopted in which the image generation portion 23 does not include the graphic superimposition portion 23c. In this case, the driving support device preferably performs a flow operation in which steps S80 and S90 are removed from the flowchart shown in
This nonprovisional application claims priority under 35 U.S.C. § 119(a) on Patent Application No. 2017-167373 filed in Japan on Aug. 31, 2017, the entire contents of which are hereby incorporated by reference.