This application is based on and claims priority under 35 USC 119 from Japanese Patent Application No. 2022-097653 filed on Jun. 16, 2022, the disclosure of which is incorporated by reference herein.
The present disclosure relates to a display control device, a display device, a display system, a vehicle, a display control method, and a non-transitory storage medium.
In a hitherto known display device such as a head up display, a virtual image of an augmented reality (AR) image is overlay displayed on the scene in front of a vehicle by projecting the image onto a front windshield or the like of the vehicle.
For example, Japanese Patent No. 6278222 discloses technology including a caution target detection section that detects a caution target for which caution should be raised to a driver of a vehicle and that computes a distance from the caution target to the vehicle, and a display control section that displays a caution mark, for raising the caution of the driver with respect to the caution target detected by the caution target detection section, adjacent to the caution target at a position separated from the caution target on the front glass by a specific distance as viewed by the driver. The display control section changes the specific distance according to the distance from the caution target to the vehicle so as to change the display position of the caution mark, corrects the display position of the caution mark according to a time difference from when the caution target is detected until the caution mark is displayed, and also corrects the specific distance according to the velocity of the vehicle when the caution target was detected.
In the configuration described above, the display position of the caution mark is corrected according to the time difference from when the caution target is detected until the caution mark is displayed, and also the specific distance is corrected according to the velocity of the vehicle when the caution target was detected, thereby enabling the caution mark to be displayed at a suitable position irrespective of the time difference until the caution mark is displayed and irrespective of the velocity of the vehicle.
However, the technology disclosed in Japanese Patent No. 6278222 is technology in which the distance of the caution mark from the caution target is changed and the display position of the mark is corrected according to the time difference from when the caution target is detected until the caution mark is displayed, and according to the velocity of the vehicle when the caution target was detected, and so there is room for improvement since positional misalignment still occurs as before between the caution target and the caution mark.
The present disclosure appropriately corrects positional misalignment occurring between a position for overlay display of a mark urging caution with respect to a target and the actual position of the target.
A display control device according to a first aspect includes a positional information acquisition section that acquires positional information related to a position of a target based on detection information of a target detection section for detecting the target in surroundings of a vehicle, a velocity information acquisition section that acquires velocity information related to a velocity of the vehicle, an approach direction acquisition section that acquires an approach direction along which the target approaches the vehicle, an offset section that offsets the positional information acquired by the positional information acquisition section in the approach direction acquired by the approach direction acquisition section based on the velocity information acquired by the velocity information acquisition section, and a mark generation section that generates a mark urging caution with respect to the target for overlay display on a display area provided in a cabin of the vehicle based on post offset positional information that has been offset by the offset section.
In the display control device according to the first aspect, the mark is generated at a position offset from the position of the target acquired by the positional information acquisition section based on the velocity information, such that the mark urging caution with respect to the target is generated at a position that is the position of the target acquired by the positional information acquisition section offset according to the velocity. This enables correction of positional misalignment between where the mark is overlay displayed and the actual target occurring due to the processing time from detection of the target by the target detection section until the mark is generated by the mark generation section.
Moreover, by offsetting the position of the target acquired by the positional information acquisition section in the approach direction along which the target approaches, the position of the target acquired by the positional information acquisition section is offset in a direction in which positional misalignment occurs between the overlay displayed mark and the actual target. This accordingly enables positional misalignment occurring between the position for overlay display of a mark urging caution with respect to a target and the actual position of the target to be corrected appropriately. As a result thereof, the position of the overlay displayed mark can be brought closer to the actual position of the target.
A display control device according to a second aspect is the display control device of the first aspect wherein the velocity information is information related to an absolute velocity of the vehicle in a case in which the approach direction acquired by the approach direction acquisition section is a progression direction of the vehicle.
In the display control device according to the second aspect, the velocity information is taken as being the absolute velocity of the vehicle, and the velocity information is acquired by detecting the velocity of the vehicle. This thereby enables the velocity information to be acquired quicker than in a case in which the velocity information is taken as being the relative velocity of the vehicle with respect to the target. As a result thereof, the time taken by the offset section to offset the position of the target acquired by the positional information acquisition section can be shortened. Moreover, the load on the CPU installed in the display control device can be lightened. This means that positional misalignment occurring between the position for overlay display of a mark urging caution with respect to a target and the actual position of the target can be corrected more appropriately.
A display control device according to a third aspect is the display control device of the first aspect, wherein the velocity information is information related to a relative velocity of the vehicle with respect to the target in a case in which the approach direction acquired by the approach direction acquisition section is an oblique direction at an angle with respect to a progression direction of the vehicle.
In the display control device according to the third aspect, the velocity information is taken as being the relative velocity of the vehicle with respect to the target, and an offset distance for the offset section to offset the position of the target acquired by the positional information acquisition section is appropriately set based on this relative velocity. This means that positional misalignment occurring between the position for overlay display of a mark urging caution with respect to a target and the actual position of the target can be corrected more appropriately.
A display control device according to a fourth aspect is the display control device of any one of the first aspect to the third aspect, wherein the offset section adjusts an offset distance with which the offset section offsets the positional information based on a processing time from detection of the target by the target detection section until generation of the mark.
In the display control device according to the fourth aspect, the offset distance with which to offset the position of the target acquired by the positional information acquisition section is adjusted based on the processing time from detection of the target by the target detection section until generation of the mark, such that the offset distance is large when the processing time is long, and the offset distance is small when the processing time is short. This means that the offset distance with which the offset section offsets the position of the target acquired by the positional information acquisition section is set appropriately according to the processing time. As a result thereof, the position of the mark for overlay display can be corrected to an appropriate position according to the processing time.
A display control device according to a fifth aspect is the display control device of any one of the first aspect to the fourth aspect, wherein the positional information acquisition section acquires positional information related to a position of the target in the progression direction of the vehicle based on detection information of a radar, and acquires positional information related to the position of the target in an orthogonal direction orthogonal to the progression direction of the vehicle based on a camera image of a camera.
In the display control device according to the fifth aspect, the vehicle progression direction positional information of the target in the progression direction of the vehicle is acquired by the detection information of the radar, and the vehicle orthogonal direction positional information of the target in the orthogonal direction orthogonal to the progression direction of the vehicle is acquired based on the camera image of the camera. This means that more accurate positional information of the target is acquired. As a result thereof, positional misalignment occurring between the position for overlay display of a mark urging caution with respect to a target and the actual position of the target can be corrected more appropriately.
A display control device according to a sixth aspect is the display control device of any one of the first aspect to the fifth aspect, wherein the mark generation section changes a size of the mark based on the positional information acquired by the positional information acquisition section.
In the display control device according to the sixth aspect, the size of the mark is changed based on the positional information acquired by the positional information acquisition section, such that the size of the mark is large when the distance between the vehicle and the target is near, and the size of the mark is small when the distance between the vehicle and the target is far. This thereby enables the mark to be overlay displayed at an appropriate size according to the distance to the target. As a result thereof an occupant is able to ascertain the distance to the target from the size of the mark.
A display device according to a seventh aspect includes the display control device of any one of the first aspect to the sixth aspect, an output section that outputs an image of the mark for overlay display on the display area provided in the vehicle cabin, and a display area on which the image output from the output section is overlay displayed.
In the display device according to the seventh aspect, positional misalignment occurring between the position for overlay display of a mark urging caution with respect to a target and the actual position of the target can be corrected appropriately.
A display system according to an eighth aspect includes the display device of the seventh aspect, and a target detection section for detecting the target in surroundings of the vehicle.
In the display system according to the eighth aspect, positional misalignment occurring between the position for overlay display of a mark urging caution with respect to a target and the actual position of the target can be corrected appropriately.
A vehicle according to a ninth aspect includes the display system of the eighth aspect, and a windshield configuring the display area.
In the vehicle according to the ninth aspect, the mark is displayed on the display area of the windshield, and so an occupant is aware of the mark in a state of gazing in the vehicle forward direction. This accordingly enables the occupant to be aware of the mark during driving without shifting from a vehicle forward direction gaze.
A display control method according to a tenth aspect includes, by a processor, acquiring positional information related to a position of a target based on detection information of a target detection section for detecting the target in surroundings of a vehicle, acquiring velocity information related to a velocity of the vehicle, acquiring an approach direction along which the target approaches the vehicle, offsetting the positional information in the approach direction based on the velocity information, and generating a mark urging caution with respect to the target for overlay display on a display area provided in a cabin of the vehicle based on the positional information that has been offset.
The display control method executed by a processor according to the tenth aspect enables positional misalignment occurring between a position for overlay display of a mark urging caution with respect to a target and the actual position of the target to be corrected appropriately.
A non-transitory storage medium according to an eleventh aspect stores a program executable by a processor to perform display control processing. The processing includes acquiring positional information related to a position of a target based on detection information of a target detection section for detecting the target in surroundings of a vehicle, acquiring velocity information related to a velocity of the vehicle, acquiring an approach direction along which the target approaches the vehicle, offsetting the positional information in the approach direction based on the velocity information, and generating a mark urging caution with respect to the target for overlay display on a display area provided in a cabin of the vehicle based on the positional information that has been offset.
The non-transitory storage medium according to the eleventh aspect enables positional misalignment occurring between a position for overlay display of a mark urging caution with respect to a target and the actual position of the target to be corrected appropriately.
The present disclosure enables positional misalignment occurring between a position for overlay display of a mark urging caution with respect to a target and the actual position of the target to be corrected appropriately.
Exemplary embodiments of the present invention will be described in detail based on the following figures, wherein:
Description follows regarding a display system according to a first exemplary embodiment, with reference to the drawings. Note that an arrow FR in
Configuration of Vehicle Installed with Display System
As illustrated in
The radar 14 is attached to the bumper at a front side of the vehicle 10. The radar 14 transmits transmission waves toward a target 90 in the surroundings of the vehicle 10 and receives the reflection waves reflected by the target 90, so as to detect both a distance between the vehicle 10 and the target 90 and a direction of the target 90 with respect to the vehicle 10. Namely, the radar 14 detects positional information of the target 90 with respect to the vehicle 10. The radar 14 has a smaller angle of view 14A than the angle of view 12A of the camera 12.
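For illustration only, and not as part of the disclosed embodiment, a minimal sketch of such a range measurement follows, assuming a time-of-flight style computation; the function name and parameters below are hypothetical.

```python
def radar_range_m(round_trip_time_s, wave_speed_mps=3.0e8):
    """Hypothetical sketch: estimate the distance between the vehicle and the
    target from the delay between transmitting a wave and receiving its
    reflection. The wave travels out and back, so the one-way distance is half
    the product of propagation speed and round-trip time."""
    return wave_speed_mps * round_trip_time_s / 2.0
```

As a purely illustrative figure, a reflection received 0.4 microseconds after transmission would correspond to roughly 60 m under this assumption.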
A vehicle ahead 92 (see
As illustrated in
Instrument Panel
The instrument panel 18 is provided extending along a vehicle width direction. A steering wheel 20 is provided at a vehicle right side of the instrument panel 18. In the first exemplary embodiment, as an example, the vehicle 10 is configured as a right hand drive vehicle in which the steering wheel 20 is provided on the vehicle right side.
The instrument panel 18 is provided with a first display section 28 serving as an output section and including a first display area G1 serving as a display area, and a second display section 30 serving as an output section and including a second display area G2 serving as a display area.
The first display section 28 is disposed on the vehicle right side of the instrument panel 18 and at a vehicle forward direction side of the steering wheel 20. The first display section 28 is, for example, configured as a meter display on which a speedometer to show the traveling velocity, a direction indicator, a warning or the like are displayed.
The second display section 30 is disposed at a vehicle width direction center of the instrument panel 18. The second display section 30 is, for example, configured as a center display for displaying an image output from a navigation system.
Windshield
The windshield 22 is supported by front pillars 24. The front pillars 24 are respectively disposed at the vehicle right side and vehicle left side at a vehicle front side of the vehicle cabin, and extend in directions substantially along a vehicle height direction.
A third display section 32 including a third display area G3 serving as a display area is provided on the windshield 22. Based on information output from a display control device 35 inbuilt to the instrument panel 18, an image projected from a projection section 51 serving as an output section is projected onto the third display section 32 of the windshield 22. An image is thereby overlay displayed as a virtual image ahead of an occupant (driver) H (see
A display device 34 equipped with the display control device 35, the projection section 51, and the third display section 32 configures a head up display device. The third display section 32 onto which the image from the projection section 51 is projected configures a projection surface of the head up display device.
The projection section 51 is, as illustrated in
As illustrated in
Display System Hardware Configuration
As illustrated in
The camera 12 images the area ahead of the vehicle. The camera images captured by the camera 12 are input to the display control device 35. The radar 14 detects positional information of the target 90 ahead of the vehicle. The detection information detected by the radar 14 is input to the display control device 35. The vehicle velocity sensor 15 measures the velocity V1 of the vehicle 10 (see
The display control device 35 is configured as an electronic control unit (ECU) that performs various controls. The display control device 35 is configured including a central processing unit (CPU) 36, read only memory (ROM) 38, random access memory (RAM) 40, storage 42, a communication interface (communication I/F) 44, and an input/output interface (input/output I/F) 46. Each configuration element is connected to a bus 48 so as to enable communication with each other.
The CPU 36 is a central processing unit that executes various programs and controls each section. Namely, the CPU 36 serves as a processor, reads a program from the ROM 38 serving as memory or from the storage 42 serving as memory, and executes the program using the RAM 40 as workspace. Moreover, the CPU 36 controls each of the above configuration elements and performs various computational processing according to the program recorded in the ROM 38 or the storage 42.
The ROM 38 is stored with various programs and various data. The RAM 40 serves as workspace to temporarily store a program and/or data. The storage 42 is configured by a hard disk drive (HDD) or a solid state drive (SSD), and is a non-transitory recording medium stored with various programs, including an operating system, and various data. In the first exemplary embodiment, a display program and the like for performing display processing, described later, are stored in the ROM 38 or the storage 42.
The first display section 28, the second display section 30, the projection section 51, the camera 12, the radar 14, and the vehicle velocity sensor 15 are connected to the input/output interface 46.
Display Control Device Functional Configuration
Description follows regarding functionality of the display control device 35 of the first exemplary embodiment.
As illustrated in
Positional Information Acquisition Section
The positional information acquisition section 54 acquires positional information related to a position of the target 90 in surroundings of the vehicle 10. More specifically, the positional information acquisition section 54 detects the positional information of the target in the progression direction of the vehicle 10 based on the detection information of the radar 14. Moreover, the positional information acquisition section 54 detects positional information of the target 90 in an orthogonal direction orthogonal to the progression direction of the vehicle 10 based on a camera image of the camera 12. Note that vehicle progression direction positional information of the target 90 in the progression direction of the vehicle 10 and vehicle orthogonal direction positional information of the target 90 in the orthogonal direction of the vehicle 10 may be detected based on at least one of the detection information of the radar 14 or the camera image of the camera 12.
An xy coordinate system is presumed here in which, in a bird's eye view of the vehicle 10, the vehicle progression direction is the y axis, and a direction orthogonal to the vehicle progression direction is the x axis. As illustrated in
As illustrated in
Namely, the positional information acquisition section 54 acquires the position P1 (x1, y1) of the pedestrian 94 based on the detection information of the radar 14 and the camera image of the camera 12. Note that in
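As a minimal sketch of the fusion described above, and not a definitive implementation of the positional information acquisition section 54, the following function combines a radar-derived progression-direction position with a camera-derived orthogonal-direction position; the function name and dictionary keys are hypothetical.

```python
def acquire_position(radar_detection, camera_detection):
    """Hypothetical sketch of sensor fusion for the target position P1 (x1, y1):
    the radar supplies the position along the vehicle progression direction
    (y axis) and the camera image supplies the position in the orthogonal
    direction (x axis), both in metres in the vehicle-centred x-y frame."""
    y1 = radar_detection["longitudinal_distance_m"]   # progression-direction position from the radar
    x1 = camera_detection["lateral_offset_m"]         # orthogonal-direction position from the camera image
    return (x1, y1)
```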
Velocity Information Acquisition Section
The velocity information acquisition section 56 acquires velocity information related to the velocity of the vehicle 10. The velocity information includes information about the absolute velocity of the vehicle 10, and information about the relative velocity of the vehicle with respect to the target 90.
More specifically, as illustrated in
As illustrated in
Note that in a case in which the pedestrian 94 is moving toward the vehicle right side obliquely to the left and in front of the vehicle 10, the velocity information acquisition section 56 may compute a shift in the positional information of the pedestrian 94 based on plural detection information detected by the radar 14, and based on this shift acquire a relative velocity of the pedestrian 94 with respect to the vehicle 10.
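As a hedged sketch of how a relative velocity could be derived from the shift between successive detections, and not the disclosed implementation of the velocity information acquisition section 56, the following function divides the positional shift by the sampling interval; all names are hypothetical.

```python
def acquire_relative_velocity(prev_position, curr_position, dt_s):
    """Hypothetical sketch: estimate the relative velocity of the target with
    respect to the vehicle from the shift between two successive acquired
    positions (from radar detections or camera images) taken dt_s seconds apart."""
    dx = curr_position[0] - prev_position[0]
    dy = curr_position[1] - prev_position[1]
    return (dx / dt_s, dy / dt_s)   # relative velocity components in m/s in the vehicle x-y frame
```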
Approach Direction Acquisition Section
The approach direction acquisition section 58 acquires an approach direction along which the target 90 approaches the vehicle 10. More specifically, based on plural camera images imaged by the camera 12, the approach direction acquisition section 58 computes a shift in positional information of the target 90, and based on this shift acquires an approach direction of the target 90 with respect to the vehicle 10. Note that the approach direction acquisition section 58 may compute a shift in positional information of the target 90 based on plural detection information detected by the radar 14, and based on this shift acquire an approach direction of the target 90 with respect to the vehicle 10.
As illustrated in
Note that cases in which the approach direction is taken as being the progression direction of the vehicle 10 include cases in which the vehicle ahead 92 is at a different position to the vehicle 10 in a vehicle width direction, and also cases in which the vehicle ahead 92 is traveling forward or backward. Namely, cases in which the approach direction is taken as being the progression direction of the vehicle 10 include cases in which the vehicle ahead 92 is traveling forward or backward and the vehicle ahead 92 is at the same position as the vehicle 10 in the vehicle width direction, and cases in which the vehicle ahead 92 is traveling forward or backward at a different position to the vehicle 10 in the vehicle width direction.
As illustrated in
Note that in a case in which the pedestrian 94 is moving toward the vehicle right side obliquely to the left and in front of the vehicle 10, the approach direction acquisition section 58 may acquire the approach direction of the pedestrian 94 with respect to the vehicle 10 as an oblique direction at an angle with respect to the progression direction of the vehicle 10 based on plural detection information detected by the radar 14.
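The classification into an approach along the progression direction or an oblique approach could, as one hedged sketch assuming two successive acquired positions are available, be implemented as below; the threshold value and function name are hypothetical.

```python
import math

def acquire_approach_direction(prev_position, curr_position, lateral_threshold_m=0.2):
    """Hypothetical sketch: derive the approach direction of the target from the
    shift between two successive acquired positions in the vehicle x-y frame.
    A negligible lateral (x) shift is treated as an approach along the vehicle
    progression direction; otherwise the approach is treated as oblique."""
    dx = curr_position[0] - prev_position[0]
    dy = curr_position[1] - prev_position[1]
    angle = math.atan2(dy, dx)   # direction of the relative shift of the target
    kind = "progression" if abs(dx) < lateral_threshold_m else "oblique"
    return kind, angle
```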
Processing Time Estimation Section
The processing time estimation section 60 estimates a processing time from when the camera 12 images the target 90, or from when the radar 14 detects the target 90, until a mark M1 (see
Moreover, in a case in which the approach direction acquired by the approach direction acquisition section 58 is the progression direction of the vehicle 10, the processing time estimation section 60 may estimate the processing time to be shorter than in cases in which the approach direction is an oblique direction at an angle with respect to the progression direction of the vehicle 10. Moreover, the processing time estimation section 60 may estimate the processing time to be shorter the slower the velocity indicated by the velocity information acquired by the velocity information acquisition section 56.
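The dependence of the estimated processing time on the approach direction and on the velocity could be sketched as follows; this is only an assumed form, and the base latency, penalty, and factor values are hypothetical.

```python
def estimate_processing_time(base_latency_s, approach_is_progression, vehicle_speed_mps,
                             oblique_penalty_s=0.02, speed_factor_s_per_mps=0.001):
    """Hypothetical sketch: start from a fixed base latency (imaging/detection,
    communication, plotting), lengthen it for oblique approaches, and lengthen
    it further at higher vehicle speeds, so that slower speeds and
    progression-direction approaches yield shorter estimates."""
    estimate_s = base_latency_s
    if not approach_is_progression:
        estimate_s += oblique_penalty_s
    estimate_s += speed_factor_s_per_mps * vehicle_speed_mps
    return estimate_s
```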
Offset Section
Based on the velocity information acquired by the velocity information acquisition section 56, the offset section 62 offsets a position P1 of the target 90 acquired by the positional information acquisition section 54 by offsetting in the approach direction acquired by the approach direction acquisition section 58.
More specifically, the offset section 62 estimates an offset distance for offset based on the following computation equation (1).
offset distance (m) = velocity (km/h) × target coordinate coefficient   (1)
The target coordinate coefficient is a coefficient that takes into consideration the processing time taken from when the camera 12 images the target 90, or from when the radar 14 detects the target 90, until the mark M1 (see
Note that the offset section 62 may adjust the offset distance to offset the position P1 acquired by the positional information acquisition section 54 according to the processing time estimated by the processing time estimation section 60. More specifically, in a case in which the processing time estimated by the processing time estimation section 60 is short, the offset section 62 may shorten the offset distance for offsetting compared to cases in which the processing time is long. Namely, the target coordinate coefficient may be caused to vary according to the processing time estimated by the processing time estimation section 60.
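A minimal sketch of applying computation equation (1) is given below: the offset distance is the velocity multiplied by the target coordinate coefficient, and the position P1 is shifted by that distance along the acquired approach direction to give the post-offset position P2. Treating the approach direction as an angle, and all names used here, are assumptions for illustration only.

```python
import math

def offset_position(p1, velocity_kmh, approach_angle_rad, target_coordinate_coefficient):
    """Hypothetical sketch of equation (1): offset distance (m) = velocity (km/h)
    x target coordinate coefficient, applied to P1 along the approach direction."""
    offset_m = velocity_kmh * target_coordinate_coefficient
    x2 = p1[0] + offset_m * math.cos(approach_angle_rad)   # orthogonal-direction component of the offset
    y2 = p1[1] + offset_m * math.sin(approach_angle_rad)   # progression-direction component of the offset
    return (x2, y2)   # post-offset position P2
```

As a purely illustrative example, a velocity of 60 km/h with a target coordinate coefficient of about 0.028 (corresponding to roughly 0.1 s of processing time) would give an offset distance of roughly 1.7 m.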
More precisely, in a case in which the vehicle ahead 92 is traveling forward ahead of the vehicle 10 as illustrated in
As illustrated in
Mark Generation Section
The mark generation section 64 generates the mark M1 (see
More specifically, in a case in which the vehicle ahead 92 is traveling in a forward direction ahead of the vehicle 10 as illustrated in
As illustrated in
More specifically, in a case in which the vehicle ahead 92 is traveling in a forward direction ahead of the vehicle 10 as illustrated in
In a case in which the pedestrian 94 is moving toward the vehicle right side obliquely to the left and in front of the vehicle 10 as illustrated in
Then, in a case in which the pedestrian 94 is moving toward the vehicle right side obliquely to the left and in front of the vehicle 10 as illustrated in
The mark generation section 64 may generate a mark to display a velocity V1 of the vehicle 10 and, as illustrated in
Note that the mark generation section 64 may change the size of the mark M1 based on positional information acquired by the positional information acquisition section 54. More specifically, as illustrated in
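One hedged way to realize the size change described above is a simple interpolation between a maximum size for near targets and a minimum size for far targets; the distances and pixel sizes below are hypothetical values chosen for illustration.

```python
def mark_size_from_distance(distance_m, near_m=10.0, far_m=60.0,
                            max_size_px=120, min_size_px=40):
    """Hypothetical sketch: make the mark M1 larger when the target is near and
    smaller when it is far, clamped between a minimum and a maximum size."""
    if distance_m <= near_m:
        return max_size_px
    if distance_m >= far_m:
        return min_size_px
    ratio = (far_m - distance_m) / (far_m - near_m)   # 1.0 when near, 0.0 when far
    return int(min_size_px + ratio * (max_size_px - min_size_px))
```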
Display Control Device Display Processing Flow
A flow of display processing by the display control device 35 will now be described with reference to the flowchart illustrated in
As illustrated in
Next, the approach direction acquisition section 58 acquires the approach direction along which the target 90 approaches the vehicle 10 (step S102).
Next, the display control device 35 determines whether or not the approach direction acquired by the approach direction acquisition section 58 is the progression direction of the vehicle 10 (step S103). When the approach direction acquired by the approach direction acquisition section 58 is the progression direction of the vehicle 10 (step S103: YES), the velocity information acquisition section 56 acquires the velocity V1 that is the absolute velocity of the vehicle 10 based on the measurement information of the vehicle velocity sensor 15 (step S104).
Next, based on the velocity V1 that is the absolute velocity of the vehicle 10 acquired by the velocity information acquisition section 56, the offset section 62 offsets the position P1 acquired by the positional information acquisition section 54 in the vehicle progression direction that is the approach direction acquired by the approach direction acquisition section 58 (step S105). Note that the offset section 62 is able to adjust the offset distance based on the processing time estimated by the processing time estimation section 60.
Next, the mark generation section 64 generates the mark M1 at the position P2 offset by the offset section 62 (step S106).
Next, the information of the mark M1 generated by the mark generation section 64 is output to the projection section 51, the mark M1 is overlay displayed as a virtual image on the third display area G3 of the third display section 32 (step S107), and the display processing is ended.
However, in a case in which at step S103 the approach direction acquired by the approach direction acquisition section 58 is an oblique direction at an angle with respect to the progression direction of the vehicle 10 (step S103: NO), the velocity information acquisition section 56 acquires the relative velocity of the vehicle 10 with respect to the target 90 (step S108).
Next, based on the relative velocity of the vehicle 10 as acquired by the velocity information acquisition section 56, the offset section 62 offsets the position P1 acquired by the positional information acquisition section 54 in the oblique direction that is the approach direction acquired by the approach direction acquisition section 58 (step S109). Note that the offset section 62 is able to adjust the offset distance based on the processing time estimated by the processing time estimation section 60.
Next, the mark generation section 64 generates the mark M1 at the position P2 offset by the offset section 62 (step S110).
Next, the information of the mark M1 generated by the mark generation section 64 is output to the projection section 51, the mark M1 is overlay displayed as a virtual image on the third display area G3 of the third display section 32 (step S111), and the display processing is ended.
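Pulling the sketches above together, one pass of the display processing of steps S101 to S111 could look roughly as follows; `sensors` and `hud` are assumed interfaces, the progression-direction coordinate is used as a crude distance proxy for sizing, and none of this is the disclosed implementation.

```python
def display_processing_step(sensors, hud, target_coordinate_coefficient):
    """Hypothetical end-to-end sketch of steps S101-S111, reusing the sketch
    functions defined earlier in this description."""
    p1 = acquire_position(sensors.radar(), sensors.camera())                    # step S101
    kind, angle = acquire_approach_direction(sensors.previous_position(), p1)   # step S102
    if kind == "progression":                                                   # step S103: YES
        velocity_kmh = sensors.vehicle_speed_kmh()                              # step S104: absolute velocity V1
    else:                                                                       # step S103: NO
        velocity_kmh = sensors.relative_speed_kmh(p1)                           # step S108: relative velocity
    p2 = offset_position(p1, velocity_kmh, angle, target_coordinate_coefficient)  # steps S105 / S109
    mark = {"position": p2, "size_px": mark_size_from_distance(p2[1])}          # steps S106 / S110
    hud.overlay(mark)                                                           # steps S107 / S111
```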
Next, description follows regarding the operation and advantageous effects of the first exemplary embodiment.
The display control device 35 of the first exemplary embodiment includes the positional information acquisition section 54 that acquires positional information related to the position of a target 90 based on at least one of the camera image of the target 90 ahead of the vehicle 10 as imaged by the camera 12 or the detection information of the radar 14 detecting the target 90 ahead of the vehicle 10, includes the velocity information acquisition section 56 that acquires the velocity information related to the velocity of the vehicle 10, includes the approach direction acquisition section 58 that acquires the approach direction along which the target 90 approaches the vehicle 10, includes the offset section 62 that based on the velocity information acquired by the velocity information acquisition section 56 offsets the position P1 acquired by the positional information acquisition section 54 in the approach direction acquired by the approach direction acquisition section 58, and includes the mark generation section 64 that generates the mark M1 to urge caution with respect to the target 90 to overlay display on the third display area G3 provided inside the vehicle cabin at the post offset position P2 after offset by the offset section 62 (see
However, processing time is needed for communication, plotting, and the like from when the camera 12 images the target 90 or from when the radar 14 detects the target 90, until the mark generation section 64 generates the mark M1 (see
In the first exemplary embodiment, the mark M1 is generated at the position P2 that is the position P1 of the target 90 as acquired by the positional information acquisition section 54 offset based on the velocity information, and the mark M1 urging caution with respect to the target 90 is overlay displayed at the position P2 that is the position of the target 90 acquired by the positional information acquisition section 54 offset according to velocity. This enables correction of the positional misalignment between the overlay displayed mark M1 and the actual target 90, occurring due to the processing time from when the camera 12 images the target 90 or from when the radar 14 detects the target 90 until the mark M1 (see
Moreover, by offsetting the position of the target 90 as acquired by the positional information acquisition section 54 in the approach direction in which the target 90 approaches, the position P1 of the target 90 as acquired by the positional information acquisition section 54 is offset in the direction in which the positional misalignment occurs between the overlay displayed mark M1 and the actual target 90.
This enables the positional misalignment occurring between the position of the overlay displayed mark M1 urging caution with respect to the target 90 and the actual position of the target 90 to be corrected appropriately. As a result thereof, the position of the overlay displayed mark M1 can be brought closer to the actual position of the target 90.
In the display control device 35 of the first exemplary embodiment, in a case in which the approach direction acquired by the approach direction acquisition section 58 is the progression direction of the vehicle 10, the velocity information is velocity V1 that is the absolute velocity of the vehicle 10 (see
Taking the velocity information as being the absolute velocity of the vehicle 10 means that the velocity information is acquired by detecting the velocity of the vehicle 10. This enables the velocity information to be acquired quicker than cases in which the velocity information is taken as the relative velocity of the vehicle 10 with respect to the target 90. As a result thereof, the time taken for the offset section 62 to offset the position P1 of the target 90 as acquired by the positional information acquisition section 54 can be shortened. Moreover, the load on the CPU installed in the display control device 35 can be lightened.
This means that positional misalignment occurring between the position of the overlay displayed mark M1 urging caution with respect to the target 90 and the actual position of the target 90 can be corrected more appropriately.
In the display control device 35 of the first exemplary embodiment, in a case in which the approach direction acquired by the approach direction acquisition section 58 is an oblique direction at an angle with respect to the progression direction of the vehicle 10, the velocity information is taken as being information related to the relative velocity of the vehicle 10 with respect to the target 90 (see
By taking the velocity information as being the relative velocity of the vehicle 10 with respect to the target 90, the offset distance for the offset section 62 to offset the position of the target 90 as acquired by the positional information acquisition section 54 is set appropriately based on this relative velocity. This means that positional misalignment occurring between the position of the overlay displayed mark M1 urging caution with respect to the target 90 and the actual position of the target 90 can be corrected more appropriately.
In the display control device 35 of the first exemplary embodiment, the offset section 62 adjusts the offset distance for offsetting the position P1 of the target 90 as acquired by the positional information acquisition section 54 based on the processing time taken from when the camera 12 imaged the target 90 or from when the radar 14 detected the target 90 until the mark generation section 64 generates the mark M1 (see
By adjusting the offset distance for offsetting the position P1 of the target 90 as acquired by the positional information acquisition section 54 based on the processing time taken from when the camera 12 imaged the target 90 or from when the radar 14 detected the target 90 until the mark generation section 64 generates the mark M1 (see
In the display control device 35 of the first exemplary embodiment, the positional information acquisition section 54 acquires the vehicle progression direction positional information related to the position of the target 90 in the progression direction of the vehicle based on the detection information of the radar 14, and acquires the vehicle orthogonal direction positional information related to the position of the target 90 in the orthogonal direction orthogonal to the progression direction of the vehicle 10 based on the camera images of the camera 12 (see
The vehicle progression direction positional information of the target 90 in the progression direction of the vehicle 10 is acquired from the detection information of the radar 14, and the vehicle orthogonal direction positional information of the target 90 in the orthogonal direction orthogonal to the progression direction of the vehicle 10 is acquired based on the camera images of the camera 12. More accurate positional information of the target 90 is accordingly acquired. As a result thereof, positional misalignment occurring between the position of the overlay displayed mark M1 urging caution with respect to the target 90 and the actual position of the target 90 can be corrected more appropriately.
In the display control device 35 of the first exemplary embodiment, the mark generation section 64 changes the size of the mark M1 based on the position P1 acquired by the positional information acquisition section 54 (see
By changing the size of the mark M1 based on the position P1 acquired by the positional information acquisition section 54, the size of the mark M1 can be made large when the distance between the vehicle 10 and the target 90 is near, and the size of the mark M1 can be made small when the distance between the vehicle 10 and the target 90 is far. This enables a mark M1 of appropriate size according to the distance to the target 90 to be overlay displayed. As a result thereof, the occupant H is able to ascertain the distance to the target 90 from the size of the mark M1.
The display system 16 of the first exemplary embodiment includes the display device 34, the camera 12 for imaging the target 90 ahead of the vehicle 10, and the radar 14 for detecting the target 90 ahead of the vehicle 10 (see
The display system 16 equipped with the display device 34, the camera 12, and the radar 14 is able to appropriately correct positional misalignment occurring between the position for overlay display of a mark urging caution with respect to a target and the actual position of the target. As a result thereof, the position of the mark for overlay display can be brought closer to the actual position of the target.
The vehicle 10 of the first exemplary embodiment includes the third display area G3 on the windshield 22 (see
The mark M1 is displayed on the third display area G3 of the windshield 22, and so the occupant H is able to be aware of the mark M1 while in a state of gazing in the vehicle forward direction. The occupant H is accordingly able to be aware of the mark M1 during driving without shifting from the forward gaze.
A display system of a second exemplary embodiment differs from the display system of the first exemplary embodiment in that there is a different display area for overlay display of the mark to urge caution with respect to the target.
Description follows regarding a configuration of the display system of the second exemplary embodiment. Note that the same terms and reference numerals are employed in the description for portions that are the same as or substantially equivalent to those described in the content of the first exemplary embodiment.
In the second exemplary embodiment, as illustrated in
A camera image F imaged by the camera 12 is displayed at an upper side of the second display area G2, and a current position 10A of the vehicle 10 on a map and a route (guidance path) R to a destination are displayed by a navigation system at a lower side of the second display area G2.
Information of an image of the mark M1 generated by the mark generation section 64 is output to the second display section 30 serving as an output section, and the mark M1 is overlay displayed on the second display area G2.
Similar operation and advantageous effects to those of the display system of the first exemplary embodiment can be exhibited even in such a configuration.
This completes a description of a display system of the present disclosure based on the above exemplary embodiments. However, the basic configuration is not limited to those of the exemplary embodiments, and various design modifications and the like are permitted thereto within a scope not departing from the spirit of the present disclosure.
In the exemplary embodiment described above, an example has been illustrated in which the camera 12 images ahead of the vehicle, and the radar 14 detects the positional information of the target 90 ahead of the vehicle. However, the camera and radar are not limited to such a mode, and may image or detect anywhere in the surroundings of the vehicle.
In the exemplary embodiment described above, an example was illustrated in which the positional information acquisition section 54 acquired the positional information of the target 90 based on the detection information of the radar 14 and the camera images of the camera 12. However, the positional information acquisition section is not limited to such a mode and, for example, the positional information of the target may be acquired based on detection information of at least one of a radar, a camera, a LiDAR, or a sonar.
In the exemplary embodiment described above, an example was illustrated in which the velocity information acquisition section 56 acquired the relative velocity of the pedestrian 94 with respect to the vehicle 10 based on plural camera images imaged by the camera 12 or plural detection information detected by the radar 14. However, the velocity information acquisition section is not limited to such a mode and, for example, the relative velocity of the target with respect to the vehicle may be acquired based on detection information of at least one of a radar, a camera, a LiDAR, or a sonar. The velocity information acquisition section may also acquire the relative velocity based on information about inter-vehicle distance.
In the exemplary embodiment described above, an example was illustrated in which the approach direction acquisition section 58 acquired the approach direction of the target 90 with respect to the vehicle 10 based on the plural camera images imaged by the camera 12 or on plural detection information detected by the radar 14. However, the approach direction acquisition section 58 is not limited to such a mode and, for example, may acquire the approach direction of the target with respect to the vehicle based on detection information of at least one of a radar, a camera, a LiDAR, or a sonar.
In the exemplary embodiment described above an example was illustrated in which the offset section 62 adjusted the offset distance for offsetting the position P1 acquired by the positional information acquisition section 54 according to the processing time estimated by the processing time estimation section 60. However, the offset section does not necessarily adjust the offset distance according to the processing time estimated by the processing time estimation section.
In the first exemplary embodiment described above, an example was illustrated in which an image of the mark M1 generated by the mark generation section 64 is projected onto the third display section 32 of the windshield 22. In the second exemplary embodiment, an example was illustrated in which an image of the mark M1 generated by the mark generation section 64 is displayed on the second display section 30. However, the image of the mark generated by the mark generation section may be displayed on the first display section 28, or may be projected onto a combiner provided at an upper face of the instrument panel.
In the exemplary embodiment described above examples were illustrated in which the display system of the present disclosure is applied to a vehicle 10 traveling forward. However, the display system of the present disclosure may be applied to a vehicle traveling around a curve or a vehicle traveling around a street corner.
In the exemplary embodiment described above, examples were illustrated in which the processing performed in the display control device 35 was software processing performed by executing a program; however, there is no limitation thereto. For example, the processing may be performed by hardware, or may be executed by a combination of software and hardware. Moreover, in a case in which software processing is employed, the program may be distributed stored on various types of non-transitory storage medium, such as a compact disc read only memory (CD-ROM), a digital versatile disc read only memory (DVD-ROM), a universal serial bus (USB) memory, or the like, and executed by a processor such as the CPU 36. The program may also be provided in a format downloadable from an external device over a network.