This application claims priority to and the benefit of Japanese Patent Application No. 2020-046809 filed on Mar. 17, 2020, the entire disclosure of which is incorporated herein by reference.
The present invention relates to a display control apparatus, a display control method, and a computer-readable storage medium storing a program with which an image can be displayed such that the image is superimposed on a visual field region of a driver.
Japanese Patent Laid-Open No. 2005-135037 describes that if it is inferred that a driver has recognized content of a displayed warning, the method for displaying this warning is changed (a reduction in brightness, a change in the display position, a stop of the display etc.). International Publication No. 2016/166791 describes that an actual line-of-sight distribution of a driver and an ideal line-of-sight distribution are displayed.
Neither patent document mentions controlling the display mode of a display for the driver in order to urge the driver to closely observe a predetermined region.
The present invention provides a display control apparatus, a display control method, and a computer-readable storage medium storing a program that effectively urge a driver to closely observe a predetermined region in a visual field region.
The present invention in its first aspect provides a display control apparatus including: a display control unit configured to display an image such that the image is superimposed on a visual field region of a driver of a vehicle; and a detection unit configured to analyze a line of sight of the driver and detect a viewpoint of the driver in the visual field region that is obtained as a result of the analysis, wherein the display control unit subjects a predetermined region in the visual field region to display control, and based on a result of determination of overlapping between the predetermined region in the visual field region and the viewpoint of the driver detected by the detection unit, if the overlapping satisfies a condition, the display control unit changes a mode of the display of the image.
The present invention in its second aspect provides a display control method including: displaying an image such that the image is superimposed on a visual field region of a driver of a vehicle; and analyzing a line of sight of the driver and detecting a viewpoint of the driver in the visual field region that is obtained as a result of the analysis, wherein a predetermined region in the visual field region is subjected to display control, and based on a result of determination of overlapping between the predetermined region in the visual field region and the detected viewpoint of the driver, if the overlapping satisfies a condition, a mode of the display of the image is changed.
The present invention in its third aspect provides a computer-readable storage medium storing a program for causing a computer to perform functions of: displaying an image such that the image is superimposed on a visual field region of a driver of a vehicle; and analyzing a line of sight of the driver and detecting a viewpoint of the driver in the visual field region that is obtained as a result of the analysis, wherein a predetermined region in the visual field region is subjected to display control, and based on a result of determination of overlapping between the predetermined region in the visual field region and the detected viewpoint of the driver, if the overlapping satisfies a condition, a mode of the display of the image is changed.
According to the present invention, the driver can be effectively urged to closely observe a predetermined region in a visual field region.
Hereinafter, embodiments will be described in detail with reference to the attached drawings. Note that the following embodiments are not intended to limit the scope of the claimed invention, and limitation is not made to an invention that requires all combinations of features described in the embodiments. Two or more of the multiple features described in the embodiments may be combined as appropriate. Furthermore, the same reference numerals are given to the same or similar configurations, and redundant description thereof is omitted.
The control apparatus in the figure includes a plurality of ECUs 20 to 29.
Functions or the like assigned to the respective ECUs 20 to 29 will be described below. Note that the number of ECUs and the functions assigned thereto can be designed as appropriate, and they can be subdivided further than in the present embodiment or integrated.
The ECU 20 executes control associated with automated driving of the vehicle 1. During automated driving, the ECU 20 automatically controls at least either steering or acceleration/deceleration of the vehicle 1. In a later-described control example, the ECU 20 automatically controls both steering and acceleration/deceleration.
The ECU 21 controls an electric power steering device 3. The electric power steering device 3 includes a mechanism for steering front wheels in accordance with a driver's driving operation (steering operation) to a steering wheel 31. The electric power steering device 3 also includes a motor that exerts a driving force for assisting in the steering operation or automatically steering the front wheels, a sensor for detecting a steering angle, and so on. If the driving state of the vehicle 1 is automated driving, the ECU 21 automatically controls the electric power steering device 3 in response to an instruction from the ECU 20, and controls the traveling direction of the vehicle 1.
The ECUs 22 and 23 control detection units 41 to 43 for detecting the surrounding situation of the vehicle and perform information processing on their detection results. The detection units 41 are cameras (hereinafter referred to as “cameras 41” in some cases) for capturing images of the front of the vehicle 1. In the present embodiment, the detection units 41 are attached to the vehicle interior on the inner side of the windscreen, at a front portion of the roof of the vehicle 1. Analysis of the images captured by the cameras 41 makes it possible to extract an outline of a target and extract a lane marker (white line etc.) of a traffic lane on a road.
The detection units 42 are Light Detection and Ranging (LIDARs), and detect a target around the vehicle 1 and measure the distance to the target. In the present embodiment, five detection units 42 are provided, one on each corner of the front part of the vehicle 1, one at the center of the rear part, and one on each side of the rear part. The detection units 43 are millimeter wave radars (hereinafter referred to as “radars 43” in some cases), and detect a target around the vehicle 1 and measure the distance to the target. In the present embodiment, five radars 43 are provided, one at the center of the front part of the vehicle 1, one at each corner of the front part, and one on each corner of the rear part.
The ECU 22 controls one of the cameras 41 and the detection units 42 and performs information processing on their detection results. The ECU 23 controls the other camera 41 and the radars 43 and performs information processing on their detection results. As a result of two sets of devices for detecting the surrounding situation of the vehicle being provided, the reliability of the detection results can be improved. Also, as a result of different types of detection units such as cameras and radars being provided, multifaceted analysis of the surrounding environment of the vehicle is enabled.
The ECU 24 controls a gyroscope sensor 5, a GPS sensor 24b, and a communication device 24c, and performs information processing on their detection results or communication results. The gyroscope sensor 5 detects rotational motion of the vehicle 1. A path of the vehicle 1 can be determined based on the results of detection by the gyroscope sensor 5, the wheel speed, or the like. The GPS sensor 24b detects the current position of the vehicle 1. The communication device 24c wirelessly communicates with a server that provides map information, traffic information, and weather information, and acquires such information. The ECU 24 can access a database 24a of map information that is built in the storage device, and the ECU 24 searches for a route from the current location to a destination, for example. Note that a database of the aforementioned traffic information, weather information, or the like may also be built in the database 24a.
The ECU 25 includes a communication device 25a for vehicle-to-vehicle communication. The communication device 25a wirelessly communicates with other vehicles in the surrounding area and exchanges information between the vehicles.
The ECU 26 controls a power plant 6. The power plant 6 is a mechanism that outputs a driving force for rotating drive wheels of the vehicle 1, and includes, for example, an engine and a transmission. For example, the ECU 26 controls the output of the engine in response to the driver's driving operation (accelerator pedal operation, i.e., acceleration operation) detected by an operation detection sensor 7a provided on an accelerator pedal 7A, and switches the gear ratio of the transmission based on information such as vehicle speed detected by a vehicle speed sensor 7c. If the driving state of the vehicle 1 is automated driving, the ECU 26 automatically controls the power plant 6 in response to an instruction from the ECU 20 and controls acceleration/deceleration of the vehicle 1.
The ECU 27 controls lighting devices (headlights, tail lights, etc.) including direction indicators 8 (blinkers).
The ECU 28 controls an input/output device 9. The input/output device 9 outputs information to the driver and accepts input of information from the driver. A sound output device 91 notifies the driver of information using a sound. A display device 92 notifies the driver of information by displaying an image. The display device 92 is, for example, disposed in front of the driver seat and constitutes an instrument panel or the like. Note that although an example of using a sound and a display is described here, information may alternatively be notified using a vibration and/or light. Further, information may be notified by combining two or more of a sound, a display, a vibration, and light. Furthermore, the combination may be varied, or the notification mode may be varied, in accordance with the level (e.g., degree of urgency) of information to be notified. The display device 92 includes a navigation device.
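As a purely illustrative sketch of the idea that the notification mode may vary with the level of information, the selection of output means could be organized as follows; the urgency levels, channel names, and thresholds are assumptions, not values defined in this disclosure.

```python
# Illustrative only: urgency levels and channel assignments are assumed,
# not defined in this disclosure.
def choose_notification_channels(urgency: int) -> set[str]:
    """Map an urgency level (0 = lowest) to a set of output channels."""
    channels = {"display"}         # a visual notification is always used here
    if urgency >= 1:
        channels.add("sound")      # add sound for medium urgency
    if urgency >= 2:
        channels.add("vibration")  # add vibration for high urgency
    return channels
```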
An input device 93 is a switch group that is disposed at a position at which it can be operated by the driver and gives instructions to the vehicle 1, and may also include a sound input device.
The ECU 29 controls brake devices 10 and a parking brake (not shown). The brake devices 10 are, for example, disc brake devices and provided on the respective wheels of the vehicle 1, and decelerate or stop the vehicle 1 by applying resistance to the rotation of the wheels. For example, the ECU 29 controls operations of the brake devices 10 in response to the driver's driving operation (braking operation) detected by an operation detection sensor 7b provided on a brake pedal 7B. If the driving state of the vehicle 1 is automated driving, the ECU 29 automatically controls the brake devices 10 in response to an instruction from the ECU 20 and controls deceleration and stop of the vehicle 1. The brake devices 10 and the parking brake can also be operated to maintain the stopped state of the vehicle 1. If the transmission of the power plant 6 includes a parking lock mechanism, it can also be operated to maintain the stopped state of the vehicle 1.
A description will be given of control associated with automated driving of the vehicle 1 executed by the ECU 20. If an instruction of a destination and automated driving is given by the driver, the ECU 20 automatically controls the travel of the vehicle 1 to a destination in accordance with a guided route searched for by the ECU 24. During automated control, the ECU 20 acquires information (external information) associated with the surrounding situation of the vehicle 1 from the ECUs 22 and 23, and gives instructions to the ECUs 21, 26, and 29 based on the acquired information to control steering and acceleration/deceleration of the vehicle 1.
The external recognition unit 201 recognizes external information regarding the vehicle 1 based on signals from external recognition cameras 207 and external recognition sensors 208. Here, the external recognition cameras 207 are, for example, the cameras 41 described above, and the external recognition sensors 208 are, for example, the detection units 42 and 43 described above. A self-position recognition unit 202 recognizes the position of the vehicle 1 based on, for example, a signal from the GPS sensor 24b.
The in-vehicle recognition unit 203 identifies an occupant of the vehicle 1 and recognizes the state of the occupant based on signals from an in-vehicle recognition camera 209 and an in-vehicle recognition sensor 210. The in-vehicle recognition camera 209 is, for example, an infrared camera installed on the display device 92 inside the vehicle 1, and detects a line-of-sight direction of the occupant, for example. The in-vehicle recognition sensor 210 is, for example, a sensor for detecting a biological signal of the occupant. The in-vehicle recognition unit 203 recognizes that the occupant is in a dozing state or a state of doing work other than driving, based on those signals.
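A minimal sketch of one way the dozing-state recognition could work, assuming per-frame eye-closure flags derived from the in-vehicle recognition camera 209; the window-based ratio and the threshold are illustrative assumptions rather than the disclosed method.

```python
# Assumption: eye_closed_flags holds one boolean per recent camera frame.
def is_dozing(eye_closed_flags: list[bool], threshold: float = 0.7) -> bool:
    """Infer a dozing state when the eye-closure ratio over the window is high."""
    if not eye_closed_flags:
        return False
    closure_ratio = sum(eye_closed_flags) / len(eye_closed_flags)
    return closure_ratio >= threshold
```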
The action planning unit 204 plans actions of the vehicle 1, such as an optimal path and a risk-avoiding path, based on the results of recognition by the external recognition unit 201 and the self-position recognition unit 202. The action planning unit 204 plans actions based on an entrance determination based on a start point and an end point of an intersection, a railroad crossing, or the like, and prediction of behavior of other vehicles, for example. The drive control unit 205 controls a driving force output device 212, a steering device 213, and a brake device 214 based on an action plan made by the action planning unit 204. Here, for example, the driving force output device 212 corresponds to the power plant 6 in
The device control unit 206 controls devices connected to the control unit 200. For example, the device control unit 206 controls a speaker 215 to cause the speaker 215 to output a predetermined sound message, such as a message for warning or navigation. Also, for example, the device control unit 206 controls a display device 216 to cause the display device 216 to display a predetermined interface screen. The display device 216 corresponds to the display device 92, for example. Also, for example, the device control unit 206 controls a navigation device 217 to acquire setting information in the navigation device 217.
The control unit 200 may also include functional blocks other than those shown in the figure.
A head-up display (HUD) control unit 218 controls a head-up display (HUD) 219 that is attached to the vehicle interior near the windscreen of the vehicle 1. The HUD control unit 218 and the control unit 200 can communicate with each other, and the HUD control unit 218 acquires, for example, captured image data obtained by the external recognition cameras 207 via the control unit 200. The HUD 219 projects an image onto the windscreen under the control of the HUD control unit 218. For example, the HUD control unit 218 receives captured image data obtained by the external recognition cameras 207 from the control unit 200, and generates image data to be projected by the HUD 219 based on the captured image data. This image data is, for example, image data to be overlapped (superimposed) with the landscape that can be seen from the driver through the windscreen. Due to the projection onto the windscreen by the HUD 219, the driver can feel that an icon image (destination information etc.) for navigation is overlapped with the landscape of a road ahead, for example. The HUD control unit 218 can communicate with an external device via a communication interface (I/F) 220. The external device is, for example, a mobile terminal 221 such as a smartphone held by the driver. The communication I/F 220 may be configured such that it can be connected to a plurality of networks, and may be, for example, configured such that it can be connected to the Internet.
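The following is a minimal sketch of the superimposition step described above, assuming the captured image data and an icon image are available as numpy arrays; the function name, the fixed alpha value, and the simple blending are assumptions, and the optical correction a real windscreen projection requires is omitted.

```python
import numpy as np

def make_hud_image(frame: np.ndarray, icon: np.ndarray,
                   top_left: tuple[int, int], alpha: float = 0.5) -> np.ndarray:
    """Blend an icon into the frame so it appears superimposed on the scene.

    Assumes the icon fits entirely within the frame at top_left.
    """
    out = frame.astype(np.float32).copy()
    y, x = top_left
    h, w = icon.shape[:2]
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = (1.0 - alpha) * region + alpha * icon.astype(np.float32)
    return out.astype(frame.dtype)
```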
Operations in the present embodiment will be described below. When a driver drives a vehicle, the driver has a duty of care to look ahead. In addition, there are regions that require attention in the visual field region that can be visually recognized by the driver through the windscreen, depending on the scene, such as an intersection or a curve.
Thus, according to the present embodiment, regions that require attention (e.g., the regions 301 and 302) are identifiably displayed in the visual field region, as shown in the figure.
In step S101, the HUD control unit 218 acquires the current position of the vehicle 1. For example, the HUD control unit 218 may acquire the current position of the vehicle 1 from the control unit 200. Then, in step S102, the HUD control unit 218 determines whether or not to display a region of interest based on the current position of the vehicle 1 acquired in step S101; for example, it may be determined to display a region of interest if the current position corresponds to a scene that requires attention, such as an intersection or a curve. Here, the region of interest corresponds to any of the regions 301 and 302 described above. If it is determined not to display a region of interest, the processing is repeated from step S101; if it is determined to display a region of interest, the processing proceeds to step S103.
Note that points that require attention may be learned in advance for each scene. As a configuration for this purpose, for example, the HUD control unit 218 has a learning unit that includes a GPU, a data analysis unit, and a data accumulation unit. The data accumulation unit accumulates data on the driver's viewpoint position in the visual field region for each scene corresponding to a traveling road or the like, and the data analysis unit analyzes a distribution of the driver's viewpoint in the visual field region. For example, a configuration may be employed in which a skilled driver drives the vehicle in advance, a distribution of this driver's viewpoint is analyzed, and the analysis results are stored for each scene. In this case, the tendency of the distribution of the skilled driver's viewpoint is learned as points that require attention.
Meanwhile, points that require attention may also be learned with other methods using the tendency of the distribution of the viewpoint of a driver other than a skilled driver in the visual field region. For example, a configuration may be employed in which, based on the tendency of the distribution of a driver's viewpoint in the visual field region, points that the driver tends to overlook but that need to be checked are regarded as points that require attention, and are classified and learned for each scene. For example, if the analysis shows that the viewpoint tends to be concentrated excessively in the area immediately ahead of the vehicle when the vehicle travels around a curve (e.g., the driver has a habit of staring at an area near the vehicle), a region corresponding to an area far ahead of the vehicle in the visual field region may be learned as a point that requires attention. At this time, the tendency of the distribution of the aforementioned skilled driver's viewpoint may be used as training data. Each of the above learning results is used as a region of interest if it is decided that the vehicle is traveling in a similar scene.
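A minimal sketch of the learning idea above, assuming viewpoint samples normalized to [0, 1) coordinates in the visual field region; the grid resolution and the under-observation ratio are illustrative assumptions, not values taken from this disclosure.

```python
GRID = 8  # the visual field region is divided into an 8x8 grid (assumed)

def accumulate(dist: dict, samples: list[tuple[float, float]]) -> None:
    """Accumulate normalized viewpoint samples into a per-cell histogram."""
    for x, y in samples:
        cell = (min(int(x * GRID), GRID - 1), min(int(y * GRID), GRID - 1))
        dist[cell] = dist.get(cell, 0) + 1

def attention_points(skilled: dict, learner: dict, ratio: float = 0.3) -> list:
    """Cells the skilled driver watches but this driver tends to overlook."""
    total_s = sum(skilled.values()) or 1
    total_l = sum(learner.values()) or 1
    points = []
    for cell, count in skilled.items():
        if learner.get(cell, 0) / total_l < ratio * (count / total_s):
            points.append(cell)  # under-observed relative to the skilled driver
    return points
```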
In step S103, the HUD control unit 218 displays the region of interest. First, in step S201, the HUD control unit 218 acquires object information on the visual field region, for example, based on recognition results acquired via the control unit 200.
In step S202, the HUD control unit 218 specifies the coordinates of the region of interest in the visual field region 300 that is to be displayed by the HUD, based on the object information acquired in step S201. For example, the HUD control unit 218 acquires image data on the visual field region 300 based on the captured image data obtained by the external recognition camera 207 that corresponds to the visual field region 300, and specifies the coordinates of the region of interest in the visual field region 300 that is to be displayed by the HUD, based on the image data.
In step S203, the HUD control unit 218 generates display data for displaying the region of interest using the HUD, based on the coordinates of the region of interest specified in step S202, and controls the HUD 219 so as to display the region of interest on the windscreen based on the display data. Here, the display data corresponds to any of the regions 301 and 302 described above, and the region of interest is displayed identifiably, for example, translucently or in a lightly colored manner.
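As a hypothetical sketch of steps S202 and S203, the region-of-interest coordinates might be obtained by bounding the relevant recognized objects, with the resulting display data carrying a translucency value; the DisplayData structure and the fixed alpha are assumptions.

```python
from dataclasses import dataclass

@dataclass
class DisplayData:
    x: int        # rectangle origin in visual-field coordinates
    y: int
    w: int        # rectangle size
    h: int
    alpha: float  # translucency of the identifiable display

def region_display_data(object_boxes: list[tuple[int, int, int, int]],
                        alpha: float = 0.3):
    """Bound all relevant (x, y, w, h) object boxes with one rectangle."""
    if not object_boxes:
        return None
    xs = [b[0] for b in object_boxes] + [b[0] + b[2] for b in object_boxes]
    ys = [b[1] for b in object_boxes] + [b[1] + b[3] for b in object_boxes]
    return DisplayData(min(xs), min(ys),
                       max(xs) - min(xs), max(ys) - min(ys), alpha)
```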
In step S104, the HUD control unit 218 acquires the viewpoint of the driver. In step S301, the HUD control unit 218 analyzes the line of sight of the driver, for example, based on a signal from the in-vehicle recognition camera 209.
In step S302, the HUD control unit 218 specifies the coordinates of the viewpoint in the visual field region 300 based on the results of analysis in step S301. For example, the HUD control unit 218 specifies the coordinates of the viewpoint in the visual field region 300 based on the captured image data obtained by the external recognition camera 207 that corresponds to the visual field region 300. After step S302, the processing of this flow ends.
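One way step S302 could map a gaze direction to viewpoint coordinates, sketched under the assumption that the windscreen is approximated by the plane z = d in a coordinate frame centered on the driver's eye; the plane model and the default distance are assumptions, not the disclosed method.

```python
def viewpoint_on_windscreen(gaze_dir: tuple[float, float, float],
                            d: float = 0.8):
    """Intersect the gaze ray with the plane z = d; None if looking away."""
    gx, gy, gz = gaze_dir
    if gz <= 1e-6:              # gaze not directed toward the windscreen
        return None
    t = d / gz                  # ray parameter at the plane z = d
    return (gx * t, gy * t)     # lateral/vertical viewpoint coordinates
```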
In step S105, the HUD control unit 218 determines whether or not the region of interest displayed in step S103 overlaps with the viewpoint acquired in step S104. If it is determined that they overlap, in step S106, the HUD control unit 218 performs display control for the region of interest. In step S401, the HUD control unit 218 cancels the identifiable display for a portion of the region of interest that corresponds to the overlapping region, for example, a predetermined region that includes the viewpoint coordinates (e.g., a circular region corresponding to the viewpoint 303).
In step S402, the HUD control unit 218 acquires the amount of tracking of the viewpoint. The amount of tracking of the viewpoint corresponds to the amount of movement of the viewpoint 303 within the region of interest.
In step S403, the HUD control unit 218 determines whether or not the amount of tracking acquired in step S402 has reached a predetermined amount. Here, the predetermined amount may be, for example, the area corresponding to a predetermined ratio of the area of the region of interest. If it is determined that the amount of tracking has reached the predetermined amount, in step S404, the HUD control unit 218 cancels the display control performed in step S103 for the entire region of interest. As a result, the driver can recognize that the translucent display of the region of interest at which the driver has directed the viewpoint to a certain extent has completely disappeared. After step S404, the processing of this flow ends.
If, in step S403, it is determined that the amount of tracking has not reached the predetermined amount, the processing is repeated from step S104. For example, if the driver has placed the viewpoint at the region of interest and the overlapping region has not yet reached the predetermined amount even after the viewpoint has been moved within the region of interest, the viewpoint continues to be tracked as the processing is repeated from step S104. In this case, for example, the region swept by a predetermined region including the viewpoint coordinates (e.g., a circular region corresponding to the viewpoint 303) as the viewpoint moves is specified as the overlapping region, as mentioned above.
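A minimal sketch of the tracking computation in steps S402 to S404, assuming the region of interest has been rasterized into grid cells and that the viewpoint sweeps a circular region as it moves; the cell granularity, the radius, and the 60% cancellation ratio are illustrative assumptions.

```python
import math

def update_tracking(region_cells: set, seen_cells: set,
                    viewpoint: tuple[float, float],
                    radius: float = 2.0, cancel_ratio: float = 0.6) -> bool:
    """Mark cells swept by the viewpoint; return True when the entire
    identifiable display should be canceled (step S404)."""
    vx, vy = viewpoint
    for (cx, cy) in region_cells:
        if math.hypot(cx - vx, cy - vy) <= radius:  # cell is swept (S401)
            seen_cells.add((cx, cy))
    tracked = len(seen_cells & region_cells) / max(len(region_cells), 1)
    return tracked >= cancel_ratio                  # amount reached (S403)
```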
In step S501, the HUD control unit 218 determines whether or not a predetermined time has elapsed, based on the measurement result obtained by the timer function (e.g., a timer started when the region of interest is displayed in step S103). Here, if it is determined that the predetermined time has elapsed, in step S502, the HUD control unit 218 increases the display density of the region of interest displayed in step S103. This configuration can urge the driver to pay attention to the region of interest. The display control in step S502 is not limited to density control, and any other display control may be performed. For example, the region of interest displayed in step S103 may be flashed. After step S502, the processing is repeated from step S501.
On the other hand, if, in step S501, it is determined that the predetermined time has not elapsed, in step S503, the HUD control unit 218 determines whether or not the region of interest displayed in step S103 overlaps with the viewpoint acquired in step S104. For example, the determination in step S503 may be performed based on the coordinates of the region of interest specified in step S202 and the viewpoint coordinates specified in step S302. If, in step S503, it is determined that the displayed region of interest overlaps with the acquired viewpoint, the display control in step S106 is performed, and if, in step S503, it is determined that the displayed region of interest does not overlap with the acquired viewpoint, the processing is repeated from step S501.
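The loop formed by steps S501 to S503 might be organized as follows; this is a sketch only, and the helper callables and the three-second limit are assumptions standing in for the processing described above.

```python
import time

def monitor_region(get_viewpoint, overlaps, increase_density, on_overlap,
                   time_limit: float = 3.0) -> None:
    """Watch for viewpoint overlap until a time limit expires."""
    start = time.monotonic()
    while True:
        if time.monotonic() - start >= time_limit:  # S501: time elapsed
            increase_density()                       # S502: emphasize region
            return
        if overlaps(get_viewpoint()):                # S503: overlap check
            on_overlap()                             # proceed to step S106
            return
```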
As described above, according to the present embodiment, when a vehicle travels in a scene in which a point that requires attention, such as an intersection, is present, this region is identifiably displayed on the windscreen by the head-up display. If the driver has not placed a viewpoint at this region for a predetermined time, the display mode of this region further changes. This configuration can effectively urge the driver to pay attention. If the driver places a viewpoint at this region, the identifiable display is canceled in accordance with the amount of movement of the viewpoint. This configuration can motivate the driver to sufficiently pay attention.
The second embodiment will be described below regarding differences from the first embodiment. In the first embodiment, as described above, after the current position of the vehicle 1 has been acquired in step S101, if, in step S102, it is determined not to display the region of interest, the processing is repeated from step S101. In the present embodiment, a risk determination is added to this flow, and a warning may be displayed depending on the result of the risk determination.
Step S101 is as described in the first embodiment, and a description thereof is omitted accordingly. In the present embodiment, after the current position of the vehicle 1 has been acquired in step S101, in step S601, the HUD control unit 218 performs risk determination. For example, the risk determination may be performed by determining the possibility that a traveling path of another vehicle or a moving object will overlap the traveling path of the vehicle 1, based on the recognition result obtained by the external recognition unit 201. Also, for example, the risk determination may be performed by determining the possibility that a blind spot region for the vehicle 1 will occur due to another vehicle. Also, for example, the risk determination may be performed based on a road condition such as freezing, or weather conditions such as rainfall and heavy fog. Any of various indexes may be used as the result of the risk determination; for example, the margin to collision (MTC) may be used.
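Since the disclosure does not define how the margin to collision is computed, the following check is purely an assumption built on a closing-speed proxy; it only illustrates the idea of thresholding a margin in step S601.

```python
def risk_detected(gap_m: float, closing_speed_mps: float,
                  mtc_threshold_s: float = 2.0) -> bool:
    """Flag a risk when the time margin to another vehicle is small."""
    if closing_speed_mps <= 0.0:   # not closing in on the other vehicle
        return False
    margin_s = gap_m / closing_speed_mps
    return margin_s <= mtc_threshold_s
```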
In the present embodiment, in step S102, the HUD control unit 218 first determines whether or not to display a region of interest, based on the result of the risk determination in step S601. For example, if approach of another vehicle is recognized and the MTC is smaller than or equal to a threshold, the HUD control unit 218 determines not to display a region of interest. On the other hand, if, based on the risk determination result, it is determined to display a region of interest, it is further determined whether or not to display a region of interest, based on the current position of the vehicle 1 that is acquired in step S101.
If, in step S102, it is determined not to display a region of interest, in step S602, the HUD control unit 218 determines whether or not to display a warning. In the determination in step S602, for example, it is determined to display a warning if, in step S102, it is determined not to display a region of interest based on the risk determination result. It is determined not to display a warning if, in step S102, it is determined not to display a region of interest based on the current position of the vehicle 1. If, in step S602, it is determined not to display a warning, the processing is repeated from step S101. If, in step S602, it is determined to display a warning, the processing proceeds to step S603.
In step S603, the HUD control unit 218 generates display data for displaying a warning, and controls the HUD 219 so as to display the display data on the windscreen. Here, the display data may be, for example, data indicating the direction of another approaching vehicle or moving object. Alternatively, the display data may be data indicating a regional display that surrounds another approaching vehicle or moving object in the visual field region 300. If, after such a warning has been displayed, it is detected that the driver has placed the viewpoint at an area near the warning display, the warning display may be canceled. After step S603, the processing is repeated from step S101.
As described above, according to the present embodiment, if, for example, a risk such as collision with another vehicle or a moving object is determined while the vehicle 1 is traveling, a notification of the risk is displayed without displaying a region of interest. This configuration can more effectively make the driver recognize the occurrence of risk.
The third embodiment will be described below regarding differences from the first and second embodiments. The first and second embodiments have a configuration in which a region of interest is displayed at a timing at which, in step S102, it is determined to display the region of interest. In the present embodiment, a region of interest is displayed at a timing at which it is decided, based on an internally-set region of interest, that the driver has not placed a viewpoint at the region of interest. With this configuration, in the case of a driver who is more likely to place a viewpoint at a region of interest, such as a skilled driver, the frequency of executing HUD display on the windscreen can be reduced, so that the driver can focus on driving.
Step S101 is as described in the first embodiment, and a description thereof is omitted accordingly. In the present embodiment, after the current position of the vehicle 1 has been acquired in step S101, in step S701, the HUD control unit 218 determines whether or not to set a region of interest, based on the current position of the vehicle 1 acquired in step S101. The criteria for determining whether or not to set a region of interest are the same as the determination criteria in step S102 in the first embodiment. If, in step S701, it is determined not to set a region of interest, the processing is repeated from step S101. If, in step S701, it is determined to set a region of interest, the processing proceeds to step S702.
In step S702, the HUD control unit 218 sets a region of interest. The region of interest to be set corresponds to the regions 301 and 302 described above; note that at this point the region of interest is merely set internally and is not yet displayed.
After step S702, the processing in steps S104 and S105 is performed. Steps S104 and S105 are as described in the first embodiment, and a description thereof is omitted accordingly. In step S105, it is determined whether or not the region of interest set in step S702 overlaps with the viewpoint acquired in step S104. If, in step S105, it is determined that the set region of interest overlaps with the acquired viewpoint, the processing is repeated from step S101. That is to say, in the present embodiment, if the driver places a viewpoint at a region that requires attention, HUD display on the windscreen is not executed. On the other hand, if, in step S105, it is determined that the set region of interest does not overlap with the acquired viewpoint, the processing proceeds to step S703.
In step S703, the HUD control unit 218 displays the region of interest set in step S702. In step S703, the same processing as step S203 in the first embodiment is performed. This configuration can urge the driver to pay attention to the region of interest, similarly to the first embodiment. After step S703, the processing is repeated from step S105.
After the processing in step S703 has been performed, if, in step S105, it is determined that the set region of interest overlaps with the acquired viewpoint, in step S704, the HUD control unit 218 performs display control for the region of interest displayed in step S703. For example, in step S704, the same processing as step S106 in the first embodiment may be performed, and the processing may then be repeated from step S101. Alternatively, a configuration may be employed in which, if, in step S105, it is determined that the set region of interest overlaps with the acquired viewpoint, the display of the region of interest is entirely canceled and the processing is repeated from step S101, without waiting for the overlapping region to reach a predetermined amount. This configuration can reduce the frequency of displaying a region of interest to a skilled driver.
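A condensed sketch of the third embodiment's display logic (steps S702 to S704), assuming per-iteration helper callables; it implements the alternative configuration in which the display is entirely canceled as soon as overlap is detected, and all names are assumptions.

```python
def third_embodiment_step(region, viewpoint, displayed: bool,
                          overlaps, show, cancel) -> bool:
    """One iteration of the loop; returns the new 'displayed' state."""
    if overlaps(region, viewpoint):
        if displayed:
            cancel(region)   # S704 (alternative): driver looked, remove display
        return False         # no HUD display while the driver is watching
    if not displayed:
        show(region)         # S703: display only when attention is needed
        return True
    return displayed
```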
As described above, according to the present embodiment, a region of interest is displayed at a timing at which it is decided that the driver has not looked at the set region of interest. This configuration can reduce the frequency of executing HUD display on the windscreen and allow the driver to focus on driving in the case of a driver who is more likely to look at the region of interest, such as a skilled driver.
A setting may be made to switch between the operations in the first, second, and third embodiments. For example, this switching is performed on a user interface screen displayed on the display device 216, and the control unit 200 communicates the selected operation mode to the HUD control unit 218.
A display control apparatus of one of the above embodiments includes: a display control unit (218) configured to display an image such that the image is superimposed on a visual field region of a driver of a vehicle; and a detection unit (209, 210, 203, S104) configured to analyze a line of sight of the driver and detect a viewpoint of the driver in the visual field region that is obtained as a result of the analysis, wherein the display control unit subjects a predetermined region in the visual field region to display control, and based on a result of determination of overlapping between the predetermined region in the visual field region and the viewpoint of the driver detected by the detection unit, if the overlapping satisfies a condition, the display control unit changes a mode of the display of the image (S106).
This configuration can effectively urge the driver to closely observe the predetermined region (region of interest) in the visual field region.
The image is an image (301, 302) superimposed on the predetermined region. The display control unit executes identifiable display such that the image can be identified.
With this configuration, the predetermined region can be more readily recognized by the driver by, for example, displaying the predetermined region in a lightly colored manner.
The identifiable display is executed if the predetermined region is specified in the visual field region (S103). The identifiable display is executed if the predetermined region is specified in the visual field region, and the viewpoint of the driver is detected at a position different from the predetermined region (S703).
With this configuration, upon the predetermined region being recognized, the predetermined region is identifiably displayed, and thus the predetermined region can be promptly displayed. In addition, the frequency of displaying the predetermined region can be reduced for a skilled driver, for example.
As the condition, if the overlapping is detected, the display control unit cancels the identifiable display that has been executed for a portion of the image that corresponds to the overlapping (S401). As the condition, if the overlapping exceeds a predetermined amount, the display control unit cancels the identifiable display that has been executed for the image (S404, S704).
This configuration can effectively make the driver recognize that the driver has placed a viewpoint at the predetermined region.
If, after the identifiable display has been executed, the viewpoint of the driver is detected at a position different from the predetermined region, the display control unit changes a mode of the identifiable display. The display control unit changes the mode of the identifiable display by changing density of the image (S502).
With this configuration, if the driver has not placed a viewpoint at the predetermined region, the driver can be effectively urged to closely observe the predetermined region in the visual field region.
Also, the display control apparatus further includes a determination unit (S601) configured to determine a risk outside the vehicle, wherein, depending on a result of the determination by the determination unit, the display control unit displays, as the image, an image for warning the driver about the risk such that the image is superimposed on the visual field region (S603). With this configuration, if, for example, a moving object or another vehicle is approaching the driver's vehicle, a warning indicating the approach can be displayed. Furthermore, if the driver looks at the point of this approach, the warning display can be canceled.
The invention is not limited to the foregoing embodiments, and various variations/changes are possible within the spirit of the invention.