The present application is based on and claims priority from Japanese patent application No. 2022-199052 filed on Dec. 14, 2022, the disclosure of which is hereby incorporated by reference in its entirety.
The present disclosure relates to a display control device and a display control method.
A vehicle display system disclosed in Patent Document 1 (WO 2022/224754) displays contents on a plurality of display screens with non-display regions between the respective screens. An occupant status monitor detects the head position and head angle of the vehicle occupant as well as the line of sight of the occupant. When important information is included in the contents to be displayed, a display processing unit processes the important information based on a detection result of the occupant status monitor and displays it on any of the plurality of display screens so that the information is not hidden.
It is disclosed that with this conventional technology, a control unit detects the line of sight, head position, and head angle of the occupant, and when the occupant moves the head to the left or right, the image displayed on the display unit is moved to the left or right, reduced, or tilted.
However, the above conventional technology requires the occupant to move their head significantly to change the image displayed on the display unit, which places a heavy burden on the occupant.
Therefore, an object of the present disclosure is to provide a display control device and a display control method that can reduce the burden on an occupant when the occupant manipulates an image displayed on a display unit.
In order to achieve the aforementioned object, a display control device of the present disclosure includes: an occupant information acquisition unit for acquiring information related to the angle of the face of an occupant; an image acquisition unit for acquiring an image in which a periphery of a vehicle is captured; an image conversion unit for converting the image to a virtual viewpoint image viewed from a virtual viewpoint; a virtual viewpoint setting unit for setting the position of the virtual viewpoint based on the amount of change in the face angle of the occupant; and a display processing unit for performing control to display the virtual viewpoint image on a display unit.
A display control method executed by a control unit of a display control device provided in a vehicle includes: an occupant information acquisition step for acquiring information related to the angle of the face of an occupant; an image acquisition step for acquiring an image in which a periphery of the vehicle is captured; an image conversion step for converting the image to a virtual viewpoint image viewed from a virtual viewpoint; a virtual viewpoint setting step for setting the position of the virtual viewpoint based on the amount of change in the occupant face angle; and a display processing step for performing control to display the virtual viewpoint image on a display unit.
With respect to the use of plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
A display control device according to Embodiment 1 in the present disclosure will be described as follows based on the drawings.
As depicted in the drawings, the display control system 100 includes an imaging device 10, an occupant monitoring unit 20, the display control device 30, and a display unit 40.
The imaging device 10 is installed outside the own vehicle and captures an image of the surroundings of the vehicle. The imaging device 10 outputs the captured image to the display control device 30 as captured image data in accordance with a prescribed protocol. In the present embodiment, the imaging device 10 includes a front camera installed in the front of the own vehicle. Note that the imaging device 10 is not limited to the front camera, and can also include a rear camera installed in the back of the own vehicle, side cameras installed in the front and back of the own vehicle on the left and right sides, and the like.
The occupant monitoring unit 20 is provided in the own vehicle. The occupant monitoring unit 20 monitors the status of the driver, who is the occupant 1, based on an image captured by an in-vehicle camera 21. The occupant monitoring unit 20 can be any known type of monitoring unit. The in-vehicle camera 21 is a camera that captures an image of the interior of the own vehicle, including the driver, and is installed toward the vehicle interior in the vicinity of the display unit 40, for example. Furthermore, the occupant monitoring unit 20 detects the face angle of the occupant 1 using a known technique based on the image captured by the in-vehicle camera 21, and outputs information related to the acquired face angle (hereinafter referred to as “angle information”) to the display control device 30. In the present embodiment, the occupant monitoring unit 20 outputs the yaw angle and pitch angle, which indicate the direction in which the face of the occupant 1 is facing in a horizontal direction and vertical direction, as the face angle information.
The display control device 30 is an information processing device that executes processes related to the creation and display of an image, and includes a processing unit (control unit) 50 and a storage unit 60, as depicted in the drawings.
The processing unit 50 controls the entire display control device 30 and generates a display image to be displayed on the display unit 40 based on the captured image data input from the imaging device 10 and the angle information input from the occupant monitoring unit 20. To this end, the processing unit 50 functions as an image acquisition unit 51, an occupant information acquisition unit 52, a virtual viewpoint setting unit 53, an image conversion unit 54, and a display processing unit 55, as depicted in the drawings.
The image acquisition unit 51 acquires captured image data of the periphery of the own vehicle from the imaging device 10 and outputs the data to the image conversion unit 54.
The occupant information acquisition unit 52 acquires angle information from the occupant monitoring unit 20 and calculates the amount of change in the face angle, in other words, in the yaw angle and pitch angle, based on the acquired angle information and information related to a face angle serving as a predetermined reference (hereinafter referred to as "reference angle information"). The occupant information acquisition unit 52 outputs the calculated amounts of change in the yaw angle and pitch angle to the virtual viewpoint setting unit 53.
The reference angle information is stored in advance in a reference angle storage unit 61 of the storage unit 60. For example, the reference angle can be the face angle when the occupant 1 faces straight ahead (to the front), or the face angle when the occupant 1 faces the display unit 40. The angle information acquired from the occupant monitoring unit 20 is information related to the face angle after the occupant 1 has changed the direction of the face relative to the reference angle.
The calculation procedure of the amount of change in the yaw angle and pitch angle by the occupant information acquisition unit 52 is described below with reference to the drawings.
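The bodies of equations (1) and (2), referenced in the operation description below, plausibly reduce to simple differences between the acquired angles and the reference angles. The following Python sketch illustrates that assumed form; the function and variable names are illustrative, not from the original.

```python
def face_angle_change(yaw, pitch, yaw_ref, pitch_ref):
    """Assumed form of equations (1) and (2): the amount of change in
    the face angle is the difference between the angle acquired from
    the occupant monitoring unit 20 and the stored reference angle."""
    a_yaw = yaw - yaw_ref        # equation (1), assumed: horizontal change
    a_pitch = pitch - pitch_ref  # equation (2), assumed: vertical change
    return a_yaw, a_pitch
```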
The virtual viewpoint setting unit 53 calculates (sets) the position of the virtual viewpoint (hereinafter referred to as “virtual viewpoint position T”) of the occupant 1 in the horizontal direction and vertical direction based on the amount of change in the yaw angle and pitch angle (Ayaw, Apitch) input from the occupant information acquisition unit 52. The virtual viewpoint setting unit 53 outputs the calculated virtual viewpoint position to the image conversion unit 54.
The calculation procedure of the virtual viewpoint position T by the virtual viewpoint setting unit 53 is described below with reference to the drawings.
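The exact form of equations (3) and (4) is not given here; a minimal sketch, assuming the viewpoint is shifted linearly with the angle changes, is shown below. The base eye position and the gains are invented for illustration only.

```python
def set_virtual_viewpoint(a_yaw, a_pitch,
                          base=(0.0, 1.2, 0.0),       # assumed eye position (m)
                          k_yaw=0.02, k_pitch=0.02):  # assumed gains (m/deg)
    """Assumed form of equations (3) and (4): shift the virtual viewpoint
    horizontally in proportion to the yaw change and vertically in
    proportion to the pitch change, leaving the depth unchanged."""
    tx = base[0] + k_yaw * a_yaw      # horizontal movement from yaw change
    ty = base[1] + k_pitch * a_pitch  # vertical movement from pitch change
    tz = base[2]
    return (tx, ty, tz)
```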
The image conversion unit 54 generates the display image 42 (an image of the captured image 70 viewed from the virtual viewpoint) to be displayed in the display region 41 of the display unit 40 based on the captured image 70 captured by the imaging device 10 and the virtual viewpoint position T calculated by the virtual viewpoint setting unit 53. To this end, the image conversion unit 54 executes: a first conversion process for converting the coordinates of the captured image 70 from the image coordinate system of the captured image 70 to coordinates in a vehicle coordinate system, which is the coordinate system of the own vehicle; and a second conversion process for converting the coordinates in the vehicle coordinate system to a display coordinate system, which is the coordinate system of the display unit 40.
The image conversion unit 54 has a ground model projection unit 541 and a viewpoint reflection unit 542, as depicted in the drawings.
As the correction process, the ground model projection unit 541 corrects lens distortion (e.g., distortion aberration, chromatic aberration, and the like) of the imaging device 10 using known techniques with respect to the captured image 70 input from the image acquisition unit 51. Note that the correction process can be performed by the image acquisition unit 51 instead of the ground model projection unit 541, in which case the image acquisition unit 51 outputs the captured image 70 after the lens distortion correction process to the ground model projection unit 541.
As the ground model image generation process, the ground model projection unit 541 generates a ground model image 72 by mapping and projecting the captured image 70 after the correction process onto a plane set in the vehicle coordinate system. The ground model image generation process is the first conversion process, which converts the image coordinate system to the vehicle coordinate system using the projection conversion matrix M stored in the conversion information storage unit 62.
The ground model image generation process is described below with reference to the drawings.
The plane set in the vehicle coordinate system is a plane corresponding to the ground (traveling surface) on which the own vehicle travels. In the present embodiment, this plane is referred to as a ground model 71. The region indicated by reference numeral 72 in the drawings is the ground model image 72 obtained by projecting the captured image 70 onto the ground model 71.
The ground model projection unit 541 converts the coordinates of each pixel in the captured image 70 after the correction process into coordinates on the ground model 71 using the following equation (6). In equation (6), xa and ya represent the x and y coordinates of the image coordinate system, and xb and zb represent the x and z coordinates of the vehicle coordinate system. Herein, a is the homogeneous coordinate representing the coordinates (xa, ya) in the image coordinate system, and b is the homogeneous coordinate representing the coordinates (xb, zb) in the vehicle coordinate system. The relationship between the homogeneous coordinates a and b is expressed by equation (6). Note that the value λb indicates the magnification of the homogeneous coordinate b; regardless of the value of λb (other than 0), the homogeneous coordinate b represents the same coordinate in the vehicle coordinate system.
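From the definitions of a, b, and λb above, equation (6) plausibly takes the standard planar-homography form (a reconstruction under that assumption, not the verbatim equation):

\[
\lambda_b \begin{pmatrix} x_b \\ z_b \\ 1 \end{pmatrix} = M \begin{pmatrix} x_a \\ y_a \\ 1 \end{pmatrix}, \qquad M \in \mathbb{R}^{3 \times 3} \tag{6}
\]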
The projection conversion matrix M is calculated in advance, and the calculation procedure is as follows, with reference to the left and center drawings.
The ground model projection unit 541 sets four reference points b1, b2, b3, and b4 in the vehicle coordinate system, which are points captured in the captured image 70 after the correction process. The coordinates of each of the reference points b1, b2, b3, and b4 are identified by actual measurement and input to the ground model projection unit 541. Next, the ground model projection unit 541 identifies the coordinates of the four corresponding reference points a1, a2, a3, and a4 in the image coordinate system of the captured image 70 after the correction process.
The ground model projection unit 541 calculates the projection conversion matrix M by substituting the coordinates of each reference point identified above into the above equation (6) and solving the resulting simultaneous equations for the elements of the projection conversion matrix M. The ground model projection unit 541 stores the calculated projection conversion matrix M in the conversion information storage unit 62 of the storage unit 60.
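This four-point solve is the standard way a planar homography is estimated, so as a concrete illustration it can be delegated to OpenCV; a minimal sketch follows, with all coordinate values invented for illustration.

```python
import numpy as np
import cv2  # OpenCV

# Reference points a1..a4 in the image coordinate system (pixels) and
# the measured reference points b1..b4 on the ground model 71 in the
# vehicle coordinate system (x, z). All numeric values are invented.
a_pts = np.float32([[100, 400], [540, 400], [620, 240], [20, 240]])
b_pts = np.float32([[-1.5, 2.0], [1.5, 2.0], [3.0, 8.0], [-3.0, 8.0]])

# Four point pairs determine the 3x3 homography up to scale; OpenCV
# performs the simultaneous-equation solve described in the text.
M = cv2.getPerspectiveTransform(a_pts, b_pts)
```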
The viewpoint reflection unit 542 performs a projection region setting process, a calculation process of a projection conversion matrix N, and a display image generation process, which is the second conversion process. The projection region setting process is described below with reference to the drawings.
More specifically, the viewpoint reflection unit 542 uses the coordinates T (Tx, Ty, Tz) of the virtual viewpoint position T and the coordinates of points c1 to c4 at the four corners of the display region 41 to set straight lines connecting the virtual viewpoint position T and points c1 to c4, respectively. Next, the viewpoint reflection unit 542 detects the intersections b5 to b8 of these lines with the ground model 71, identifies the region surrounded by the intersections b5 to b8 as the projection region 74 corresponding to the display region 41, and calculates the coordinates of each intersection b5 to b8.
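Finding each intersection of a line T-c with the ground model 71 is a standard ray-plane intersection. A minimal sketch follows, assuming the vehicle coordinate system has y pointing up and the ground model lying in the plane y = 0 (both assumptions, since the text does not fix the axes):

```python
import numpy as np

def ground_intersection(t, c, ground_y=0.0):
    """Intersect the line through the virtual viewpoint t and a display
    corner c with the plane y = ground_y (the ground model 71)."""
    t = np.asarray(t, dtype=float)
    c = np.asarray(c, dtype=float)
    d = c - t                       # direction of the ray from T through c
    if abs(d[1]) < 1e-9:
        raise ValueError("ray is parallel to the ground plane")
    s = (ground_y - t[1]) / d[1]    # ray parameter at the plane
    return t + s * d                # one of the intersections b5..b8

# Example with invented coordinates for T and one display corner:
b5 = ground_intersection(t=(0.0, 1.2, 0.0), c=(-0.3, 0.9, 0.5))
```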
The calculation process of the projection conversion matrix N is described below, with reference to the center and right-hand drawings.
On the ground model 71 in the vehicle coordinate system, the region surrounded by the intersections b5, b6, b7, and b8 is the projection region 74 calculated by the viewpoint reflection unit 542, and b is, as above, the homogeneous coordinate representing the coordinates (xb, zb) in the vehicle coordinate system. c1, c2, c3, and c4 are the reference points corresponding to the intersections b5, b6, b7, and b8 in the display coordinate system of the display unit 40, and c is the homogeneous coordinate representing the coordinates (xc, yc) in the display coordinate system. The relationship between the homogeneous coordinates b and c is expressed by the following equation (7). Note that the value λc indicates the magnification of the homogeneous coordinate c; regardless of the value of λc (other than 0), the homogeneous coordinate c represents the same coordinate in the display coordinate system.
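By the same reasoning as for equation (6), equation (7) plausibly takes the same homography form between the vehicle and display coordinate systems (again a reconstruction, not the verbatim equation):

\[
\lambda_c \begin{pmatrix} x_c \\ y_c \\ 1 \end{pmatrix} = N \begin{pmatrix} x_b \\ z_b \\ 1 \end{pmatrix}, \qquad N \in \mathbb{R}^{3 \times 3} \tag{7}
\]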
The viewpoint reflection unit 542 calculates the projection conversion matrix N by substituting the coordinates of the reference points c1 to c4 in the display coordinate system of the display unit 40 and the coordinates of the intersections b5 to b8 in the vehicle coordinate system, calculated in the projection region setting process, into the above equation (7) and solving the resulting simultaneous equations for the elements of the projection conversion matrix N. The viewpoint reflection unit 542 stores the calculated projection conversion matrix N in the conversion information storage unit 62 of the storage unit 60.
Furthermore, as the display image generation process, the viewpoint reflection unit 542 converts the coordinates of the projection region 74 into the display coordinate system by substituting the coordinates of each point of the projection region 74 of the ground model image 72 corresponding to each pixel of the display region 41 of the display unit 40 into the above equation (7), using the projection conversion matrix N calculated in the above calculation process. Thereby, the viewpoint reflection unit 542 generates image data for the display image 42 corresponding to the image of the projection region 74 on the ground model image 72. The viewpoint reflection unit 542 outputs the generated image data to the display processing unit 55.
Based on the image data input from the viewpoint reflection unit 542, the display processing unit 55 displays the display image 42 corresponding to the image data on the display region 41 of the display unit 40.
The storage unit 60 temporarily or non-temporarily stores a control program for operating the display control device 30 and various data and parameters used in various operations in the processing unit 50. Furthermore, as described above, the reference angle storage unit 61 of the storage unit 60 temporarily or non-temporarily stores the reference angle information of the face when the face angle is the reference angle. The conversion information storage unit 62 of the storage unit 60 temporarily or non-temporarily stores the projection conversion matrix M and the projection conversion matrix N used in the ground model image generation process (first conversion process) and the display image generation process (second conversion process), respectively.
An example of an operation of the display control system 100 according to Embodiment 1 with the configuration described above is described below, with reference to the flowchart.
First, in step S1, the image acquisition unit 51 acquires the captured image 70 captured by the imaging device 10 and outputs the image to the image conversion unit 54. In step S2, the occupant information acquisition unit 52 acquires angle information related to the face angle of the occupant 1 from the occupant monitoring unit 20. In the subsequent step S3, the occupant information acquisition unit 52 calculates the amount of change in the face angle using the aforementioned equations (1) and (2) based on the acquired angle information and the reference angle information acquired from the reference angle storage unit 61, and outputs the amount of change to the virtual viewpoint setting unit 53.
In the subsequent step S4, the virtual viewpoint setting unit 53 calculates the virtual viewpoint position T of the occupant 1 using the aforementioned equations (3) and (4) based on the amount of change input from the occupant information acquisition unit 52, and outputs the position to the image conversion unit 54.
In the subsequent step S5, the ground model projection unit 541 performs the correction process to correct lens distortion with respect to the captured image 70. Next, in the subsequent step S6, the ground model projection unit 541 generates a ground model image 72 by converting the coordinates of the captured image 70 after the correction process to the coordinates of the vehicle coordinate system using the projection conversion matrix M acquired from the conversion information storage unit 62 and the aforementioned equation (6).
In the subsequent step S7, the viewpoint reflection unit 542 calculates a region on the ground model image 72 (projection region 74) to be projected onto the display region 41 of the display unit 40 based on the virtual viewpoint position T input from the virtual viewpoint setting unit 53. In other words, the viewpoint reflection unit 542 calculates the coordinates of the intersections b5 to b8 surrounding the projection region 74 based on the coordinates of the virtual viewpoint position T and the coordinates of points c1 to c4 at the four corners of the display region. Next, in step S8, the viewpoint reflection unit 542 calculates the projection conversion matrix N by substituting the coordinates of points c1 to c4 in the display coordinate system and the coordinates of the intersections b5 to b8 in the vehicle coordinate system into the aforementioned equation (7).
In the subsequent step S9, the viewpoint reflection unit 542 substitutes each coordinate of the projection region 74 into the aforementioned equation (7) and converts the coordinates to coordinates of the display coordinate system, thereby generating image data of the display image 42 to be displayed in the display region 41 and outputting the data to the display processing unit 55.
Furthermore, in step S10, based on the image data input from the viewpoint reflection unit 542, the display processing unit 55 displays the display image 42 corresponding to the image data on the display region 41 of the display unit 40, as depicted in the drawings.
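Tying the steps together, one pass from step S2 through step S10 can be sketched as a single function. This composite is illustrative only, under the same assumptions as the sketches above (y-up vehicle axes with the ground model at y = 0, invented gains and eye position, corner ordering matched by hand, and a captured frame assumed to be already lens-corrected per step S5). It also exploits the fact that the two projective conversions compose, so the warps of steps S6 and S9 collapse into one:

```python
import numpy as np
import cv2  # OpenCV

def render_display_image(captured, yaw, pitch, yaw_ref, pitch_ref, M,
                         display_corners, dsize=(1280, 480)):
    """One illustrative pass through steps S2-S10. `captured` is the
    lens-corrected frame, M the image-to-ground matrix of equation (6),
    and `display_corners` the points c1..c4 in vehicle coordinates,
    ordered to match the destination corners below (all assumed)."""
    a_yaw, a_pitch = yaw - yaw_ref, pitch - pitch_ref         # S3 (assumed form)
    t = np.array([0.02 * a_yaw, 1.2 + 0.02 * a_pitch, 0.0])   # S4 (assumed form)
    b_pts = []                                                # S7
    for c in display_corners:
        d = np.asarray(c, dtype=float) - t
        s = -t[1] / d[1]               # intersection with the plane y = 0
        p = t + s * d
        b_pts.append([p[0], p[2]])     # keep (x, z) on the ground model
    w, h = dsize                                              # S8
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    N = cv2.getPerspectiveTransform(np.float32(b_pts), dst)
    return cv2.warpPerspective(captured, N @ M, dsize)        # S6 + S9
```

Collapsing the two conversions into one warp avoids resampling the image twice, a common optimization when homographies are chained.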
As described above, the display control device 30 of the present embodiment converts the captured image 70, in which the periphery of the vehicle is captured, into the ground model image 72, and converts the ground model image 72 into the display image 42 to be displayed on the display unit 40 based on the virtual viewpoint position T, which is set according to the amount of change in the face angle of the occupant 1. The display image 42 is then displayed on the display unit 40, so the occupant 1 sees a display image 42 that follows the face angle. The display image 42 connects appropriately with the scenery viewed through the front window, and the occupant 1 can view it without a sense of incongruity. Moreover, when the occupant 1 wants to change the image displayed on the display unit 40, the occupant need not move the head significantly, but need only turn the face up, down, left, or right. The display control device 30 of the present embodiment can therefore reduce the burden on the occupant 1 when the occupant 1 changes the image displayed on the display unit 40.
Furthermore, the display control device 30 of the present embodiment has a storage unit 60 (reference angle storage unit 61) that stores a reference angle, which is a prescribed face angle of the occupant 1. Furthermore, the virtual viewpoint setting unit 53 sets the position T of the virtual viewpoint based on the amount of change in the face angle of the occupant 1 with respect to the reference angle. Thereby, the virtual viewpoint setting unit 53 can acquire the amount of change in the face angle with higher precision. As a result, the display control device 30 can present a more appropriate display image 42 to the occupant 1 according to the face angle.
Furthermore, in the display control device 30 of the present embodiment, the occupant information acquisition unit 52 acquires the yaw angle and pitch angle as the face angle of the occupant 1. Furthermore, the virtual viewpoint setting unit 53 moves the position T of the virtual viewpoint in the horizontal direction based on the amount of change in yaw angle, and moves the position T of the virtual viewpoint in the vertical direction based on the amount of change in pitch angle. This configuration allows the virtual viewpoint setting unit 53 to calculate the amount of change in the face angle with higher precision and speed, and thus the display control device 30 can perform display control processing with higher efficiency and precision.
An embodiment of the present disclosure has been described in detail with reference to the drawings, but the specific configuration is not limited to this embodiment and design changes to a degree that do not deviate from the gist of the present disclosure are included in the present disclosure.
For example, the display control device 30 can be configured with a viewpoint position acquisition unit for acquiring information related to the position of an eye of the occupant 1. Based on the information related to the eye position acquired by the viewpoint position acquisition unit, the virtual viewpoint setting unit 53 sets the virtual viewpoint at a position corresponding to the eye position of the occupant when the face angle of the occupant 1 is the reference angle, and when the face angle of the occupant 1 is not the reference angle, moves the virtual viewpoint from the eye position of the occupant 1 to a position based on the amount of change. This configuration allows the virtual viewpoint setting unit 53 to set the position of the virtual viewpoint according to the eye position of the occupant 1, making the position of the virtual viewpoint more appropriate and simpler to set. For example, if the face angle when occupant 1 faces the display unit 40 is the reference angle, the virtual viewpoint can be set at a position corresponding to the eye position of occupant 1 while the face of the occupant 1 is facing the display unit 40. Furthermore, when the face of the occupant 1 is not facing the display unit 40, a virtual viewpoint can be set at a position based on the amount of change in the face angle of the occupant 1.
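A minimal sketch of this variant is shown below, assuming the same linear offsets as before; the eye position, gains, and function name are all illustrative.

```python
def set_viewpoint_from_eye(eye_pos, a_yaw, a_pitch,
                           k_yaw=0.02, k_pitch=0.02):
    """Variant sketch: anchor the virtual viewpoint at the measured eye
    position and, when the face angle deviates from the reference angle
    (a_yaw or a_pitch nonzero), offset the viewpoint by the change."""
    x, y, z = eye_pos
    return (x + k_yaw * a_yaw, y + k_pitch * a_pitch, z)
```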
Furthermore, the display control device 30 of the above embodiment uses a fixed, predetermined face angle as the reference angle (reference angle information), but the present disclosure is not limited thereto. For example, the reference angle can be the face angle from the last time the face angle was acquired. In this case, the occupant information acquisition unit 52 updates the reference angle information in the reference angle storage unit 61 each time the face angle is acquired. This configuration allows the display control device 30 to calculate the amount of change at the current time based on the face angle before the occupant 1 changed it.
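A sketch of this variant of the reference angle storage unit 61 follows, in which the reference is overwritten on every acquisition so that each change is measured against the previous frame; the class and method names are illustrative.

```python
class ReferenceAngleStore:
    """Holds the reference angle; in this variant it is updated to the
    most recently acquired face angle on every call."""
    def __init__(self, yaw=0.0, pitch=0.0):
        self.reference = (yaw, pitch)

    def change_and_update(self, yaw, pitch):
        ref_yaw, ref_pitch = self.reference
        self.reference = (yaw, pitch)   # update on each acquisition
        return yaw - ref_yaw, pitch - ref_pitch
```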