The present disclosure relates to a vehicular display system that is mounted on a vehicle to display an image of a surrounding area of the vehicle.
As an example of the above vehicular display system, a system disclosed in Japanese Patent Document JP-A-2019-193031 has been known. This vehicular display system in JP-A-2019-193031 includes: an imaging unit that acquires an image of an area from a front lateral side to a rear lateral side of the vehicle; a display area processing section that generates an image (a display image) of a specified area extracted from the image captured by the imaging unit; and a display unit that shows the display image generated by the display area processing section. The display area processing section adjusts the area of the display image according to an operation status of the vehicle such that the area of the display image in a case where a specified operation status condition is established (for example, where a direction indicator is operated) is larger than the area of the display image where such a condition is not established.
The area of the display image is changed according to whether the direction indicator or the like is operated. Thus, the vehicular display system in JP-A-2019-193031 has an advantage of being capable of providing information on the image with an appropriate range (size) corresponding to the operation status to the driver without a sense of discomfort.
However, image display by the method described in JP-A-2019-193031 is insufficient in terms of visibility and thus has room for improvement. For example, in JP-A-2019-193031, the imaging unit is provided at a position of a front wheel fender of the vehicle, for example, to acquire the image of the area from the front lateral side to the rear lateral side of the vehicle, and the display unit shows a part of the image captured by this imaging unit (only a portion corresponding to a specified angle of view is taken out). However, since such a display image is captured from a position far away from the driver, the driver may find it difficult to recognize the image intuitively. In addition, there is a limit to the range of the visual field to which a person can pay close attention without difficulty. Accordingly, even when the area of the display image is expanded according to the operation status, it is difficult for the driver to acquire necessary information from the display image within a limited time. For this reason as well, the visibility of the display image cannot be said to be sufficient.
The present disclosure has been made in view of the circumstances described above and therefore has a purpose of providing a vehicular display system capable of showing an image of a surrounding area of a vehicle with superior visibility.
In order to solve the above problem, the present disclosure provides a vehicular display system that is mounted on a vehicle to show an image of a surrounding area of the vehicle, and includes: an imaging unit that captures the image of the surrounding area of the vehicle; an image processing unit that converts the image captured by the imaging unit into a view image of the area seen from a predetermined virtual viewpoint in a cabin; and a display unit that shows the view image generated by the image processing unit. The image processing unit can generate, as the view image, a first view image that is acquired when a first direction is seen from the virtual viewpoint and a second view image that is acquired when a second direction differing from the first direction is seen from the virtual viewpoint. Each of the first view image and the second view image is shown at a horizontal angle of view that corresponds to a stable visual field during gazing.
According to the present disclosure, the first and second view images are acquired when the two different directions (the first direction and the second direction) are seen from the virtual viewpoint in the cabin by executing specified image processing on the image captured by the imaging unit. Thus, in each of at least two driving scenes in different moving directions of the vehicle, it is possible to appropriately assist with a driver's driving operation by using the view image. In addition, the horizontal angle of view of each of the first and second view images is set to the angle corresponding to the stable visual field during gazing, which is a range that the driver can visually recognize without difficulty when eye movement is assisted by head movement (cervical movement). Thus, it is possible to provide the driver with necessary and sufficient information for identifying an obstacle around their own vehicle through the first and second view images, and it is also possible to effectively assist with the driver's driving operation by providing such information. For example, in the case where the view image includes an obstacle such as another vehicle or a pedestrian, the driver can promptly identify such an obstacle and can promptly determine information such as a direction and a distance to the obstacle on the basis of a location, size, and the like of the identified obstacle. In this way, it is possible to assist the driver in driving safely so as to avoid a collision with the obstacle, and it is also possible to favorably ensure the safety of the vehicle.
It is generally considered that a maximum angle in a horizontal direction (a maximum horizontal angle) of the stable visual field during gazing is 90 degrees. Thus, in certain embodiments the horizontal angle of view of each of the first view image and the second view image is set to approximately 90 degrees.
In addition, it is generally considered that a maximum angle in a perpendicular direction (a maximum perpendicular angle) of the stable visual field during gazing is 70 degrees. Thus, a perpendicular angle of view of each of the first view image and the second view image can be set to the same 70 degrees. However, the vehicle consistently moves along a road surface (it does not move vertically with respect to the road surface). Thus, it is considered that, even when the perpendicular angle of view of each of the view images is smaller than 70 degrees (and is equal to or larger than 40 degrees), information on an obstacle to watch out for and the like can be provided with no difficulty. From what has been described so far, the perpendicular angle of view of each of the first view image and the second view image is set to be equal to or larger than 40 degrees and equal to or smaller than 70 degrees in certain embodiments.
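The angle constraints above (horizontal angle of view of approximately 90 degrees, perpendicular angle of view between 40 and 70 degrees) can be expressed as a short check. The following is an illustrative sketch, not part of the disclosure; the function name and the 5-degree horizontal tolerance (reflecting the "approximately 90 degrees" wording) are assumptions.

```python
# Limits of the stable visual field during gazing, as described in the text.
STABLE_FIELD_MAX_HORIZONTAL_DEG = 90.0   # maximum horizontal angle
STABLE_FIELD_MAX_VERTICAL_DEG = 70.0     # maximum perpendicular angle
MIN_VERTICAL_DEG = 40.0                  # lower bound on the perpendicular angle

def angles_valid(horizontal_deg: float, vertical_deg: float,
                 horizontal_tolerance_deg: float = 5.0) -> bool:
    """Check a proposed pair of view-image angles against the ranges above:
    horizontal approximately 90 degrees, vertical between 40 and 70 degrees."""
    horizontal_ok = (abs(horizontal_deg - STABLE_FIELD_MAX_HORIZONTAL_DEG)
                     <= horizontal_tolerance_deg)
    vertical_ok = MIN_VERTICAL_DEG <= vertical_deg <= STABLE_FIELD_MAX_VERTICAL_DEG
    return horizontal_ok and vertical_ok
```

With these placeholders, the 90-degree / 45-degree combination used later in the embodiment passes the check, while a 120-degree horizontal angle of view would not.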
In certain embodiments, the image processing unit generates, as the first view image, an image that is acquired when an area behind the vehicle is seen from the virtual viewpoint and, as the second view image, an image that is acquired when an area in front of the vehicle is seen from the virtual viewpoint.
With such a configuration, when the image of the surrounding area behind or in front of the vehicle is generated as the first view image or the second view image, it is possible to appropriately show an obstacle, which may collide with the own vehicle during reverse travel or forward travel, in the respective view image.
In embodiments of the above configuration, the imaging unit includes a rear camera that captures an image of the area behind the vehicle; a front camera that captures an image of the area in front of the vehicle; a left camera that captures an image of an area on a left side of the vehicle; and a right camera that captures an image of an area on a right side of the vehicle, and the image processing unit generates the first view image on the basis of the images captured by the rear camera, the left camera, and the right camera and generates the second view image on the basis of the images captured by the front camera, the left camera, and the right camera.
With such a configuration, it is possible to appropriately acquire image data of the area behind the vehicle and image data of an area obliquely behind the vehicle from the rear camera, the left camera, and the right camera at the time of generating the first view image, and it is possible to appropriately generate the first view image as an image acquired when the area behind the vehicle is seen from the virtual viewpoint by executing viewpoint conversion processing and the like while synthesizing the acquired image data.
Similarly, it is possible to appropriately acquire image data of the area in front of the vehicle and image data of an area obliquely in front of the vehicle from the front camera, the left camera, and the right camera at the time of generating the second view image, and it is possible to appropriately generate the second view image as an image acquired when the area in front of the vehicle is seen from the virtual viewpoint by executing the viewpoint conversion processing and the like while synthesizing the acquired image data.
In some embodiments, the virtual viewpoint is set at a position that corresponds to the driver's head in a longitudinal direction and a vertical direction of the vehicle.
With such a configuration, it is possible to generate, as the first or second view image, a bird's-eye view image with high visibility (that can easily and intuitively be recognized by the driver) in which the surrounding area behind or in front of the vehicle is seen from a position near actual eye points of the driver. As a result, it is possible to effectively assist the driver through such a view image.
As has been described so far, the vehicular display system of the present disclosure can show the image of the surrounding area of the vehicle with superior visibility.
(1) Overall Configuration
The vehicle exterior imaging device 2 includes: a front camera 2a that captures an image of an area in front of the vehicle; a rear camera 2b that captures an image of an area behind the vehicle; a left camera 2c that captures an image of an area on a left side of the vehicle; and a right camera 2d that captures an image of an area on a right side of the vehicle.
The in-vehicle display 4 is arranged in a central portion of an instrument panel 20.
The image processing unit 3 executes various types of the image processing on the images, each of which is captured by the vehicle exterior imaging device 2 (the cameras 2a to 2d), to generate an image that is acquired when the surrounding area of the vehicle is seen from the inside of the cabin (hereinafter referred to as a view image), and causes the in-vehicle display 4 to show the generated view image. Although details will be described below, the image processing unit 3 generates one of a rear-view image and a front-view image according to a condition and causes the in-vehicle display 4 to show the generated view image. The rear-view image is acquired when the area behind the vehicle is seen from the inside of the cabin. The front-view image is acquired when the area in front of the vehicle is seen from the inside of the cabin. The rear-view image corresponds to an example of the “first view image” of the present disclosure, and the front-view image corresponds to an example of the “second view image” of the present disclosure.
The image processing unit 3 is connected to a vehicle speed sensor SN1, a shift position sensor SN2, and a view switch SW1, and receives signals therefrom.
The vehicle speed sensor SN1 is a sensor that detects a travel speed of the vehicle.
The shift position sensor SN2 is a sensor that detects a shift position of an automatic transmission (not illustrated) provided in the vehicle. The automatic transmission can achieve at least four shift positions of drive (D), neutral (N), reverse (R), and parking (P), and the shift position sensor SN2 detects which of these positions is achieved. The D-position is the shift position that is selected when the vehicle travels forward (a forward range), the R-position is the shift position that is selected when the vehicle travels backward (a backward range), and each of the N- and P-positions is a shift position that is selected when the vehicle does not travel.
The view switch SW1 is a switch that is used to determine whether to permit display of the view image when the shift position is the D-position (that is, when the vehicle travels forward). Although details will be described below, in this embodiment, the in-vehicle display 4 automatically shows the rear-view image when the shift position is the R-position (the backward range). Meanwhile, in the case where the shift position is the D-position (the forward range), the in-vehicle display 4 shows the front-view image only when the view switch SW1 is operated (that is, when the driver makes a request). According to an operation status of the view switch SW1 and a detection result by the shift position sensor SN2, the image processing unit 3 determines whether one of the front-view/rear-view images is shown on the in-vehicle display 4 or none of the front-view/rear-view images is shown on the in-vehicle display 4. The view switch SW1 can be provided to the steering wheel 21, for example.
(2) Details of Image Processing Unit
A further detailed description will be made of a configuration of the image processing unit 3. The image processing unit 3 includes a determination section 31, an image extraction section 32, an image conversion section 33, an icon setting section 34, and a display control section 35.
The determination section 31 is a module that makes various necessary determinations for execution of the image processing.
The image extraction section 32 is a module that executes processing to extract the images captured by the front/rear/left/right cameras 2a to 2d within a required range. More specifically, the image extraction section 32 switches the cameras to be used according to whether the vehicle travels forward or backward. For example, when the vehicle travels backward (when the shift position is in an R range), the plural cameras including at least the rear camera 2b are used. When the vehicle travels forward (when the shift position is in a D range), the plural cameras including at least the front camera 2a are used. A range of the image that is extracted from each of the cameras to be used is set to be a range that corresponds to an angle of view of the image (the view image, which will be described below) finally shown on the in-vehicle display 4.
The image conversion section 33 is a module that executes viewpoint conversion processing while synthesizing the images, which are captured by the cameras and extracted by the image extraction section 32, so as to generate the view image that is the image of the surrounding area of the vehicle seen from the inside of the cabin. Upon conversion of the viewpoint, a projection surface P is used, which includes a circular plane projection surface P1 centered on the vehicle and a stereoscopic projection surface P2 that is elevated from an outer circumference of the plane projection surface P1.
The icon setting section 34 is a module that executes processing to set a vehicle icon G that is superimposed on the view image.
The display control section 35 is a module that executes processing to show the view image, on which the vehicle icon G is superimposed, on the in-vehicle display 4. That is, the display control section 35 superimposes the vehicle icon G, which is set by the icon setting section 34, on the view image, which is generated by the image conversion section 33, and shows the superimposed view image on the in-vehicle display 4.
(3) Control Operation
If it is determined YES in step S1 and it is thus confirmed that the vehicle speed is equal to or lower than the threshold speed X1, the image processing unit 3 (the determination section 31) determines whether the shift position detected by the shift position sensor SN2 is the R-position (the backward range) (step S2).
If it is determined YES in step S2 and it is thus confirmed that the shift position is the R-position (in other words, when the vehicle travels backward), the image processing unit 3 sets the angle of view of the rear-view image, which is generated in step S4 described below (step S3). More specifically, in this step S3, the horizontal angle of view is set to 90 degrees, and a perpendicular angle of view is set to 45 degrees.
The angle of view, which is set in step S3, is based on the stable visual field during gazing of a person. The stable visual field during gazing means a range that a person can visually recognize without difficulty when eye movement is assisted by head movement (cervical movement). In general, it is said that the stable visual field during gazing has an angular range of 45 degrees to the left and 45 degrees to the right in the horizontal direction and an angular range of 30 degrees upward and 40 degrees downward in the perpendicular direction. That is, the maximum horizontal angle θ1 of the stable visual field during gazing is 90 degrees, and the maximum perpendicular angle θ2 thereof is 70 degrees.
In consideration of the above point, in this embodiment, the angle of view of the rear-view image is set to 90 degrees in the horizontal direction and 45 degrees in the perpendicular direction. That is, the horizontal angle of view of the rear-view image is set to 90 degrees, which is the same as the maximum horizontal angle θ1 of the stable visual field during gazing, and the perpendicular angle of view of the rear-view image is set to 45 degrees, which is smaller than the maximum perpendicular angle θ2 (=70 degrees) of the stable visual field during gazing.
Next, the image processing unit 3 executes control to generate the rear-view image, which is acquired when the area behind the vehicle is seen from the virtual viewpoint V in the cabin, and show the rear-view image on the in-vehicle display 4 (step S4). Details of this control will be described below.
Next, a description will be made of control that is executed if it is determined NO in above step S2, that is, if the shift position of the automatic transmission is not the R-position (the backward range). In this case, the image processing unit 3 (the determination section 31) determines whether the shift position detected by the shift position sensor SN2 is the D-position (the forward range) (step S5).
If it is determined YES in step S5 and it is thus confirmed that the shift position is the D-position (in other words, when the vehicle travels forward), the image processing unit 3 (the determination section 31) determines whether the view switch SW1 is in an ON state on the basis of a signal from the view switch SW1 (step S6).
If it is determined YES in step S6 and it is thus confirmed that the view switch SW1 is in the ON state, the image processing unit 3 sets the angle of view of the front-view image, which is generated in step S8 described below (step S7). The angle of view of the front-view image, which is set in this step S7, is the same as the angle of view of the rear-view image, which is set in above-described step S3, and is set to 90 degrees in the horizontal direction and 45 degrees in the perpendicular direction in this embodiment.
Next, the image processing unit 3 executes control to generate the front-view image, which is acquired when the area in front of the vehicle is seen from the virtual viewpoint V in the cabin, and show the front-view image on the in-vehicle display 4 (step S8). Details of this control will be described below.
A further detailed description will be made on this point. In this embodiment, an image of a rear imaging area W1, which is defined by a first line k11 and a second line k12, is acquired in a manner divided into a first area W11, a second area W12, and a third area W13, which are captured by the rear camera 2b, the left camera 2c, and the right camera 2d, respectively (step S11).
More specifically, of the above-described imaging area W1 (the area defined by the first line k11 and the second line k12), the first area W11 is an area overlapping a fan-shaped area that expands backward with a horizontal angular range of 170 degrees (85 degrees each to the left and right of the vehicle center axis L) from the rear camera 2b. In other words, the first area W11 is an area defined by a third line k13 that extends backward to the left at an angle of 85 degrees from the rear camera 2b, a fourth line k14 that extends backward to the right at an angle of 85 degrees from the rear camera 2b, a portion of the first line k11 that is located behind a point of intersection j11 with the third line k13, and a portion of the second line k12 that is located behind a point of intersection j12 with the fourth line k14. The second area W12 is a remaining area after the first area W11 is removed from a left half portion of the imaging area W1. The third area W13 is a remaining area after the first area W11 is removed from a right half portion of the imaging area W1. Of the second and third areas W12, W13, areas immediately behind the vehicle are blind areas, images of which cannot be captured by the left and right cameras 2c, 2d, respectively. However, the images of these blind areas can be compensated by specified interpolation processing (for example, processing to stretch an image of an adjacent area to each of the blind areas).
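The fan-shaped partition described above can be sketched as a simple angular test: a ground point behind the vehicle belongs to the first area W11 when it lies within the 170-degree fan of the rear camera 2b (85 degrees each side of the vehicle center axis L), and otherwise to the left or right camera's area. The coordinate convention (origin at the rear camera, x pointing straight back along the axis L, y to the right) is an assumption made for this illustration.

```python
import math

FAN_HALF_ANGLE_DEG = 85.0  # 85 degrees each side of the axis -> 170 degrees total

def source_camera(x_back: float, y_right: float) -> str:
    """Pick the camera whose area covers the ground point (x_back, y_right)."""
    bearing = math.degrees(math.atan2(y_right, x_back))  # 0 = straight back
    if abs(bearing) <= FAN_HALF_ANGLE_DEG:
        return "rear"                               # first area W11
    return "right" if bearing > 0 else "left"       # third area W13 / second area W12
```

A point directly behind the vehicle falls to the rear camera, while points far to either side just behind the bumper (the near-blind regions discussed above) fall to the left or right cameras.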
After the image of the required range is acquired from each of the cameras 2b, 2c, 2d, just as described, the image processing unit 3 (the image conversion section 33) sets the virtual viewpoint V that is used when the rear-view image is generated in step S15, which will be described below (step S12). The virtual viewpoint V is set at a position that matches the vehicle center axis L in a plan view and corresponds to the driver's head D1 (the eye points) in a side view.
Next, the image processing unit 3 (the image conversion section 33) sets the projection surface P that is used when the rear-view image is generated in step S15, which will be described below (step S13). As has already been described, this projection surface P includes: the circular plane projection surface P1 that has the same center as the vehicle center C; and the stereoscopic projection surface P2 that is elevated from the outer circumference of the plane projection surface P1 while the diameter thereof is increased with the specified curvature.
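The bowl-like shape of the projection surface P can be sketched as a height profile over the radial distance from the vehicle center C: zero out to the outer circumference of the plane projection surface P1, then rising beyond it. The plane radius, the curvature value, and the quadratic form of the rise are all placeholders, since the text only states that the surface is elevated with a specified curvature while its diameter grows.

```python
def projection_surface_height(r: float, plane_radius: float = 5.0,
                              curvature: float = 0.1) -> float:
    """Illustrative height of the projection surface P at radial distance r."""
    if r <= plane_radius:
        return 0.0                               # flat plane projection surface P1
    return curvature * (r - plane_radius) ** 2   # elevated stereoscopic surface P2
```

Projecting onto such a surface keeps nearby road markings flat while distant objects are raised, which is a common design choice for surround-view systems because it reduces the stretching of tall obstacles at the image periphery.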
Next, the image processing unit 3 (the icon setting section 34) sets the vehicle icon G that is superimposed on the rear-view image and shown therewith in step S16, which will be described below (step S14). The vehicle icon G, which is set herein, is an icon representing various components of the vehicle that appear when the area behind the vehicle is seen from the virtual viewpoint V, and includes a graphic image that shows a rear wheel of the vehicle and contour components (a rear fender and the like) in a rear portion of the vehicle in a transmissive state.
Next, the image processing unit 3 (the image conversion section 33) synthesizes the images that are captured by the cameras 2b, 2c, 2d and acquired in step S11, and executes the viewpoint conversion processing on the synthesized image by using the virtual viewpoint V and the projection surface P set in steps S12, S13, so as to generate the rear-view image that is acquired when the area behind the vehicle is seen from the virtual viewpoint V (step S15). That is, the image processing unit 3 (the image conversion section 33) synthesizes the image of the first area W11 captured by the rear camera 2b, the image of the second area W12 captured by the left camera 2c, and the image of the third area W13 captured by the right camera 2d, and projects the synthesized image onto the projection surface P to obtain the image seen from the virtual viewpoint V.
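The synthesis step can be sketched as mask-based compositing: each output pixel is taken from the camera whose area (W11, W12, or W13) covers it. This is an illustrative sketch with hand-made masks; in the system the masks would follow the fan-shaped partition described earlier.

```python
import numpy as np

def synthesize(rear_img, left_img, right_img, mask_w11, mask_w12, mask_w13):
    """Compose one image from three camera images using per-area boolean masks."""
    out = np.zeros_like(rear_img)
    out[mask_w11] = rear_img[mask_w11]    # first area W11: rear camera 2b
    out[mask_w12] = left_img[mask_w12]    # second area W12: left camera 2c
    out[mask_w13] = right_img[mask_w13]   # third area W13: right camera 2d
    return out
```

The same routine serves the front-view case by substituting the front camera image and the W21/W22/W23 masks.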
The above-described processing of viewpoint conversion (projection to the projection surface P) can be executed as follows, for example. First, three-dimensional coordinates (X, Y, Z) are defined for each pixel of the synthesized camera image. Next, the coordinates of each of the pixels are converted into projected coordinates by using a specified calculation formula, which is defined by a positional relationship between the virtual viewpoint V and the rear area P1w of the projection surface P, and the like.
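One possible form of such a "specified calculation formula" is an ordinary perspective projection of a point on the projection surface into the image plane of a virtual camera placed at the viewpoint V. The axis convention (camera looking along +x toward the rear area, y to the right, z up) and the focal length are assumptions made for this sketch; the disclosure does not give the actual formula.

```python
def project_point(point_xyz, viewpoint_xyz, focal_length: float = 1.0):
    """Project a world point into the virtual camera's normalized image plane."""
    dx = point_xyz[0] - viewpoint_xyz[0]   # depth along the viewing axis
    dy = point_xyz[1] - viewpoint_xyz[1]
    dz = point_xyz[2] - viewpoint_xyz[2]
    if dx <= 0:
        raise ValueError("point is behind the virtual viewpoint")
    return focal_length * dy / dx, focal_length * dz / dx  # (u, v)
```

Applying this formula to every pixel's 3D coordinates on the projection surface yields its position in the view image, which is how the captured images come to look as if seen from the driver's eye points.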
Next, the image processing unit 3 (the display control section 35) causes the in-vehicle display 4 to show the rear-view image, which is generated in step S15, in a state where the vehicle icon G set in step S14 is superimposed thereon (step S16).
Next, a detailed description will be made on the front-view image generation/display control that is executed in above-described step S8.
A method for acquiring the image within the front imaging area W2 in step S21 is similar to the method in above-described step S11. That is, an image of the front imaging area W2, which is defined by a first line k21 and a second line k22, is acquired in a manner divided into a first area W21, a second area W22, and a third area W23, which are captured by the front camera 2a, the left camera 2c, and the right camera 2d, respectively.
More specifically, of the above-described imaging area W2 (the area defined by the first line k21 and the second line k22), the first area W21 is an area overlapping a fan-shaped area that expands forward with a horizontal angular range of 170 degrees (85 degrees each to the left and right of the vehicle center axis L) from the front camera 2a. In other words, the first area W21 is an area defined by a third line k23 that extends forward to the left at an angle of 85 degrees from the front camera 2a, a fourth line k24 that extends forward to the right at an angle of 85 degrees from the front camera 2a, a portion of the first line k21 that is located in front of a point of intersection j21 with the third line k23, and a portion of the second line k22 that is located in front of a point of intersection j22 with the fourth line k24. The second area W22 is a remaining area after the first area W21 is removed from a left half portion of the imaging area W2. The third area W23 is a remaining area after the first area W21 is removed from a right half portion of the imaging area W2. Of the second and third areas W22, W23, areas immediately in front of the vehicle are blind areas, images of which cannot be captured by the left and right cameras 2c, 2d, respectively. However, the images of these blind areas can be compensated by the specified interpolation processing (for example, the processing to stretch an image of an adjacent area to each of the blind areas).
After the image of the required range is acquired from each of the cameras 2a, 2c, 2d, just as described, the image processing unit 3 (the image conversion section 33) sets the virtual viewpoint V that is used when the front-view image is generated in step S25, which will be described below (step S22). This virtual viewpoint V is the same as the virtual viewpoint V (above step S12) that is used when the rear-view image, which has already been described, is generated. This virtual viewpoint V is set at the position that matches the vehicle center axis L in the plan view and corresponds to the driver's head D1 (the eye points) in the side view.
Next, the image processing unit 3 (the image conversion section 33) sets the projection surface P that is used when the front-view image is generated in step S25, which will be described below (step S23). This projection surface P is the same as the projection surface P (above step S13) that is used when the rear-view image, which has already been described, is generated. This projection surface P includes: the circular plane projection surface P1 that has the same center as the vehicle center C; and the stereoscopic projection surface P2 that is elevated from the outer circumference of the plane projection surface P1 while the diameter thereof is increased with the specified curvature.
Next, the image processing unit 3 (the icon setting section 34) sets the vehicle icon G that is superimposed on the front-view image and shown therewith in step S26, which will be described below (step S24). Although not illustrated, the vehicle icon G, which is set herein, is an icon representing the various components of the vehicle, and such components appear when the area in front of the vehicle is seen from the virtual viewpoint V. The vehicle icon G includes the graphic image that shows a front wheel of the vehicle and contour components (a front fender and the like) in the front portion of the vehicle in the transmissive state.
Next, the image processing unit 3 (the image conversion section 33) synthesizes the images that are captured by the cameras 2a, 2c, 2d and acquired in step S21, and executes the viewpoint conversion processing on the synthesized image by using the virtual viewpoint V and the projection surface P set in steps S22, S23, so as to generate the front-view image that is acquired when the area in front of the vehicle is seen from the virtual viewpoint V (step S25). That is, the image processing unit 3 (the image conversion section 33) synthesizes the image of the first area W21 captured by the front camera 2a, the image of the second area W22 captured by the left camera 2c, and the image of the third area W23 captured by the right camera 2d, and projects the synthesized image onto the projection surface P to obtain the image seen from the virtual viewpoint V.
Next, the image processing unit 3 (the display control section 35) causes the in-vehicle display 4 to show the front-view image, which is generated in step S25, in a state where the vehicle icon G set in step S24 is superimposed thereon (step S26).
(4) Operational Effects
As has been described so far, in this embodiment, on the basis of the images captured by the vehicle exterior imaging device 2 (the cameras 2a to 2d), the in-vehicle display 4 can show the rear-view image, which is the image acquired when the area behind the vehicle is seen from the virtual viewpoint V in the cabin, and the front-view image, which is the image acquired when the area in front of the vehicle is seen from the virtual viewpoint V. The horizontal angle of view of each of these view images is set to 90 degrees, which is the angle of view corresponding to the stable visual field during gazing of a person. Such a configuration has an advantage of being capable of effectively assisting with the driver's driving operation by improving the visibility of the rear-view image and the front-view image.
That is, in the above embodiment, the rear-view image and the front-view image, which are the images of the areas seen in the two different directions (forward and backward) from the virtual viewpoint V in the cabin, can be shown. Thus, for example, in each of at least two driving scenes such as reverse parking and forward parking in different moving directions of the vehicle, it is possible to appropriately assist with the driver's driving operation by using the view image. In addition, the horizontal angle of view of each of the rear-view image and the front-view image is set to the same 90 degrees as the maximum horizontal angle θ1 (=90 degrees) of the stable visual field during gazing, and it is thus possible to provide the driver with necessary and sufficient information for identifying an obstacle around the vehicle through these view images.
Conversely, in the case where the horizontal angle of view of the view image is significantly reduced from 90 degrees, the information can be identified more easily due to a reduction in the amount of information included in the view image. However, there is an increased possibility that an obstacle that may collide with the vehicle does not appear in the view image even though it exists (failure in display of the obstacle), which degrades safety.
To handle the above problem, in the above embodiment, the horizontal angle of view of each of the rear-view image and the front-view image is set to 90 degrees. Thus, it is possible to provide the required information on the obstacle and the like with high visibility through both of the view images and thus to improve the safety of the vehicle.
In the above embodiment, since the perpendicular angle of view of each of the rear-view image and the front-view image is set to 45 degrees, it is possible to show each of the view images with a perpendicular angle of view that is sufficiently smaller than the maximum perpendicular angle θ2 (=70 degrees) of the stable visual field during gazing.
In the above embodiment, the rear-view image is generated on the basis of the images captured by the rear camera 2b, the left camera 2c, and the right camera 2d. Thus, it is possible to appropriately acquire the image data of the area behind the vehicle and the image data of the area obliquely behind the vehicle from the cameras 2b, 2c, 2d, and it is possible to appropriately generate the rear-view image as the image that is acquired when the area behind the vehicle is seen from the virtual viewpoint V by executing the viewpoint conversion processing and the like while synthesizing the acquired image data.
Similarly, in the above embodiment, the front-view image is generated on the basis of the images captured by the front camera 2a, the left camera 2c, and the right camera 2d. Thus, it is possible to appropriately acquire the image data of the area in front of the vehicle and the image data of the area obliquely in front of the vehicle from the cameras 2a, 2c, 2d, and it is possible to appropriately generate the front-view image as the image that is acquired when the area in front of the vehicle is seen from the virtual viewpoint V by executing the viewpoint conversion processing and the like while synthesizing the acquired image data.
In addition, in the above embodiment, the virtual viewpoint V is set at the position that corresponds to the driver's head D1 in the longitudinal direction and the vertical direction of the vehicle. Thus, it is possible to generate, as the front-view image or the rear-view image, a bird's-eye view image with high visibility (that can easily and intuitively be recognized by the driver) in which a surrounding area in front of or behind the vehicle is seen from a position near the actual eye points of the driver. As a result, it is possible to effectively assist with driving by the driver through such a view image.
In the above embodiment, the horizontal angle of view of each of the rear-view image and the front-view image is set to the same 90 degrees as the maximum horizontal angle θ1 (=90 degrees) of the stable visual field during gazing of the person. However, in consideration of a certain degree of individual variation in breadth of the stable visual field during gazing, the horizontal angle of view of each of the view images may be a value that is slightly offset from 90 degrees. In other words, the horizontal angle of view of each of the view images only needs to be 90 degrees or a value near 90 degrees (approximately 90 degrees), and thus can be set to an appropriate value within a range between 85 degrees and 95 degrees, for example.
In the above embodiment, the perpendicular angle of view of each of the rear-view image and the front-view image is set to 45 degrees. However, the perpendicular angle of view of each of the view images only needs to be equal to or smaller than the maximum perpendicular angle θ2 (=70 degrees) of the stable visual field during gazing of the person and equal to or larger than 40 degrees. That is, the perpendicular angle of view of each of the rear-view image and the front-view image can be set to an appropriate value within a range between 40 degrees and 70 degrees.
In the above embodiment, according to an advancing direction of the vehicle (the shift position of the automatic transmission), the in-vehicle display 4 shows one of the rear-view image (the first view image), which is acquired when the area behind the vehicle (in a first direction) is seen from the virtual viewpoint V, and the front-view image (the second view image), which is acquired when the area in front of the vehicle (in a second direction) is seen from the virtual viewpoint V. However, instead of the rear-view image and the front-view image, or in addition to each of the view images, a view image in a direction other than the longitudinal direction may be shown. In other words, the first view image and the second view image of the present disclosure only need to be images that are acquired when the two different directions are seen from the virtual viewpoint in the cabin. For example, an image that is acquired when the area on the left side is seen from the virtual viewpoint may be shown as the first view image, and an image that is acquired when the area on the right side is seen from the virtual viewpoint may be shown as the second view image.
In the above embodiment, the center of the projection surface P on which the camera image is projected at the time of generating the rear-view image and the front-view image matches the vehicle center C in the plan view. However, the projection surface P only needs to be set to include the vehicle, and thus the center of the projection surface P may be set to a position shifted from the vehicle center C. For example, the center of the projection surface P may match the virtual viewpoint V.
Priority: Japanese Patent Application No. 2020-154279, filed September 2020 (national).