The present disclosure relates to a vehicle display device mounted on a vehicle that moves toward a destination.
Since passengers often grow bored simply watching the landscape outside a moving vehicle, it has been proposed to display videos on an inner wall surface of the vehicle. WO 2017/208719 describes displaying various videos on the inner wall surface of a vehicle. The inner wall surface includes the windows, and in an autonomous driving vehicle the front window may also be used for displaying the videos. The videos to be displayed may be commercial content such as movies, videos taken by an in-vehicle camera, videos taken by a drone, archived videos, and the like. The videos are displayed in a non-transparent mode or a transparent mode as needed; in the transparent mode, the passengers can see the landscape.
However, WO 2017/208719 does not describe making effective use of the time required to reach a specific destination in relation to that destination.
The present disclosure relates to a vehicle display device mounted on a vehicle moving toward a destination. Rather than displaying the content shown at the destination itself, the vehicle display device displays a related video that is associated with that content but is not provided at the destination.
The related video may be a video that is associated with the content.
The video that is associated with the content may include a video of a location used when the content was created, a video about a birthplace of a creator of the content, and a video about a location of an object referenced when the content was created.
The destination may be an entertainment facility.
According to the present disclosure, users can view content that is related to, but is not itself, the content shown at the destination, which deepens their understanding of the destination.
Features, advantages, and technical and industrial significance of exemplary embodiments of the disclosure will be described below with reference to the accompanying drawings, in which like signs denote like elements.
Hereinafter, an embodiment of the present disclosure will be described with reference to the drawings. The present disclosure is not limited to the embodiment described below.
In the embodiment, entertainment facilities that entertain visitors under a specific theme (concept) are the focus as destinations to visit. In such entertainment facilities, the culture of a specific country, stories, movies, and the like are often set as themes. Note that the destination is not limited to such entertainment facilities and may be, for example, a zoo or a park.
The entertainment facility 12 is divided into four areas 12-1, 12-2, 12-3, and 12-4 for four themes 1 to 4 with different concepts. The stops 16-1, 16-2, 16-3, and 16-4 are provided corresponding to the divided areas 12-1, 12-2, 12-3, and 12-4, respectively. In this example, the divided areas 12-1, 12-2, 12-3, and 12-4 are destination points. Since the divided areas 12-1, 12-2, 12-3, and 12-4 and the stops 16-1, 16-2, 16-3, and 16-4 are not limited to four, any one of the divided areas and any one of the stops are expressed as a divided area 12-n and a stop 16-n (n is a natural number), respectively.
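The correspondence between the stops 16-n and the divided areas 12-n described above can be sketched as a simple lookup table. This is an illustration only; the names `Area`, `STOP_TO_AREA`, and `destination_area` are hypothetical and not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Area:
    """A divided area 12-n of the entertainment facility, with its theme."""
    area_id: str
    theme: str

# Hypothetical mapping from each stop 16-n to the divided area 12-n it serves.
STOP_TO_AREA = {
    "16-1": Area("12-1", "theme 1"),
    "16-2": Area("12-2", "theme 2"),
    "16-3": Area("12-3", "theme 3"),
    "16-4": Area("12-4", "theme 4"),
}

def destination_area(stop_id: str) -> Area:
    """Return the divided area served by the given stop."""
    return STOP_TO_AREA[stop_id]
```

Because the number of areas and stops is not limited to four, additional entries for further 12-n and 16-n pairs could be added to the table in the same way.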
In this example, the vehicle 10 is a shared bus that goes directly from the terminal stop 14 to any one of the stops 16-n (16-1, 16-2, 16-3, and 16-4). The vehicle 10 may also go directly to one stop 16-n and then stop sequentially at the other stops 16-n.
A processing unit 22 is connected to the communication device 20, and the processing unit 22 performs various types of data processing. A display 24 as display means and an input device 26 are connected to the processing unit 22. A liquid crystal display, an organic electroluminescence (EL) display, a projection display, or the like can be adopted as the display 24, which displays videos on an inner wall surface of the vehicle 10.
Further, a theme-specific associative content storage unit 28 is connected to the processing unit 22, and information on an associative content for each of the divided areas 12-n is stored therein.
Here, as the theme-specific associative content, the following examples can be mentioned.
As described above, the video about items that are not directly related to the content but are associated with the content is stored in the theme-specific associative content storage unit 28.
A current location detection unit 30 for detecting a current location of the vehicle 10 is connected to the processing unit 22. A global navigation satellite system (GNSS) such as Global Positioning System (GPS) is adopted for the current location detection unit 30.
As will be described later, the video for the divided area 12-n at which the users will arrive is read from the related videos stored for each divided area 12-n in the theme-specific associative content storage unit 28, and is then displayed and played. The users can thus obtain prior knowledge about the divided area 12-n that is the destination and can further enjoy it. The theme-specific associative content storage unit 28 may temporarily store theme-specific associative content distributed via the communication device 20.
The configuration of the vehicle 10 for transporting the users from the terminal stop 14 to the stop 16-n will be described. The vehicle 10 may be a manually driven vehicle operated by a driver or an autonomous driving vehicle. Regarding the autonomous driving vehicle, for example, based on the standards set by the Society of Automotive Engineers (SAE International), it is preferable that the vehicle 10 operate at level 4 (highly automated driving) or level 5 (fully automated driving).
The display 24 is provided as a vehicle window of the vehicle 10. That is, in this vehicle 10, instead of providing glass windows as vehicle windows on the right and left sides of the vehicle, the display 24 is provided. The display 24 is arranged such that a display surface of the display 24 faces the vehicle cabin. The display 24 can have both a transparent mode and a non-transparent mode by using, for example, an organic EL display.
Then, for example, as shown in
In the autonomous driving vehicle, the front window may also serve as the display 24 for displaying the video. Further, in a normal vehicle, a partition may be provided between the driver's seat and the passenger's seat, and the display 24 may be installed on the partition. Further, the display 24 may be provided on the ceiling or the like. When a display 24 installed somewhere other than a window stops displaying video, the passengers cannot see the landscape there. Therefore, during a period in which no video is displayed, the landscape outside the vehicle 10 may be captured and displayed, or a video prepared in advance may be displayed in that place. The passengers can also see the landscape outside the vehicle 10 when the display 24 is in the transparent mode.
Further, in this example, the display 24 itself can be switched between the transparent mode and the non-transparent mode, but the display 24 may be physically movable.
Note that, in
First, a related video is created for each divided area 12-n. The related video is not the content of the divided area 12-n itself, but a video associated with that content that is not shown in the divided area 12-n. The related video may include an interview with a creator, storyboards, filmmaking secrets, and the like.
Further, it is preferable that the length of each video fit within the travel time of the vehicle 10, that a plurality of comparatively short videos be prepared, or that videos of various lengths be prepared. This makes it possible to display the videos in an appropriate combination according to the time required for the vehicle 10 to reach the divided area 12-n.
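One way to combine prepared clips so that their total length fits within the travel time can be sketched as a simple greedy selection. This is a minimal sketch under assumptions: the disclosure does not specify a selection algorithm, and the function name and the longest-first strategy are hypothetical.

```python
def select_clips(clip_lengths, travel_time):
    """Greedily pick clips (longest first) whose total duration
    does not exceed the travel time. Durations share one unit,
    e.g. minutes; returns the chosen clip lengths."""
    chosen = []
    remaining = travel_time
    for length in sorted(clip_lengths, reverse=True):
        if length <= remaining:
            chosen.append(length)
            remaining -= length
    return chosen
```

For example, with clips of 15, 7, 5, and 3 minutes and a 20-minute ride, the sketch picks the 15- and 5-minute clips, filling the ride exactly; preparing clips of various lengths is what gives such a selection room to work.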
In addition, the landscape of the route of the vehicle 10 is studied, and objects to be shown and objects not to be shown are specified.
Then, when the vehicle 10 departs for the destination (YES in S13), an announcement is issued after departure (S14), and the current location detection unit 30 acquires the current location (S15).
Then, it is determined whether the landscape at the current location is suitable to be shown (S16). When the landscape is suitable, the display 24 is set to the transparent mode so that the landscape is visible (S17). When the landscape is not suitable, the display 24 is set to the non-transparent mode and a video prepared in advance is played (S18). Note that the landscape to be shown or not to be shown includes objects such as specific buildings, monuments, and signboards.
Then, it is determined whether the current time is a predetermined time before the estimated time of arrival at the destination (S19). When it is not yet that time, the process returns to S15 and the control of the display 24 is repeated. When it is, a termination process of playing termination videos is performed (S20).
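The control flow of S15 to S20 above can be sketched as a loop. This is a minimal sketch, not the disclosed implementation: the `vehicle` and `display` objects standing in for the current location detection unit 30 and the display 24, and all of their method names, are hypothetical.

```python
def run_display_control(vehicle, display, eta, warn_margin):
    """Repeat S15-S19: acquire the current location, switch the display
    mode by landscape suitability, and run the termination process (S20)
    a fixed margin before the estimated time of arrival."""
    while True:
        location = vehicle.current_location()        # S15: GNSS position
        if vehicle.landscape_is_suitable(location):  # S16: suitability check
            display.set_transparent()                # S17: show the landscape
        else:
            display.set_non_transparent()            # S18: hide the landscape
            display.play_prepared_video()            #      and play a video
        if vehicle.now() >= eta - warn_margin:       # S19: time check
            display.play_termination_video()         # S20: termination process
            break
```

The loop mirrors the branch structure of S16 to S18: the transparent mode exposes the landscape to be shown, while the non-transparent mode masks objects not to be shown behind a prepared video.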
As described above, in the embodiment, the landscape along the traveling route of the vehicle 10 is checked, and its relationship with the target divided area 12-n is studied. In this study, objects to be shown and objects not to be shown are determined. For example, information boards, buildings, and characters related to the target divided area 12-n correspond to the objects to be shown, and objects related to another divided area 12-n correspond to the objects not to be shown. While the vehicle 10 is traveling, a video associated with a theme is displayed so that the landscape not to be shown is hidden and the landscape to be shown is visible. As a result, the users can obtain further knowledge about the divided area 12-n that is the destination and can further enjoy it.
Number | Date | Country | Kind |
---|---|---|---|
2020-208932 | Dec 2020 | JP | national |
This application is a Continuation of application Ser. No. 17/514,445, filed on Oct. 29, 2021, which claims priority to Japanese Patent Application No. 2020-208932, filed on Dec. 17, 2020. The prior applications are hereby incorporated by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
7996422 | Shahraray et al. | Aug 2011 | B2 |
11822845 | Kagami | Nov 2023 | B2 |
20030114968 | Sato et al. | Jun 2003 | A1 |
20050182564 | Kim | Aug 2005 | A1 |
20070067104 | Mays | Mar 2007 | A1 |
20090276154 | Subramanian | Nov 2009 | A1 |
20090318777 | Kameyama | Dec 2009 | A1 |
20100023544 | Shahraray et al. | Jan 2010 | A1 |
20120036467 | Tom | Feb 2012 | A1 |
20120095675 | Tom | Apr 2012 | A1 |
20120143980 | Johansson | Jun 2012 | A1 |
20140279200 | Hosein et al. | Sep 2014 | A1 |
20170315771 | Kerr | Nov 2017 | A1 |
20180018139 | Watanabe et al. | Jan 2018 | A1 |
20180188054 | Kennedy et al. | Jul 2018 | A1 |
20180357233 | Dazéet al. | Dec 2018 | A1 |
20190124301 | Yoshii et al. | Apr 2019 | A1 |
20200017026 | Kumar et al. | Jan 2020 | A1 |
20200329342 | Beaurepaire et al. | Oct 2020 | A1 |
20220074756 | Gewickey et al. | Mar 2022 | A1 |
20220197579 | Kagami et al. | Jun 2022 | A1 |
20220197928 | Kagami et al. | Jun 2022 | A1 |
20220201254 | Kagami et al. | Jun 2022 | A1 |
20220347567 | Lake-Schaal et al. | Nov 2022 | A1 |
Number | Date | Country |
---|---|---|
3 722 948 | Oct 2020 | EP |
2009-294790 | Dec 2009 | JP |
2011-115968 | Jun 2011 | JP |
2018-163650 | Oct 2018 | JP |
2020-165797 | Oct 2020 | JP |
2007109044 | Sep 2007 | WO |
2016054300 | Apr 2016 | WO |
2017208719 | Dec 2017 | WO |
2020132200 | Jun 2020 | WO |
2020163801 | Aug 2020 | WO |
Entry |
---|
Oct. 3, 2022 Office Action issued in U.S. Appl. No. 17/514,445. |
Feb. 23, 2023 Office Action issued in U.S. Appl. No. 17/514,445. |
Jul. 26, 2023 Notice of Allowance issued in U.S. Appl. No. 17/514,445. |
Number | Date | Country | |
---|---|---|---|
20240036793 A1 | Feb 2024 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 17514445 | Oct 2021 | US |
Child | 18379921 | US |