This application claims priority to Japanese Patent Application No. 2023-131627 filed on Aug. 10, 2023, which is incorporated herein by reference in its entirety.
The present disclosure relates to a server, a method, and a non-transitory storage medium.
Conventionally, technologies are known for providing services to visitors to an event venue where an event such as an exhibition or a competitive sports match is held. For example, Japanese Unexamined Patent Application Publication No. 2022-080651 (JP 2022-080651 A) discloses an information processing device that creates, when first event information included in a schedule of a first user and second event information included in a schedule of a second user satisfy a predetermined condition, a group including the first user and the second user, and that proposes use of shared transportation means.
However, there is room for improvement in the technology for providing services to visitors to event venues. For example, when a visitor cannot arrive at an event venue by the start time of an event, the visitor may not be able to experience part of the event, which may reduce the visitor's satisfaction with the event.
In view of such circumstances, the present disclosure provides an improved technology for providing services to visitors to an event venue.
A server according to an aspect of the present disclosure is a server including:
A method according to an aspect of the present disclosure is
A non-transitory storage medium according to an aspect of the present disclosure is a non-transitory storage medium storing instructions. The instructions are instructions that are executable by one or more servers, and that cause the one or more servers to perform the following functions:
According to the above aspects of the present disclosure, a technology for providing services to visitors to an event venue is provided.
Features, advantages, and technical and industrial significance of exemplary embodiments of the disclosure will be described below with reference to the accompanying drawings, in which like signs denote like elements, and wherein:
Hereinafter, an embodiment of the present disclosure will be described.
The outline of a system 1 according to an embodiment of the present disclosure will be described with reference to
The shooting device 10 is a network camera that can transmit a moving image via the network 40. In this embodiment, the shooting device 10 is installed at an event venue so that the shooting device 10 can shoot the event being held at the event venue. The “event venue” is, for example, a stadium, but is not limited to this, and may be any venue. The “event” is, for example, a competitive sports match, but is not limited to this, and may be any event. When the system 1 includes a plurality of the shooting devices 10, the shooting devices 10 are disposed at mutually different positions within the event venue. In other words, the shooting devices 10 are disposed at the event venue so that they can shoot the event from mutually different shooting positions.
The vehicle 20 is, for example, a gasoline vehicle, a battery electric vehicle (BEV), a hybrid electric vehicle (HEV), a plug-in hybrid electric vehicle (PHEV), or a fuel cell electric vehicle (FCEV). The vehicle 20 is not limited to these, and may be any vehicle in which a person can ride. The vehicle 20 may be driven by a driver in the vehicle 20, or may be driven remotely. The vehicle 20 may also be capable of autonomous driving at any of Levels 1 to 5 as defined by the Society of Automotive Engineers (SAE), for example.
In this embodiment, the vehicle 20 is used to transport, to the event venue, users (hereinafter also referred to as “visitors”) who come to the event venue to watch a sports match, for example. Specifically, the vehicle 20 transports visitors from a predetermined departure point to a destination, which is the event venue. The vehicle 20 may be operated in a manner, for example, similar to a shuttle bus, in which the vehicle 20 moves from a predetermined departure point to the event venue according to a predetermined schedule. Alternatively, the vehicle 20 may be operated in a manner, for example, similar to a taxi, in which the vehicle 20 travels from a departure point reserved by the visitor to the event venue. In this embodiment, for simplicity of explanation, the vehicle 20 will be described as a vehicle that transports one visitor at a time, but the number of visitors that the vehicle 20 can transport at one time may be two or more people.
The server 30 includes one computer or multiple computers that can communicate with each other. In this embodiment, the server 30 can receive a moving image of the event shot by the shooting device 10. Furthermore, the server 30 can transmit the received moving image to the vehicle 20 as a live moving image. Furthermore, the server 30 can transmit a shortened moving image obtained by shortening the received moving image to the vehicle 20. The moving image transmitted from the server 30 to the vehicle 20 is played back on a display installed in the vehicle cabin of the vehicle 20. In this way, by transmitting the moving image to the vehicle 20, the server 30 can present the moving image to the visitors aboard the vehicle 20.
First, the outline of the present embodiment will be described, and the details will be described later. The server 30 according to this embodiment obtains, as a target time, the time when a predetermined trigger is detected for the visitor who is moving to the event venue. When the target time is later than a start time of the event held at the event venue, the server 30 transmits a first shortened moving image obtained by shortening a first moving image of the event shot from the start time to the target time, thereby presenting the first shortened moving image to the moving visitor. After the playback of the first shortened moving image ends, the server 30 transmits the live moving image of the event, thereby presenting the live moving image to the moving visitor.
As described above, according to the present embodiment, the first shortened moving image obtained by shortening the first moving image of the event, which is shot from the start time of the event to the target time, is first presented to the visitor who is moving to the event venue, and then the live moving image of the event is presented. Therefore, even when the visitor cannot arrive at the event venue by the start time of the event, the visitor can indirectly experience the event after the start time by watching the first shortened moving image while moving to the event venue. In addition, by watching the live moving image presented after the playback of the first shortened moving image, the visitor can indirectly experience the event as it continues. Therefore, according to the present embodiment, the technology for providing services to visitors to an event venue can be improved in that the satisfaction of visitors who cannot arrive at the event venue by the start time can be maintained or improved.
Next, each configuration of the system 1 will be described in detail.
Components included in the shooting device 10 will be described with reference to
The communication unit 11 includes one or more communication interfaces connected to the network 40. The communication interfaces conform to, for example, a mobile communication standard such as 4th generation (4G) or 5th generation (5G), a wired local area network (LAN) standard, or a wireless LAN standard. However, the standards are not limited to these, and the communication interfaces may conform to any communication standard.
The shooting unit 12 includes one or more cameras. The shooting unit 12 can generate a moving image by shooting the event held at the event venue.
The storage unit 13 includes one or more memories. The memories are, for example, a semiconductor memory, a magnetic memory, or an optical memory, but are not limited to these. Each memory included in the storage unit 13 may function as, for example, a main storage device, an auxiliary storage device, or a cache memory. The storage unit 13 stores any information used for the operation of the shooting device 10. For example, the storage unit 13 may store a system program, an application program, or the like. Furthermore, the storage unit 13 may temporarily store the moving image generated by the shooting unit 12, for example.
The control unit 14 includes one or more processors, one or more programmable circuits, one or more dedicated circuits, or a combination thereof. The processors are, for example, a general-purpose processor such as a central processing unit (CPU) or a graphics processing unit (GPU), or a dedicated processor specialized for a specific process, but are not limited to these. The programmable circuits are, for example, a field-programmable gate array (FPGA), but are not limited to this. The dedicated circuits are, for example, an application specific integrated circuit (ASIC), but are not limited to this. The control unit 14 controls the operation of the entire shooting device 10.
Each component included in the vehicle 20 will be described with reference to
The communication unit 21 includes one or more communication interfaces connected to the network 40. The communication interfaces conform to, for example, a mobile communication standard. However, the standard is not limited to this, and the communication interfaces may conform to any communication standard.
The output unit 22 includes one or more output devices that output information. For example, the output devices are, but are not limited to, a display that outputs information as images, a speaker that outputs information as audio, and the like. Alternatively, the output unit 22 may include an interface for connecting an external output device.
The input unit 23 includes one or more input devices that detect input operations by the user. The input devices are, for example, a physical key, a capacitance key, a mouse, a touch panel, a touch screen provided integrally with the display of the output unit 22, a microphone, and the like, but are not limited to these. Alternatively, the input unit 23 may include an interface for connecting an external input device.
The storage unit 24 includes one or more memories. Each memory included in the storage unit 24 may function as, for example, a main storage device, an auxiliary storage device, or a cache memory. The storage unit 24 stores any information used for the operation of the vehicle 20. For example, the storage unit 24 may store system programs, application programs, embedded software, map information, and the like.
The control unit 25 includes one or more processors, one or more programmable circuits, one or more dedicated circuits, or a combination of these. The control unit 25 controls the operation of the entire vehicle 20.
Each component included in the server 30 will be described with reference to
The communication unit 31 includes one or more communication interfaces connected to the network 40. The communication interfaces conform to, for example, a wired LAN standard or a wireless LAN standard. However, the standards are not limited to these, and the communication interfaces may conform to any communication standard.
The storage unit 32 includes one or more memories. Each memory included in the storage unit 32 may function as, for example, a main storage device, an auxiliary storage device, or a cache memory. The storage unit 32 stores any information used for the operation of the server 30. For example, the storage unit 32 may store system programs, application programs, embedded software, databases, and the like. Furthermore, the storage unit 32 may store a moving image received from the shooting device 10 via the network 40.
The control unit 33 includes one or more processors, one or more programmable circuits, one or more dedicated circuits, or a combination of these. The control unit 33 controls the operation of the entire server 30.
The operation of the server 30 will be described with reference to
S100: The control unit 33 of the server 30 determines whether a predetermined trigger is detected for the visitor who is moving to the event venue. When it is determined that the trigger is detected (S100—Yes), the process proceeds to S101. On the other hand, when it is determined that no trigger is detected (S100—No), the process repeats S100.
In the present embodiment, the “trigger” is that the visitor boards the vehicle 20 with the event venue as the destination. As described above, the vehicle 20 is used to transport visitors to the event venue. Therefore, the user who boards the vehicle 20 can be regarded as the “visitor who is moving to the event venue.” When the user boards the vehicle 20, the control unit 25 of the vehicle 20 notifies the server 30 via the communication unit 21 of the time when the visitor boards the vehicle 20 (boarding time). When the control unit 33 of the server 30 receives the notification indicating the boarding time from the vehicle 20, the control unit 33 determines that the trigger is detected at the boarding time.
S101: When it is determined in S100 that the trigger is detected (S100—Yes), the control unit 33 obtains the time when the trigger is detected as the target time.
In this embodiment, the control unit 33 obtains the boarding time received from the vehicle 20 in S100 as the target time.
S102: The control unit 33 determines whether the target time is later than the start time of the event held at the event venue. When it is determined that the target time is later than the start time (S102—Yes), the process proceeds to S103. On the other hand, when the target time is not later than the start time (S102—No), the process ends.
Specifically, the control unit 33 compares, for example, the target time with the start time of the event stored in the storage unit 32 in advance, and determines whether the target time is later than the start time.
S103: When it is determined in S102 that the target time is later than the start time (S102—Yes), the control unit 33 presents the first shortened moving image to the visitor who is moving to the event venue.
The “first shortened moving image” is a shortened version of the first moving image of the event shot from the start time to the target time. Specifically, the control unit 33 receives the moving image of the event shot by the shooting device 10 after the start time from the shooting device 10 via the communication unit 31. The control unit 33 obtains a portion of the received moving image that is shot from the start time to the target time as the first moving image. The control unit 33 shortens the first moving image by performing any moving image editing processing to generate the first shortened moving image. Here, any method can be used to shorten the moving image. For example, n-times speed playback, deletion of unnecessary scenes, or the like may be adopted. Note that when deletion of unnecessary scenes is adopted, the control unit 33 may automatically determine and delete unnecessary scenes using, for example, artificial intelligence (AI) or a rule-based algorithm. The control unit 33 then presents the first shortened moving image to the visitor aboard the vehicle 20 by transmitting the first shortened moving image to the vehicle 20. Specifically, the control unit 25 of the vehicle 20 plays back the first shortened moving image received from the server 30 and displays it on the display of the output unit 22.
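As one concrete illustration of the n-times speed playback approach mentioned above, a shortened moving image can be obtained by keeping only every n-th frame of the original. The sketch below is purely illustrative, not the actual implementation, and omits audio, codec, and container handling; a moving image is modeled simply as a list of frames.

```python
# Hypothetical sketch: n-times speed playback by frame subsampling.
def speed_up(frames: list, n: int) -> list:
    """Return a shortened moving image that plays back at n-times speed
    by keeping every n-th frame of the original moving image."""
    if n < 1:
        raise ValueError("speed factor n must be at least 1")
    return frames[::n]
```

With n = 4, for example, a first moving image covering 40 minutes of the event would play back in roughly 10 minutes.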
S104: The control unit 33 determines whether the playback of the first shortened moving image presented to the visitor ends. When it is determined that the playback of the first shortened moving image ends (S104—Yes), the process proceeds to S105. On the other hand, when it is determined that the playback of the first shortened moving image does not end (S104—No), the process repeats S104.
Specifically, when the playback of the first shortened moving image ends, the control unit 25 of the vehicle 20 notifies the server 30 via the communication unit 21 of the time when the playback of the first shortened moving image ends (first playback end time). When the control unit 33 of the server 30 receives the notification indicating the first playback end time from the vehicle 20, the control unit 33 determines that the playback of the first shortened moving image ends.
S105: When it is determined in S104 that the playback of the first shortened moving image ends (S104—Yes), the control unit 33 presents a second shortened moving image to the visitor who is moving to the event venue.
The “second shortened moving image” is a shortened version of a second moving image of the event that is shot from the target time to the playback end time of the first shortened moving image. Specifically, the control unit 33 receives the moving image of the event shot by the shooting device 10 after the target time from the shooting device 10 via the communication unit 31. The control unit 33 obtains a portion of the received moving image that is shot from the target time to the first playback end time as the second moving image. The control unit 33 shortens the second moving image by performing any moving image editing processing to generate the second shortened moving image. The control unit 33 then presents the second shortened moving image to the visitor aboard the vehicle 20 by transmitting the second shortened moving image to the vehicle 20. Specifically, the control unit 25 of the vehicle 20 plays back the second shortened moving image received from the server 30 and displays it on the display of the output unit 22.
S106: The control unit 33 determines whether the playback of the second shortened moving image presented to the visitor ends. When it is determined that the playback of the second shortened moving image ends (S106—Yes), the process proceeds to S107. On the other hand, when it is determined that the playback of the second shortened moving image does not end (S106—No), the process repeats S106.
Specifically, when the playback of the second shortened moving image ends, the control unit 25 of the vehicle 20 notifies the server 30 via the communication unit 21 of the time when the playback of the second shortened moving image ends (second playback end time). When the control unit 33 of the server 30 receives the notification indicating the second playback end time from the vehicle 20, the control unit 33 determines that the playback of the second shortened moving image ends.
S107: When it is determined in S106 that the playback of the second shortened moving image ends (S106—Yes), the control unit 33 presents the live moving image to the visitor who is moving to the event venue. After that, the process ends.
Specifically, the control unit 33 transmits the live moving image received from the shooting device 10 to the vehicle 20 via the communication unit 31, thereby presenting the live moving image to the visitor aboard the vehicle 20. The live moving image may be presented, for example, until the time (arrival time) when the vehicle 20 arrives at the event venue, which is the destination.
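The flow of S100 to S107 can be sketched as follows. This is an illustrative outline under simplifying assumptions (times are minutes from a common origin, and shortening is modeled as multiplying a segment length by a ratio); all function names and ratio values are hypothetical, not the actual implementation.

```python
def shorten_length(segment_length: float, ratio: float) -> float:
    """Playback length of a shortened moving image covering segment_length."""
    return segment_length * ratio

def presentation_plan(target_time: float, start_time: float,
                      first_ratio: float = 0.25,
                      second_ratio: float = 0.5) -> list[tuple[str, float]]:
    """Return the ordered (label, playback length) pairs presented to the
    visitor; an empty list means the visitor is not late (S102-No)."""
    if target_time <= start_time:                       # S102
        return []
    plan = []
    # S103: first shortened moving image covers [start_time, target_time]
    first_len = shorten_length(target_time - start_time, first_ratio)
    plan.append(("first shortened", first_len))
    # S105: second shortened moving image covers the footage shot while
    # the first shortened moving image was playing back
    second_len = shorten_length(first_len, second_ratio)
    plan.append(("second shortened", second_len))
    # S107: finally switch to the live moving image (until arrival)
    plan.append(("live", float("inf")))
    return plan
```

For a visitor boarding 40 minutes after the start time, this sketch yields a 10-minute first shortened image, a 5-minute second shortened image, and then the live moving image.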
Here, with reference to
In the example shown in
In the example shown in
Here, a first ratio of the length of the first shortened moving image to the length of the first moving image (i.e., Δt2/Δt1) and a second ratio of the length of the second shortened moving image to the length of the second moving image (i.e., Δt3/Δt2) may be different. For example, the control unit 33 of the server 30 may generate the first shortened moving image and the second shortened moving image so that the second ratio is larger than the first ratio. Generally, the larger the ratio of the length of a shortened moving image to the length of the original moving image (that is, the closer the ratio is to 1), the stronger the live feeling of the shortened moving image becomes. According to this configuration, since the live feeling increases in the order of the first shortened moving image, the second shortened moving image, and the live moving image, the satisfaction level of the visitors with the event can be further improved.
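The relationship between the two ratios can be checked numerically. The segment lengths below (in minutes) are purely illustrative values, not values used by the embodiment.

```python
# Illustrative check that the second ratio exceeds the first.
# dt1: length of the first moving image (start time to target time)
# dt2: length of the first shortened image, which also equals the length
#      of the second moving image (target time to first playback end)
# dt3: length of the second shortened moving image
dt1, dt2, dt3 = 60.0, 15.0, 7.5

first_ratio = dt2 / dt1     # ratio of the first shortened image
second_ratio = dt3 / dt2    # ratio of the second shortened image
assert first_ratio < second_ratio < 1.0  # live feeling increases stepwise
```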
As described above, the server 30 according to the present embodiment obtains, as the target time, the time when the predetermined trigger is detected for the visitor who is moving to the event venue. When the target time is later than the start time of the event held at the event venue, the server 30 transmits the first shortened moving image obtained by shortening the first moving image of the event shot from the start time to the target time, thereby presenting the first shortened moving image to the moving visitor. After the playback of the first shortened moving image ends, the server 30 transmits the live moving image of the event, thereby presenting the live moving image to the moving visitor.
According to this configuration, as described above, even when the visitor cannot arrive at the event venue by the start time of the event, the visitor can indirectly experience the event after the start time by watching the first shortened moving image while moving to the event venue. In addition, by watching the live moving image presented after the playback of the first shortened moving image, the visitor can indirectly experience the event as it continues. Therefore, according to the present embodiment, the technology for providing services to visitors to an event venue can be improved in that the satisfaction of visitors who cannot arrive at the event venue by the start time can be maintained or improved.
Although the present disclosure has been described above based on the drawings and the embodiment, it should be noted that those skilled in the art may make various modifications and alterations thereto based on the present disclosure. It should be noted, therefore, that these modifications and alterations are within the scope of the present disclosure. For example, the functions included in the configurations, steps, etc. can be rearranged so as not to be logically inconsistent, and a plurality of configurations, steps, etc. can be combined into one or divided.
For example, in the embodiment described above, the configuration and the operation of the server 30 may be distributed to a plurality of computers capable of communicating with each other. Furthermore, part of the operation of the server 30 shown in
Furthermore, an embodiment is also possible in which, for example, a general-purpose computer functions as the server 30 according to the above embodiment. Specifically, a program describing processing contents for realizing each function of the server 30 according to the above embodiment is stored in the memory of the general-purpose computer, and the program is read and executed by the processor. Therefore, the present disclosure can also be realized as a program that can be executed by the processor or a non-transitory computer-readable medium that stores the program.
Furthermore, in the embodiment described above, the trigger detected for the visitor who is moving to the event venue is that the visitor boards the vehicle 20 with the event venue as the destination. However, the trigger is not limited to this example, and may be any trigger that indicates that the visitor is moving to the event venue. For example, the trigger may be that the visitor who is moving to the event venue performs a predetermined operation on a terminal device such as a smartphone. In one example, the control unit 33 of the server 30 stores in the storage unit 32 in advance, for example, account information of a purchaser of a ticket for the event held at the event venue. The ticket purchaser logs into the system 1 from his or her own terminal device using the account information, and then starts route guidance with the event venue as the destination. When the ticket purchaser performs the operation for the route guidance to the event venue, the purchaser can be regarded as the visitor who is moving to the event venue. In response to the operation, the terminal device notifies the server 30 of the time when the ticket purchaser performs the operation (operation time). When the control unit 33 of the server 30 receives the notification of the operation time from the terminal device, the control unit 33 determines that the trigger is detected at the operation time (S100—Yes), and obtains the operation time as the target time (S101). Note that the destination to which the server 30 transmits various moving images may be the visitor's terminal device instead of the vehicle 20. According to this configuration, even when the visitor does not use the vehicle 20 and moves to the event venue by other means of transportation, the various moving images can be presented to the visitor.
Furthermore, in the embodiment described above, various moving images may be presented using a shooting device 10 selected by the visitor who is moving to the event venue from among the shooting devices 10 disposed at the event venue so that the shooting devices 10 can shoot the event from mutually different shooting positions. Specifically, the visitor moving to the event venue selects a desired shooting position from the shooting positions of the shooting devices 10 via the input unit 23 of the vehicle 20. The control unit 25 of the vehicle 20 notifies the server 30 of the selected shooting position. When the control unit 33 of the server 30 receives the notification indicating the shooting position selected by the visitor from the vehicle 20, the control unit 33 uses the moving image shot by the shooting device 10 installed at the shooting position to present the first moving image, the second moving image, and the live moving image to the visitor. The selection of the shooting device 10 by the visitor may be performed repeatedly, for example, while the visitor is aboard the vehicle 20.
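The selection of a shooting device by its shooting position, as described above, amounts to a simple lookup on the server side. The position labels and device identifiers in the following sketch are hypothetical examples, not part of the embodiment.

```python
# Hypothetical sketch: map each shooting position at the event venue to
# the shooting device 10 installed there, and resolve the visitor's choice.
def select_device(devices: dict[str, str], selected_position: str) -> str:
    """Return the device id installed at the selected shooting position."""
    if selected_position not in devices:
        raise ValueError(f"no shooting device at {selected_position!r}")
    return devices[selected_position]
```

Repeated selection while the visitor is aboard the vehicle simply means resolving the lookup again with the newly selected position.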
A part of the embodiment of the present disclosure is shown as an example below. However, it should be noted that the embodiment of the present disclosure is not limited to these.
A server comprising a communication unit and a control unit, wherein:
The server according to Embodiment 1, wherein
The server according to Embodiment 1 or 2, wherein
The server according to any one of Embodiments 1 to 3, wherein
The server according to any one of Embodiments 1 to 4, wherein
The server according to any one of Embodiments 1 to 5, wherein:
The server according to any one of Embodiments 1 to 6, wherein:
The server according to any one of Embodiments 1 to 7, wherein
A method that is executed by a server, the method comprising:
The method according to Embodiment 9, wherein the trigger may be that the visitor boards a vehicle with the event venue as a destination.
The method according to Embodiment 9 or 10, wherein
The method according to any one of Embodiments 9 to 11, wherein
The method according to any one of Embodiments 9 to 12, wherein
The method according to any one of Embodiments 9 to 13, wherein:
A non-transitory storage medium storing instructions, the instructions being executable by one or more servers and causing the one or more servers to perform functions including:
The non-transitory storage medium according to Embodiment 15, wherein
The non-transitory storage medium according to Embodiment 15 or 16, wherein
The non-transitory storage medium according to any one of Embodiments 15 to 17, wherein
The non-transitory storage medium according to any one of Embodiments 15 to 18, wherein
The non-transitory storage medium according to any one of Embodiments 15 to 19, wherein:
Number | Date | Country | Kind
---|---|---|---
2023-131627 | Aug. 10, 2023 | JP | national