The present invention relates to a data processing technology and, more particularly, relates to an information processing system and an information processing method.
Image display systems for allowing a user wearing a head-mounted display to view a subject space from a free viewpoint have become widespread. For example, there has been known electronic content in which a virtual three-dimensional space is defined as a target to be displayed and virtual reality (VR) is realized by displaying, on a head-mounted display, an image corresponding to the visual line of a user. Using a head-mounted display can enhance a feeling of immersion into a video, or can improve the operability in an application of a game or the like.
Currently, services for distributing video content of concerts (e.g., live concerts and film concerts) and the like to user terminals to allow users to view the video content are provided in some cases.
The inventor of the present invention considered that, in order to increase a content-viewing user's degree of satisfaction, it is important not only to ensure the quality of the content, but also to improve the entertainability of the content before, while, and after the content is played.
The present invention has been made in view of the above problem, and an object thereof is to provide a technology of improving the entertainability of content before, while, and after the content is played.
In order to solve the above-mentioned problem, an information processing system according to an aspect of the present invention includes a first creation section that creates a video of a first room which is a virtual space where a plurality of users who are to view predetermined content gather before the content is played, a second creation section that creates a video of a second room which is a virtual space where the content is played, a third creation section that creates a video of a third room which is a virtual space where the plurality of users gather after the content is played, and an output section that displays the video of the first room, the video of the second room, and the video of the third room sequentially on a display section.
Another aspect of the present invention pertains to an information processing method. In this method, a computer executes a step of creating a video of a first room which is a virtual space where a plurality of users who are to view predetermined content gather before the content is played, a step of creating a video of a second room which is a virtual space where the content is played, a step of creating a video of a third room which is a virtual space where the plurality of users gather after the content is played, and a step of displaying the video of the first room, the video of the second room, and the video of the third room sequentially on a display section.
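As a purely illustrative sketch (the function names and the list-based model of the display section are hypothetical, not claimed elements), the sequential flow of the method described above can be outlined as follows:

```python
# Minimal sketch of the described processing flow: create videos of the
# three rooms and output them to a display section in sequence.
# All names here are hypothetical placeholders, not claimed elements.

def create_first_room_video():
    return "video:first-room(pre-playing lobby)"

def create_second_room_video():
    return "video:second-room(content playing)"

def create_third_room_video():
    return "video:third-room(post-playing lobby)"

def run_viewing_session(display):
    # Display the three room videos in the described order.
    for create in (create_first_room_video,
                   create_second_room_video,
                   create_third_room_video):
        display.append(create())
    return display

# For illustration, the "display section" is modeled as a list of frames.
frames = run_viewing_session([])
```

The sketch only fixes the ordering of the three room videos; creation and output of each video would in practice be frame-by-frame rendering.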
It is to be noted that any combinations of the constituent elements described above, as well as conversions of the expressions of the present invention between a device, a computer program, a recording medium having a computer program readably recorded therein, a data structure, and the like, are also effective as aspects of the present invention.
According to the present invention, the entertainability of content can be improved before, while, and after the content is played.
The agent of a system, a device, or a method according to the present disclosure includes a computer. The computer executes a program to implement the functions of the agent of the device or the method according to the present disclosure. The computer includes, as a main hardware component, a processor that operates according to the program. Any type of processor can be used as long as the processor can implement the functions by executing the program. The processor may include one or a plurality of electronic circuits including semiconductor integrated circuits (ICs) or large-scale integration (LSI) circuits. The plurality of electronic circuits may be integrated on one chip, or may be mounted on a plurality of chips. The plurality of chips may be integrated in one device, or may be included in a plurality of devices. The program may be recorded in a non-transitory recording medium such as a computer-readable read only memory (ROM), an optical disk, or a hard disk drive. The program may be stored in the recording medium in advance, or may be supplied to the recording medium over a wide area communication network such as the Internet. It is to be noted that the terms “first,” “second,” etc., in the present description and the claims are not intended to represent any order or importance level, unless otherwise mentioned. Such terms are intended to make sections distinguishable from each other.
An embodiment proposes a content viewing system that provides an original user experience by offering, to the user, a flow line before, while, and after electronic content is played such that the flow line is close to a flow line of event viewing in a real space. In the content viewing system according to the embodiment, video content is distributed to head-mounted displays (hereinafter, also referred to as “HMDs”) mounted on the heads of a plurality of users, to allow the plurality of users to view the video content at the same time.
The server 12 is an information processing device that distributes video content to the HMDs 100 and also controls a variety of effects regarding video content viewing. Video content to be distributed from the server 12 can include videos of a variety of categories and genres. In the embodiment, it is assumed that the server 12 distributes video content (hereinafter, referred to as a “live concert video”) obtained by shooting a concert live. A detailed explanation of the server 12 will be given later.
The HMD 100 is a head-mounted display device that is mounted on a user's head and displays VR images. A live concert video provided from the server 12 is displayed in a VR space displayed on the HMD 100, and further, avatars (also called characters) of a plurality of users are displayed before, while, and after the live concert video is played.
The output structure part 102 includes a casing 108 that has a shape to cover the left and right eyes of the user when the user is wearing the HMD 100. A display panel that directly faces the eyes when the user is wearing the HMD 100 is included in the casing 108. It is assumed that the display panel of the HMD 100 according to the embodiment is not optically transmissive. Thus, the HMD 100 according to the embodiment is a non-transmissive head-mounted display.
An ocular lens that is positioned between the display panel and the user's eyes when the user is wearing the HMD 100 and that enlarges the viewing angle of the user may further be included in the casing 108. The HMD 100 may further include a loudspeaker or an earphone at a position that corresponds to a user's ear when the user is wearing the HMD 100. In addition, the HMD 100 includes a motion sensor. The motion sensor detects translation movement or rotational movement of the head of the user wearing the HMD 100, and thus detects the position and the posture at each clock time.
The HMD 100 further includes a stereo camera 110 on the front surface of the casing 108. The stereo camera 110 captures a video of a surrounding real space within a viewing field corresponding to the visual line of the user. When a captured image is displayed in real time, an unprocessed real space in the facing direction of the user can be seen. That is, video see-through can be realized. Further, augmented reality (AR) can be realized by rendering a virtual object over a real object image in the captured image.
The CPU 120 processes information acquired from the sections of the HMD 100 via the bus 128, and supplies video data and sound data acquired from the server 12 to the display section 124 and the sound output section 126. The GPU 121 executes image processing based on a command transmitted from the CPU 120. For example, the GPU 121 creates data regarding a VR image indicating a VR space to be displayed on the display section 124.
The main memory 122 stores programs and data that are required for processing at the CPU 120 and the GPU 121. The storage 123 stores an application program (hereinafter, also referred to as a “live concert viewing application”) for viewing a live concert video distributed from the server 12.
The display section 124 includes a display panel such as a liquid crystal panel or an organic electroluminescent (EL) panel, and is configured to display an image before the eyes of a user who is wearing the HMD 100. The display section 124 may realize a stereoscopic vision by displaying a pair of stereo images in regions corresponding to left and right eyes. The display section 124 may further include a pair of lenses that are positioned between the display panel and the user's eyes when the user is wearing the HMD 100 and that enlarge the viewing angle of the user.
The sound output section 126 includes a loudspeaker or an earphone that is provided at a position corresponding to an ear of a user when the user is wearing the HMD 100. The sound output section 126 outputs sounds for the user to hear.
The communication section 132 includes a peripheral device interface such as a universal serial bus (USB) or Institute of Electrical and Electronics Engineers (IEEE) 1394 and a network interface for a wired LAN or a wireless LAN, for example. The communication section 132 exchanges data with the server 12 via an access point (not depicted). For example, the communication section 132 may establish connection with the access point by a known wireless communication technology such as Wi-Fi (registered trademark), and may communicate with the server 12 via the access point. In addition, by using a known wireless communication technology, the communication section 132 communicates with a controller (not depicted) held in a user's hand.
The motion sensor 134 includes a gyro sensor and an acceleration sensor, and obtains an angular velocity and an acceleration of the HMD 100. The microphone 136 receives a sound generated in the surrounding area of the user and a sound made by the user, and converts these sounds into electric signals (also referred to as “sound data”).
As depicted in
The HMD 100 includes a control section 20, a storage section 22, and a communication section 132. The control section 20 may be realized by the CPU 120 and the GPU 121, and executes a variety of data processing. The control section 20 exchanges data with an external device (e.g., the controller or the server 12) via the communication section 132. The storage section 22 may be realized by the main memory 122 and the storage 123, and stores data to be referred to or to be updated by the control section 20.
The storage section 22 includes a room data storage section 24, an item data storage section 26, and a user data storage section 28. The room data storage section 24 stores data (also referred to as “room data”) regarding a plurality of virtual spaces for situations before, while, and after a live concert video is played. For example, the room data includes data for defining the layout, shape, pattern, color, etc., of each of the virtual spaces.
The plurality of virtual spaces according to the embodiment include a pre-playing lobby which is a first room, a live concert place which is a second room, and a post-playing lobby which is a third room. The pre-playing lobby is a virtual space where a plurality of users who are to view a live concert video gather before the live concert video is played. For example, in the pre-playing lobby, a user learns in advance a manner of enjoying the live concert (for example, how to move and how to cheer) through a voice chat with others in the same group.
The live concert place is a virtual space where the live concert video is played and a plurality of users view the live concert video. The post-playing lobby is a virtual space where a plurality of users who have viewed the live concert video gather after the live concert video is played. For example, in the post-playing lobby, a user shares the impression of the live concert with others of the same group through a voice chat, and enjoys the afterglow of the live concert. The pre-playing lobby and the post-playing lobby may be virtual spaces of the same lobby, and may have the same layout, shape, pattern, color, etc.
In addition, a plurality of avatars corresponding to a plurality of users appear in each of the pre-playing lobby, the live concert place, and the post-playing lobby. In the embodiment, an action that can be executed by an avatar of a user in the pre-playing lobby is different from an action that can be executed by the avatar of the user in the post-playing lobby.
Specifically, the avatar of the user in the pre-playing lobby can purchase an item for cheering, according to a user operation, and can further practice a moving manner and a cheering manner in the live concert by using a handclap and the item. On the other hand, the avatar of the user in the post-playing lobby can view or purchase a photograph of the live concert according to a user operation.
The item data storage section 26 stores data (also referred to as “item data”) regarding a plurality of items which are virtual items that can be purchased by the user and which can be used by the avatar of the user. For example, the item data includes data for defining the price of an item and an effect to be exerted by the item and data for defining a performer for whom the item is used to cheer.
The user data storage section 28 stores data (also referred to as “user data”) regarding the user who is wearing the HMD 100. For example, the user data includes identification information regarding the user, identification information regarding a group to which the user belongs, and data regarding an item possessed (purchased) by the user. It is to be noted that, in the embodiment, up to four users can belong to each group.
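The group-size constraint mentioned above could be enforced as in the following minimal sketch (the `Group` class and its methods are hypothetical illustrations, not elements of the embodiment):

```python
MAX_GROUP_SIZE = 4  # in the embodiment, up to four users per group

class Group:
    def __init__(self, group_id):
        self.group_id = group_id
        self.members = []  # identification information of member users

    def add_user(self, user_id):
        # Reject a fifth member, per the embodiment's constraint.
        if len(self.members) >= MAX_GROUP_SIZE:
            return False
        self.members.append(user_id)
        return True

# Example: a fifth join attempt on group A is rejected.
group_a = Group("A")
results = [group_a.add_user(u) for u in ["a", "b", "c", "d", "e"]]
```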
The control section 20 includes an operation reception section 30, a position and posture detection section 32, a room switching section 34, an avatar motion determination section 36, a purchase processing section 38, a video creation section 40, a video acquisition section 48, a video output control section 50, a sound reception section 52, a server coordination section 54, a sound output control section 56, and a photograph saving section 58. The functions of these functional blocks may be implemented in the live concert viewing application. By reading out the live concert viewing application into the main memory 122 and executing the live concert viewing application, the processors (the CPU 120 and the GPU 121) of the HMD 100 may exert the functions of the functional blocks in the control section 20.
The operation reception section 30 receives data indicating an operation inputted by the user through the controller. The operation reception section 30 transfers the data indicating the user operation to the functional blocks in the control section 20.
The position and posture detection section 32 detects the position and the posture of the head of the user who is wearing the HMD 100, on the basis of the angular velocity and the acceleration of the HMD 100 detected by the motion sensor 134 and an image of the surrounding area of the HMD 100 captured by the stereo camera 110. The position and posture detection section 32 further detects the viewpoint position and the visual line direction of the user on the basis of the position and the posture of the head of the user.
The room switching section 34 executes a process of switching a virtual space to be displayed, or a virtual space a video of which is to be created. For example, prior to a play timing of a live concert video, the room switching section 34 gives the video creation section 40 a command to create a pre-playing lobby video. Moreover, in response to a command transmitted from the server 12, the room switching section 34 gives the video creation section 40 a command to create a live concert place video in place of the pre-playing lobby video. In addition, in response to a command transmitted from the server 12, the room switching section 34 gives the video creation section 40 a command to create a post-playing lobby video in place of the live concert place video.
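The switching behavior described above can be illustrated by the following hypothetical sketch, in which the room to be rendered starts at the pre-playing lobby and advances when a switch command arrives from the server (class and room names are placeholders for illustration only):

```python
# Hypothetical model of the room switching logic: rendering starts in the
# pre-playing lobby and switches when the server names the next room.

ROOM_ORDER = ["pre-playing lobby", "live concert place", "post-playing lobby"]

class RoomSwitchingSection:
    def __init__(self):
        self.current = ROOM_ORDER[0]  # before the play timing

    def on_server_command(self, room_name):
        # The command from the server names the room to switch to.
        if room_name in ROOM_ORDER:
            self.current = room_name
        return self.current

switcher = RoomSwitchingSection()
switcher.on_server_command("live concert place")   # play timing reached
room_after_play = switcher.on_server_command("post-playing lobby")
```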
The avatar motion determination section 36 determines a posture, a motion, and an action to be implemented by the avatar of the user, according to a user operation received by the operation reception section 30 and the position and the posture of the user's head detected by the position and posture detection section 32.
The purchase processing section 38 executes an item purchase process according to a user operation received by the operation reception section 30. For example, in coordination with an external payment system, the purchase processing section 38 may settle payment for an item purchased by the user in a pre-playing lobby, by using payment means information (e.g., a credit card number) which is registered by the user in advance.
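A minimal sketch of this purchase flow is given below; the `settle_payment` callback stands in for the external payment system, and all names and the price values are hypothetical:

```python
# Hypothetical sketch of the purchase flow: settle payment via an external
# payment system, then record the item against the user's data.

class PurchaseProcessingSection:
    def __init__(self, settle_payment):
        # settle_payment(user_id, price) -> bool is supplied by an
        # external payment system (hypothetical interface).
        self.settle_payment = settle_payment
        self.owned = {}  # user_id -> list of purchased item ids

    def purchase(self, user_id, item_id, price):
        if not self.settle_payment(user_id, price):
            return False  # payment declined; nothing is recorded
        self.owned.setdefault(user_id, []).append(item_id)
        return True

# Example: a stand-in payment system that approves charges up to 1000.
section = PurchaseProcessingSection(lambda user, price: price <= 1000)
ok = section.purchase("a", "frying_pan", 500)
declined = section.purchase("a", "gold_penlight", 5000)
```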
The video creation section 40 creates virtual space video data. The video creation section 40 includes a first creation section 42, a second creation section 44, and a third creation section 46. The first creation section 42 creates video data regarding the pre-playing lobby by using data regarding the pre-playing lobby stored in the room data storage section 24. The second creation section 44 creates video data regarding the live concert place by using data regarding the live concert place stored in the room data storage section 24. The third creation section 46 creates video data regarding the post-playing lobby by using data regarding the post-playing lobby stored in the room data storage section 24.
The video output control section 50 outputs the pre-playing lobby video data created by the first creation section 42, the live concert place video data created by the second creation section 44, and the post-playing lobby video data created by the third creation section 46, sequentially to the display section 124. The video output control section 50 displays the pre-playing lobby video, the live concert place video, and the post-playing lobby video sequentially on the display section 124.
The video acquisition section 48 receives video data regarding a live concert video transmitted from the server 12. The second creation section 44 sets, on a screen part placed in the live concert place, video data regarding the live concert video received by the video acquisition section 48.
The sound reception section 52 receives sound data inputted through the microphone 136. The server coordination section 54 exchanges data with the server 12. Specifically, the server coordination section 54 transmits, to the server 12, user sound data inputted through the microphone 136. The server coordination section 54 further transmits, to the server 12, the posture, the motion, and the action of the avatar of the user determined by the avatar motion determination section 36.
In addition, the server coordination section 54 receives other user (another user in the same group in the embodiment) sound data transmitted from the server 12, and transfers the other user sound data to the sound output control section 56. Through the sound output section 126, the sound output control section 56 outputs a sound indicated by the other user sound data transmitted from the server 12.
In addition, the server coordination section 54 receives data indicating a posture, a motion, and an action of an avatar of the other user transmitted from the server 12, and transfers the data to the video creation section 40. The first creation section 42, the second creation section 44, and the third creation section 46 of the video creation section 40 each reflect, in the respective VR videos, the posture, the motion, and the action of the avatar of the user determined by the avatar motion determination section 36, and further reflect, in the respective VR videos, the posture, the motion, and the action of the avatar of the other user transmitted from the server 12. Accordingly, the avatar of the user (a user in a local environment) is made to move according to the user's posture, motion, and operation, while the avatar of the other user (a user in a remote environment) is made to move according to the other user's posture, motion, and operation.
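The merging of locally determined and server-relayed avatar states described above can be sketched as follows (the function and the state representation are hypothetical illustrations):

```python
# Sketch: each frame, the video creation section combines the locally
# determined avatar state with remote avatar states relayed by the server.

def merge_avatar_states(local_user, local_state, remote_states):
    # remote_states: {user_id: state} received from the server
    scene = dict(remote_states)
    scene[local_user] = local_state  # the local state takes precedence
    return scene

scene = merge_avatar_states(
    "a", {"posture": "standing", "action": "handclap"},
    {"b": {"posture": "sitting", "action": "idle"}})
```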
The photograph saving section 58 saves data regarding a photograph of a live concert or a live concert place to the storage section 22 (for example, a photograph storage section) according to a user operation.
The storage section 62 includes a video data storage section 66 and a user data storage section 68. The video data storage section 66 stores various kinds of video data (which include video data regarding a live concert video in the embodiment) distributed from the server 12 to the HMDs 100.
The user data storage section 68 stores data regarding a plurality of users to whom video content is distributed and who belong to a plurality of groups. Data regarding each user includes identification information regarding the user, identification information regarding the group to which the user belongs, and identification information regarding video content which the user views, for example.
The control section 60 includes an effect implementation section 70, an arrangement section 72, a video distribution section 74, an action data reception section 76, an action data providing section 78, a sound data reception section 80, and a sound data providing section 82. The functions of these functional blocks may be implemented in a computer program, and the computer program may be installed into a storage of the server 12. The processors (e.g., the CPU) of the server 12 may exert the functions of the plurality of functional blocks of the control section 60 by reading the computer program into the main memory.
The effect implementation section 70 determines effects in the pre-playing lobby, the live concert place, and the post-playing lobby. The effect implementation section 70 transmits data indicating what effect has been determined, to each of the HMDs 100 of a plurality of users viewing the live concert video.
The arrangement section 72 determines the respective positions of the plurality of users viewing the live concert video, on the basis of the groups of the users stored in the user data storage section 68. In other words, the arrangement section 72 determines the positions of the users in a virtual space (the live concert place in the embodiment) according to the groups of the users. The arrangement section 72 transmits data indicating the respective positions of the users to the respective HMDs 100 of the users.
The video distribution section 74 transmits video data regarding the live concert video stored in the video data storage section 66, to the respective HMDs 100 of the plurality of users viewing the live concert video. The video data regarding the live concert video may be transferred and reproduced by streaming.
The action data reception section 76 receives data indicating postures, motions, and actions of avatars of the plurality of users transmitted from the respective HMDs 100 of the users viewing the live concert video. The action data providing section 78 transmits data indicating the corresponding posture, motion, and action of an avatar of a certain user received by the action data reception section 76, to the HMD 100 of another user.
The sound data reception section 80 receives sound data transmitted from the respective HMDs 100 of the plurality of users viewing the live concert video. The sound data providing section 82 identifies a group to which a user having transmitted sound data received by the sound data reception section 80 belongs, by making reference to the data stored in the user data storage section 68. The sound data providing section 82 transmits sound data transmitted from a user belonging to a certain group, to the HMD 100 of another user belonging to the same group, but does not transmit the sound data to the HMD 100 of a user belonging to a different group.
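The group-scoped routing rule described above can be sketched as follows (the function and the data layout are hypothetical illustrations of the rule, not the server's actual implementation):

```python
# Sketch of group-scoped voice chat routing: sound data from a user is
# forwarded only to the other users belonging to the same group.

def route_sound(sender, user_groups):
    # user_groups: {user_id: group_id}, per the user data storage section
    group = user_groups[sender]
    return [u for u, g in user_groups.items()
            if g == group and u != sender]

# Example: users a-d belong to group A, users e-f to group B.
groups = {"a": "A", "b": "A", "c": "A", "d": "A", "e": "B", "f": "B"}
recipients = route_sound("a", groups)
```

Sound data from the user a thus reaches only the users b, c, and d, and never the users of the group B.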
An explanation will be given of operations in the content viewing system 10 which has the above configuration.
The user a inputs, to the HMD 100a, an operation to start the live concert viewing application, and the control section 20 of the HMD 100a starts the live concert viewing application. When the live concert viewing application is started (Y in S10), the user a selects a live concert to view at the moment, from among a plurality of available live concerts (in other words, a plurality of live concert places). In addition, the user a selects a group to join (here, the group A), and inputs a name to log in. The server coordination section 54 of the HMD 100a transmits identification information regarding the live concert and the group selected by the user to the server 12. The user data storage section 68 of the server 12 stores the identification information regarding the live concert and the group selected by the user, in association with identification information regarding the user.
The room switching section 34 of the HMD 100a gives the video creation section 40 a command to create a pre-playing lobby video. The video creation section 40 (the first creation section 42) of the HMD 100a creates data regarding the pre-playing lobby video, and the video output control section 50 displays the pre-playing lobby video on the display section 124 (S11).
It is to be noted that only the user b, the user c, and the user d who belong to the group A which is the same as that of the user a are allowed to enter the pre-playing lobby where the user a is. On the other hand, the user e and the user f are allowed to enter a pre-playing lobby exclusive for the group B.
In the pre-playing lobby, the user a learns in advance how to move for cheering (here, how to clap hands), while waiting for the start of the live concert video. If the user a claps hands in the real world, the operation reception section 30 of the HMD 100a detects the handclap on the basis of movement of the controller or an image captured by the stereo camera 110. The first creation section 42 of the video creation section 40 creates the pre-playing lobby video 200 in which the avatar 202 and the avatar image 206 clap hands, and further creates the pre-playing lobby video 200 that includes an effect 210 that is brought into association with a handclap in advance. The effect 210 includes, for example, an effect of enlarging a ring of light or an effect of flying star-shaped objects (also referred to as particles).
In the pre-playing lobby, the user a can purchase a desired item to be used to cheer for the live concert or a performer, from among a plurality of virtual items stored in the item data storage section 26. The first creation section 42 of the HMD 100a creates, as the pre-playing lobby video, a video indicating that the user a purchases an item, according to an operation inputted by the user a. The operation reception section 30 of the HMD 100a receives the user operation for indicating purchase of the item. The purchase processing section 38 of the HMD 100a settles payment for the item, and causes the user data storage section 28 to store data regarding the purchased item in association with the identification information regarding the user a.
In addition, in the pre-playing lobby, the user a can use the purchased item to practice cheering and moving. The first creation section 42 of the HMD 100a creates, as the pre-playing lobby video 200, a video indicating that the avatar 202 and the avatar image 206 of the user a cheer while using the purchased item, according to an operation inputted by the user a. In the embodiment, purchasable items include a frying pan, and a user can practice waving the frying pan in the pre-playing lobby, for example. As long as the play time of the live concert video (in other words, a timing of switching to the live concert place video) has not come (N in S12), the pre-playing lobby video 200 is continuously displayed on the HMDs 100 of the respective users.
When the play time of the live concert video has come (Y in S12), the arrangement section 72 of the server 12 determines the respective positions of a plurality of users who are to view the live concert video in the live concert place, on the basis of the groups of the users (S13). The respective positions of the users in the live concert place are the arrangement positions of the avatars of the users in the live concert place.
As depicted in
In addition, the arrangement section 72 arranges at least one other group between the viewing position (the center portion of the second row in the embodiment) in the live concert place and the play position of the live concert video (the position of the screen onto which the live concert video is projected in the embodiment). In the example in
In addition, the arrangement section 72 keeps the positional relation between the group A and the group B consistent between the arrangement data allocated for the users of the group A and the arrangement data allocated for the users of the group B. That is, the arrangement section 72 ensures the consistency in the relative positions of the groups between the pieces of arrangement data allocated for the respective groups. For example, both in the group arrangement depicted in
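The consistency property described above can be illustrated with the following hypothetical sketch, in which groups occupy fixed left-to-right world positions and each group's arrangement data is expressed relative to its own position (the function and the one-dimensional offset model are illustrative simplifications):

```python
# Sketch: groups have fixed left-to-right world positions, and each
# group's arrangement data gives signed offsets relative to its own
# position, so relative directions stay consistent across all users.

def make_arrangement(world_order, own_group):
    own = world_order.index(own_group)
    # Positive offset = to the right of one's own group, negative = left.
    return {g: i - own for i, g in enumerate(world_order)}

world = ["A", "B", "C"]
for_a = make_arrangement(world, "A")  # arrangement data for group A
for_b = make_arrangement(world, "B")  # arrangement data for group B
```

For example, the group B appears one position to the right in the data for the group A, and symmetrically the group A appears one position to the left in the data for the group B.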
After the positions of the respective groups are determined by the arrangement section 72, the effect implementation section 70 of the server 12 transmits, to the HMDs 100 of the respective users, a command for switching to the live concert place which includes the arrangement data for the corresponding groups. On the basis of the command transmitted from the server 12, the room switching section 34 of the HMD 100a inputs, to the video creation section 40, a command for switching to the live concert place which includes the arrangement data for the group A. The video creation section 40 (the second creation section 44) creates data regarding a live concert place video in which the avatars of the groups are arranged in the positions indicated by the arrangement data for the group A (for example,
In the live concert place, the user a can perform non-verbal communication with users belonging to the other groups. The non-verbal communication includes waving a hand to an avatar of a user belonging to another group and throwing an object (also referred to as particles) toward the user. For example, in
As previously explained, the relative positions of the groups are kept consistent between the pieces of arrangement data allocated for the respective groups, whereby consistency in non-verbal communication between the groups can be ensured. For example, in a case where a user of the group A throws the object 226 toward the right direction (that is, to the direction of the group B), a user of the group B can recognize that the object 226 is thrown from the left direction (that is, from the direction of the group A).
Avatars of the remaining users (the user b, the user c, and the user d) belonging to the group A which is the same as the group to which the user a belongs are not depicted in
As previously explained, an item purchased in a pre-playing lobby is associated with a person or a character (here, referred to as a “performer”) who appears in a live concert video. In a case where the item purchased in the pre-playing lobby is used by the avatar of the user, the second creation section 44 of the HMD 100a creates the live concert place video 220 that includes an action (which can be regarded as a reaction or a feedback) that is taken by the performer associated with the item, to respond to the user.
Specifically, the effect implementation section 70 of the server 12 identifies items that are being used by the avatars of the respective users in the live concert place, on the basis of data indicating the motions of the avatars of the respective users reported from the HMDs 100 of the respective users. In other words, for each of the items, the effect implementation section 70 identifies the users who are using the item. For each of the items, the effect implementation section 70 determines, by lot, one user, as a target of a reaction effect (which can also be regarded as a gratitude effect) to be made by the performer associated with the item, from among the users who are using the item. The effect implementation section 70 transmits, to the HMDs 100 of the respective users, a reaction effect command for specifying the reaction effect target user determined for each item.
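The per-item lottery described above can be sketched as follows (the function name and data layout are hypothetical; a seeded generator is used only to make the sketch reproducible):

```python
import random

# Sketch of the per-item lottery: for each item, one user among those
# currently using the item is drawn as the reaction effect target.

def draw_reaction_targets(item_users, rng):
    # item_users: {item_id: [user ids currently using the item]}
    return {item: rng.choice(users)
            for item, users in item_users.items() if users}

rng = random.Random(0)  # seeded for reproducibility of the sketch
targets = draw_reaction_targets(
    {"penlight": ["a", "b", "e"], "frying_pan": ["c"]}, rng)
```

Each item with at least one user always yields exactly one target; repeating the draw at regular intervals may select the same user again or a different user, as described below.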
The second creation section 44 of the HMD 100 creates the live concert place video 220 that indicates the reaction of the performer to the avatar of the user specified by the reaction effect command. In the live concert place video 220 in
With such a reaction effect, purchase of items in the pre-playing lobby can be promoted, and further, competition among the users can be stimulated to enhance the attractiveness of the live concert viewing. The reaction effect is repeated regularly. The same user may be determined as a reaction effect target multiple times, or the user determined as the reaction effect target may be changed each time.
As long as play of the live concert video is not finished (N in S15), the live concert place video 220 is continuously displayed on the HMDs 100 of the respective users. If play of the live concert video is finished (Y in S15), the effect implementation section 70 of the server 12 transmits a command for switching to the post-playing lobby to the HMDs 100 of the respective users. In accordance with the command sent from the server 12, the room switching section 34 of the HMD 100a inputs a command for switching to the post-playing lobby to the video creation section 40. The video creation section 40 (the third creation section 46) creates data regarding the post-playing lobby video. The video output control section 50 displays the post-playing lobby video on the display section 124 (S16).
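The room transitions driven by the effect implementation section 70 and the room switching section 34 follow a fixed sequence, which can be sketched as follows (hypothetical identifiers; a simplified illustration of the switching order only, omitting the command exchange between the server 12 and the HMDs 100):

```python
# The three virtual spaces are visited in a fixed order.
ROOM_SEQUENCE = ("pre_playing_lobby", "live_concert_place", "post_playing_lobby")

def next_room(current):
    # Returns the room that follows `current`, or None after the last
    # room (i.e., after the post-playing lobby the user exits).
    i = ROOM_SEQUENCE.index(current)
    return ROOM_SEQUENCE[i + 1] if i + 1 < len(ROOM_SEQUENCE) else None
```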
In the post-playing lobby, while sharing impressions and the like of the live concert video with the other members of the group A through a voice chat, the user a can cause the avatar 202 to hold, in its hand, a plurality of photographs 242 of a plurality of scenes of the video content (here, a live concert) to check the photographs 242. Data regarding the photographs 242 may be created by the effect implementation section 70 of the server 12 and provided from the server 12 to the HMDs 100 of the respective users. The third creation section 46 of the HMD 100 may display, on the post-playing lobby video 240, the photographs 242 provided from the server 12.
In the post-playing lobby, the user a can purchase a desired one of the plurality of photographs 242 (which include a photograph containing the avatar of the user a). According to an operation inputted by the user a, the third creation section 46 of the HMD 100a creates, as the post-playing lobby video, a video indicating that the user a purchases the photograph 242. In response to a user operation indicating purchase of the photograph 242, the purchase processing section 38 of the HMD 100a settles payment for the photograph 242, and stores data (identification information, image data, or the like) regarding the purchased photograph 242 in the user data storage section 28 in association with the identification information regarding the user a.
It is to be noted that, according to an action of a certain user in the live concert place, an action that can be executed by the user in the post-playing lobby may be changed. For example, when the post-playing lobby is displayed, the effect implementation section 70 of the server 12 may provide a photograph of a live concert scene or the live concert place to the HMD 100a on condition that the number of times of cheering by the avatar of the user a in the live concert place (e.g., the sum of the number of handclaps and the number of times of using items) is equal to or greater than a predetermined threshold. Accordingly, a privilege to view photographs in the post-playing lobby can be given to a user whose number of times of cheering in the live concert place is large. In addition, when the post-playing lobby is displayed, the effect implementation section 70 of the server 12 may provide more photographs to the HMD 100a as the number of times of cheering by the avatar of the user a in the live concert place increases. Accordingly, cheering actions of users in the live concert place can be encouraged.
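Both variations above (a threshold-gated privilege and a reward that scales with the cheering count) can be sketched as follows. All names and the specific values of the threshold and cap are hypothetical, chosen only for illustration:

```python
def photos_to_provide(handclaps, item_uses, threshold=10, max_photos=20):
    # The cheering count is the sum of handclaps and item uses.
    cheers = handclaps + item_uses
    if cheers < threshold:
        return 0                     # below the threshold: no photo privilege
    return min(max_photos, cheers)   # more cheering -> more photographs
```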
As long as the user a does not input a predetermined exit operation (N in S17), the post-playing lobby video 240 is continuously displayed. After the user a inputs the exit operation (Y in S17), the flow in
With the content viewing system 10 according to the embodiment, the pre-playing lobby video 200, the live concert place video 220, and the post-playing lobby video 240 are sequentially provided to the users in order to achieve a flow line of users before, while, and after video content is played. Accordingly, an experience that is close to event viewing in a real space can be offered to users, so that the entertainability before, while, and after video content is played can be improved.
The present invention has been explained so far on the basis of the embodiment. The embodiment exemplifies the present invention, and a person skilled in the art will understand that various modifications can be made to a combination of the constituent elements or the process steps of the embodiment and that these modifications are also within the scope of the present invention.
A first modification will be explained. In a virtual space of at least one of the pre-playing lobby, the live concert place, and the post-playing lobby, a special user (here, referred to as a “guide”) who has a role of guiding a plurality of general users viewing video content may be prepared, and an avatar of the guide may be arranged. In addition, in a case where pre-playing lobbies and post-playing lobbies are prepared for respective groups, the avatar of the guide may be arranged in the pre-playing lobbies and the post-playing lobbies of all the groups.
The guide may teach the users how to cheer in the live concert or how to use an item, in a voice chat or by gestures. The sound data providing section 82 of the server 12 may transmit data regarding a sound made by the guide (sound data transmitted from the HMD 100 of the guide) to the HMDs 100 of the users of a plurality of groups, or to the HMDs 100 of the users of all the groups. In addition, the sound data providing section 82 of the server 12 may transmit the sound data provided from all the users to the HMD 100 of the guide, irrespective of which group the users belong to.
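The asymmetric sound routing described above can be sketched as follows (hypothetical names; an illustrative sketch of one possible routing policy of the sound data providing section 82, not the actual implementation): the guide's voice reaches every group, a general user's voice stays within that user's own group, and the guide hears everyone.

```python
def route_sound(sender, sender_group, guide_id, groups):
    # groups maps each group name to its member user IDs;
    # the guide is not a member of any group.
    if sender == guide_id:
        # The guide's voice reaches the users of all the groups.
        return [u for members in groups.values() for u in members]
    # A general user's voice stays within the sender's own group,
    # and is additionally delivered to the guide.
    recipients = [u for u in groups[sender_group] if u != sender]
    recipients.append(guide_id)
    return recipients
```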
In this modification, in response to an operation inputted by the guide, a command for video switching (switching from the pre-playing lobby to the live concert place and/or switching from the live concert place to the post-playing lobby) may be transmitted from the HMD 100 of the guide to the HMDs 100 of all the users.
A second modification will be explained. The video creation section 40 may further include a fourth creation section that creates a video of My Room which is a fourth virtual space allocated for each user and in which items that an avatar of the corresponding user can use are gathered. My Room can be regarded as a virtual space associated with each user, and can also be regarded as a virtual space where items purchased by the corresponding user are gathered. The fourth creation section of the video creation section 40 may create, as a video of a virtual space to be initially displayed when the live concert viewing application is started, a video of My Room where one or more purchased items stored in the user data storage section 28 are arranged. The video output control section 50 may display the video of My Room on the display section 124.
In My Room, the user can freely view the purchased items. In addition, information regarding a plurality of pieces of video content that can currently be viewed is indicated in My Room. In My Room, in a case where the user inputs an operation for selecting video content to view, the room switching section 34 of the HMD 100 causes the video creation section 40 to create a video of the pre-playing lobby corresponding to the selected video content. In addition, the server coordination section 54 of the HMD 100 may transmit, to the server 12, data (distribution request data) indicating the video content selected by the user. Thereafter, steps from S11 in the flowchart in
A third modification will be explained. A live concert video of a concert or the like is distributed from the server 12 to the HMDs 100 in the above embodiment, but the server 12 may distribute video content of any other type to the HMDs 100. Video content to be distributed may be a live sports broadcast video of a baseball match, a soccer match, or the like, or may be a video of a film or drama, for example. In a case where a video of a film or drama is distributed, it is preferable that the on/off of a voice chat be switchable according to a user operation. For example, the sound data providing section 82 of the server 12 may switch the on/off of a voice chat in a group according to a user setting, that is, may refrain from transferring sound data between users, even if the users belong to the same group, while the voice chat is set to off.
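The forwarding decision of this modification can be sketched as follows (an illustrative sketch with hypothetical names, not the actual implementation of the sound data providing section 82): sound data is transferred only within a group, and only while the receiving user's voice chat setting is on.

```python
def should_forward(sender_group, receiver_group, receiver_chat_on):
    # Transfer sound data only within a group, and only when the
    # receiving user has the voice chat switched on.
    return sender_group == receiver_group and receiver_chat_on
```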
A fourth modification will be explained. Some of the plurality of functions of the HMD 100 according to the embodiment may be included in the server 12. Also, some of the plurality of functions of the server 12 according to the embodiment may be included in the HMD 100. Alternatively, some of the plurality of functions of the HMD 100 according to the embodiment and/or some of the plurality of functions of the server 12 according to the embodiment may be included in a user-side information processing device (e.g., a game console or the like) that has connection with the HMD 100.
Any combination of the above embodiment and any one of the modifications is also effective as an embodiment of the present disclosure. A new embodiment provided by such a combination provides the effects of the combined embodiment and modification. In addition, a person skilled in the art will understand that a function to be exerted by each constituent feature set forth in the claims is implemented by each constituent element described in the embodiment or modifications alone, or is implemented by collaboration between the constituent elements.
The technical concept based on the embodiment and the modifications can be expressed by the following aspects. An information processing system according to the following items can also be expressed as an information processing device or a head-mounted display.
An information processing system including:
With the information processing system, the first room before the content is played, the second room while the content is being played, and the third room after the content is played are sequentially provided to the user. Accordingly, a flow line that is close to a flow line of event viewing in a real space can be offered to the user, and further, an original user experience can be offered.
The information processing system according to Item 1, in which
With the information processing system, it becomes possible to control the avatar of the user in the first room to execute an action that is suitable before the content is played, and to control the avatar of the user in the third room to execute an action that is suitable after the content is played. Therefore, an experience that is close to event viewing in the real world can be offered to the user.
The information processing system according to Item 1 or 2, in which
With the information processing system, an item can be used while the content is being viewed. Accordingly, sales of items can be promoted. Further, the attractiveness of content viewing can be enhanced.
The information processing system according to Item 3, in which
With the information processing system, a privilege to receive an action from the person or the character appearing in the content is offered. Accordingly, sales of items can be further promoted. Further, the attractiveness of content viewing can be further enhanced.
The information processing system according to any one of Items 1 to 4, in which,
With the information processing system, after content viewing, a feedback to the action of the user during the content viewing can be offered. Accordingly, the attractiveness during content viewing and the attractiveness after content viewing can be enhanced.
The information processing system according to any one of Items 1 to 5, in which
With the information processing system, the attractiveness after content viewing can be enhanced.
The information processing system according to any one of Items 1 to 6, further including:
With the information processing system, users of the same group are arranged close to each other. Accordingly, the attractiveness during content viewing can be enhanced.
The information processing system according to Item 7, in which
With the information processing system, each group is arranged in the fixed position that is suitable for content viewing, and the relative positions of the groups are kept consistent among the pieces of arrangement data allocated for the respective groups. Accordingly, the consistency in communication among the groups can easily be maintained.
The information processing system according to Item 7, in which
With the information processing system, each group is arranged in the fixed position that is suitable for content viewing, and at least one other group is arranged between the fixed position and the play position of the content. Accordingly, while viewing the content, the user can check a behavior of a different group responding to the content.
The information processing system according to any one of Items 1 to 9, further including:
With the information processing system, a sound that is unnecessary for the user can be suppressed from being provided to the user.
The information processing system according to Item 10, in which
With the information processing system, a sound that is unnecessary for the user is suppressed from being provided to the user, but a sound that is beneficial for a plurality of groups can be exceptionally provided to the plurality of groups.
The information processing system according to any one of Items 1 to 11, further including:
With the information processing system, the fourth room which is the user's own room for selecting which content to view or for checking collected items can be provided to the user. Accordingly, an experience that is close to that in the real world can be offered to the user.
An information processing method executed by a computer, the information processing method including:
With the information processing method, the first room before the content is played, the second room while the content is being played, and the third room after the content is played are sequentially provided to the user. Accordingly, a flow line that is close to a flow line of event viewing in a real space can be realized, and an original user experience can be offered.
A computer program for causing a computer to implement:
With the computer program, the first room before the content is played, the second room while the content is being played, and the third room after the content is played are sequentially provided to the user.
Accordingly, a flow line that is close to a flow line of event viewing in the real space can be realized, and an original user experience can be offered.
The technology according to the present disclosure is applicable to an information processing system or an information processing device.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2021/045525 | 12/10/2021 | WO |