This relates generally to presentations for a theater application displayed in a three-dimensional environment.
Some electronic devices include an application to facilitate a theater presentation session between a presenter and one or more audience members, each using their own separate device.
Some examples of the disclosure are directed to systems and methods for displaying virtual presentations associated with a theater application in an augmented or fully-immersive three-dimensional environment. In one or more examples of the disclosure, the systems and methods include receiving a request to join a virtual presentation, and in response to receiving the request to join the virtual presentation, displaying the virtual presentation in a three-dimensional environment. The virtual presentation is displayed in a manner that facilitates efficient communication between one or more presenters and one or more audience members who are part of the virtual presentation. According to some examples, the virtual presentation is displayed to each audience member in an audience viewpoint that assumes the user of the electronic device is an audience member of the presentation, optionally with each respective audience member able to view the virtual presentation from the same perspective (e.g., front and center in front of a presentation stage).
In some examples, one or more additional audience members are displayed to one or more sides of the viewpoint of the user. The representations of the one or more additional audience members are optionally based on real-world users who are also viewing the virtual presentation on their own separate devices. For example, the size of the audience represented in the virtual presentation corresponds to the number of audience members who join the virtual presentation. In some examples, one or more audience members can be engaged in a direct communication session with the user of the electronic device prior to joining the virtual presentation. For example, three-dimensional environments are presented by multiple devices communicating in a multi-user communication session, optionally with a representation (e.g., avatar) of each user participating in the multi-user communication session displayed in the three-dimensional environment of the multi-user communication session. In some examples, users engaged in a direct communication session with the user of the electronic device can be displayed by the electronic device differently than other audience members not engaged in a direct communication session with the user of the electronic device. For example, those participants in a direct communication session with the user are optionally displayed closer to the user and at a higher level of detail than other audience members who, while viewing the same virtual presentation as the user of the electronic device on their own devices, are not engaged in a direct communication session with the user of the electronic device prior to joining the virtual presentation. In some examples, the level of detail with which each audience member is displayed can be based on their distance and/or proximity to the user of the electronic device within the three-dimensional environment. In some examples, the users engaged in a direct communication session are represented as avatars with features unique to the users, whereas other audience members not engaged in a direct communication session are represented as one or more generic-type avatars (e.g., without features unique to those users, or with fewer features unique to those users).
In some examples, the virtual presentation includes an audience reaction user interface that is configured to receive reactions from the user of the device to the virtual presentation (e.g., via inputs to the reaction user interface or via gestures or audio, etc.). The virtual presentation optionally displays the aggregate reactions to the virtual presentation from other audience members (e.g., optionally using the audience reaction user interface). In one or more examples, the electronic device can play audio that mimics the real-world sound that an audience would be making based on the aggregate reactions received by the electronic device. In some examples, the user of the electronic device can indicate their reactions by selecting one or more reaction selection buttons on the audience reaction user interface (e.g., pre-defined reaction selection buttons). Additionally or alternatively, the user can register a reaction by moving one or more portions of their body (e.g., clapping, raising a hand, waving a hand, etc.) that the electronic device can detect and recognize as corresponding to one of the one or more audience reactions.
In some examples, one or more audience members can be promoted to a role different from that of a generic audience member. In one or more examples, an audience member can be promoted to a presenter by a presenter or other administrator of the virtual presentation. In one or more examples, in response to receiving an indication that the user of the electronic device has been promoted to a presenter in the virtual presentation, the electronic device modifies the viewpoint of the user to a presenter viewpoint. In one or more examples, in response to receiving an indication that another audience member different from the user of the electronic device has been promoted to a presenter in the virtual presentation, the electronic device updates the virtual presentation to display the promoted audience member as a presenter (e.g., including an avatar representation with unique details for the promoted audience member). In some examples, an audience member can be promoted to a questioner. In response to receiving an indication that the user of the electronic device has been promoted to a questioner, the electronic device displays the virtual presentation from a questioner viewpoint. In some examples, in response to receiving an indication that another audience member different from the user of the electronic device has been promoted to a questioner in the virtual presentation, the electronic device updates the virtual presentation to display the promoted audience member as a questioner (e.g., optionally in a predetermined location, such as close to the stage but off-center, optionally including an avatar representation with unique details for the promoted audience member, and optionally including a representation of a microphone, lectern, or other virtual objects).
For improved understanding of the various examples described herein, reference should be made to the Detailed Description below along with the following drawings. Like reference numerals often refer to corresponding parts throughout the drawings.
In some examples, as shown in
In some examples, display 120 has a field of view visible to the user (e.g., that may or may not correspond to a field of view of external image sensors 114b and 114c). Because display 120 is optionally part of a head-mounted device, the field of view of display 120 is optionally the same as or similar to the field of view of the user's eyes. In other examples, the field of view of display 120 may be smaller than the field of view of the user's eyes. In some examples, electronic device 101 may be an optical see-through device in which display 120 is a transparent or translucent display through which portions of the physical environment may be directly viewed. In some examples, display 120 may be included within a transparent lens and may overlap all or only a portion of the transparent lens. In other examples, electronic device 101 may be a video-passthrough device in which display 120 is an opaque display configured to display images of the physical environment captured by external image sensors 114b and 114c.
In some examples, in response to a trigger, the electronic device 101 may be configured to display a virtual object 104 in the XR environment represented by a cube illustrated in
It should be understood that virtual object 104 is a representative virtual object and one or more different virtual objects (e.g., of various dimensionality such as two-dimensional or other three-dimensional virtual objects) can be included and rendered in a three-dimensional XR environment. For example, the virtual object can represent an application or a user interface displayed in the XR environment. In some examples, the virtual object can represent content corresponding to the application and/or displayed via the user interface in the XR environment. In some examples, the virtual object 104 is optionally configured to be interactive and responsive to user input (e.g., air gestures, such as air pinch gestures, air tap gestures, and/or air touch gestures), such that a user may virtually touch, tap, move, rotate, or otherwise interact with, the virtual object 104.
In some examples, displaying an object in a three-dimensional environment may include interaction with one or more user interface objects in the three-dimensional environment. For example, initiation of display of the object in the three-dimensional environment can include interaction with one or more virtual options/affordances displayed in the three-dimensional environment. In some examples, a user's gaze may be tracked by the electronic device as an input for identifying one or more virtual options/affordances targeted for selection when initiating display of an object in the three-dimensional environment. For example, gaze can be used to identify one or more virtual options/affordances targeted for selection using another selection input. In some examples, a virtual option/affordance may be selected using hand-tracking input detected via an input device in communication with the electronic device. In some examples, objects displayed in the three-dimensional environment may be moved and/or reoriented in the three-dimensional environment in accordance with movement input detected via the input device.
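By way of illustration only, the following Swift sketch shows one way gaze-based targeting and a separate selection input (e.g., an air pinch detected via hand tracking) could be combined to select a virtual option/affordance. The types, names, and gesture set are assumptions made for this example and are not APIs defined by this disclosure.

```swift
import Foundation

// Hypothetical, simplified types; the disclosure does not define any particular
// gaze- or hand-tracking API, so these names are assumptions for illustration.
struct GazeSample { let targetID: UUID? }        // affordance currently targeted by gaze, if any
enum AirGesture { case pinch, tap, none }

struct Affordance {
    let id: UUID
    let onSelect: () -> Void
}

/// Uses gaze to identify the targeted virtual option/affordance and a separate
/// selection input (e.g., an air pinch) to select it.
func handleSelectionInput(gaze: GazeSample, gesture: AirGesture, affordances: [Affordance]) {
    guard gesture != .none,
          let targetID = gaze.targetID,
          let targeted = affordances.first(where: { $0.id == targetID }) else { return }
    targeted.onSelect()
}
```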
In the discussion that follows, an electronic device that is in communication with a display generation component and one or more input devices is described. It should be understood that the electronic device optionally is in communication with one or more other physical user-interface devices, such as a touch-sensitive surface, a physical keyboard, a mouse, a joystick, a hand tracking device, an eye tracking device, a stylus, etc. Further, as described above, it should be understood that the described electronic device, display and touch-sensitive surface are optionally distributed amongst two or more devices. Therefore, as used in this disclosure, information displayed on the electronic device or by the electronic device is optionally used to describe information outputted by the electronic device for display on a separate display device (touch-sensitive or not). Similarly, as used in this disclosure, input received on the electronic device (e.g., touch input received on a touch-sensitive surface of the electronic device, or touch input received on the surface of a stylus) is optionally used to describe input received on a separate input device, from which the electronic device receives input information.
The device typically supports a variety of applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, a television channel browsing application, and/or a digital video player application.
As illustrated in
Communication circuitry 222 optionally includes circuitry for communicating with electronic devices, networks, such as the Internet, intranets, a wired network and/or a wireless network, cellular networks, and wireless local area networks (LANs). Communication circuitry 222 optionally includes circuitry for communicating using near-field communication (NFC) and/or short-range communication, such as Bluetooth®.
Processor(s) 218 include one or more general processors, one or more graphics processors, and/or one or more digital signal processors. In some examples, memory 220 is a non-transitory computer-readable storage medium (e.g., flash memory, random access memory, or other volatile or non-volatile memory or storage) that stores computer-readable instructions configured to be executed by processor(s) 218 to perform the techniques, processes, and/or methods described below. In some examples, memory 220 can include more than one non-transitory computer-readable storage medium. A non-transitory computer-readable storage medium can be any medium (e.g., excluding a signal) that can tangibly contain or store computer-executable instructions for use by or in connection with the instruction execution system, apparatus, or device. In some examples, the storage medium is a transitory computer-readable storage medium. In some examples, the storage medium is a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium can include, but is not limited to, magnetic, optical, and/or semiconductor storages. Examples of such storage include magnetic disks, optical discs based on compact disc (CD), digital versatile disc (DVD), or Blu-ray technologies, as well as persistent solid-state memory such as flash, solid-state drives, and the like.
In some examples, display generation component(s) 214 include a single display (e.g., a liquid-crystal display (LCD), organic light-emitting diode (OLED), or other types of display). In some examples, display generation component(s) 214 includes multiple displays. In some examples, display generation component(s) 214 can include a display with touch capability (e.g., a touch screen), a projector, a holographic projector, a retinal projector, a transparent or translucent display, etc. In some examples, electronic device 201 includes touch-sensitive surface(s) 209 for receiving user inputs, such as tap inputs and swipe inputs or other gestures. In some examples, display generation component(s) 214 and touch-sensitive surface(s) 209 form touch-sensitive display(s) (e.g., a touch screen integrated with electronic device 201 or external to electronic device 201 that is in communication with electronic device 201).
Electronic device 201 optionally includes image sensor(s) 206. Image sensor(s) 206 optionally include one or more visible light image sensors, such as charged coupled device (CCD) sensors, and/or complementary metal-oxide-semiconductor (CMOS) sensors operable to obtain images of physical objects from the real-world environment. Image sensor(s) 206 also optionally include one or more infrared (IR) sensors, such as a passive or an active IR sensor, for detecting infrared light from the real-world environment. For example, an active IR sensor includes an IR emitter for emitting infrared light into the real-world environment. Image sensor(s) 206 also optionally include one or more cameras configured to capture movement of physical objects in the real-world environment. Image sensor(s) 206 also optionally include one or more depth sensors configured to detect the distance of physical objects from electronic device 201. In some examples, information from one or more depth sensors can allow the device to identify and differentiate objects in the real-world environment from other objects in the real-world environment. In some examples, one or more depth sensors can allow the device to determine the texture and/or topography of objects in the real-world environment.
In some examples, electronic device 201 uses CCD sensors, event cameras, and depth sensors in combination to detect the physical environment around electronic device 201. In some examples, image sensor(s) 206 include a first image sensor and a second image sensor. The first image sensor and the second image sensor work in tandem and are optionally configured to capture different information of physical objects in the real-world environment. In some examples, the first image sensor is a visible light image sensor and the second image sensor is a depth sensor. In some examples, electronic device 201 uses image sensor(s) 206 to detect the position and orientation of electronic device 201 and/or display generation component(s) 214 in the real-world environment. For example, electronic device 201 uses image sensor(s) 206 to track the position and orientation of display generation component(s) 214 relative to one or more fixed objects in the real-world environment.
In some examples, electronic device 201 includes microphone(s) 213 or other audio sensors. Electronic device 201 optionally uses microphone(s) 213 to detect sound from the user and/or the real-world environment of the user. In some examples, microphone(s) 213 includes an array of microphones (a plurality of microphones) that optionally operate in tandem, such as to identify ambient noise or to locate the source of sound in space of the real-world environment.
Electronic device 201 includes location sensor(s) 204 for detecting a location of electronic device 201 and/or display generation component(s) 214. For example, location sensor(s) 204 can include a global positioning system (GPS) receiver that receives data from one or more satellites and allows electronic device 201 to determine the device's absolute position in the physical world.
Electronic device 201 includes orientation sensor(s) 210 for detecting orientation and/or movement of electronic device 201 and/or display generation component(s) 214. For example, electronic device 201 uses orientation sensor(s) 210 to track changes in the position and/or orientation of electronic device 201 and/or display generation component(s) 214, such as with respect to physical objects in the real-world environment. Orientation sensor(s) 210 optionally include one or more gyroscopes and/or one or more accelerometers.
Electronic device 201 includes hand tracking sensor(s) 202 and/or eye tracking sensor(s) 212 (and/or other body tracking sensor(s), such as leg, torso and/or head tracking sensor(s)), in some examples. Hand tracking sensor(s) 202 are configured to track the position/location of one or more portions of the user's hands, and/or motions of one or more portions of the user's hands with respect to the extended reality environment, relative to the display generation component(s) 214, and/or relative to another defined coordinate system. Eye tracking sensor(s) 212 are configured to track the position and movement of a user's gaze (eyes, face, or head, more generally) with respect to the real-world or extended reality environment and/or relative to the display generation component(s) 214. In some examples, hand tracking sensor(s) 202 and/or eye tracking sensor(s) 212 are implemented together with the display generation component(s) 214. In some examples, the hand tracking sensor(s) 202 and/or eye tracking sensor(s) 212 are implemented separate from the display generation component(s) 214.
In some examples, the hand tracking sensor(s) 202 (and/or other body tracking sensor(s), such as leg, torso and/or head tracking sensor(s)) can use image sensor(s) 206 (e.g., one or more IR cameras, three-dimensional (3D) cameras, depth cameras, etc.) that capture three-dimensional information from the real-world including one or more body parts (e.g., hands, legs, torso, or head of a human user). In some examples, the hands can be resolved with sufficient resolution to distinguish fingers and their respective positions. In some examples, one or more image sensors 206 are positioned relative to the user to define a field of view of the image sensor(s) 206 and an interaction space in which finger/hand position, orientation and/or movement captured by the image sensors are used as inputs (e.g., to distinguish from a user's resting hand or other hands of other persons in the real-world environment). Tracking the fingers/hands for input (e.g., gestures, touch, tap, etc.) can be advantageous in that it does not require the user to touch, hold or wear any sort of beacon, sensor, or other marker.
In some examples, eye tracking sensor(s) 212 include at least one eye tracking camera (e.g., an infrared (IR) camera) and/or illumination sources (e.g., IR light sources, such as LEDs) that emit light towards a user's eyes. The eye tracking cameras may be pointed towards a user's eyes to receive reflected IR light from the light sources directly or indirectly from the eyes. In some examples, both eyes are tracked separately by respective eye tracking cameras and illumination sources, and a focus/gaze can be determined from tracking both eyes. In some examples, one eye (e.g., a dominant eye) is tracked by one or more respective eye tracking cameras/illumination sources.
Electronic device 201 is not limited to the components and configuration of
Attention is now directed towards a virtual presentation and interactions with one or more virtual objects that are displayed in a three-dimensional environment presented at an electronic device (e.g., corresponding to electronic device 201), and specifically, interactions with the virtual presentation displayed in a three-dimensional environment and associated with a theater application.
In one or more examples, the introduction view illustrated in
In one or more examples, and as illustrated in
In the example of
Although not shown in
In one or more examples, electronic device 101 can emit audio captured by an electronic device associated with presenter 314. For instance, the electronic device associated with presenter 314, using one or more microphones, can collect audio of presenter 314 and transmit that audio to electronic device 101 (optionally via one or more intermediary electronic devices, such as servers). In response to receiving the audio from presenter 314, electronic device 101 emits audio that the user 308 of electronic device 101 can hear, thereby allowing the user to hear real-time audio from presenter 314.
In one or more examples, electronic device 101 displays virtual presentation 304a with one or more representations of audience members 322a as illustrated in
In some examples, and as shown in
In one or more examples, the audience members 322a illustrated in
In some examples, the level of detail displayed for each audience member of audience members 322a can be based on the audience member's proximity to user 308 within three-dimensional environment 302. For instance, audience member 326 can be displayed according to a medium-fidelity level of detail wherein audience member 326 is shown as a silhouette (e.g., in the shape of a human avatar) without being displayed with any features such as eyes, nose, face, etc. that audience member 324 is displayed with. In some examples, the appearance of audience member 326 may not be based on any real-world image data of the user associated with audience member 326. In some examples, audience members that are further away than audience member 326, such as audience member 328, may be displayed according to a low-fidelity level of detail. For instance, audience member 328 is represented by an abstract shape or outline rather than a silhouette of a human form like audience member 326. In some examples, audience member 328 can be animated to move based on an aggregate response of the users that the audience member 328 is meant to represent. In some examples, the placement of audience members (excluding audience members who are engaged in a communication session with user 308, such as audience member 324 corresponding to audience member 322b) within three-dimensional environment 302 can be randomized or based on one or more criteria such as geographic proximity to the user or other factors.
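By way of illustration only, the following Swift sketch shows one possible way to choose a level of detail from an audience member's communication-session status and proximity to the user. The tier names and the distance threshold are assumptions made for this example, not values specified by the disclosure.

```swift
import Foundation

// Illustrative fidelity tiers and threshold; the disclosure does not specify
// exact distances or tier names.
enum AvatarFidelity { case high, medium, low }

struct AudienceMemberInfo {
    let distanceToUser: Float   // distance within the three-dimensional environment, in meters
    let inDirectSession: Bool   // engaged in a communication session with the user prior to joining
}

/// Chooses a level of detail for an audience member's representation: participants
/// in a direct communication session with the user get a detailed avatar, nearby
/// members get a featureless silhouette, and more distant members get an abstract shape.
func fidelity(for member: AudienceMemberInfo, silhouetteRadius: Float = 8.0) -> AvatarFidelity {
    if member.inDirectSession { return .high }
    return member.distanceToUser <= silhouetteRadius ? .medium : .low
}
```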
In some examples, the user 308 can hear audio from the audience members. For instance, if an audience member is clapping or talking during a presentation, the audio can be recorded by the individual devices of the audience members, transmitted to electronic device 101, and replayed to user 308. In some examples, and in order to maintain privacy and/or to avoid overwhelming or distracting user 308 with too much noise, rather than capturing and playing the direct audio from each of the audience members, electronic device 101 can emit mimicked crowd noise (e.g., synthetic crowd noise) at a pre-defined volume such that the user cannot hear the contents of what the audience members are saying and will not be distracted by the noise coming from other audience members, but will understand that the audience is talking. In some examples, electronic device 101, after converting audience noise into a synthetic sound, can set the volume of the audience noise in accordance with the number of audience members that are transmitting sound, and/or the volume of the audio recorded for each audience member.
In some examples, each respective device (e.g., the device of the audience member talking) can generate a synthetic sound and transmit the generated sound to a server that can then transmit the synthetic sound to each electronic device that is associated with the virtual presentation. Additionally or alternatively, each respective device can transmit recorded audio to a server that processes the sound to generate a synthetic sound, and upon processing the sound the server transmits the synthetic audio to each device associated with the virtual presentation. While the example of conversation is used above, the same techniques for processing audience sound can apply to other types of audience audio such as clapping, laughing, booing, etc. In some examples, audio from other audience members may be selectively transmitted to the user (rather than having a synthetic sound). For instance, in an example where the audience member is engaged in a direct communication session with the user of electronic device 101, the electronic device can emit the actual audio recorded from the other user's device (rather than a synthetic sound). In one or more examples, the volume of the synthetic sound can be adjusted according to the current state of the virtual presentation. For instance, if presenter 314 is speaking, the synthetic sound can be played at a lower volume than if the presenter 314 is not speaking.
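By way of illustration only, the following Swift sketch shows one possible heuristic for setting the volume of the synthetic crowd noise based on how many audience members are transmitting sound, their recorded levels, and whether the presenter is speaking. The scaling constants and the sub-linear crowd factor are assumptions made for this example.

```swift
import Foundation

/// Sets a volume for the synthetic crowd noise that scales with the number of audience
/// members currently transmitting sound and their average recorded level, is capped at a
/// pre-defined maximum so the user is not overwhelmed, and is reduced while the presenter
/// is speaking.
func crowdNoiseVolume(activeMembers: Int,
                      averageRecordedLevel: Float,   // normalized 0.0 ... 1.0
                      presenterIsSpeaking: Bool,
                      maxVolume: Float = 0.5) -> Float {
    guard activeMembers > 0 else { return 0 }
    // Grow sub-linearly with audience size so a large crowd does not overwhelm the user.
    let crowdFactor = Float(min(1.0, log2(Double(activeMembers) + 1.0) / 10.0))
    var volume = min(maxVolume, crowdFactor * averageRecordedLevel)
    if presenterIsSpeaking { volume *= 0.3 }   // duck the crowd noise while the presenter speaks
    return volume
}
```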
Additionally, as described below, the audio is optionally generated based on inputs to an audience reaction user interface by audience members of the virtual presentation. Returning to the example of
In one or more examples, and as illustrated in
In the example of
In some examples, and as discussed above, reactions can be selected from the audience reaction user interface 316. Additionally or alternatively, electronic device 101 can register a reaction by detecting that the user's hand is engaged in a pre-defined air gesture as illustrated in
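By way of illustration only, the following Swift sketch shows one possible mapping from detected body or hand movements to pre-defined reactions, together with a simple aggregation of received reactions for the visual indicators. The reaction set and gesture names are assumptions made for this example.

```swift
import Foundation

// The reaction set and gesture names below are illustrative; the disclosure leaves
// the exact pre-defined reactions and recognized movements open.
enum Reaction: Hashable { case applause, raiseHand, wave }
enum DetectedMovement { case clap, raisedHand, handWave, unrecognized }

/// Maps a detected body/hand movement to one of the pre-defined reactions, mirroring
/// selection of a reaction button on the audience reaction user interface.
func reaction(for movement: DetectedMovement) -> Reaction? {
    switch movement {
    case .clap:         return .applause
    case .raisedHand:   return .raiseHand
    case .handWave:     return .wave
    case .unrecognized: return nil
    }
}

/// Aggregates reactions received from all audience members so each visual indicator
/// can reflect how much of the audience is expressing a given reaction.
func aggregate(_ received: [Reaction]) -> [Reaction: Int] {
    received.reduce(into: [:]) { counts, reaction in counts[reaction, default: 0] += 1 }
}
```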
In one or more examples of the disclosure, the virtual presentation 304a can include a virtual object that the user can interact with and manipulate as shown in
In some examples, the virtual object 332 can include one or more virtual object user interface elements 334a-334c that, when selected by the user, cause electronic device 101 to display additional information pertaining to virtual object 332. For instance, in response to detecting the user's gaze 310 directed at virtual object user interface element 334a while the user performs an air pinch with their hand 312, an information user interface is displayed by electronic device 101 as illustrated in
In the example of
In one or more examples, virtual presentation 304a can include a virtual survey as illustrated in
In one or more examples, as part of implementing a virtual survey, electronic device 101 displays two separate user interfaces: a question user interface 338 and an answer user interface 340. Each of question user interface 338 and answer user interface 340 can be considered part of the same audience survey user interface (or different corresponding user interfaces). In one or more examples, the audience survey user interface can provide the user with one or more selectable options that are set by the presenter 314 of virtual presentation 304a. In one or more examples, question user interface 338 displays the question the survey seeks an answer to. Additionally, in some examples, and as illustrated in
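By way of illustration only, the following Swift sketch shows one possible representation of a survey question and the encoding of a selected answer for transmission back to the theater application. The type and field names are assumptions made for this example.

```swift
import Foundation

// A minimal sketch of survey data; field names are assumptions, not part of the disclosure.
struct SurveyQuestion: Codable {
    let id: UUID
    let prompt: String        // shown in question user interface 338
    let options: [String]     // selectable answers shown in answer user interface 340
}

struct SurveyResponse: Codable {
    let questionID: UUID
    let selectedOptionIndex: Int
}

/// Encodes the user's selected answer so that an indication of the selection can be
/// transmitted to the theater application (e.g., via one or more servers).
func encodeResponse(to question: SurveyQuestion, selectedIndex: Int) throws -> Data {
    precondition(question.options.indices.contains(selectedIndex), "selected answer out of range")
    let response = SurveyResponse(questionID: question.id, selectedOptionIndex: selectedIndex)
    return try JSONEncoder().encode(response)
}
```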
In one or more examples, audience members (including user 308 of electronic device 101) can be promoted to become a presenter along with presenter 314 as shown in
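By way of illustration only, the following Swift sketch shows one possible way an electronic device could handle an indication that a participant was promoted: switching the displayed viewpoint when the promoted participant is the local user, and otherwise updating the promoted member's representation. The role and viewpoint names are assumptions made for this example.

```swift
import Foundation

// Roles and viewpoints below are a simplified sketch of the behavior described in this
// disclosure; the names are illustrative.
enum Role { case audienceMember, questioner, presenter }
enum Viewpoint { case audience, questioner, presenter }

/// Handles an indication that a participant was promoted: if the promoted participant
/// is the local user, the displayed viewpoint changes; otherwise the promoted member's
/// representation is updated (e.g., shown near the stage with a detailed avatar).
func handlePromotion(of participantID: UUID,
                     to role: Role,
                     localUserID: UUID,
                     setViewpoint: (Viewpoint) -> Void,
                     updateRepresentation: (UUID, Role) -> Void) {
    if participantID == localUserID {
        switch role {
        case .audienceMember: setViewpoint(.audience)
        case .questioner:     setViewpoint(.questioner)
        case .presenter:      setViewpoint(.presenter)
        }
    } else {
        updateRepresentation(participantID, role)
    }
}
```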
In one or more examples, and in addition to presenting a virtual object to the audience members as part of the presentation as discussed above with respect to
In one or more examples, the theater application can present a virtual presentation to a presenter from the presenter viewpoint as illustrated in
In one or more examples, the presenter viewpoint can include a presenter controls user interface 356 for controlling one or more aspects of the virtual presentation. For instance, presenter controls user interface 356 can include one or more visual aids 362 that are configured to present a visual representation of what a user in the audience viewpoint is seeing (e.g., virtual objects and other portions of the presentation). In some examples, presenter controls user interface 356 also includes one or more selectable options for configuring parameters associated with the virtual presentation. For instance, selection of the one or more selectable options 358 allows the user to adjust the volume of audience noise in the presentation, adjust brightness, color, or other visual settings, and/or adjust any parameters associated with the virtual presentation.
In one or more examples, the presenter viewpoint includes a presenter reaction user interface 360 for receiving reactions from the presenter and for displaying aggregate reactions from audience members. In one or more examples, presenter reaction user interface 360 shares one or more characteristics with audience reaction user interface 316 described above, including but not limited to the visual indicators described above for indicating aggregate audience member reactions to the virtual presentation.
In one or more examples, and in response to receiving the first input, electronic device 101 displays (404) the virtual presentation in the three-dimensional environment and in accordance with an audience viewpoint associated with the theater application. In one or more examples, the virtual presentation is displayed in the three-dimensional environment in front of the user and is overlaid over the background of the three-dimensional environment. For instance, if the three-dimensional environment is a mixed-reality environment in which the three-dimensional environment includes at least a portion of the physical real-world environment surrounding electronic device 101, then the virtual presentation is overlaid on top of the mixed-reality environment such that the user of the device is able to see at least a portion of their physical real-world environment while viewing the virtual presentation. In one or more examples, and in accordance with displaying the virtual presentation in the audience viewpoint, electronic device 101 displays the virtual presentation in front of and centered on the perspective of the user. Thus, when viewing the virtual presentation, the user sees the virtual presentation and the other audience members from the perspective of an audience member sitting in the front row and in the center of an auditorium of the presentation.
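By way of illustration only, the following Swift sketch shows one possible way to position the virtual stage in front of and centered on the user's viewpoint when the virtual presentation is displayed in the audience viewpoint. The default distance and the ground-plane projection are assumptions made for this example.

```swift
/// Places the virtual stage directly in front of, and centered on, the user's viewpoint
/// so that each audience member sees the presentation front and center. The 5-meter
/// default distance is an illustrative assumption.
func stagePosition(viewpoint: SIMD3<Float>,
                   forward: SIMD3<Float>,
                   distance: Float = 5.0) -> SIMD3<Float> {
    // Project the forward direction onto the ground plane so the stage stays level
    // with the user regardless of head pitch, then normalize it.
    var flat = SIMD3<Float>(forward.x, 0, forward.z)
    let length = (flat.x * flat.x + flat.z * flat.z).squareRoot()
    if length > 0 { flat /= length }
    return viewpoint + flat * distance
}
```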
It is understood that process 400 is an example and that more, fewer, or different operations can be performed in the same or in a different order. Additionally, the operations in process 400 described above are, optionally, implemented by running one or more functional modules in an information processing apparatus such as general-purpose processors (e.g., as described with respect to
Therefore, according to the above, some examples of the disclosure are directed to a method comprising: at an electronic device in communication with one or more displays and one or more input devices: while displaying, via the one or more displays, a virtual presentation launch user interface for receiving input to join a virtual presentation associated with a theater application within a three-dimensional environment, receiving a first input at the virtual presentation launch user interface to join the virtual presentation associated with the theater application, and in response to receiving the first input, displaying the virtual presentation in the three-dimensional environment, wherein the virtual presentation is displayed in accordance with an audience viewpoint associated with the theater application.
Optionally, displaying the virtual presentation according to the audience viewpoint associated with the theater application comprises: displaying a virtual stage within the three-dimensional environment, wherein the virtual stage is displayed in front of a perspective of the user of the electronic device, and displaying a plurality of representations of audience members of the virtual presentation within the displayed three-dimensional environment, wherein one or more of the plurality of representations of the audience members correspond to one or more participants of the virtual presentation, and wherein the plurality of representations of the audience members are placed within the three-dimensional environment to one or more sides of the perspective of the user of the electronic device.
Optionally, a first representation of the plurality of representations of the audience members corresponds to a first participant of the virtual presentation, wherein the user of the electronic device was engaged in a communication session with the first participant prior to the first input, wherein a second representation of the plurality of representations of the audience members corresponds to a second participant of the virtual presentation, wherein the user of the electronic device was not engaged in the communication session with the second participant prior to the first input, and wherein the first representation is displayed with a greater visual prominence than the second representation.
Optionally, a visual prominence of the one or more representations of the plurality of representations is based on a proximity of the one or more representations to a location of the user of the electronic device within the three-dimensional environment.
Optionally, the method further comprises: while displaying the virtual presentation according to the audience viewpoint associated with the theater application, presenting synthetic audience audio, wherein the synthetic audience audio is based on a real-world audience sound.
Optionally, the method further comprises: while displaying the virtual presentation according to the audience viewpoint associated with the theater application, displaying an audience reaction user interface for selecting one or more reactions associated with the theater application.
Optionally, the method further comprises: while displaying the audience reaction user interface, receiving, via the one or more input devices, a first input corresponding to a selection of a reaction of the one or more reactions associated with the theater application, and in response to receiving the first input, applying the selected reaction to the virtual presentation.
Optionally, the audience reaction user interface includes one or more visual indicators, each visual indicator corresponding to one of the one or more reactions associated with the theater application, and wherein each visual indicator of the one or more visual indicators is configured to indicate an aggregated audience reaction for one of the one or more reactions to the virtual presentation.
Optionally, the method further comprises presenting audience reaction audio, wherein the audience reaction audio is based on the aggregated audience reaction to the virtual presentation.
Optionally, the method further comprises: receiving a second input, wherein the second input includes input from a first portion of the user, determining that the second input corresponds to a reaction of one or more reactions associated with the theater application, and in response to determining that the second input corresponds to the reaction of the one or more reactions associated with the theater application, applying the reaction corresponding to the second input to the virtual presentation.
Optionally, the method further comprises: while displaying the virtual presentation, displaying a three-dimensional virtual object associated with the virtual presentation at a location within the three-dimensional environment that is in front of a perspective of the user.
Optionally, the method further comprises: while displaying the virtual presentation including the three-dimensional virtual object, receiving a third input from a second portion of the user, including a first air gesture directed to the three-dimensional virtual object followed by movement of the second portion of the user, and in response to receiving the third input, rotating the three-dimensional virtual object relative to the three-dimensional environment in accordance with the detected movement of the second portion of the user.
Optionally, the method further comprises: while displaying the virtual presentation according to the audience viewpoint associated with the theater application, displaying an audience survey user interface for selecting one or more answers associated with a survey question that is displayed as part of the virtual presentation, while displaying the audience survey user interface, receiving, via the one or more input devices, a first input corresponding to a selection of an answer of the one or more answers associated with the survey question, and in response to receiving the first input, transmitting an indication of the selected answer to the theater application.
Optionally, the method further comprises: while displaying the virtual presentation according to the audience viewpoint associated with the theater application, receiving an indication to display the virtual presentation in accordance with a presenter viewpoint associated with the theater application, and in response to receiving the indication to display the virtual presentation in accordance with the presenter viewpoint, ceasing display of the virtual presentation in accordance with the audience viewpoint and displaying the virtual presentation in accordance with the presenter viewpoint.
Optionally, the method further comprises: while displaying the virtual presentation according to the audience viewpoint associated with the theater application, receiving an indication to display the virtual presentation in accordance with a questioner viewpoint associated with the theater application, and in response to receiving the indication to display the virtual presentation in accordance with the questioner viewpoint, ceasing display of the virtual presentation in accordance with the audience viewpoint and displaying the virtual presentation in accordance with the questioner viewpoint.
Optionally, the method further comprises: while displaying the virtual presentation according to the audience viewpoint associated with the theater application, receiving a virtual object from the theater application, in response to receiving the virtual object from the theater application, displaying a virtual object download user interface for downloading the virtual object to a memory of the electronic device, while displaying the virtual object download user interface, receiving, via the one or more input devices, a first input corresponding to a request to download the received virtual object to the memory of the electronic device, and in response to receiving the first input, storing the received virtual object to the memory of the electronic device.
Optionally, the displayed three-dimensional environment includes one or more virtual elements that are based on a real-world environment of the electronic device.
Some examples of the disclosure are directed to an electronic device, comprising: one or more processors; memory; and one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the above methods.
Some examples of the disclosure are directed to a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to perform any of the above methods.
Some examples of the disclosure are directed to an electronic device, comprising one or more processors, memory, and means for performing any of the above methods.
Some examples of the disclosure are directed to an information processing apparatus for use in an electronic device, the information processing apparatus comprising means for performing any of the above methods.
The foregoing description, for purpose of explanation, has been described with reference to specific examples. However, the illustrative discussions above are not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The examples were chosen and described in order to best explain the principles of the disclosure and its practical applications, to thereby enable others skilled in the art to best use the disclosure and various described examples with various modifications as are suited to the particular use contemplated.
This application claims the benefit of U.S. Provisional Application No. 63/586,699, filed Sep. 29, 2023, the content of which is herein incorporated by reference in its entirety for all purposes.